| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
12,163,566 | https://en.wikipedia.org/wiki/Environmental%20emergency | An environmental emergency is defined as a "sudden-onset disaster or accident resulting from natural, technological or human-induced factors, or a combination of these, that causes or threatens to cause severe environmental damage as well as loss of human lives and property." (UNEP/GC.22/INF/5, 13 November 2002.)
Following a disaster or conflict, an environmental emergency can occur when people's health and livelihoods are at risk due to the release of hazardous and noxious substances, or because of significant damage to the ecosystem. Examples include fires, oil spills, chemical accidents, toxic waste dumping and groundwater pollution.
The environmental risks can be acute and life-threatening. According to the International Disaster Database (EM-DAT), between 2003 and 2013, there were 380 industrial accidents reported, affecting 207 668 people and resulting in over US$22 million in losses. Climate change is having an unprecedented effect on the occurrence of natural disasters and the associated risk of environmental emergencies. With climate change already stretching the disaster relief system, future climate-related emergency events will generate increased and more costly demands for assistance.
Context
All disasters have some environmental impacts.
Some of these may be immediate and life-threatening – for example, when an earthquake damages an industrial facility, which in turn releases hazardous materials. In such cases these so-called 'secondary impacts' may cause as much damage as the initial causal factor.
For example, Typhoon Haiyan/Yolanda, which struck the Philippines in November 2013, caused massive destruction and had a huge human toll, but it also generated a spill of around 800,000 litres of heavy oil when a power barge ran aground in Estancia, Iloilo province, at the height of the typhoon.
Disasters may also have longer-term impacts. For example, natural disasters may cause long-term waste management difficulties or ecosystem damage.
Major international conferences
The Environmental Emergencies Forum is a unique biennial international forum that brings together disaster managers and environmental experts from governments, UN agencies, industry, academia, NGOs and civil society to improve prevention, preparedness, response and overall resilience to environmental emergencies. It also provides guidance for the Joint UNEP/OCHA Environment Unit, which provides a Secretariat to the meeting. The most recent meeting was held in Norway in June 2015. The next meeting will be held in Nairobi in June 2017.
Relevant organizations
United Nations
The Joint United Nations Environment Programme (UNEP)/Office for the Coordination of Humanitarian Affairs (OCHA) Environment Unit (JEU): By pairing the UNEP's technical expertise with OCHA's humanitarian response network, the Joint UNEP/OCHA Environment Unit (JEU) mobilizes and coordinates a comprehensive response to environmental emergencies to protect lives, livelihoods, ecosystems and future generations. The JEU can be reached 24 hours/day, seven days/week, all year round and operates at the request of affected countries. The JEU can be called by member states when acute environmental risks to life and health as a result of conflicts, natural disasters and industrial accidents are suspected.
The JEU hosts the Environmental Emergencies Centre (www.eecentre.org), an online tool designed primarily to provide national responders with a one-stop-shop of all information relevant to the preparedness, prevention and response stages of an environmental emergency.
Website: www.unocha.org/unep; www.eecentre.org
See also
Disaster management
Environmental crime
Environmental disaster
Natural disasters
Risk management
UNISDR
Vulnerability
World Conference on Disaster Reduction
References
External links
UN OCHA Environmental Emergencies Section
Environmental Emergencies Centre
Green Star Awards
Global Alliance for Disaster Reduction
Global Disaster Information Network
APELL
UNEP/GC.22/INF/5
EM-DAT International Emergency Disaster Database at Centre for Research on the Epidemiology of Disasters.
Emergency management
Humanitarian aid
Natural disasters
Environmental disasters | Environmental emergency | Physics | 783 |
25,884,081 | https://en.wikipedia.org/wiki/Qubit%20fluorometer | The Qubit fluorometer is a laboratory instrument developed and distributed by Invitrogen, which is now a part of Thermo Fisher. It is used for the quantification of DNA, RNA, and protein.
Method
The Qubit fluorometer method uses fluorescent dyes to determine the concentration of either nucleic acids or proteins in a sample. Specialized fluorescent dyes bind specifically to the substances of interest. This is in contrast to spectrophotometer-based methods, which measure a sample's natural absorbance of light at 260 nm (for DNA and RNA) or 280 nm (for proteins).
Fluorescent dyes
The Qubit assays (formerly known as Quant-iT) were previously developed and manufactured by Molecular Probes (now part of Life Technologies). Each dye is specialized for one type of molecule (DNA, RNA, or protein). These dyes exhibit extremely low fluorescence until bound to their target molecule. Upon binding to DNA, the dye molecules assume a more rigid shape and increase in fluorescence by several orders of magnitude, most likely due to intercalation between the bases.
The Qubit fluorometer, a device designed to measure fluorescence signals from samples, operates by correlating these signals with readings from standards of known concentration. This established relationship allows the instrument to transform the fluorescence data from a sample into a quantified concentration measurement.
A specific instance of this technology is the Qubit 2.0 fluorometer, which is often used in conjunction with the "dsDNA BR Assay Kit." This kit, along with others in the Qubit quantification system, incorporates dyes. These dyes are sensitive to different biomolecules and their concentrations. In this context, "ds" denotes double-stranded and "ss" signifies single-stranded DNA, indicating the specific types of DNA that the dyes can detect.
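The calibration described above amounts to fitting a simple standard curve: fluorescence readings from standards of known concentration define a line, and an unknown sample's reading is converted to a concentration by interpolation, scaled by any dilution made when preparing the assay tube. The sketch below is a generic illustration of that idea, not Thermo Fisher's firmware or software; the reading values, standard concentrations and dilution factor are made-up numbers.

```python
def calibrate(readings, concentrations):
    """Fit a least-squares line through fluorescence readings of standards
    with known concentrations; returns (slope, intercept)."""
    n = len(readings)
    mean_x = sum(readings) / n
    mean_y = sum(concentrations) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(readings, concentrations))
             / sum((x - mean_x) ** 2 for x in readings))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def quantify(sample_reading, slope, intercept, dilution_factor=1.0):
    """Convert a sample's fluorescence reading to a concentration,
    correcting for the dilution made when preparing the assay tube."""
    return (slope * sample_reading + intercept) * dilution_factor

# Hypothetical two-point calibration for a dsDNA assay
# (standards of 0 and 10 ng/uL; all numbers are illustrative only).
slope, intercept = calibrate([50.0, 48000.0], [0.0, 10.0])
print(quantify(24000.0, slope, intercept, dilution_factor=20.0))  # ~100 ng/uL
```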
Versions
The second-generation Qubit 2.0 Fluorometer was released in 2010, and the third-generation Qubit 3.0 in 2014. The newest version is the fourth-generation Qubit 4, introduced in 2017.
References
External links
Official Qubit Fluorometric Quantitation web site
A review of the Qubit fluorometer
Laboratory equipment
Spectroscopy
Fluorescence | Qubit fluorometer | Physics,Chemistry | 471 |
8,688,979 | https://en.wikipedia.org/wiki/Percy%20Nicholls%20Award | The Percy Nicholls Award is an American engineering prize.
It has been given annually since 1942 for "notable scientific or industrial achievement in the field of solid fuels". The prize is given jointly by the American Institute of Mining, Metallurgical, and Petroleum Engineers and American Society of Mechanical Engineers.
Recipients of this Prize
2023 - David G. Osborne
2022 - Michael A. Karmis
2021 - Not given
2019 - Not given
2018 - Not given
2017 - Not given
2016 - Not given
2015 - Yoginder Paul Chugh
2014 - Yiannis Levendis
2013 - Barbara J. Arnold
2012 - Not given
2011 - Sukumar Bandopadhyay
2010 - Ashwani K. Gupta
2009 - William Beck
2008 - George A. Richards
2007 - Peter J. Bethell
2006 - John L. Marion
2005 - Gerald H. Luttrell
2004 - Dr. Hisashi (Sho) Kobayashi
2003 - J. Brett Harvey
2002 - L. Douglas Smoot
2001 - Robert E. Murray
2000 - Klaus R. G. Hein
1999 - Peter T. Luckie
1998 - Not given
1997 - Frank F. Aplan
1996 - Adel F. Sarofim
1995 - Joseph W. Leonard, III
1994 - Robert H. Essenhigh
1993 - Robert L. Frantz
1992 - Richard W. Borio
1991 - Raja V. Ramani
1990 - Richard W. Bryers
1989 - Albert W. Duerbrouck
1988 - János M. Beér
1987 - Leonard G. Austin
1986 - Gordon H. Gronhovd
1985 - David A. Zegeer
1984 - George K. Lee
1983 - E. Minor Pace
1982 - James R. Jones
1981 - Jack A. Simon
1980 - George W. Land
1979 - William N. Poundstone
1978 - Albert F. Duzy
1977 - H. Beecher Charmbury
1976 - Richard B. Engdahl
1975 - Not given
1974 - George P. Cooper
1973 - Samuel M. Cassidy
1972 - Charles H. Sawyer
1971 - George E. Keller
1970 - Richard C. Corey
1969 - David R. Mitchell
1968 - W. T. Reid
1967 - Martin A. Elliott
1966 - C. T. Holland
1965 - L. F. Deming
1964 - Carroll F. Hardy
1963 - James R. Garvey
1962 - Charles E. Lawall
1961 - Otto de Lorenzi
1960 - Carl E. Lesher
1959 - Homer H. Lowry
1958 - Willibald Trinks
1957 - John Blizzard
1956 - Chester A. Reed
1955 - Ralph Hardgrove
1954 - John F. Barkley
1953 - Henry F. Hebley
1952 - Harry F. Yancey
1951 - Albert R. Humford
1950 - Julian E. Tobey
1949 - Lawrence A. Shipman
1948 - Ralph A. Sherman
1947 - Howard N. Eavenson
1946 - Arno C. Fieldner
1945 - Thomas A. Marsh
1944 - James B. Morrow
1943 - Henry Kreisinger
1942 - Ervin G. Bailey
See also
List of engineering awards
List of mechanical engineering awards
References
Percy Nicholls Award
Notes
Awards of the American Society of Mechanical Engineers
Awards of the American Institute of Mining, Metallurgical, and Petroleum Engineers
Combustion engineering awards
Awards established in 1942
1942 establishments in the United States | Percy Nicholls Award | Chemistry,Technology | 657 |
27,540,696 | https://en.wikipedia.org/wiki/Global%20Storage%20Architecture | GSA (Global Storage Architecture) is a distributed file system created by IBM to replace the Andrew File System and the DCE Distributed File System.
External links
GSA Presentation by Stanley Wood
Distributed file systems
Network file systems
Internet Protocol based network software | Global Storage Architecture | Technology | 50 |
1,843,913 | https://en.wikipedia.org/wiki/Desert%20ecology | Desert ecology is the study of interactions between both biotic and abiotic components of desert environments. A desert ecosystem is defined by interactions between organisms, the climate in which they live, and any other non-living influences on the habitat. Deserts are arid regions that are generally associated with warm temperatures; however, cold deserts also exist. Deserts can be found in every continent, with the largest deserts located in Antarctica, the Arctic, Northern Africa, and the Middle East.
Climate
Deserts experience a wide range of temperatures and weather conditions, and can be classified into four types: hot, semiarid, coastal, and cold. Hot deserts experience warm temperatures year round, and low annual precipitation. Low levels of humidity in hot deserts contribute to high daytime temperatures, and extensive night time heat loss. The average annual temperature in hot deserts is approximately 20 to 25 °C; however, extreme weather conditions can lead to temperatures ranging from −18 to 49 °C.
Rainfall generally occurs in short bursts, followed by long periods of dryness. Semiarid deserts experience similar conditions to hot deserts; however, the maximum and minimum temperatures tend to be less extreme, and generally range from 10 to 38 °C. Coastal deserts are cooler than hot and semiarid deserts, with average summer temperatures ranging between 13 and 24 °C. They also feature higher total rainfall values. Cold deserts are similar in temperature to coastal deserts; however, they receive more annual precipitation in the form of snowfall. Deserts are most notable for their dry climates, usually a result of their surrounding geography. For example, rain-blocking mountain ranges and distance from oceans are two geographic features that contribute to desert aridity. Rain-blocking mountain ranges create rain shadows. As air rises and cools, its relative humidity increases and some or most moisture rains out, leaving little to no water vapor to form precipitation on the other side of the mountain range.
Deserts occupy one-fifth of the Earth's land surface and occur in two belts: between 15° and 35° latitude in both the southern and northern hemispheres. These bands are associated with the high solar intensities that all areas in the tropics receive, and with the dry air brought down by the descending arms of both the Hadley and Ferrel atmospheric circulation cells. Dry winds hold little moisture for these areas, and also tend to evaporate any water present.
Many desert ecosystems are limited by available water levels, rather than rates of radiation or temperature. Water flow in these ecosystems can be thought of as similar to energy flow; in fact, it is often useful to look at water and energy flow together when studying desert ecosystems and ecology.
Water availability in deserts may also be hindered by loose sediments. Dust clouds commonly form in windy, arid climates. Scientists have previously theorised that desert dust clouds would enhance rainfall; however, some more recent studies have shown that this phenomenon actually inhibits precipitation by absorbing moisture from the atmosphere. This absorption of atmospheric moisture can result in a positive feedback loop, which leads to further desertification.
Landscape
Desert landscapes can contain a wide variety of geological features, such as oases, rock outcrops, dunes, and mountains. Dunes are structures formed by wind moving sediments into mounds. Desert dunes are generally classified based on their orientation relative to the wind direction. Possibly the most recognizable dune type is the transverse dune, characterized by crests transverse to the wind direction. Many dunes are considered to be active, meaning that they can travel and change over time due to the influence of the wind. However, some dunes can be anchored in place by vegetation or topography, preventing their movement. Some dunes may also be referred to as sticky. These types of dunes occur when individual grains of sand become cemented together. Sticky dunes tend to be more stable and resistant to wind reworking than loose dunes. Barchan and seif dunes are among the most common desert dunes. Barchan dunes are formed as winds continuously blow in the same direction, and are characterized by a crescent shape atop the dune. Seif dunes are long and narrow, featuring a sharp crest, and are more common in the Sahara Desert.
Analysis of geological features in desert environments can reveal a lot about the geologic history of the area. Through observation and identification of rock deposits, geologists are able to interpret the order of events that occurred during desert formation. For example, research conducted on the surface geology of the Namib Desert allowed geologists to interpret ancient movements of the Kuiseb River based on rock ages and features identified in the area.
Organism adaptation
Animals
Deserts support diverse communities of plants and animals that have evolved to resist or circumvent extreme temperatures and arid conditions. For example, desert grasslands are more humid and slightly cooler than their surrounding ecosystems. Many animals obtain energy by eating the surrounding vegetation; however, desert plants are much more difficult for organisms to consume. To avoid intense temperatures, the majority of small desert mammals are nocturnal, living in burrows to avoid the intense desert sun during the daytime. These burrows prevent overheating and dehydration as they maintain an optimal temperature for the mammal. Desert ecology is characterized by dry, alkaline soils, low net production and opportunistic feeding patterns by herbivores and carnivores. Other organisms' survival tactics are physiologically based. Such tactics include completing life cycles ahead of anticipated drought seasons, and storing water with the help of specialized organs.
Desert climates are particularly demanding on endothermic organisms. However, endothermic organisms have adapted mechanisms to aid in water retention in habitats such as desert ecosystems which are commonly affected by drought. In environments where the external temperature is less than their body temperature, most endotherms are able to balance heat production and heat loss to maintain a comfortable temperature. However, in deserts where air and ground temperatures exceed body temperature, endotherms must be able to dissipate the large amounts of heat being absorbed in these environments. In order to cope with extreme conditions, desert endotherms have adapted through the means of avoidance, relaxation of homeostasis, and specializations. Nocturnal desert rodents, like the kangaroo rat, will spend the daytime in cool burrows deep underground, and emerge at night to seek food. Birds are much more mobile than ground-dwelling endotherms, and can therefore avoid heat-induced dehydration by flying between water sources. To prevent overheating, the body temperatures of many desert mammals have adapted to be much higher than those of non-desert mammals. Camels, for example, can maintain body temperatures that are about equal to typical desert air temperatures. This adaptation allows camels to retain large amounts of water for extended periods of time. Other examples of higher body temperature in desert mammals include the diurnal antelope ground squirrel, and the oryx. Certain desert endotherms have evolved very specific and unique characteristics to combat dehydration. Male sandgrouse have specialized belly feathers that are able to trap and carry water. This allows the sandgrouse to provide a source of hydration for their chicks, who do not yet have the ability to fly to water sources themselves.
Plants
Although deserts have severe climates, some plants still manage to grow. Plants that can survive in arid deserts are called xerophytes, meaning they are able to survive long dry periods. Such plants may close their stomata during the daytime and open them again at night. During the night, temperatures are much cooler, and plants will experience less water loss, and intake larger amounts of carbon dioxide for photosynthesis.
Adaptations in xerophytes include resistance to heat and water loss, increased water storage capabilities, and reduced surface area of leaves. One of the most common families of desert plants are the cacti, which are covered in sharp spines or bristles for defence against herbivory. The bristles on certain cacti also have the ability to reflect sunlight, such as those of the old man cactus. Certain xerophytes, like oleander, feature stomata that are recessed as a form of protection against hot, dry desert winds, which allows the leaves to retain water more effectively. Another unique adaptation can be found in xerophytes like ocotillo, which are "leafless during most of the year, thereby avoiding excessive water loss".
There are also plants called phreatophytes which have adapted to the harsh desert conditions by developing extremely long root systems, some of which are 80 ft. long; to reach the water table which ensures a water supply to the plant.
Exploration and research
The harsh climate of most desert regions is a major obstacle in conducting research into these ecosystems. In the environments requiring special adaptations to survive, it is often difficult or even impossible for researchers to spend extended periods of time investigating the ecology of such regions. To overcome the limitations imposed by desert climates, some scientists have used technological advancements in the area of remote sensing and robotics. One such experiment, conducted in 1997, had a specialized robot named Nomad travel through a portion of the Atacama Desert. During this expedition, Nomad travelled over 200 kilometers and provided the researchers with many photographs of sites visited along its path. In another experiment in 2004, named the United Arab Emirates Unified Aerosol Experiment, researchers used satellites and computer models to study emissions and their effect on the climate in the Arabian Desert.
See also
Aridisols
References
Deserts
Ecology
Ecology by biome
Habitats | Desert ecology | Biology | 1,913 |
5,053,014 | https://en.wikipedia.org/wiki/Tetramethylethylenediamine | Tetramethylethylenediamine (TMEDA or TEMED) is a chemical compound with the formula (CH3)2NCH2CH2N(CH3)2. This species is derived from ethylenediamine by replacement of the four amine hydrogens with four methyl groups. It is a colorless liquid, although old samples often appear yellow. Its odor is similar to that of rotting fish.
As a reagent in synthesis
TMEDA is widely employed as a ligand for metal ions. It forms stable complexes with many metal halides, e.g. zinc chloride and copper(I) iodide, giving complexes that are soluble in organic solvents. In such complexes, TMEDA serves as a bidentate ligand.
TMEDA has an affinity for lithium ions. When mixed with n-butyllithium, TMEDA's nitrogen atoms coordinate to the lithium, forming a cluster of higher reactivity than the tetramer or hexamer that n-butyllithium normally adopts. BuLi/TMEDA is able to metallate or even doubly metallate many substrates including benzene, furan, thiophene, N-alkylpyrroles, and ferrocene. Many anionic organometallic complexes have been isolated as their [Li(tmeda)2]+ complexes. In such complexes [Li(tmeda)2]+ behaves like a quaternary ammonium salt, such as [NEt4]+.
sec-Butyllithium/TMEDA is a useful combination in organic synthesis for cases where the n-butyl analogue would add to the substrate. TMEDA is still capable of forming a complex with lithium in this case, as mentioned above.
In molecular biology
TEMED is a common reagent in molecular biology laboratories, as a polymerizing agent for polyacrylamide gels in the protein analysis technique SDS-PAGE.
Other uses
The complexes (TMEDA)Ni(CH3)2 and [(TMEDA)Ni(o-tolyl)Cl] illustrate the use of tmeda to stabilize homogeneous catalysts.
Related compounds
, also referred to as tetramethylethylenediamine.
Bis(dimethylamino)methane,
References
Dimethylamino compounds
Chelating agents
Foul-smelling chemicals | Tetramethylethylenediamine | Chemistry | 497 |
14,445,588 | https://en.wikipedia.org/wiki/Prokineticin%20receptor%202 | Prokineticin receptor 2 (PKR2), is a dimeric G protein-coupled receptor encoded by the PROKR2 gene in humans.
Function
Prokineticins are secreted proteins that can promote angiogenesis and induce strong gastrointestinal smooth muscle contraction. The protein encoded by this gene is an integral membrane protein and G protein-coupled receptor for prokineticins. PKR2 is composed of 384 amino acids. Asparagine residues at positions 7 and 27 undergo N-linked glycosylation. Cysteine residues at positions 128 and 208 form a disulfide bond. The encoded protein is similar in sequence to GPR73, another G protein-coupled receptor for prokineticins. PKR2 is also linked to mammalian circadian rhythm. Levels of PKR2 mRNA fluctuate in the suprachiasmatic nucleus, increasing during the day and decreasing at night.
Mutations in the PROKR2 (also known as KAL3) gene have been implicated in hypogonadotropic hypogonadism and gynecomastia. Total loss of PKR2 in mice leads to spontaneous torpor usually beginning at dusk and lasting for 8 hours on average.
PKR2 functions as a G protein-coupled receptor and therefore initiates a signaling cascade when its ligand binds. PKR2 is a Gq-coupled receptor, so ligand binding activates beta-type phospholipase C, which generates inositol trisphosphate; this in turn triggers calcium release inside the cell.
See also
Prokineticin receptor
Kallmann syndrome
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Kallmann syndrome
G protein-coupled receptors | Prokineticin receptor 2 | Chemistry | 369 |
4,185,946 | https://en.wikipedia.org/wiki/Air%20Quality%20Modeling%20Group | The Air Quality Modeling Group (AQMG) is in the U.S. EPA's Office of Air and Radiation (OAR) and provides leadership and direction on the full range of air quality models, air pollution dispersion models and other mathematical simulation techniques used in assessing pollution control strategies and the impacts of air pollution sources.
The AQMG serves as the focal point on air pollution modeling techniques for other EPA headquarters staff, EPA regional Offices, and State and local environmental agencies. It coordinates with the EPA's Office of Research and Development (ORD) on the development of new models and techniques, as well as wider issues of atmospheric research. Finally, the AQMG conducts modeling analyses to support the policy and regulatory decisions of the EPA's Office of Air Quality Planning and Standards (OAQPS).
The AQMG is located in Research Triangle Park, North Carolina.
Projects maintained by the AQMG
The AQMG maintains the following specific projects:
Air Quality Analyses to Support Modeling
Air Quality Modeling Guidelines
Dispersion Modeling Computer Codes
Dispersion Modeling
Emissions Inventories For Regional Modeling
Guidance on Modeling for New NAAQS & Regional Haze
Meteorological Data Guidance and Modeling
Model Clearinghouse
Models-3/Community Multiscale Air Quality (CMAQ)
Models3 Applications Team, Outreach and Training Coordination
Multimedia Modeling
PM Data Analysis and PM Modeling
Preferred/Recommended Models, Alternative Models and Screening Models
Regional Ozone Modeling
Roadway Intersection Modeling
Support Center For Regulatory Air Models (SCRAM)
Urban Ozone Modeling
Visibility and Regional Haze Modeling
See also
Accidental release source terms
Bibliography of atmospheric dispersion modeling
Air Quality Modelling and Assessment Unit (AQMAU)
Air Resources Laboratory
AP 42 Compilation of Air Pollutant Emission Factors
Atmospheric dispersion modeling
Atmospheric Studies Group
:Category:Atmospheric dispersion modeling
List of atmospheric dispersion models
Met Office
UK Atmospheric Dispersion Modelling Liaison Committee
UK Dispersion Modelling Bureau
References
Further reading
www.crcpress.com
www.air-dispersion.com
External links
UK Dispersion Modelling Bureau web site
UK ADMLC web site
Air Resources Laboratory (ARL)
Air Quality Modeling Group
Met Office web site
Error propagation in air dispersion modeling
Air pollution in the United States
Air pollution organizations
Atmospheric dispersion modeling
United States Environmental Protection Agency | Air Quality Modeling Group | Chemistry,Engineering,Environmental_science | 462 |
31,016 | https://en.wikipedia.org/wiki/Terrestrial%20Time | Terrestrial Time (TT) is a modern astronomical time standard defined by the International Astronomical Union, primarily for time-measurements of astronomical observations made from the surface of Earth.
For example, the Astronomical Almanac uses TT for its tables of positions (ephemerides) of the Sun, Moon and planets as seen from Earth. In this role, TT continues Terrestrial Dynamical Time (TDT or TD), which succeeded ephemeris time (ET). TT shares the original purpose for which ET was designed, to be free of the irregularities in the rotation of Earth.
The unit of TT is the SI second, the definition of which is based currently on the caesium atomic clock, but TT is not itself defined by atomic clocks. It is a theoretical ideal, and real clocks can only approximate it.
TT is distinct from the time scale often used as a basis for civil purposes, Coordinated Universal Time (UTC). TT is indirectly the basis of UTC, via International Atomic Time (TAI). Because of the historical difference between TAI and ET when TT was introduced, TT is 32.184 s ahead of TAI.
History
A definition of a terrestrial time standard was adopted by the International Astronomical Union (IAU) in 1976 at its XVI General Assembly and later named Terrestrial Dynamical Time (TDT). It was the counterpart to Barycentric Dynamical Time (TDB), which was a time standard for Solar system ephemerides, to be based on a dynamical time scale. Both of these time standards turned out to be imperfectly defined. Doubts were also expressed about the meaning of 'dynamical' in the name TDT.
In 1991, in Recommendation IV of the XXI General Assembly, the IAU redefined TDT, also renaming it "Terrestrial Time". TT was formally defined in terms of Geocentric Coordinate Time (TCG), defined by the IAU on the same occasion. TT was defined to be a linear scaling of TCG, such that the unit of TT is the "SI second on the geoid", i.e. the rate approximately matched the rate of proper time on the Earth's surface at mean sea level. Thus the exact ratio between TT time and TCG time was 1 − U_g/c², where c was a constant (the speed of light) and U_g was the gravitational potential at the geoid surface, a value measured by physical geodesy. In 1991 the best available estimate of U_g/c² was 6.969291 × 10⁻¹⁰.
In 2000, the IAU very slightly altered the definition of TT by adopting an exact value, L_G = 6.969290134 × 10⁻¹⁰.
Current definition
TT differs from Geocentric Coordinate Time (TCG) by a constant rate. Formally it is defined by the equation
TT = (1 − L_G) × TCG + E
where TT and TCG are linear counts of SI seconds in Terrestrial Time and Geocentric Coordinate Time respectively, L_G is the constant difference in the rates of the two time scales, and E is a constant to resolve the epochs (see below). L_G is defined as exactly 6.969290134 × 10⁻¹⁰. Due to the L_G term the rate of TT is very slightly slower than that of TCG.
The equation linking TT and TCG more commonly has the form given by the IAU,
TT = TCG − L_G × (JD_TCG − 2443144.5003725) × 86400
where JD_TCG is the TCG time expressed as a Julian date (JD). The Julian Date is a linear transformation of the raw count of seconds represented by the variable TCG, so this form of the equation is not simplified. The use of a Julian Date specifies the epoch fully. The above equation is often given with the Julian Date 2443144.5 for the epoch, but that is inexact (though inappreciably so, because of the small size of the multiplier L_G). The value 2443144.5003725 is exactly in accord with the definition.
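To get a sense of the size of the rate difference, a rough back-of-the-envelope calculation using the constant L_G quoted above (this arithmetic is illustrative and not part of the IAU definition):

```latex
L_G \times 86400\ \mathrm{s} \approx 6.97\times10^{-10} \times 86400\ \mathrm{s}
  \approx 6.0\times10^{-5}\ \mathrm{s}\ \text{per day}
  \approx 22\ \mathrm{ms}\ \text{per year}
  \approx 2.2\ \mathrm{s}\ \text{per century}
```

which is the amount by which TCG runs ahead of TT over those intervals.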
Time coordinates on the TT and TCG scales are specified conventionally using traditional means of specifying days, inherited from non-uniform time standards based on the rotation of Earth. Specifically, both Julian Dates and the Gregorian calendar are used. For continuity with their predecessor Ephemeris Time (ET), TT and TCG were set to match ET at around Julian Date 2443144.5 (1977-01-01T00Z). More precisely, it was defined that TT instant 1977-01-01T00:00:32.184 and TCG instant 1977-01-01T00:00:32.184 exactly correspond to the International Atomic Time (TAI) instant 1977-01-01T00:00:00.000. This is also the instant at which TAI introduced corrections for gravitational time dilation.
TT and TCG expressed as Julian Dates can be related precisely and most simply by the equation
JD_TT = E_JD + (JD_TCG − E_JD) × (1 − L_G)
where E_JD is 2443144.5003725 exactly.
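A minimal sketch of this Julian-date relation in code, assuming the constants reconstructed above (L_G and the epoch E_JD); it is an illustration only, not an official conversion routine:

```python
L_G = 6.969290134e-10    # defined rate difference between TCG and TT
E_JD = 2443144.5003725   # common TT/TCG epoch expressed as a Julian date

def jd_tt_from_jd_tcg(jd_tcg: float) -> float:
    """Convert a Julian date on the TCG scale to the TT scale."""
    return E_JD + (jd_tcg - E_JD) * (1.0 - L_G)

def jd_tcg_from_jd_tt(jd_tt: float) -> float:
    """Inverse conversion, TT -> TCG."""
    return E_JD + (jd_tt - E_JD) / (1.0 - L_G)

# One Julian century (36525 days) after the epoch, TT lags TCG by
# roughly 36525 * L_G days, i.e. about 2.2 seconds.
jd_tcg = E_JD + 36525.0
print((jd_tcg - jd_tt_from_jd_tcg(jd_tcg)) * 86400.0)  # ~2.2
```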
Realizations
TT is a theoretical ideal, not dependent on a particular realization. For practical use, physical clocks must be measured and their readings processed to estimate TT. A simple offset calculation is sufficient for most applications, but in demanding applications, detailed modeling of relativistic physics and measurement uncertainties may be needed.
TAI
The main realization of TT is supplied by TAI. The BIPM TAI service, performed since 1958, estimates TT using measurements from an ensemble of atomic clocks spread over the surface and low orbital space of Earth. TAI is canonically defined retrospectively, in monthly bulletins, in relation to the readings shown by that particular group of atomic clocks at the time. Estimates of TAI are also provided in real time by the institutions that operate the participating clocks. Because of the historical difference between TAI and ET when TT was introduced, the TAI realization of TT is defined thus:
TT(TAI) = TAI + 32.184 s
The offset 32.184 s arises from history. The atomic time scale A1 (a predecessor of TAI) was set equal to UT2 at its conventional starting date of 1 January 1958, when ΔT was about 32 seconds. The offset 32.184 seconds was the 1976 estimate of the difference between Ephemeris Time (ET) and TAI, "to provide continuity with the current values and practice in the use of Ephemeris Time".
TAI is never revised once published and TT(TAI) has small errors relative to TT(BIPM), on the order of 10-50 microseconds.
The GPS time scale has a nominal constant offset from atomic time (TAI − GPS time = 19 seconds), so that TT(TAI) ≈ GPS time + 51.184 s. This realization introduces up to a microsecond of additional error, as the GPS signal is not precisely synchronized with TAI, but GPS receiving devices are widely available.
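In practice these offsets reduce to simple additions. Below is a small sketch using the 32.184 s and 19 s values discussed above; the microsecond-level error terms mentioned in this section are ignored:

```python
TT_MINUS_TAI = 32.184   # seconds, fixed offset of the TT(TAI) realization
TAI_MINUS_GPS = 19.0    # seconds, nominal constant offset of the GPS time scale

def tt_from_tai(tai_seconds: float) -> float:
    """Realize TT from a TAI reading (both as seconds past a common epoch)."""
    return tai_seconds + TT_MINUS_TAI

def tt_from_gps(gps_seconds: float) -> float:
    """Approximate TT from a GPS time reading using nominal offsets only."""
    return gps_seconds + TAI_MINUS_GPS + TT_MINUS_TAI   # i.e. GPS time + 51.184 s

print(tt_from_tai(0.0), tt_from_gps(0.0))  # 32.184 51.184
```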
TT(BIPM)
Approximately annually since 1992, the International Bureau of Weights and Measures (BIPM) has produced better realizations of TT based on reanalysis of historical TAI data. BIPM's realizations of TT are named in the form "TT(BIPM08)", with the digits indicating the year of publication. They are published in the form of a table of differences from TT(TAI), along with an extrapolation equation that may be used for dates later than the table. The latest is TT(BIPM23).
Pulsars
Researchers from the International Pulsar Timing Array collaboration have created a realization TT(IPTA16) of TT based on observations of an ensemble of pulsars up to 2012. This new pulsar time scale is an independent means of computing TT. The researchers observed that their scale was within 0.5 microseconds of TT(BIPM17), with significantly lower errors since 2003. The data used was insufficient to analyze long-term stability, and contained several anomalies, but as more data is collected and analyzed, this realization may eventually be useful to identify defects in TAI and TT(BIPM).
Other standards
TT is in effect a continuation of (but is more precisely uniform than) the former Ephemeris Time (ET). It was designed for continuity with ET, and it runs at the rate of the SI second, which was itself derived from a calibration using the second of ET (see, under Ephemeris time, Redefinition of the second and Implementations). The JPL ephemeris time argument Teph is within a few milliseconds of TT.
TT is slightly ahead of UT1 (a refined measure of mean solar time at Greenwich) by an amount known as ΔT. ΔT was measured at +67.6439 seconds (TT ahead of UT1) at 0 h UTC on 1 January 2015; by retrospective calculation, ΔT was close to zero about the year 1900. ΔT is expected to continue to increase, with UT1 becoming steadily (but irregularly) further behind TT in the future. In fine detail, ΔT is somewhat unpredictable, with 10-year extrapolations diverging by 2-3 seconds from the actual value.
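For example, taking the 2015 value quoted above, converting a TT reading to UT1 is a simple subtraction (illustrative arithmetic only):

```latex
\mathrm{UT1} = \mathrm{TT} - \Delta T \approx \mathrm{TT} - 67.6439\ \mathrm{s}
\qquad \text{(at 0 h UTC, 1 January 2015)}
```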
Relativistic relationships
Observers in different locations, that are in relative motion or at different altitudes, can disagree about the rates of each other's clocks, owing to effects described by the theory of relativity. As a result, TT (even as a theoretical ideal) does not match the proper time of all observers.
In relativistic terms, TT is described as the proper time of a clock located on the geoid (essentially mean sea level).
However, TT is now actually defined as a coordinate time scale.
The redefinition did not quantitatively change TT, but rather made the existing definition more precise. In effect it defined the geoid (mean sea level) in terms of a particular level of gravitational time dilation relative to a notional observer located at infinitely high altitude.
The present definition of TT is a linear scaling of Geocentric Coordinate Time (TCG), which is the proper time of a notional observer who is infinitely far away (so not affected by gravitational time dilation) and at rest relative to Earth. TCG is used to date mainly for theoretical purposes in astronomy. From the point of view of an observer on Earth's surface the second of TCG passes in slightly less than the observer's SI second. The comparison of the observer's clock against TT depends on the observer's altitude: they will match on the geoid, and clocks at higher altitude tick slightly faster.
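As a rough illustration of that altitude dependence, the standard weak-field expression for gravitational time dilation gives, for a clock 1 km above the geoid (the numbers here are an illustrative estimate, not part of the IAU definition):

```latex
\frac{\Delta f}{f} \approx \frac{g\,h}{c^{2}}
  \approx \frac{9.8\ \mathrm{m\,s^{-2}} \times 1000\ \mathrm{m}}{(3.0\times10^{8}\ \mathrm{m\,s^{-1}})^{2}}
  \approx 1.1\times10^{-13}
```

so such a clock gains roughly 9 ns per day relative to a clock at mean sea level.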
See also
Barycentric Coordinate Time
Geocentric Coordinate Time
References
External links
BIPM technical services: Time Metrology
Time and Frequency from A to Z
Time scales
Earth
Time in astronomy | Terrestrial Time | Physics,Astronomy | 2,014 |
510,581 | https://en.wikipedia.org/wiki/R-parity | R-parity is a concept in particle physics. In the Minimal Supersymmetric Standard Model, baryon number and lepton number are no longer conserved by all of the renormalizable couplings in the theory. Since baryon number and lepton number conservation have been tested very precisely, these couplings need to be very small in order not to be in conflict with experimental data. R-parity is a symmetry acting on the Minimal Supersymmetric Standard Model (MSSM) fields that forbids these couplings and can be defined as
P_R = (−1)^{3B + L + 2s}
or, equivalently, as
P_R = (−1)^{3(B − L) + 2s}
where s is the spin, B is the baryon number, and L is the lepton number. All Standard Model particles have R-parity of +1 while supersymmetric particles have R-parity of −1.
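As a quick sanity check of the formula above, evaluating it for the electron and its hypothetical spin-0 superpartner, the selectron, gives opposite parities, as expected:

```latex
\text{electron } (B=0,\ L=1,\ s=\tfrac{1}{2}):\quad
  P_R = (-1)^{3\cdot 0 + 1 + 2\cdot\frac{1}{2}} = (-1)^{2} = +1
\\
\text{selectron } (B=0,\ L=1,\ s=0):\quad
  P_R = (-1)^{3\cdot 0 + 1 + 2\cdot 0} = (-1)^{1} = -1
```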
Note that there are different forms of parity with different effects and principles; this parity should not be confused with any other form of parity.
Dark matter candidate
With R-parity being preserved, the lightest supersymmetric particle (LSP) cannot decay. This lightest particle (if it exists) may therefore account for the observed missing mass of the universe that is generally called dark matter. In order to fit observations, it is assumed that this particle has a mass of roughly 100 GeV to 1 TeV, is neutral and only interacts through weak interactions and gravitational interactions. It is often called a weakly interacting massive particle or WIMP.
Typically the dark matter candidate of the MSSM is a mixture of the electroweak gauginos and Higgsinos and is called a neutralino. In extensions to the MSSM it is possible to have a sneutrino be the dark matter candidate. Another possibility is the gravitino, which only interacts via gravitational interactions and does not require strict R-parity.
R-parity violating couplings of the MSSM
The renormalizable R-parity violating couplings of the MSSM are
λ″ UDD violates B by 1 unit
The strongest constraint involving this coupling alone is from the non-observation of neutron–antineutron oscillations.
λ′ LQD violates L by 1 unit
The strongest constraint involving this coupling alone is from the violation of universality of the Fermi constant in quark and leptonic charged current decays.
λ LLE violates L by 1 unit
The strongest constraint involving this coupling alone is from the violation of universality of the Fermi constant in leptonic charged current decays.
κ LHu violates L by 1 unit
The strongest constraint involving this coupling alone is that it leads to a large neutrino mass.
While the constraints on single couplings are reasonably strong, if multiple couplings are combined together, they lead to proton decay. Thus there are further maximal bounds on values of the couplings from maximal bounds on proton decay rate.
Proton decay
Without baryon and lepton number being conserved, and taking the R-parity violating couplings to be of order unity, the proton can decay in approximately 10⁻² seconds, or, if minimal flavor violation is assumed, the proton lifetime can be extended to 1 year. Since the proton lifetime is observed to be greater than 10³³ to 10³⁴ years (depending on the exact decay channel), this would highly disfavour the model. R-parity sets all of the renormalizable baryon and lepton number violating couplings to zero, so the proton is stable at the renormalizable level; the lifetime of the proton is then increased to 10³² years and is nearly consistent with current observational data.
Because proton decay involves violating both lepton and baryon number simultaneously, no single renormalizable R-parity violating coupling leads to proton decay. This has motivated the study of R-parity violation where only one set of the R-parity violating couplings are non-zero which is sometimes called the single coupling dominance hypothesis.
Possible origins of R-parity
A very attractive way to motivate R-parity is with a continuous U(1) B−L gauge symmetry which is spontaneously broken at a scale inaccessible to current experiments. A continuous B−L symmetry forbids renormalizable terms which violate B and L. If B−L is only broken by scalar vacuum expectation values (or other order parameters) that carry even integer values of 3(B − L), then there exists an exactly conserved discrete remnant subgroup which has the desired properties. The crucial issue is to determine whether the sneutrino (the supersymmetric partner of the neutrino), which is odd under R-parity, develops a vacuum expectation value. It can be shown, on phenomenological grounds, that this cannot happen in any theory where B−L is broken at a scale much above the electroweak one. This is true in any theory based on a large-scale seesaw mechanism. As a consequence, in such theories R-parity remains exact at all energies.
This phenomenon can arise as an automatic symmetry in SO(10) grand unified theories. This natural occurrence of R-parity is possible because in SO(10) the Standard Model fermions arise from the 16 dimensional spinor representation, while the Higgs arises from a 10 dimensional vector representation. In order to make an SO(10) invariant coupling, one must have an even number of spinor fields (i.e. there is a spinor parity). After GUT symmetry breaking, this spinor parity descends into R-parity so long as no spinor fields were used to break the GUT symmetry. Explicit examples of such SO(10) theories have been constructed.
See also
R-symmetry
References
External links
Particle physics
Supersymmetric quantum field theory | R-parity | Physics | 1,128 |
15,419,237 | https://en.wikipedia.org/wiki/Transformation%20design | In broad terms, transformation design is a human-centered, interdisciplinary process that seeks to create desirable and sustainable changes in behavior and form – of individuals, systems and organizations. It is a multi-stage, iterative process of applying design principles to large and complex systems.
Its practitioners examine problems holistically rather than reductively to understand relationships as well as components to better frame the challenge. They then prototype small-scale systems – composed of objects, services, interactions and experiences – that support people and organizations in achievement of a desired change. Successful prototypes are then scaled.
Because transformation design is about applying design skills in non-traditional territories, it often results in non-traditional design outputs.[3] Projects have resulted in the creation of new roles, new organizations, new systems and new policies. These designers are just as likely to shape a job description as they are a new product.[3]
This emerging field draws from a variety of design disciplines - service design, user-centered design, participatory design, concept design, information design, industrial design, graphic design, systems design, interactive design, experience design - as well as non-design disciplines including cognitive psychology and perceptual psychology, linguistics, cognitive science, architecture, haptics, information architecture, ethnography, storytelling and heuristics.
History
Though academics have written about the economic value of and need for transformations over the years,[7][8] its practice first emerged in 2004 when The Design Council, the UK's national strategic body for design, formed RED: a self-proclaimed "do-tank" challenged to bring design thinking to the transformation of public services.[1]
This move was in response to Prime Minister Tony Blair's desire to have public services "redesigned around the needs of the user, the patients, the passenger, the victim of crime".[3]
The RED team, led by Hilary Cottam, studied these big, complex problems to determine how design thinking and design techniques could help government rethink the systems and structures within public services and possibly redesign them from beginning to end.[3]
Between 2004 and 2006, the RED team, in collaboration with many other people and groups, developed techniques, processes and outputs that were able to "transform" social issues such as preventing illness, managing chronic illnesses, senior citizen care, rural transportation, energy conservation, re-offending prisoners and public education.
In 2015 Braunschweig University of Art / Germany has launched a new MA in Transformation Design. In 2016 The Glasgow School of Art launched another masters program "M.Des in Design Innovation and Transformation Design". In 2019 the University of Applied Sciences Augsburg / Germany launched a masters program in Transformation Design.
Process
Transformation design, like user-centered design, starts from the perspective of the end user. Designers spend a great deal of time not only learning how users currently experience the system and how they want to experience the system, but also co-creating with them the designed solutions.
Because transformation design tackles complex issues involving many stakeholders and components, more expertise beyond the user and the designer is always required. People such as, but not limited to, policy makers, sector analysts, psychologists, economists, private businesses, government departments and agencies, front-line workers and academics are invited to participate in the entire design process - from problem definition to solution development.[6]
With so many points-of-view brought into the process, transformation designers are not always 'designers.' Instead, they often play the role of moderator. Through varying methods of participation and co-creation, these moderating designers create hands-on, collaborative workshops (a.k.a. charrettes) that make the design process accessible to non-designers.
Ideas from workshops are rapidly prototyped and beta-tested in the real world with a group of real end users. Their experience with and opinions of the prototypes are recorded and fed back into the workshops and development of the next prototype.
See also
Human-centered design
Sources
RED's homepage
https://www.designcouncil.org.uk/ Design Council's homepage
White Paper published by RED which discusses transformation design
RED's website page which talks about transformation design
http://www.torinoworlddesigncapital.it/portale/en/content.php?sezioneID=10 Interview with Hilary Cottam at World Design Capital
https://web.archive.org/web/20070818190054/http://www.hilarycottam.com/html/RED_Paper%2001%20Health_Co-creating_services.pdf Whitepaper on co-creation
The Experience Economy, B.J. Pine and J. Gilmore, Harvard Business School Press 1999. Book discussing the economic value and importance of companies offering transformations
The Support Economy, S. Zuboff and J. Maxmin, Viking Press 2002. Book discussing the need for companies and governments to realign themselves with how people live
Transformationsdesign - Wege in eine zukunftsfähige Moderne, H. Welzer and B. Sommer, oekom 2014
Transformation Design - Perspectives on a new Design Attitude, W. Jonas, S. Zerwas and K. von Anshelm, Birkhäuser 2015
Design | Transformation design | Engineering | 1,076 |
54,603,848 | https://en.wikipedia.org/wiki/Brazilian%20Controlled%20Drugs%20and%20Substances%20Act | The Brazilian Controlled Drugs and Substances Act (), officially Portaria nº 344/1998, is Brazil's federal drug control statute, issued by the Ministry of Health through its National Health Surveillance Agency (Anvisa). The act also serves as the implementing legislation for the Single Convention on Narcotic Drugs, the Convention on Psychotropic Substances, and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances in the country.
The list was last updated in May 2024.
Terminology:
Prescription notification - a standardized document intended for notifying Anvisa of the prescription of medications. Written by the doctor and retained by the drugstore
Prescription - a written medication order that provides usage instructions for the patient.
Special control prescription - a prescription that is filled out in two copies: one is retained by the drugstore, and the other stays with the patient for usage guidance. It can also be provided in digitally signed form.
Overview
Class A1
Acetylmethadol
Alphacetylmethadol
Alphameprodine
Alphamethadol
Alphaprodine
Alfentanil
Allylprodine
Anileridine
Bezitramide
Benzethidine
Benzylmorphine
Benzoylmorphine ()
Betaacetylmethadol
Betameprodine
Betamethadol
Betaprodine
Buprenorphine
Butorphanol
Clonitazene
Codoxime
Concentrated opium poppy straw
Dextromoramide
Diampromide
Diethylthiambutene
Diphenoxylate
Difenoxin
Dihydromorphine
Dimepheptanol
Dimenoxadol
Dimethylthiambutene
Dioxaphetyl butyrate
Dipipanone
Drotebanol
Ethylmethylthiambutene
Etonitazene
Etoxeridine
Phenadoxone
Phenampromide
Phenazocine
Phenomorphan
Phenoperidine
Fentanyl
Furethidine
Hydrocodone
Hydromorphinol
Hydromorphone
Hydroxypethidine
Methadone intermediate
Moramide intermediate
Pethidine intermediate A
Norpethidine (Pethidine intermediate B)
Pethidinic acid (Pethidine intermediate C)
Isomethadone
Levophenacylmorphan
Levomethorphan
Levomoramide
Levorphanol
Methadone
Metazocine
Methyldesorphine
Methyldihydromorphine
Metopon
Myrophine
Morpheridine
Morphine
Morinamide
Nicomorphine
Noracymethadol
Norlevorphanol
Normethadone
Normorphine
Norpipanone
Codeine-N-oxide
Morphine-N-oxide
Opium
Oripavine
Oxycodone
Oxymorphone
Pethidine
Piminodine
Piritramide
Proheptazine
Properidine
Racemethorphan
Racemoramide
Racemorphan
Remifentanil
Sufentanil
Tapentadol
Thebacon
Thebaine
Tilidine
Trimeperidine
Viminol
Class A2
Acetyldihydrocodeine
Codeine
Dextropropoxyphene
Dihydrocodeine
Ethylmorphine (Dionine)
Pholcodine
Nalbuphine
Nalorphine
Nicocodine
Nicodicodine
Norcodeine
Propiram
Tramadol
Class A3
Amphetamine
Cathine
Chlorphentermine
Dexamphetamine
Dronabinol - Synthetic form with no other cannabinoid.
Phenmetrazine
Phencyclidine
Phenethylline
Levamphetamine
Lisdexamfetamine
Methylphenidate
Methylsynephrine
Thamphetamine
Cannabis sativa derivatives containing up to 30 mg/mL THC and 30 mg/mL CBD
Class B1
Allobarbital
Alprazolam
Amineptine
Amobarbital
Aprobarbital
Armodafinil
Barbexaclone
Barbital
Bromazepam
Bromazolam
Brotizolam
Butabarbital
Butalbital
Camazepam
Ketamine
Ketazolam
Cyclobarbital
Clobazam
Clonazepam
Clonazolam
Chlorazepam (CAS: , , )
Clorazepate
Chlordiazepoxide
Ethyl chloride
Methylene chloride/dichloromethane
Clotiazepam
Cloxazolam
Delorazepam
Diazepam
Diclazepam
Esketamine
Estazolam
Eszopiclone
Ethchlorvynol
Ethylamphetamine (N-Ethylamphetamine)
Ethinamate
Etizolam
Phenazepam
Phenobarbital
Flualprazolam
Flubromazolam
Fludiazepam
Flunitrazepam
Flunitrazolam
Flurazepam
GBL
GHB
Glutethimide
Halazepam
Haloxazolam
Lefetamine
Ethyl loflazepate
Loprazolam
Lorazepam
Lormetazepam
Medazepam
Meprobamate
Mesocarb
Methylphenobarbital (prominal)
Methyprylon
Midazolam
Modafinil
Nimetazepam
Nitrazepam
Fencamfamin
Nordazepam
Oxazepam
Oxazolam
Pemoline
Pentazocine
Pentobarbital
Perampanel
Pinazepam
Pipradrol
Pyrovalerone
Prazepam
Prolintane
Propylhexedrine
Secbutabarbital
Secobarbital
Temazepam
Tetrazepam
Thiamylal
Thiopental
Triazolam
Trichlorethylene
Trihexyphenidyl
Vinylbital
Zaleplon
Zolpidem
Zopiclone
Cannabis derivatives containing up to 0.2% THC
Class B2
Aminorex
Amfepramone
Fenproporex
Phendimetrazine
Phentermine
Mazindol
Mefenorex
Sibutramine
Class C1
Acepromazine
Valproic acid
Agomelatine
Amantadine
Amisulpride
Amitriptyline
Amoxapine
Aripiprazole
Asenapine
Atomoxetine
Azacyclonol
Beclamide
Benactyzine
Benfluorex
Benzydamine
Benzoctamine
Benzquinamide
Biperiden
Brexpiprazole
Brivaracetam
Bupropion
Buspirone
Butaperazine
Butriptyline
Cannabidiol - Synthetic form with no other cannabinoid.
Captodiame
Carbamazepine
Caroxazone
Celecoxib
Cyclarbamate
Cyclexedrine (CAS: , )
Cyclopentolate
Cisapride
Citalopram
Clomacran
Clomethiazole
Clomipramine
Chloralodol
Chlorpromazine
Chlorprothixene
Clotiapine
Clozapine
Dapoxetine
Desflurane
Desipramine
Desvenlafaxine
Deutetrabenazine
Dexetimide
Dexmedetomidine
Dibenzepine
Dimetacrine
Disopyramide
Disulfiram
Divalproex sodium
Dixyrazine
Donepezil
Doxepin
Droperidol
Duloxetine
Ectylurea (See Acylurea) (CAS: , )
Emylcamate
Enflurane
Entacapone
Escitalopram
Etomidate
Etoricoxib
Ethosuximide
Levophacetoperane
Phenprobamate
Phenaglycodol
Phenelzine
Pheniprazine
Phenytoin
Fluphenazine
Flumazenil
Fluoxetine
Flupentixol
Fluvoxamine
Gabapentin
Galantamine
Haloperidol
Halothane
Chloral hydrate
Etodroxizine (Hydrochlorbenzethylamine)
Hydroxydione
Homofenazine
Imiclopazine
Imipramine
Imipraminoxide
Iproclozide
Isocarboxazid
Isoflurane
Cenestil (Isopropyl-crotonyl-urea) (CAS: , )
Lacosamide
Lamotrigine
Leflunomide
Levetiracetam
Levomepromazine
Levomilnacipran
Lisuride
Lithium
Loperamide
Loxapine
Lumiracoxib
Lurasidone
Mavacamten
Maprotiline
Meclofenoxate
Mephenoxalone
Mefexamide
Memantine
Mepazine
Mesoridazine
Methylnaltrexone
Methylpentynol
Methysergide
Metixene
Methopromazine (CAS: , )
Methoxyflurane
Mianserin
Milnacipran
Miltefosine
Minaprine
Mirtazapine
Misoprostol - Hospital authorization only.
Moclobemide
Molnupiravir
Moperone
Naloxone
Naltrexone
Nefazodone
Nialamide
Isobutyl nitrite
Isopentyl nitrite
Isopropyl nitrite
Nomifensine
Nortriptyline
Noxiptiline
Olanzapine
Opipramol
Oxcarbazepine
Oxybuprocaine
Hydroxyphenamate
Oxypertine
Paliperidone
Parecoxib
Paroxetine
Penfluridol
Perphenazine
Pergolide
Pericyazine
Pimozide
Pipamperone
Pipotiazine
Pramipexole
Pregabalin
Primidone
Prochlorperazine
Promazine
Propanidid
Propiomazine
Propofol
Prothipendyl
Protriptyline
Proparacaine
Quetiapine
Ramelteon
Rasagiline
Reboxetine
Ribavirin
Rimonabant
Risperidone
Rivastigmine
Rofecoxib
Ropinirole
Rotigotine
Rufinamide
Selegiline
Sertraline
Sevoflurane
Sulpiride
Sultopride
Tacrine
Teriflunomide
Tetrabenazine
Tetracaine
Tiagabine
Tianeptine
Tiapride
Thioproperazine
Thioridazine
Thiothixene
Tolcapone
Topiramate
Tranylcypromine
Trazodone
Triclofos
Trifluoperazine
Trifluperidol
Trimipramine
Troglitazone
Valdecoxib
Sodium valproate
Venlafaxine
Veralipride
Vigabatrin
Vilazodone
Vortioxetine
Ziprasidone
Zotepine
Zuclopenthixol
Class C2
Acitretin
Adapalene
Bexarotene
Isotretinoin
Tretinoin
Class C3
Thalidomide
Lenalidomide
Pomalidomide
Class C4
List revoked in September 2016.
Class C5
Androstanolone
Bolasterone
Boldenone
Chlorodehydromethyltestosterone
Clostebol
Dehydrochlormethyltestosterone
Drostanolone
Stanolone
Stanozolol
Ethylestrenol
Fluoxymesterone
Formebolone
Gestrinone
Mesterolone
Methandienone
Methandranone
Methandriol
Methenolone
Methyltestosterone
Mibolerone
Nandrolone
Norethandrolone
Oxandrolone
Oxymesterone
Oxymetholone
Prasterone (dehydroepiandrosterone - DHEA)
Somapacitan
Somatrogon
Somatropin (Human Growth Hormone)
Testosterone
Trenbolone
Class D
Class D1
1-Boc-4-AP (CAS: , )
1-phenyl-2-propanone
3,4-MDP-2-P ethyl glycidate (PMK ethyl glycidate)
3,4-MDP-2-P methyl glycidic acid (PMK glycidic acid)
3,4-MDP-2-P methyl glycidate (PMK glycidate)
3,4-Methylenedioxyphenyl-2-propanone
4-AP (N-Phenyl-4-piperidinamine)
Anthranilic acid
Phenylacetic acid
Lysergic acid
N-Acetylanthranilic acid
Alpha-phenylacetoacetonitrile (APAAN) (CAS: , )
Alpha-phenylacetoacetamide (APAA) (CAS: , )
4-ANPP
Dihydroergometrine
Dihydroergotamine
Ephedrine
Ergometrine
Ergotamine
Etafedrine
Helional
Isosafrole
Methyl alpha-acetylphenylacetate (MAPA) - (CAS: , )
Norfentanyl
Sassafras oil
Long pepper oil
Piperidine
Piperonal
Pseudoephedrine
N-phenethyl-4-piperidinone (NPP)
Safrole
Class D2
Acetone
Hydrochloric acid
Sulfuric acid
Acetic anhydride
Ethyl chloride
Methylene Chloride
Chloroform
Ethyl Ether
Methyl ethyl ketone
Potassium permanganate
Sodium sulphate
Toluene
Trichlorethylene
Class E
Cannabis sativa - except for registered products within the allowed dosages mentioned in Class A3.
Claviceps paspali (Ergot)
Datura suaveolens
Erythroxylum coca
Lophophora williamsii (peyote cactus)
Mitragyna speciosa
Papaver somniferum
Prestonia amazonica
Salvia divinorum
Class F
Class F1
2F-Viminol
2-Methyl-AP-237 (methyl analogue of bucinnazine)
3-Methylfentanyl
3-Methylthiofentanyl
4-Fluoroisobutyrfentanyl
7-Hydroxymitragynine
Acetyl-alpha-methylfentanyl
Acetylfentanyl
Acetorphine
Acryloylfentanyl
AH-7921
Alpha-methylfentanyl
Alpha-methylthiofentanyl
Beta-hydroxy-3-methylfentanyl
Beta-hydroxyfentanyl
Brorphine
Butyrfentanyl
Butonitazene
Carfentanil
Ketobemidone
Cyclopropylfentanyl
Cocaine
Crotonylfentanyl
Desomorphine
Dihydroetorphine
Ecgonine
Etazene (Etodesnitazene)
Etonitazepyne
Etorphine
Furanylfentanyl
Heroin
Isotonitazene
MDPV
Metonitazene
Methoxyacetylfentanyl
Mitragynine
MPPP
MT-45
N-Desethyletonitazene
N-Pyrrolidino Metonitazene (, )
Ocfentanil
Orthofluorofentanyl
Parafluorobutyrfentanyl
Parafluorofentanyl
PEPAP
Protonitazene
Tetrahydrofuranylfentanyl
Thiofentanyl
U-47700
Valerylfentanyl
Class F2
LSD
1B-LSD
1cP-LSD
1P-LSD
2C-B
2C-C
2C-D
2C-E
2C-F
2C-I
2C-T-2
2C-T-7
2-MeO-Diphenidine
3-Fluorophenmetrazine
3-MeO-PCP
3-MMC
4-AcO-DMT
4-AcO-MET
4-Bromomethcathinone
4-Chloro-alpha-PVP
4-Chloromethcathinone (4-CMC)
4-Fluoroamphetamine (4-FA)
4-Fluoromethcathinone
4F-MDMB-BINACA
4-HO-MIPT
4-MEAPP
4-MEC
4-Methylaminorex
4-MTA
4,4'-DMAR
5-APB
5-APDB
5C-MDA-19 (BZO-POXIZID, CAS: , )
5-EAPB
5F-AB-FUPPYCA
5F-ADB
5F-AKB48
5F-AMB-PINACA
5F-MDA-19 (5F-BZO-POXIZID, )
5F-MDMB-PICA
5F-PB-22
5-IAI
5-MAPDB
5-MeO-AMT
5-MeO-DALT
5-MeO-DIPT
5-MeO-DMT
5-MeO-MIPT
25B-NBOH
25B-NBOMe
25C-NBF
25C-NBOH
25C-NBOMe
25D-NBOMe
25E-NBOH
25E-NBOMe
25H-NBOH
25H-NBOMe
25I-NBF
25I-NBOH
25I-NBOMe
25N-NBOMe
25P-NBOMe
25T2-NBOMe
25T4-NBOMe
25T7-NBOMe
30C-NBOMe
AB-CHMINACA
AB-FUBINACA
AB-PINACA
ADB-5Br-INACA
ADB-BUTINACA
ADB-CHMINACA
ADB-FUBIATA
ADB-FUBINACA
ADB-INACA
ALD-52
Alpha-D2Pv
Alpha-EAPP
Alpha-PHP
alpha-PiHP
Alpha-PVP
AKB48
AM-2201
AMT
Benzphetamine
Bk-DMBDB
Brolamphetamine
BZO-4en-POXIZID (4en-pentyl MDA-19)
BZO-CHMOXIZID
BZO-HEPOXIZID, (Z)-N'-(1-heptyl-2-oxoindolin-3-ylidene)benzohydrazide
BZP
Cathinone
CH-PIATA
Clobenzorex
CUMYL-4CN-BINACA
CUMYL-PEGACLONE
DET
Diphenidine
Dihydro-LSD (ChemSpider ID: 67024742) (8β)-N,N-Diethyl-6-methyl-9,10-didehydro-2,3-dihydroergoline-8-carboxamide
Dimethylone
DMA
DMAA
DMBA
DMHP
DMT
DOC
DOET
DOI
EAM-2201
Ergine
Eticyclidine
Ethylphenidate
Ethylone
Etryptamine
Eutylone
FUB-AMB
Isopropylbenzylamine
JWH-018
JWH-071
JWH-072
JWH-073
JWH-081
JWH-098
JWH-122
JWH-210
JWH-250
JWH-251
JWH-252
JWH-253
Levomethamphetamine
MAM-2201
MAM-2201 N-(4-hydroxypentyl)
MAM-2201 N-(5-chloropentyl)
MDMB-4en-PINACA
MDMB-5Br-INACA
MDMB-INACA
mCPP
MDA-19 (BZO-HEXOXIZID)
MDAI
MDE
MDMA
Mecloqualone
Mephedrone
Mescaline
Methallylescaline
Methamphetamine
Methaqualone
Methcathinone
Methylone
Methiopropamine
MMDA
MXE
N-acetyl-3,4-MDMC
N-Ethylcathinone
N-Ethylhexedrone (hexen)
N-Ethylpentylone
Parahexyl
Pentedrone
Pentylone
PMA
PMMA
Psilocybin
Psilocin
RH-34
Rolicyclidine (PCPy)
Salvinorin A
DOM (STP)
Tenamfetamine
Tenocyclidine
THC - except for registered products under the allowed dosage mentioned in Class A3.
TH-PVP
TMA
TFMPP
UR-144
XLR-11
Zipeprol
Class F3
Phenylpropanolamine (PPA) or norephedrine
Class F4
Dexfenfluramine
Dinitrophenol
Strychnine
Etretinate
Fenfluramine
Lindane
Terfenadine
References
Brazilian legislation
Drug control law
Drug policy of Brazil | Brazilian Controlled Drugs and Substances Act | Chemistry | 4,362 |
7,579,113 | https://en.wikipedia.org/wiki/Wax%20carving | Wax carving is the shaping of wax using tools usually associated with machining: rotary tools, saws, files and burins or gravers. Knives can be and often are used, but the hardness of the material generally makes them less than ideal.
To carve wax, the proper size and shape of block or tube is chosen, in the preferred hardness, and cut to a rough size, as needed. Then the design is generally drawn or laid out on that, and saws, files or machine tools are used to work the wax into a finished product. The wax is easily taken to a fine finish in the end using a bit of nylon stocking or steel wool. After the wax product is finished, it may be molded or used in the lost wax casting process to create a final cast product.
Casting waxes
There are a wide variety of wax types used in the lost wax casting process. Generally they fall into three main types, soft, hard and injection waxes. Injection waxes are made and intended to be used for injecting wax under pressure into rubber or other types of molds. They can be carved and worked otherwise, but they are not specifically designed for that use. Their properties more often target good injection properties: flow, low shrinkage, pot life etc.
Soft waxes are sometimes called sculpting waxes, and generally have a consistency resembling clay. Generally the techniques used in working soft waxes are similar to those used with clay and involve the use of wooden or metal spatulas, direct molding with the fingers and the like.
Carving wax
Carving wax is a smooth, non-brittle wax designed for carving and/or machining. Although the formulas for most commercial waxes are proprietary, most suppliers will state that hard waxes are some blend of waxes and plastics. This family of waxes has a hardness and consistency of plastic or softer wood. They can be cut or carved with knives, files and rotary or machine tools. To illustrate the usefulness of this type of wax, if one were to get a candle, mount it on a lathe and feed a tool into it, the wax would slough off like butter, stick to the tool and make a mess. Hard wax, on the other hand, will machine more like soft aluminum, giving fine edges and a fine finish if worked properly.
Waxes come in a wide variety of shapes: blocks, sheets, rods and tubes, and in recent times there are even extruded shapes available. The rods are useful for lathe turning, among other things, and the tubes are useful for making rings in jewelry work. The tubes are available in various sizes, and also with a flat top, which is useful for signet rings.
Waxes
Sculpture materials
Casting (manufacturing) | Wax carving | Physics | 573 |
40,777,156 | https://en.wikipedia.org/wiki/Commutative%20ring%20spectrum | In algebraic topology, a commutative ring spectrum, roughly equivalent to an E∞-ring spectrum, is a commutative monoid in a good category of spectra.
The category of commutative ring spectra over the field of rational numbers is Quillen equivalent to the category of differential graded algebras over the rationals.
Example: The Witten genus may be realized as a morphism of commutative ring spectra MString →tmf.
See also: simplicial commutative ring, highly structured ring spectrum and derived scheme.
Terminology
Almost all reasonable categories of commutative ring spectra can be shown to be Quillen equivalent to each other. Thus, from the point of view of stable homotopy theory, the term "commutative ring spectrum" may be used as a synonym for an E∞-ring spectrum.
Notes
References
Algebraic topology | Commutative ring spectrum | Mathematics | 172 |
39,924,732 | https://en.wikipedia.org/wiki/Sphingosine-1-phosphate%20receptor | The sphingosine-1-phosphate receptors are a class of G protein-coupled receptors that are targets of the lipid signalling molecule Sphingosine-1-phosphate (S1P). They are divided into five subtypes: S1PR1, S1PR2, S1PR3, S1PR4 and S1PR5.
Discovery
In 1990, S1PR1 was the first member of the S1P receptor family to be cloned from endothelial cells. Later, S1PR2 and S1PR3 were cloned from rat brain and a human genomic library, respectively. Finally, S1PR4 and S1PR5 were cloned from in vitro differentiated human dendritic cells and a rat cDNA library, respectively.
Function
The sphingosine-1-phosphate receptors regulate fundamental biological processes such as cell proliferation, angiogenesis, migration, cytoskeleton organization, endothelial cell chemotaxis, immune cell trafficking and mitogenesis. Sphingosine-1-phosphate receptors are also involved in immune-modulation and directly involved in suppression of innate immune responses from T cells.
Subtypes
Sphingosine-1-phosphate (S1P) receptors are divided into five subtypes: S1PR1, S1PR2, S1PR3, S1PR4 and S1PR5.
They are expressed in a wide variety of tissues, with each subtype exhibiting a different cell specificity, although they are found at their highest density on leukocytes. S1PR1, 2 and 3 receptors are expressed ubiquitously. The expression of S1PR4 and S1PR5 is less widespread. S1PR4 is confined to lymphoid and hematopoietic tissues, whereas S1PR5 is primarily located in the white matter of the central nervous system (CNS) and the spleen.
G protein interactions and selective ligands
The sphingosine-1-phosphate (S1P) is the endogenous agonist for the five subtypes.
References
G protein-coupled receptors | Sphingosine-1-phosphate receptor | Chemistry | 437 |
37,255,377 | https://en.wikipedia.org/wiki/Chi%20Leonis | Chi Leonis, Latinized from χ Leonis, is a double star in the constellation Leo. It is visible to the naked eye with an apparent visual magnitude of 4.63. The distance to this star, as determined using parallax measurements, is around 95 light years. It has an annual proper motion of 346 mas.
This is most likely a binary star system. The primary component is an evolved, F-type giant star with a stellar classification of F2III-IVv. It has an estimated 162% of the Sun's mass and nearly twice the Sun's radius. The companion is a magnitude 11.0 star at an angular separation of 4.1″ along a position angle of 264°, as of 1990.
References
F-type giants
Leo (constellation)
Leonis, Chi
Leonis, 63
096097
054182
4310
Durchmusterung objects
Double stars | Chi Leonis | Astronomy | 189 |
6,469,973 | https://en.wikipedia.org/wiki/De%20Gua%27s%20theorem | In mathematics, De Gua's theorem is a three-dimensional analog of the Pythagorean theorem named after Jean Paul de Gua de Malves. It states that if a tetrahedron has a right-angle corner (like the corner of a cube), then the square of the area of the face opposite the right-angle corner is the sum of the squares of the areas of the other three faces:
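In symbols, with the right-angle corner at O and the opposite face being the triangle ABC (labels introduced here for concreteness), the statement reads:

area(ABC)^2 = area(OAB)^2 + area(OAC)^2 + area(OBC)^2.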
De Gua's theorem can be applied for proving a special case of Heron's formula.
Generalizations
The Pythagorean theorem and de Gua's theorem are special cases (n = 2 and n = 3, respectively) of a general theorem about n-simplices with a right-angle corner, proved by P. S. Donchian and H. S. M. Coxeter in 1935. This, in turn, is a special case of a yet more general theorem by Donald R. Conant and William A. Beyer (1974), which can be stated as follows.
Let U be a measurable subset of a k-dimensional affine subspace of R^n (so k ≤ n). For any subset I of {1, ..., n} with exactly k elements, let U_I be the orthogonal projection of U onto the linear span of the standard basis vectors e_i with i in I. Then

vol_k(U)^2 = Σ_I vol_k(U_I)^2,

where vol_k(U) is the k-dimensional volume of U and the sum is over all subsets I of {1, ..., n} with exactly k elements.
De Gua's theorem and its generalisation (above) to n-simplices with right-angle corners correspond to the special case where k = n − 1 and U is an (n − 1)-simplex in R^n with vertices on the co-ordinate axes. For example, suppose n = 3, k = 2, and U is the triangle ABC in R^3 with vertices A, B and C lying on the x-, y- and z-axes, respectively. The subsets of {1, 2, 3} with exactly 2 elements are {2, 3}, {1, 3} and {1, 2}. By definition, U_{2,3} is the orthogonal projection of U onto the yz-plane, so U_{2,3} is the triangle OBC, where O is the origin of R^3. Similarly, U_{1,3} is the triangle OAC and U_{1,2} is the triangle OAB, so the Conant–Beyer theorem says

area(ABC)^2 = area(OBC)^2 + area(OAC)^2 + area(OAB)^2,
which is de Gua's theorem.
The generalisation of de Gua's theorem to n-simplices with right-angle corners can also be obtained as a special case from the Cayley–Menger determinant formula.
De Gua's theorem can also be generalized to arbitrary tetrahedra and to pyramids, similarly to how the law of cosines generalises Pythagoras' theorem.
History
Jean Paul de Gua de Malves (1713–1785) published the theorem in 1783, but around the same time a slightly more general version was published by another French mathematician, Charles de Tinseau d'Amondans (1746–1818), as well. However the theorem had also been known much earlier to Johann Faulhaber (1580–1635) and René Descartes (1596–1650).
See also
Vector area and projected area
Bivector
Notes
References
Sergio A. Alvarez: Note on an n-dimensional Pythagorean theorem, Carnegie Mellon University.
Theorems in geometry
Euclidean geometry | De Gua's theorem | Mathematics | 642 |
3,811,707 | https://en.wikipedia.org/wiki/Influenza%20Genome%20Sequencing%20Project | The Influenza Genome Sequencing Project (IGSP), initiated in early 2004, seeks to investigate influenza evolution by providing a public data set of complete influenza genome sequences from collections of isolates representing diverse species distributions.
The project is funded by the National Institute of Allergy and Infectious Diseases (NIAID), a division of the National Institutes of Health (NIH), and has been operating out of the NIAID Microbial Sequencing Center at The Institute for Genomic Research (TIGR, which in 2006 became The Venter Institute).
Sequence information generated by the project has been continually placed into the public domain through GenBank.
Origins
In late 2003, David Lipman, Lone Simonsen, Steven Salzberg, and a consortium of other scientists wrote a proposal to begin sequencing large numbers of influenza viruses at The Institute for Genomic Research (TIGR). Prior to this project, only a handful of flu genomes were publicly available. Their proposal was approved by the National Institutes of Health (NIH), and would later become the IGSP. New technology development led by Elodie Ghedin began at TIGR later that year, and the first publication describing more than 100 influenza genomes appeared in 2005 in the journal Nature.
Research goals
The project makes all sequence data publicly available through GenBank, an international, NIH-funded, searchable online database.
This research helps to provide international researchers with the information needed to develop new vaccines, therapies and diagnostics, as well as improve understanding of the overall molecular evolution of Influenza and other genetic factors that determine their virulence. Such knowledge could not only help mitigate the impact of annual influenza epidemics, but could also improve scientific knowledge of the emergence of pandemic influenza viruses.
Results
The project completed its first genomes in March 2005 and has rapidly accelerated since. By mid-2008, over 3000 isolates had been completely sequenced from influenza viruses that are endemic in human ("human flu"), avian ("bird flu") and swine ("swine flu") populations, including many strains of H3N2 (human), H1N1 (human), and H5N1 (avian).
Affiliations
The project is funded by the National Institute of Allergy and Infectious Diseases (NIAID) which is a component of the NIH, which is an agency of the United States Department of Health and Human Services.
The IGSP has expanded to include a growing list of collaborators, who have contributed both expertise and valuable collections of influenza isolates. Key early contributors included Peter Palese of the Mount Sinai School of Medicine in New York, Jill Taylor of the Wadsworth Center at the New York State Department of Health, Lance Jennings of Canterbury Health Laboratories (New Zealand), Jeff Taubenberger of the Armed Forces Institute of Pathology (who later moved to NIH), Richard Slemons of Ohio State University and Rob Webster of St. Jude's Children's Hospital in Memphis, Tennessee.
In 2006 the project was joined by Ilaria Capua of the Istituto Zooprofilattico Sperimentale delle Venezie (in Italy), who contributed a valuable collection of avian flu isolates (including multiple H5N1 strains). Some of these avian isolates were described in a publication in Emerging Infectious Diseases in 2007.
Nancy Cox from the Centers for Disease Control and Prevention (CDC) and Robert Couch from Baylor College of Medicine also joined the project in 2006, contributing over 150 influenza B isolates.
The project began prospective studies of the 2007 influenza season with collaborators Florence Bourgeois and Kenneth Mandl of Children's Hospital Boston and the Harvard School of Public Health and Laurel Edelman of Surveillance Data Inc.
References
External links
Influenza Sequencing Project home page at JCVI
Influenza Genome Sequencing Project home page at NIAID
Influenza virus resource at NCBI (NIH)
Influenza Research Database Database of influenza sequences and related information.
National Institutes of Health
Genome projects
Influenza | Influenza Genome Sequencing Project | Biology | 813 |
276,390 | https://en.wikipedia.org/wiki/Chloral%20hydrate | Chloral hydrate is a geminal diol with the formula Cl3CCH(OH)2 (C2H3Cl3O2). It was first used as a sedative and hypnotic in Germany in the 1870s. Over time it was replaced by safer and more effective alternatives but it remained in use in the United States until at least the 1970s. It sometimes finds use as a laboratory chemical reagent and precursor. It is derived from chloral (trichloroacetaldehyde) by the addition of one equivalent of water.
Uses
Hypnotic
Chloral hydrate has not been approved by the FDA in the United States nor the EMA in the European Union for any medical indication and is on the FDA list of unapproved drugs that are still prescribed by clinicians. Usage of the drug as a sedative or hypnotic may carry some risk given the lack of clinical trials. However, chloral hydrate products, licensed for short-term management of severe insomnia, are available in the United Kingdom. Chloral hydrate was voluntarily removed from the market by all manufacturers in the United States in 2012. Prior to that, chloral hydrate may have been sold as a "legacy" or "grandfathered" drug; that is, a drug that existed prior to the time certain FDA regulations took effect and therefore, some pharmaceutical companies have argued, has never required FDA approval. New drugs did not have to be approved for safety until Congress passed the Federal Food, Drug, and Cosmetic Act (the "FD&C Act") in 1938. Further, a new drug did not have to be proven effective until 1962, when Congress amended the Act. Manufacturers contend that such "legacy drugs", by virtue of the fact that they have been prescribed for decades, have gained a history of safety and efficacy.
Chloral hydrate was used for the short-term treatment of insomnia and as a sedative before minor medical or dental treatment. It was largely displaced in the mid-20th century by barbiturates and subsequently by benzodiazepines. It was also formerly used in veterinary medicine as a general anesthetic but is not considered acceptable for anesthesia or euthanasia of small animals due to adverse effects. It is also still used as a sedative prior to EEG procedures, as it is one of the few available sedatives that does not suppress epileptiform discharges.
In therapeutic doses for insomnia, chloral hydrate is effective within 20 to 60 minutes. In humans it is metabolized within 7 hours into trichloroethanol and trichloroethanol glucuronide by erythrocytes and plasma esterases and into trichloroacetic acid in 4 to 5 days. It has a very narrow therapeutic window making this drug difficult to use. Higher doses can depress respiration and blood pressure. Tolerance to the drug develops after a few days of use.
In organic synthesis
Chloral hydrate is a starting point for the synthesis of other organic compounds. It is the starting material for the production of chloral, which is produced by the distillation of a mixture of chloral hydrate and sulfuric acid, which serves as the desiccant.
Notably, it is used to synthesize isatin. In this synthesis, chloral hydrate reacts with aniline and hydroxylamine to give a condensation product which cyclizes in sulfuric acid to give the target compound.
Moreover, chloral hydrate is used as a reagent for the deprotection of acetals, dithioacetals and tetrahydropyranyl ethers in organic solvents.
The compound can be crystallized in a variety of polymorphs.
Botany and mycology
Hoyer's mounting medium
Chloral hydrate is also an ingredient used for Hoyer's solution, a mounting medium for microscopic observation of diverse plant types such as bryophytes, ferns, seeds, and small arthropods (especially mites). Other ingredients may include gum arabic and glycerol. An advantage of this medium includes a high refractive index and clearing (macerating) properties of small specimens (especially advantageous if specimens require observation with differential interference contrast microscopy).
Because of its status as a regulated substance, chloral hydrate can be difficult to obtain. This has led to chloral hydrate being replaced by alternative reagents in microscopy procedures.
Melzer's reagent
Chloral hydrate is an ingredient used to make Melzer's reagent, an aqueous solution that is used to identify certain species of fungi. The other ingredients are potassium iodide and iodine. Whether tissue or spores react to this reagent is vital for the correct identification of some mushrooms.
Safety
Chloral hydrate was routinely administered in gram quantities. Prolonged exposure to its vapors is unhealthy, with an LD50 for 4-hour exposure of 440 mg/m3. Long-term use of chloral hydrate is associated with a rapid development of tolerance to its effects and possible addiction as well as adverse effects including rashes, gastric discomfort and severe kidney, heart, and liver failure.
Acute overdosage is often characterized by nausea, vomiting, confusion, convulsions, slow and irregular breathing, cardiac arrhythmia, and coma. The plasma, serum or blood concentrations of chloral hydrate and/or trichloroethanol, its major active metabolite, may be measured to confirm a diagnosis of poisoning in hospitalized patients or to aid in the forensic investigation of fatalities. Accidental overdosage of young children undergoing simple dental or surgical procedures has occurred. Hemodialysis has been used successfully to accelerate clearance of the drug in poisoning victims. It is listed as having a "conditional risk" of causing torsades de pointes.
Production
Chloral hydrate is produced from chlorine and ethanol in acidic solution.
In basic conditions the haloform reaction takes place and chloral hydrate is decomposed by hydrolysis to form chloroform.
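The overall balanced equations corresponding to these two statements are as follows (standard stoichiometry, written as overall equations rather than mechanisms):

4 Cl2 + C2H5OH + H2O → Cl3CCH(OH)2 + 5 HCl

Cl3CCH(OH)2 + NaOH → CHCl3 + HCOONa + H2O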
Pharmacology
Pharmacodynamics
Chloral hydrate is metabolized in vivo to trichloroethanol, which is responsible for secondary physiological and psychological effects. The metabolite of chloral hydrate exerts its pharmacological properties via enhancing the GABA receptor complex and therefore is similar in action to benzodiazepines, nonbenzodiazepines and barbiturates. It can be moderately addictive, as chronic use is known to cause dependency and withdrawal symptoms. The chemical can potentiate various anticoagulants and is weakly mutagenic in vitro and in vivo.
Chloral hydrate inhibits liver alcohol dehydrogenase in vitro. This could explain the synergistic effect seen with alcohol.
Chloral hydrate is structurally and somewhat pharmacodynamically similar to ethchlorvynol, a pharmaceutical developed during the 1950s that was marketed as both a sedative and a hypnotic under the trade name Placidyl. In 1999, Abbott, the sole manufacturer of the drug in the United States at the time, decided to discontinue the product. After Abbott ceased production, the drug remained available for about a year. Despite the fact that it could have been manufactured generically, no other company in the United States chose to do so.
Pharmacokinetics
Chloral hydrate is metabolized to both 2,2,2-Trichloroethanol (TCE) and 2,2,2-Trichloroacetic acid (TCA) by alcohol dehydrogenase. TCE is further converted to its glucuronide. 2,2-Dichloroacetic acid (DCA) has been detected as a metabolite in children, but how it is formed is unknown. TCE glucuronide, TCA, and a very small amount of free TCE are excreted in urine in male human adults. This study did not detect significant amounts of DCA; the authors noted that DCA can form during inappropriate sample preparation. Both TCA and DCA cause liver tumors in mice.
TCA is cleared by the kidneys at a rate slower than the expected filtration rate, suggesting that efficient reabsorption of filtered-out TCA happens.
Legal status
In the United States, chloral hydrate is a schedule IV controlled substance and requires a physician's prescription. Its properties have sometimes led to its use as a date rape drug. The phrase, "slipping a mickey," originally referred specifically to adding chloral hydrate to a person's (alcoholic) drink without the person's knowledge.
History
Chloral hydrate was first synthesized by the chemist Justus von Liebig in 1832 at the University of Giessen. Liebig discovered the molecule when a chlorination (halogenation) reaction was performed on ethanol. Its sedative properties were observed by Rudolf Buchheim in 1861, but described in detail and published only in 1869 by Oscar Liebreich; subsequently, because of its easy synthesis, its use became widespread. Through experimentation, physiologist Claude Bernard clarified that the chloral hydrate was hypnotic as opposed to an analgesic. It was the first of a long line of sedatives, most notably the barbiturates, manufactured and marketed by the German pharmaceutical industry. Historically, chloral hydrate was utilized primarily as a psychiatric medication. In 1869, German physician and pharmacologist Oscar Liebreich began to promote its use to calm anxiety, especially when it caused insomnia. Chloral hydrate had certain advantages over morphine for this application, as it worked quickly without injection and had a consistent strength.
The compound achieved wide use in both asylums and the homes of those socially refined enough to avoid asylums. Upper- and middle-class women, well-represented in the latter category, were particularly susceptible to chloral hydrate addiction. After the 1904 invention of barbital, the first of the barbiturate family, chloral hydrate began to disappear from use among those with means. It remained common in asylums and hospitals until the Second World War as it was quite cheap. Chloral hydrate had some other important advantages that kept it in use for five decades despite the existence of more advanced barbiturates. It was the safest available sedative until the middle of the twentieth century, and thus was particularly favored for children. It also left patients much more refreshed after a deep sleep than more recently invented sedatives. Its frequency of use made it an early and regular feature in The Merck Manual.
Chloral hydrate was also a significant object of study in various early pharmacological experiments. In 1875, Claude Bernard tried to determine if chloral hydrate exerted its action through a metabolic conversion to chloroform. This was not only the first attempt to determine whether different drugs were converted to the same metabolite in the body but also the first to measure the concentration of a particular pharmaceutical in the blood. The results were inconclusive. In 1899 and 1901 Hans Horst Meyer and Ernest Overton respectively made the major discovery that the general anaesthetic action of a drug was strongly correlated to its lipid solubility. However, chloral hydrate was quite polar but nonetheless a potent hypnotic. Overton was unable to explain this mystery. Thus, chloral hydrate remained one of the major and persistent exceptions to this breakthrough discovery in pharmacology. This anomaly was eventually resolved in 1948, when Claude Bernard's experiment was repeated. While chloral hydrate was converted to a different metabolite than chloroform, it was found that it was converted into the more lipophilic molecule 2,2,2-trichloroethanol. This metabolite fit much better with the Meyer–Overton correlation than chloral had. Prior to this, it had not been demonstrated that general anesthetics could undergo chemical changes to exert their action in the body.
Chloral hydrate was the first hypnotic to be used intravenously as a general anesthetic. In 1871, Pierre-Cyprien Oré began experiments on animals, followed by humans. While a state of general anesthesia could be achieved, the technique never caught on because its administration was more complex and less safe than the oral administration of chloral hydrate, and less safe for intravenous use than later general anesthetics were found to be.
Society and culture
Chloral hydrate was used as one of the earliest synthetic drugs to treat insomnia. In 1912, Bayer introduced the drug phenobarbital under the brand name Luminal. In the 1930s, pentobarbital and secobarbital (better known by their original brand names Nembutal and Seconal, respectively) were synthesized. Chloral hydrate was still prescribed, although its predominance as a sedative and a hypnotic was largely eclipsed by barbiturates.
Chloral hydrate is soluble in both water and ethanol, readily forming concentrated solutions. A solution of chloral hydrate in ethanol called "knockout drops" was used to prepare a Mickey Finn.
In Bram Stoker's 1897 epistolary novel Dracula, one of the characters, Doctor John Seward, records his use of the drug and its molecular formula in his phonographic diary:
I cannot but think of Lucy, and how different things might have been. If I don't sleep at once, chloral, the modern Morpheus — C2HCl3O. H2O! I should be careful not to let it grow into a habit. No I shall take none to-night! I have thought of Lucy, and I shall not dishonor her by mixing the two.
In the conclusion of Edith Wharton's 1905 novel The House of Mirth, Lily Bart, the novel's heroine, becomes addicted to chloral hydrate and overdoses on the substance:
She put out her hand and measured the soothing drops into a glass; but as she did so, she knew they would be powerless against the supernatural lucidity of her brain. She had long since raised the dose to its highest limit, but to-night she felt she must increase it. She knew she took a slight risk in doing so; she remembered the chemist's warning. If sleep came at all, it might be a sleep without waking.
In the James Bond films From Russia With Love and The Living Daylights, chloral hydrate is used as a knockout drug.
Notable users
King Chulalongkorn of Thailand (1853–1910) used the drug for a period after 1893 to relieve what may have been a mix of depression and unspecified illnesses. He was reported by his doctor to have been taking one bottle per day during July 1894 although this was reduced after this time.
Montgomery Clift (1920–1966), American actor.
André Gide (1869–1951) was given chloral hydrate as a boy for sleep problems by a physician named Lizart. Gide states in his autobiography If It Die... that "all my later weaknesses of will or memory I attribute to him."
William James (1842–1910), psychologist and philosopher, used the drug for insomnia and sedation due to chronic neurosis.
The Jonestown mass murder-suicides in 1978 involved the communal drinking of Flavor Aid poisoned with diazepam, chloral hydrate, cyanide, and promethazine.
Mary Todd Lincoln (1818–1882), wife of American president Abraham Lincoln, became addicted in the years after her husband's death and was committed to an asylum.
Marilyn Monroe (1926–1962) died from an overdose of chloral hydrate and pentobarbital (Nembutal).
Friedrich Nietzsche (1844–1900) regularly used chloral hydrate in the years leading up to his nervous breakdown, according to Lou Salomé and other associates. Whether the drug contributed to his insanity is a point of controversy.
Dante Gabriel Rossetti (1828–1882) became addicted to chloral, with whisky chasers, after the death of his wife Elizabeth Siddal from a laudanum overdose in 1862. He had a mental breakdown in 1872. He lived out the last ten years of his life addicted to chloral and alcohol, in part to mask the pain of botched surgery to an enlarged testicle in 1877.
Oliver Sacks (1933–2015) abused chloral hydrate in 1965 as a depressed insomniac. He found himself taking fifteen times the usual dose of chloral hydrate every night before he eventually ran out, causing violent withdrawal symptoms.
Anna Nicole Smith (1967–2007) died of "combined drug intoxication" with chloral hydrate as the "major component".
John Tyndall (1820–1893), an Irish physicist, died of an accidental overdose of chloral administered by his wife.
Evelyn Waugh (1903–1966), insomniac for much of his adult life, for which "in later life ... he became so deleteriously dependent on chloral". Waugh's novel, The Ordeal of Gilbert Pinfold, is largely a fictionalised account of an episode Waugh himself experienced as a result of excessive use of chloral in combination with bromide and alcohol. Waugh's friend and biographer Christopher Sykes observed that Waugh's description of D. G. Rossetti's demise under the effects of excessive use of chloral in his 1928 biography of the artist "is a fairly exact description of how [Waugh's own] life ended in 1966".
Hank Williams (1923–1953) died from a combination of chloral hydrate, morphine and whiskey.
Renée Vivien (1877-1909), a prominent lesbian poet during the Belle Époque, abused chloral hydrate for much of her life.
Environmental
It is, together with chloroform, a minor side-product of the chlorination of water when organic residues such as humic acids are present. It has been detected in drinking water at concentrations of up to 100 micrograms per litre (μg/L) but concentrations are normally found to be below 10 μg/L. Levels are generally found to be higher in surface water than in ground water.
See also
Chloral cyanohydrin
Chlorobutanol
Chloroform
Disulfiram-like drug
Trichloroethanol, metabolite
Trichloroethylene, industrial chemical that metabolizes to chloral hydrate
References
Notes
Sources
External links
Acetaldehyde dehydrogenase inhibitors
Aldehydes
GABAA receptor positive allosteric modulators
Glycine receptor agonists
Hydrates
Hypnotics
Sedatives
Organochlorides
Prodrugs
Trichloromethyl compounds
Geminal diols
Jonestown
IARC Group 2A carcinogens | Chloral hydrate | Chemistry,Biology | 4,002 |
804,939 | https://en.wikipedia.org/wiki/Kitbashing | Kitbashing or model bashing is the practice of making a new scale model by taking pieces out of kits. These pieces may be added to a custom project or to another kit. For professional modelmakers, kitbashing is used to create concept models for detailing movie special effects. Commercial model kits are a ready source of "detailing", providing any number of identical, mass-produced components that can be used to add fine detail to an existing model. Professionals often kitbash to build prototype parts which are then recreated with lightweight materials.
Purposes, history, and methods
For the hobbyist, kitbashing saves time that would be spent scratch building an entire model. Hobbyists may kitbash to create a model of a subject (real or imaginary) for which there is not a commercial kit.
Although it has a long history, kitbashing came to the attention of a wider public via the fine modelwork seen in TV series such as Thunderbirds, Star Trek and the films 2001: A Space Odyssey and Star Wars Episode IV: A New Hope. Many of the spaceship models created for these programs incorporated details from tank, speedboat and car kits. Another example is the Batmobile from the 2005 film Batman Begins, as seen in the special features disc of the film's DVD.
It is not uncommon for parts to be cut and filed into shapes leaving gaps that are later filled with putty to hide defects. Textural details known as greebles may be added to enhance a model.
Sometimes, kitbashing has been used to create works of art. The Toronto sculptor Kim Adams has used HO gauge freight cars, containers, detail parts, figures and scenery to create artistic landscapes. American artist Kris Kuksi also uses kitbashing to detail his maximalist sculptures.
Genres
A popular venue for kitbashing is diecast emergency vehicles such as fire apparatuses. Kitbashers often use models from manufacturers such as Code 3 and Corgi. The kitbash in such cases can be as simple as painting or redecaling a model, or as complex as tearing the model down and adding scratch-built components, followed by custom decals.
An important aspect of kitbashing in model railroading is the reconfiguration of structure kits, most often to fit the geometry of a specific space. Walls can be shortened or lengthened, and corner angles can be changed, to fit a given location on the layout. Another application is to use the wall parts to create a "flat", or shallow relief model to be displayed against the backdrop. In this configuration the parts for the rear wall of a structure, often an industrial building, can instead be abutted to the front to double the length of the building. Plain sheet styrene or other material is typically added to the rear to strengthen the resulting model.
In model rocketry, kitbashing refers simply to using the pieces from one kit to build a different model. This is typically used to create unusual or especially complex models.
With radio-controlled aircraft, such kitbashing can be done to kitted aircraft as they are being built, or, more often, to so-called "almost-ready-to-fly" (ARF) aircraft to change their appearance or flight characteristics to suit the owner. This can even extend to "plans-bashing", where a plans-built model has its construction plans partially re-drawn by the builder, either by hand or with computer-aided design software before any part of the model's airframe has been fabricated from raw materials.
See also
Mashup (disambiguation)
Greeble
References
Scale modeling
Special effects
Toy collecting | Kitbashing | Physics | 748 |
992,412 | https://en.wikipedia.org/wiki/Wireless%20distribution%20system | A wireless distribution system (WDS) is a system enabling the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the traditional requirement for a wired backbone to link them. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client frames across links between access points.
An access point can be either a main, relay, or remote base station.
A main base station is typically connected to the (wired) Ethernet.
A relay base station relays data between remote base stations, wireless clients, or other relay stations; to either a main, or another relay base station.
A remote base station accepts connections from wireless clients and passes them on to relay stations or to main stations. Connections between "clients" are made using MAC addresses.
All base stations in a wireless distribution system must be configured to use the same radio channel, method of encryption (none, WEP, WPA or WPA2) and the same encryption keys. They may be configured to different service set identifiers (SSIDs). WDS also requires every base station to be configured to forward to others in the system.
WDS may also be considered a repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). However, with the repeater method, throughput is halved for all clients connected wirelessly. This is because Wi-Fi is an inherently half duplex medium and therefore any Wi-Fi device functioning as a repeater must use the Store and forward method of communication.
WDS may be incompatible between different products (even occasionally from the same vendor) since the IEEE 802.11-1999 standard does not define how to construct any such implementations or how stations interact to arrange for exchanging frames of this format. The IEEE 802.11-1999 standard merely defines the 4-address frame format that makes it possible.
Technical
WDS may provide two modes of access point-to-access point (AP-to-AP) connectivity:
Wireless bridging, in which WDS APs (AP-to-AP on local routers AP) communicate only with each other and don't allow wireless stations (STA, also known as wireless clients) to access them
Wireless repeating, in which APs (WDS on local routers) communicate with each other and with wireless STAs
Two disadvantages to using WDS are:
The maximum wireless effective throughput may be halved after the first retransmission (hop) being made. For example, in the case of two APs connected via WDS, and communication is made between a computer which is plugged into the Ethernet port of AP A and a laptop which is connected wirelessly to AP B. The throughput is halved, because AP B has to retransmit the information during the communication of the two sides. However, in the case of communications between a computer which is plugged into the Ethernet port of AP A and a computer which is plugged into the Ethernet port of AP B, the throughput is not halved since there is no need to retransmit the information. Dual band/radio APs may avoid this problem, by connecting to clients on one band/radio, and making a WDS network link with the other.
Dynamically assigned and rotated encryption keys are usually not supported in a WDS connection. This means that dynamic Wi-Fi Protected Access (WPA) and other dynamic key assignment technology in most cases cannot be used, though WPA using pre-shared keys is possible. This is due to the lack of standardization in this field, which may be resolved with the upcoming 802.11s standard. As a result, only static WEP or WPA keys may be used in a WDS connection, including any STAs that associate to a WDS repeating AP.
OpenWRT, a universal third party router firmware, supports WDS with WPA-PSK, WPA2-PSK, WPA-PSK/WPA2-PSK Mixed-Mode encryption modes. Recent Apple base stations allow WDS with WPA, though in some cases firmware updates are required. Firmware for the Renasis SAP36g super access point and most third party firmware for the Linksys WRT54G(S)/GL support AES encryption using WPA2-PSK mixed-mode security, and TKIP encryption using WPA-PSK, while operating in WDS mode. However, this mode may not be compatible with other units running stock or alternate firmware.
Example
Suppose one has a Wi-Fi-capable game console. This device needs to send one packet to a WAN host, and receive one packet in reply.
Network 1: A wireless base station acting as a simple (non-WDS) wireless router. The packet leaves the game console, goes over-the-air to the router, which then transmits it across the WAN. One packet comes back, through the router, which transmits it wirelessly to the game console. Total packets sent over-the-air: 2.
Network 2: Two wireless base stations employing WDS: WAN connects to the master base station. The master base station connects over-the-air to the remote base station. The Remote base station connects over-the-air to the game console. The game console sends one packet over-the-air to the remote base station, which forwards it over-the-air to the master base station, which forwards it to the WAN. The reply packet comes from the WAN to the master base station, over-the-air to the remote, and then over-the-air again to the game console. Total packets sent over-the-air: 4.
Network 3: Two wireless base stations employing WDS, but this time the game console connects by Ethernet cable to the remote base station. One packet is sent from the game console over the Ethernet cable to the remote, from there by air to the master, and on to the WAN. Reply comes from WAN to master, over-the-air to remote, over cable to game console. Total packets sent over-the-air: 2.
Notice that network 1 (non-WDS) and network 3 (WDS) send the same number of packets over-the-air. The only slowdown is the potential halving due to the half-duplex nature of Wi-Fi.
Network 2 gets an additional halving because the remote base station uses double the air time because it is re-transmitting over-the-air packets that it has just received over-the-air. This is the halving that is usually attributed to WDS, but that halving only happens when the route through a base station uses over-the-air links on both sides of it. That does not always happen in a WDS, and can happen in non-WDS.
Important Note: This "double hop" (one wireless hop from the main station to the remote station, and a second hop from the remote station to the wireless client [game console]) is not necessarily twice as slow. End to end latency introduced here is in the "store and forward" delay associated with the remote station forwarding packets. In order to accurately identify the true latency contribution of relaying through a wireless remote station vs. simply increasing the broadcast power of the main station, more comprehensive tests specific to the environment would be required.
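The packet accounting in these three networks can be reduced to a small calculation. The following sketch (illustrative only; it assumes every wireless link, and only a wireless link, costs exactly one over-the-air transmission per packet, and it ignores retransmissions and management traffic) reproduces the totals above:

def over_the_air_transmissions(path_links, packets=2):
    # path_links lists each link on the one-way path as "wired" or "wireless";
    # packets=2 models one request packet and one reply packet.
    wireless_hops = sum(1 for link in path_links if link == "wireless")
    return packets * wireless_hops

# Network 1: console --wireless--> router --wired--> WAN                      -> 2
# Network 2: console --wireless--> remote --wireless--> master --wired--> WAN -> 4
# Network 3: console --wired--> remote --wireless--> master --wired--> WAN    -> 2
print(over_the_air_transmissions(["wireless", "wired"]))
print(over_the_air_transmissions(["wireless", "wireless", "wired"]))
print(over_the_air_transmissions(["wired", "wireless", "wired"]))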
See also
Ad hoc wireless network
Network bridge
Wireless mesh network
References
External links
Swallow-Wifi Wiki (WDS Network dashboard for DD-WRT devices)
Alternative Wireless Signal-repeating Scheme with DD-WRT and AutoAP
What is Third Generation Mesh? Review of three generation of mesh networking architectures.
Wi-Fi Range Extender Vs Mesh Network System Explanation how wifi extender and mesh network works.
How to Extend Your Wireless Network with Tomato-Powered Routers
Polarcloud.com (How Do I Use WDS)
IEEE 802.11 | Wireless distribution system | Technology,Engineering | 1,664 |
1,866,743 | https://en.wikipedia.org/wiki/Linear%20code | In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. syndrome decoding).
Linear codes are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) on a communications channel so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent. A linear code of length n transmits blocks containing n symbols. For example, the [7,4,3] Hamming code is a linear binary code which represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected. This code contains 2^4 = 16 codewords.
Definition and parameters
A linear code of length n and dimension k is a linear subspace C with dimension k of the vector space F_q^n, where F_q is the finite field with q elements. Such a code is called a q-ary code. If q = 2 or q = 3, the code is described as a binary code, or a ternary code respectively. The vectors in C are called codewords. The size of a code is the number of codewords and equals q^k.
The weight of a codeword is the number of its elements that are nonzero and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance d of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of length n, dimension k, and distance d is called an [n,k,d] code (or, more precisely, an [n,k,d]_q code).
We want to give F_q^n the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a binary symmetric channel). If some other basis is used then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to.
Generator and check matrices
As a linear subspace of F_q^n, the entire code C (which may be very large) may be represented as the span of a set of codewords (known as a basis in linear algebra). These basis codewords are often collated in the rows of a matrix G known as a generating matrix for the code C. When G has the block matrix form G = [I_k | P], where I_k denotes the k × k identity matrix and P is some k × (n − k) matrix, then we say G is in standard form.
A matrix H representing a linear function φ : F_q^n → F_q^(n−k) whose kernel is C is called a check matrix of C (or sometimes a parity check matrix). Equivalently, H is a matrix whose null space is C. If C is a code with a generating matrix G in standard form, G = [I_k | P], then H = [−P^T | I_(n−k)] is a check matrix for C. The code generated by H is called the dual code of C. It can be verified that G is a k × n matrix, while H is an (n − k) × n matrix.
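As a concrete illustration of the standard-form relationship just described, the following sketch (illustrative only; the matrix P is one arbitrary choice over GF(2), which happens to yield the [7,4,3] Hamming code) builds H from G = [I_k | P] and checks that every row of G lies in the null space of H:

import numpy as np

k, n = 4, 7
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
# Standard-form generator matrix G = [I_k | P] over GF(2).
G = np.hstack([np.eye(k, dtype=int), P])
# Corresponding check matrix H = [-P^T | I_{n-k}]; over GF(2), -P^T equals P^T.
H = np.hstack([P.T, np.eye(n - k, dtype=int)])
# G H^T = 0 (mod 2), so the code generated by G is exactly the null space of H.
assert np.all((G @ H.T) % 2 == 0)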
Linearity guarantees that the minimum Hamming distance d between a codeword c0 and any of the other codewords c ≠ c0 is independent of c0. This follows from the property that the difference c − c0 of two codewords in C is also a codeword (i.e., an element of the subspace C), and the property that d(c, c0) = d(c − c0, 0). These properties imply that

min{d(c, c0) : c in C, c ≠ c0} = min{d(c, 0) : c in C, c ≠ 0} = min{w(c) : c in C, c ≠ 0} = d.
In other words, in order to find out the minimum distance between the codewords of a linear code, one would only need to look at the non-zero codewords. The non-zero codeword with the smallest weight has then the minimum distance to the zero codeword, and hence determines the minimum distance of the code.
The distance d of a linear code C also equals the minimum number of linearly dependent columns of the check matrix H.
Proof: For a nonzero codeword c we have H · c^T = 0, which is equivalent to c_1 H_1 + ... + c_n H_n = 0, where H_i denotes the i-th column of H. Removing the terms with c_i = 0, the columns H_i with c_i ≠ 0 are linearly dependent; therefore, d is at least the minimum number of linearly dependent columns. On the other hand, consider a minimum set of linearly dependent columns {H_j : j in S}, where S is the column index set, so that there are coefficients c_j, not all zero, with the sum of c_j H_j over j in S equal to 0. Now consider the vector c whose entries are these c_j for j in S and 0 for j not in S. Note that c is in C because H · c^T = 0. Therefore, we have d ≤ w(c) ≤ |S|, which is the minimum number of linearly dependent columns in H. The claimed property is therefore proven.
Example: Hamming codes
As the first class of linear codes developed for error correction purposes, Hamming codes have been widely used in digital communication systems. For any positive integer r ≥ 2, there exists a [2^r − 1, 2^r − r − 1, 3]_2 Hamming code. Since d = 3, this Hamming code can correct a 1-bit error.
Example: The linear block code with the following generator matrix G and parity check matrix H is a [7,4,3]_2 Hamming code.
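One standard-form pair that fits the definitions above (an illustrative choice; the specific matrices originally shown may differ) is:

G =
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1

H =
1 1 0 1 1 0 0
1 0 1 1 0 1 0
0 1 1 1 0 0 1

Here the columns of H run through all seven nonzero binary vectors of length 3, which is the defining property of the check matrix of the [7,4,3]_2 Hamming code.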
Example: Hadamard codes
Hadamard code is a [2^r, r, 2^(r−1)]_2 linear code and is capable of correcting many errors. Hadamard code could be constructed column by column: the i-th column is the bits of the binary representation of integer i, as shown in the following example. Hadamard code has minimum distance 2^(r−1) and therefore can correct 2^(r−2) − 1 errors.
Example: The linear block code with the following generator matrix is an [8,3,4]_2 Hadamard code:
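One generator matrix consistent with the column-by-column construction above (an illustrative choice; the i-th column, for i = 0, ..., 7, is the binary representation of i) is:

0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1
0 1 0 1 0 1 0 1

Every nonzero codeword of this code has weight 4, so the minimum distance is 2^(3−1) = 4, as stated above.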
Hadamard code is a special case of Reed–Muller code. If we take the first column (the all-zero column) out of the Hadamard code's generator matrix, we get the [7,3,4]_2 simplex code, which is the dual code of the [7,4,3]_2 Hamming code.
Nearest neighbor algorithm
The parameter d is closely related to the error correcting ability of the code. The following construction/algorithm illustrates this (called the nearest neighbor decoding algorithm):
Input: A received vector v in F_q^n.
Output: A codeword w in C closest to v, if any.
Starting with t = 0, repeat the following two steps.
Enumerate the elements of the ball of (Hamming) radius t around the received word v, denoted B_t(v).
For each w in B_t(v), check if w is in C. If so, return w as the solution.
Increment t. Fail only when t > (d − 1)/2, so that enumeration is complete and no solution has been found.
We say that a linear code C is t-error correcting if there is at most one codeword in B_t(v), for each v in F_q^n.
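A brute-force rendering of this decoder for small binary codes (a sketch only; practical decoders exploit algebraic structure instead of enumerating Hamming balls, and the names below are illustrative):

from itertools import combinations

def nearest_neighbor_decode(received, codewords):
    # Grow Hamming balls of radius t = 0, 1, 2, ... around the received word
    # and return the first codeword encountered.
    n = len(received)
    for t in range(n + 1):
        for flips in combinations(range(n), t):
            candidate = list(received)
            for i in flips:
                candidate[i] ^= 1          # flip bit i
            if tuple(candidate) in codewords:
                return tuple(candidate)
    return None

# Example with the length-3 binary repetition code {000, 111}:
code = {(0, 0, 0), (1, 1, 1)}
print(nearest_neighbor_decode((0, 1, 0), code))   # -> (0, 0, 0)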
Popular notation
Codes in general are often denoted by the letter C, and a code of length n and of rank k (i.e., having n code words in its basis and k rows in its generating matrix) is generally referred to as an (n, k) code. Linear block codes are frequently denoted as [n, k, d] codes, where d refers to the code's minimum Hamming distance between any two code words.
(The [n, k, d] notation should not be confused with the (n, M, d) notation used to denote a non-linear code of length n, size M (i.e., having M code words), and minimum Hamming distance d.)
Singleton bound
Lemma (Singleton bound): Every linear [n,k,d] code C satisfies k + d ≤ n + 1.
A code C whose parameters satisfy k +d = n + 1 is called maximum distance separable or MDS. Such codes, when they exist, are in some sense best possible.
If C1 and C2 are two codes of length n and if there is a permutation p in the symmetric group Sn for which (c1,...,cn) in C1 if and only if (cp(1),...,cp(n)) in C2, then we say C1 and C2 are permutation equivalent. In more generality, if there is a monomial matrix which sends C1 isomorphically to C2, then we say C1 and C2 are equivalent.
Lemma: Any linear code is permutation equivalent to a code which is in standard form.
Bonisoli's theorem
A code is defined to be equidistant if and only if there exists some constant d such that the distance between any two of the code's distinct codewords is equal to d. In 1984 Arrigo Bonisoli determined the structure of linear one-weight codes over finite fields and proved that every equidistant linear code is a sequence of dual Hamming codes.
Examples
Some examples of linear codes include:
Repetition code
Parity code
Cyclic code
Hamming code
Golay code, both the binary and ternary versions
Polynomial codes, of which BCH codes are an example
Reed–Solomon codes
Reed–Muller code
Algebraic geometry code
Binary Goppa code
Low-density parity-check codes
Expander code
Multidimensional parity-check code
Toric code
Turbo code
Locally recoverable code
Generalization
Hamming spaces over non-field alphabets have also been considered, especially over finite rings, most notably Galois rings over Z4. This gives rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between Z_2^(2m) (i.e. GF(2^(2m))) with the Hamming distance and Z_4^m (also denoted as GR(4,m)) with the Lee distance; its main attraction is that it establishes a correspondence between some "good" codes that are not linear over Z_2^(2m) as images of ring-linear codes from Z_4^m.
Some authors have referred to such codes over rings simply as linear codes as well.
See also
Decoding methods
References
Bibliography
Chapter 5 contains a more gentle introduction (than this article) to the subject of linear codes.
External links
q-ary code generator program
Code Tables: Bounds on the parameters of various types of codes, IAKS, Fakultät für Informatik, Universität Karlsruhe (TH). Online, up-to-date table of the optimal binary codes, includes non-binary codes.
The database of Z4 codes Online, up to date database of optimal Z4 codes.
Coding theory
Finite fields | Linear code | Mathematics | 2,118 |
23,473,696 | https://en.wikipedia.org/wiki/Differential%20privacy | Differential privacy (DP) is a mathematically rigorous framework for releasing statistical information about datasets while protecting the privacy of individual data subjects. It enables a data holder to share aggregate patterns of the group while limiting information that is leaked about specific individuals. This is done by injecting carefully calibrated noise into statistical computations such that the utility of the statistic is preserved while provably limiting what can be inferred about any individual in the dataset.
Another way to describe differential privacy is as a constraint on the algorithms used to publish aggregate information about a statistical database which limits the disclosure of private information of records in the database. For example, differentially private algorithms are used by some government agencies to publish demographic information or other statistical aggregates while ensuring confidentiality of survey responses, and by companies to collect information about user behavior while controlling what is visible even to internal analysts.
Roughly, an algorithm is differentially private if an observer seeing its output cannot tell whether a particular individual's information was used in the computation. Differential privacy is often discussed in the context of identifying individuals whose information may be in a database. Although it does not directly refer to identification and reidentification attacks, differentially private algorithms provably resist such attacks.
ε-differential privacy
The 2006 Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith article introduced the concept of ε-differential privacy, a mathematical definition for the privacy loss associated with any data release drawn from a statistical database. (Here, the term statistical database means a set of data that are collected under the pledge of confidentiality for the purpose of producing statistics that, by their production, do not compromise the privacy of those individuals who provided the data.)
The definition of ε-differential privacy requires that a change to one entry in a database only creates a small change in the probability distribution of the outputs of measurements, as seen by the attacker. The intuition for the definition of ε-differential privacy is that a person's privacy cannot be compromised by a statistical release if their data are not in the database. In differential privacy, each individual is given roughly the same privacy that would result from having their data removed. That is, the statistical functions run on the database should not be substantially affected by the removal, addition, or change of any individual in the data.
How much any individual contributes to the result of a database query depends in part on how many people's data are involved in the query. If the database contains data from a single person, that person's data contributes 100%. If the database contains data from a hundred people, each person's data contributes just 1%. The key insight of differential privacy is that as the query is made on the data of fewer and fewer people, more noise needs to be added to the query result to produce the same amount of privacy. Hence the name of the 2006 paper, "Calibrating noise to sensitivity in private data analysis."
Definition
Let ε be a positive real number and A be a randomized algorithm that takes a dataset as input (representing the actions of the trusted party holding the data). Let im(A) denote the image of A.
The algorithm A is said to provide (ε, δ)-differential privacy if, for all datasets D1 and D2 that differ on a single element (i.e., the data of one person), and all subsets S of im(A):

Pr[A(D1) ∈ S] ≤ exp(ε) · Pr[A(D2) ∈ S] + δ,

where the probability is taken over the randomness used by the algorithm. This definition is sometimes called "approximate differential privacy", with "pure differential privacy" being a special case when δ = 0. In the latter case, the algorithm is commonly said to satisfy ε-differential privacy (i.e., omitting δ).
Differential privacy offers strong and robust guarantees that facilitate modular design and analysis of differentially private mechanisms due to its composability, robustness to post-processing, and graceful degradation in the presence of correlated data.
Example
According to this definition, differential privacy is a condition on the release mechanism (i.e., the trusted party releasing information about the dataset) and not on the dataset itself. Intuitively, this means that for any two datasets that are similar, a given differentially private algorithm will behave approximately the same on both datasets. The definition gives a strong guarantee that presence or absence of an individual will not affect the final output of the algorithm significantly.
For example, assume we have a database of medical records where each record is a pair (Name, X), where X is a Boolean denoting whether a person has diabetes (1) or not (0).
Now suppose a malicious user (often termed an adversary) wants to find whether Chandler has diabetes or not. Suppose he also knows in which row of the database Chandler resides. Now suppose the adversary is only allowed to use a particular form of query Q(i) that returns the partial sum of the first i rows of column X in the database. In order to find Chandler's diabetes status the adversary executes Q(i) for Chandler's row i and Q(i − 1) for the row just above it, then computes their difference. In this example the difference is 1, so the "Has Diabetes" field in Chandler's row must be 1. This example highlights how individual information can be compromised even without explicitly querying for the information of a specific individual.
Continuing this example, if we construct D2 by replacing (Chandler, 1) with (Chandler, 0) then this malicious adversary will be able to distinguish D2 from D1 by computing Q(i) − Q(i − 1) for each dataset. If the adversary were required to receive the values Q(i) via an ε-differentially private algorithm, for a sufficiently small ε, then he or she would be unable to distinguish between the two datasets.
Composability and robustness to post processing
Composability refers to the fact that the joint distribution of the outputs of (possibly adaptively chosen) differentially private mechanisms satisfies differential privacy.
Sequential composition. If we query an ε-differentially private mechanism t times, and the randomization of the mechanism is independent for each query, then the result would be (t·ε)-differentially private. In the more general case, if there are n independent mechanisms M1, …, Mn, whose privacy guarantees are ε1, …, εn-differential privacy, respectively, then any function g of them, g(M1, …, Mn), is (ε1 + ⋯ + εn)-differentially private.
Parallel composition. If the previous mechanisms are computed on disjoint subsets of the private database then the function g(M1, …, Mn) would be (max_i εi)-differentially private instead.
The other important property for modular use of differential privacy is robustness to post processing. This is defined to mean that for any deterministic or randomized function F defined over the image of the mechanism A, if A satisfies ε-differential privacy, so does F(A).
The property of composition permits modular construction and analysis of differentially private mechanisms and motivates the concept of the privacy loss budget. If all elements that access sensitive data of a complex mechanism are separately differentially private, so will be their combination, followed by arbitrary post-processing.
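As a rough illustration of sequential composition and the privacy-loss budget, the following Python sketch tracks the total ε spent by a sequence of pure ε-differentially private Laplace queries and refuses further queries once a fixed budget would be exceeded. The class and method names are hypothetical, not part of any standard library.

import numpy as np

class PrivacyBudget:
    """Toy accountant for pure ε-DP queries under sequential composition."""

    def __init__(self, total_epsilon):
        self.total_epsilon = total_epsilon
        self.spent = 0.0

    def run_query(self, true_value, sensitivity, epsilon):
        # Sequential composition: the total privacy loss is the sum of the per-query ε values.
        if self.spent + epsilon > self.total_epsilon:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        # Laplace noise with scale sensitivity/ε makes this single query ε-DP.
        return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

budget = PrivacyBudget(total_epsilon=1.0)
print(budget.run_query(true_value=42, sensitivity=1, epsilon=0.25))  # noisy count
print(budget.spent)                                                  # 0.25 of the 1.0 budget used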
Group privacy
In general, ε-differential privacy is designed to protect the privacy between neighboring databases which differ only in one row. This means that no adversary with arbitrary auxiliary information can know if one particular participant submitted their information. However this is also extendable. We may want to protect databases differing in c rows, which amounts to an adversary with arbitrary auxiliary information knowing if c particular participants submitted their information. This can be achieved because if c items change, the probability dilation is bounded by exp(εc) instead of exp(ε), i.e., for D1 and D2 differing on c items:
Pr[A(D1) ∈ S] ≤ exp(εc) · Pr[A(D2) ∈ S].
Thus setting ε instead to ε/c achieves the desired result (protection of c items). In other words, instead of having each item ε-differentially private protected, now every group of c items is ε-differentially private protected (and each item is (ε/c)-differentially private protected).
Hypothesis testing interpretation
One can think of differential privacy as bounding the error rates in a hypothesis test. Consider two hypotheses:
: The individual's data is not in the dataset.
: The individual's data is in the dataset.
Then, there are two error rates:
False Positive Rate (FPR): the probability that the attacker concludes the individual's data is in the dataset when in fact it is not.
False Negative Rate (FNR): the probability that the attacker concludes the individual's data is not in the dataset when in fact it is.
Ideal protection would imply that both error rates are equal, but for a fixed (ε, δ) setting, the attacker's error rates are constrained to satisfy FPR + exp(ε) · FNR ≥ 1 − δ and FNR + exp(ε) · FPR ≥ 1 − δ, and rates on the boundary of this region are achievable.
ε-differentially private mechanisms
Since differential privacy is a probabilistic concept, any differentially private mechanism is necessarily randomized. Some of these, like the Laplace mechanism, described below, rely on adding controlled noise to the function that we want to compute. Others, like the exponential mechanism and posterior sampling, sample from a problem-dependent family of distributions instead.
An important definition with respect to ε-differentially private mechanisms is sensitivity. Let d be a positive integer, D be a collection of datasets, and f be a function mapping datasets to vectors of d real numbers. The sensitivity of the function, denoted Δf, can be defined by
Δf = max ‖f(D1) − f(D2)‖1,
where the maximum is over all pairs of datasets D1 and D2 in D differing in at most one element and ‖·‖1 denotes the L1 norm. In the example of the medical database below, if we consider f to be the partial-sum query Q(i), then the sensitivity of the function is one, since changing any one of the entries in the database causes the output of the function to change by either zero or one. This can be generalized to other metric spaces (measures of distance), and must be to make certain differentially private algorithms work, including adding noise from the Gaussian distribution (which requires the L2 norm) instead of the Laplace distribution.
There are techniques (described below) with which we can create a differentially private algorithm for functions, with parameters that vary depending on their sensitivity.
Laplace mechanism
The Laplace mechanism adds Laplace noise (i.e. noise from the Laplace distribution, which can be expressed by the probability density function noise(y) ∝ exp(−|y|/λ), which has mean zero and standard deviation √2·λ). Now in our case we define the output function of A as a real valued function (called the transcript output by A) as T_A(x) = f(x) + Y, where Y ~ Lap(λ) and f is the original real valued query/function we planned to execute on the database. Now clearly T_A(x) can be considered to be a continuous random variable, where
pdf(T_{A,D1}(x) = t) / pdf(T_{A,D2}(x) = t) = noise(t − f(D1)) / noise(t − f(D2)),
which is at most exp(|f(D1) − f(D2)| / λ) ≤ exp(Δ(f)/λ). We can consider Δ(f)/λ to be the privacy factor ε. Thus T_A follows a differentially private mechanism (as can be seen from the definition above). If we try to use this concept in our diabetes example then it follows from the above derived fact that in order to have A as the ε-differentially private algorithm we need to have λ = 1/ε. Though we have used Laplace noise here, other forms of noise, such as Gaussian noise, can be employed, but they may require a slight relaxation of the definition of differential privacy.
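A minimal sketch of the Laplace mechanism described above, assuming a counting query whose sensitivity is 1; the function name and the toy data are illustrative, not a standard API.

import numpy as np

def laplace_mechanism(query_result, sensitivity, epsilon, rng=np.random.default_rng()):
    """Return the query result perturbed with Laplace noise of scale Δf/ε."""
    scale = sensitivity / epsilon          # λ = Δ(f)/ε gives ε-differential privacy
    return query_result + rng.laplace(loc=0.0, scale=scale)

# Counting query over the diabetes column: adding or removing one person
# changes the count by at most 1, so the sensitivity of the query is 1.
records = [1, 1, 0, 1, 0, 0]               # hypothetical "has diabetes" column
true_count = sum(records)
print(laplace_mechanism(true_count, sensitivity=1, epsilon=0.5))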
Randomized response
A simple example, especially developed in the social sciences, is to ask a person to answer the question "Do you own the attribute A?", according to the following procedure:
Toss a coin.
If heads, then toss the coin again (ignoring the outcome), and answer the question honestly.
If tails, then toss the coin again and answer "Yes" if heads, "No" if tails.
(The seemingly redundant extra toss in the first case is needed in situations where just the act of tossing a coin may be observed by others, even if the actual result stays hidden.) The confidentiality then arises from the refutability of the individual responses.
But, overall, these data with many responses are significant, since people who do not have the attribute A give a positive response with probability one quarter, while people who actually possess it do so with probability three quarters. Thus, if p is the true proportion of people with A, then we expect to obtain (1/4)(1 − p) + (3/4)p = (1/4) + p/2 positive responses. Hence it is possible to estimate p.
In particular, if the attribute A is synonymous with illegal behavior, then answering "Yes" is not incriminating, insofar as the person has a probability of a "Yes" response, whatever it may be.
Although this example, inspired by randomized response, might be applicable to microdata (i.e., releasing datasets with each individual response), by definition differential privacy excludes microdata releases and is only applicable to queries (i.e., aggregating individual responses into one result), since releasing the individual responses would violate the requirements, more specifically the plausible deniability that a subject participated or not.
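A small simulation of the coin-flipping procedure above, showing how the true proportion p can be recovered from the noisy answers via the relation (1/4) + p/2 derived earlier; the sample size and the value of p are arbitrary illustrations.

import random

def randomized_response(has_attribute):
    """One respondent follows the two-coin procedure (the ignored extra toss is omitted)."""
    if random.random() < 0.5:              # first toss heads: answer honestly
        return has_attribute
    return random.random() < 0.5           # first toss tails: second toss decides the answer

random.seed(0)
true_p = 0.3
answers = [randomized_response(random.random() < true_p) for _ in range(100_000)]
observed = sum(answers) / len(answers)     # expected to be close to 1/4 + p/2
estimate = 2 * (observed - 0.25)           # invert the relation to recover p
print(round(observed, 3), round(estimate, 3))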
Stable transformations
A transformation T is c-stable if the Hamming distance between T(A) and T(B) is at most c times the Hamming distance between A and B for any two databases A, B. If there is a mechanism M that is ε-differentially private, then the composite mechanism M ∘ T is (ε × c)-differentially private.
This could be generalized to group privacy, as the group size could be thought of as the Hamming distance h between A and B (where A contains the group and B does not). In this case M ∘ T is (ε × c × h)-differentially private.
Research
Early research leading to differential privacy
In 1977, Tore Dalenius formalized the mathematics of cell suppression. Dalenius was a Swedish statistician who contributed to statistical privacy through his 1977 paper, which revealed a key point about statistical databases: that databases should not reveal information about an individual that is not otherwise accessible. He also defined a typology for statistical disclosures.
In 1979, Dorothy Denning, Peter J. Denning and Mayer D. Schwartz formalized the concept of a Tracker, an adversary that could learn the confidential contents of a statistical database by creating a series of targeted queries and remembering the results. This and future research showed that privacy properties in a database could only be preserved by considering each new query in light of (possibly all) previous queries. This line of work is sometimes called query privacy, with the final result being that tracking the impact of a query on the privacy of individuals in the database was NP-hard.
21st century
In 2003, Kobbi Nissim and Irit Dinur demonstrated that it is impossible to publish arbitrary queries on a private statistical database without revealing some amount of private information, and that the entire information content of the database can be revealed by publishing the results of a surprisingly small number of random queries—far fewer than was implied by previous work. The general phenomenon is known as the Fundamental Law of Information Recovery, and its key insight, namely that in the most general case, privacy cannot be protected without injecting some amount of noise, led to development of differential privacy.
In 2006, Cynthia Dwork, Frank McSherry, Kobbi Nissim and Adam D. Smith published an article formalizing the amount of noise that needed to be added and proposing a generalized mechanism for doing so. This paper also created the first formal definition of differential privacy. Their work was a co-recipient of the 2016 TCC Test-of-Time Award and the 2017 Gödel Prize.
Since then, subsequent research has shown that there are many ways to produce very accurate statistics from the database while still ensuring high levels of privacy.
Adoption in real-world applications
To date there are over 12 real-world deployments of differential privacy, the most noteworthy being:
2008: U.S. Census Bureau, for showing commuting patterns.
2014: Google's RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings.
2015: Google, for sharing historical traffic statistics.
2016: Apple iOS 10, for use in Intelligent personal assistant technology.
2017: Microsoft, for telemetry in Windows.
2020: Social Science One and Facebook, a 55 trillion cell dataset for researchers to learn about elections and democracy.
2021: The US Census Bureau uses differential privacy to release redistricting data from the 2020 Census.
Public purpose considerations
There are several public purpose considerations regarding differential privacy that are important to consider, especially for policymakers and policy-focused audiences interested in the social opportunities and risks of the technology:
Data utility and accuracy. The main concern with differential privacy is the trade-off between data utility and individual privacy. If the privacy loss parameter is set to favor utility, the privacy benefits are lowered (less “noise” is injected into the system); if the privacy loss parameter is set to favor heavy privacy, the accuracy and utility of the dataset are lowered (more “noise” is injected into the system). It is important for policymakers to consider the trade-offs posed by differential privacy in order to help set appropriate best practices and standards around the use of this privacy preserving practice, especially considering the diversity in organizational use cases. It is worth noting, though, that decreased accuracy and utility is a common issue among all statistical disclosure limitation methods and is not unique to differential privacy. What is unique, however, is how policymakers, researchers, and implementers can consider mitigating against the risks presented through this trade-off.
Data privacy and security. Differential privacy provides a quantified measure of privacy loss and an upper bound and allows curators to choose the explicit trade-off between privacy and accuracy. It is robust to still unknown privacy attacks. However, it encourages greater data sharing, which if done poorly, increases privacy risk. Differential privacy implies that privacy is protected, but this depends very much on the privacy loss parameter chosen and may instead lead to a false sense of security. Finally, though it is robust against unforeseen future privacy attacks, a countermeasure may be devised that we cannot predict.
Attacks in practice
Because differential privacy techniques are implemented on real computers, they are vulnerable to various attacks not possible to compensate for solely in the mathematics of the techniques themselves. In addition to standard defects of software artifacts that can be identified using testing or fuzzing, implementations of differentially private mechanisms may suffer from the following vulnerabilities:
Subtle algorithmic or analytical mistakes.
Timing side-channel attacks. In contrast with timing attacks against implementations of cryptographic algorithms that typically have low leakage rate and must be followed with non-trivial cryptanalysis, a timing channel may lead to a catastrophic compromise of a differentially private system, since a targeted attack can be used to exfiltrate the very bit that the system is designed to hide.
Leakage through floating-point arithmetic. Differentially private algorithms are typically presented in the language of probability distributions, which most naturally lead to implementations using floating-point arithmetic. The abstraction of floating-point arithmetic is leaky, and without careful attention to details, a naive implementation may fail to provide differential privacy. (This is particularly the case for ε-differential privacy, which does not allow any probability of failure, even in the worst case.) For example, the support of a textbook sampler of the Laplace distribution (required, for instance, for the Laplace mechanism) is less than 80% of all double-precision floating point numbers; moreover, the support for distributions with different means are not identical. A single sample from a naïve implementation of the Laplace mechanism allows distinguishing between two adjacent datasets with probability more than 35%.
Timing channel through floating-point arithmetic. Unlike operations over integers that are typically constant-time on modern CPUs, floating-point arithmetic exhibits significant input-dependent timing variability. Handling of subnormals can be particularly slow, as much as 100 times slower than the typical case.
See also
Implementations of differentially private analyses – deployments of differential privacy
Quasi-identifier
Exponential mechanism (differential privacy) – a technique for designing differentially private algorithms
k-anonymity
Differentially private analysis of graphs
Protected health information
Local differential privacy
Privacy
References
Further reading
Publications
Calibrating noise to sensitivity in private data analysis, Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. In Proceedings of the Third conference on Theory of Cryptography (TCC'06). Springer-Verlag, Berlin, Heidelberg, 265–284. https://doi.org/10.1007/11681878_14 (This is the original publication of Differential Privacy, and not the eponymous article by Dwork that was published the same year.)
Differential Privacy: A Survey of Results by Cynthia Dwork, Microsoft Research, April 2008 (Presents what was discovered during the first two years of research on differential privacy.)
Differential Privacy: A Primer for a Non-Technical Audience, Alexandra Wood, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, et al., Vanderbilt Journal of Entertainment & Technology Law, Volume 21, Issue 1, Fall 2018. (A good introductory document, but definitely *not* for non-technical audiences!)
Technology Factsheet: Differential Privacy by Raina Gandhi and Amritha Jayanti, Belfer Center for Science and International Affairs, Fall 2020
Differential Privacy and the 2020 US Census, MIT Case Studies in Social and Ethical Responsibilities of Computing, no. Winter 2022 (January). https://doi.org/10.21428/2c646de5.7ec6ab93.
Differential Privacy, Simson L. Garfinkel, MIT Press Essential Knowledge Series, March 2025.
Bowen, Claire McKay and Simson Garfinkel, The Philosophy of Differential Privacy, AMS Notices, November 2021.
Tutorials
A Practical Beginner's Guide To Differential Privacy by Christine Task, Purdue University, April 2012
Theory of cryptography
Information privacy | Differential privacy | Engineering | 4,349 |
20,961,273 | https://en.wikipedia.org/wiki/M134%20bomblet | The M134 bomblet was a U.S. chemical cluster munition designed for use in the MGR-1 Honest John rocket during the 1950s. The weapon was never mass-produced and was supplanted in 1964 by an improved design, the M139.
History
The M134 bomblet, developed as the E130, or E130R1 bomblet, began development in the early 1950s. Although the weapon was not yet battle-ready or ready for mass production in 1960, work on the bomblet dated back to at least 1953. By 1964 the bomblet design had been improved and the smaller M139 was adapted for use with the rocket warheads utilized by the M134. Thus, the M134 was never mass-produced; by the time the missile warhead and the M134 were ready for production they had been supplanted.
Specifications
The M134 bomblet was designed for the M190 Honest John rocket warhead. The bomblets carried sarin nerve agent; after the missile was fired, a mechanical time fuze cut the warhead's skin and the sub-munitions were released above their target. The weapon could effectively saturate a circular area with chemical agent.
The Honest John held 356 of the 115 mm M134s. The spherical M134 was 4.5 inches (115 mm) in diameter and constructed of ribbed steel. Its interior held a fill of sarin (GB). The U.S. Army Chemical Corps originally planned to use the M134 as a VX dispersal method also, but later regarded this use as ineffective and scrapped the plan.
Issues
The warhead meant to carry the M134 was classified and went into production on April 14, 1960; the M134, however, was not yet ready for production. The M134 had a host of issues which impeded its development. Problems with the fuze system, and a tendency toward unacceptable pressure build-up in filled munitions were among the problems encountered during development. The problems with the M134 delayed the rocket-delivered nerve agent program. By 1964 the successor M139 bomblet was ready for production. The M139 was superior to the M134: its glide angle of 22° allowed it better agent coverage.
See also
M139 bomblet
M143 bomblet
References
Chemical weapon delivery systems
Submunitions
Chemical weapons of the United States | M134 bomblet | Chemistry | 497 |
73,912,077 | https://en.wikipedia.org/wiki/Schei%C3%9Ftag | Scheißtage (literally "shitting days") referred to the additional one to three unpaid working days in Southern Germany and Austria for peasants and servants to compensate for the time they needed to defecate during their agreed employment.
This practice existed in the 18th and 19th centuries, and occasionally even until the early 20th century. The "Scheißtage" were performed after the expiration of the employment contract, usually after Candlemas, or at the end of each year on December 29 or 30.
Nowadays, the term Scheißtag is used in a vulgar-colloquial sense to mean a bad day.
Literature
Entry in Johann Andreas Schmeller, Georg Karl Frommann: Bayerisches Wörterbuch. 2nd edition, revised and supplemented by G. Karl Fromann. Volume 2, containing part III and IV of the first edition. Munich 1877, p. 475 (digital edition).
scheisztag. In: Jacob Grimm, Wilhelm Grimm (eds.): Deutsches Wörterbuch. Volume 14: R–Schiefe – (VIII). S. Hirzel, Leipzig 1893 (woerterbuchnetz.de). – refers to the entry in Schmeller's work.
References
Economic history of Germany
Economic history of Austria
Days
Defecation | Scheißtag | Biology | 270 |
888,711 | https://en.wikipedia.org/wiki/Mathematical%20statistics | Mathematical statistics is the application of probability theory and other mathematical concepts to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques that are commonly used in statistics include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.
Introduction
Statistical data collection is concerned with the planning of studies, especially with the design of randomized experiments and with the planning of surveys using random sampling. The initial analysis of the data often follows the study protocol specified prior to the study being conducted. The data from a study can also be analyzed to consider secondary hypotheses inspired by the initial results, or to suggest new studies. A secondary analysis of the data from a planned study uses tools from data analysis, and the process of doing this is mathematical statistics.
Data analysis is divided into:
descriptive statistics – the part of statistics that describes data, i.e. summarises the data and their typical properties.
inferential statistics – the part of statistics that draws conclusions from data (using some model for the data): For example, inferential statistics involves selecting a model for the data, checking whether the data fulfill the conditions of a particular model, and with quantifying the involved uncertainty (e.g. using confidence intervals).
While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data, such as natural experiments and observational studies, in which case the inference depends on the model chosen by the statistician and is therefore subjective.
Topics
The following are some of the important topics in mathematical statistics:
Probability distributions
A probability distribution is a function that assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found in experiments whose sample space is non-numerical, where the distribution would be a categorical distribution; experiments whose sample space is encoded by discrete random variables, where the distribution can be specified by a probability mass function; and experiments with sample spaces encoded by continuous random variables, where the distribution can be specified by a probability density function. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.
A probability distribution can either be univariate or multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector—a set of two or more random variables—taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.
Special distributions
Normal distribution, the most common continuous distribution
Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. success/failure, yes/no)
Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of independent occurrences
Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs
Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution, where the number of successes is one.
Discrete uniform distribution, for a finite set of values (e.g. the outcome of a fair die)
Continuous uniform distribution, for continuously distributed values
Poisson distribution, for the number of occurrences of a Poisson-type event in a given period of time
Exponential distribution, for the time before the next Poisson-type event occurs
Gamma distribution, for the time before the next k Poisson-type events occur
Chi-squared distribution, the distribution of a sum of squared standard normal variables; useful e.g. for inference regarding the sample variance of normally distributed samples (see chi-squared test)
Student's t distribution, the distribution of the ratio of a standard normal variable and the square root of a scaled chi squared variable; useful for inference regarding the mean of normally distributed samples with unknown variance (see Student's t-test)
Beta distribution, for a single probability (real number between 0 and 1); conjugate to the Bernoulli distribution and binomial distribution
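As a brief illustration of a few of the distributions listed above, the following sketch evaluates some probabilities with SciPy; the parameter values are arbitrary examples rather than anything prescribed by the text.

from scipy import stats

print(stats.binom.pmf(3, n=10, p=0.5))   # binomial: exactly 3 successes in 10 trials
print(stats.poisson.pmf(2, mu=4))        # Poisson: 2 events when the mean rate is 4
print(stats.norm.cdf(1.96))              # normal: P(Z < 1.96) for a standard normal Z
print(stats.expon.sf(2))                 # exponential: waiting time exceeds 2 at rate 1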
Statistical inference
Statistical inference is the process of drawing conclusions from data that are subject to random variation, for example, observational errors or sampling variation. Initial requirements of such a system of procedures for inference and induction are that the system should produce reasonable answers when applied to well-defined situations and that it should be general enough to be applied across a range of situations. Inferential statistics are used to test hypotheses and make estimations using sample data. Whereas descriptive statistics describe a sample, inferential statistics infer predictions about a larger population that the sample represents.
The outcome of statistical inference may be an answer to the question "what should be done next?", where this might be a decision about making further experiments or surveys, or about drawing a conclusion before implementing some organizational or governmental policy.
For the most part, statistical inference makes propositions about populations, using data drawn from the population of interest via some form of random sampling. More generally, data about a random process is obtained from its observed behavior during a finite period of time. Given a parameter or hypothesis about which one wishes to make inference, statistical inference most often uses:
a statistical model of the random process that is supposed to generate the data, which is known when randomization has been used, and
a particular realization of the random process; i.e., a set of data.
Regression
In statistics, regression analysis is a statistical process for estimating the relationships among variables. It includes many ways for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function which can be described by a probability distribution.
Many techniques for carrying out regression analysis have been developed. Familiar methods, such as linear regression, are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data (e.g. using ordinary least squares). Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional.
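A minimal parametric example of the ideas above: fitting a straight-line regression function by ordinary least squares with NumPy. The data are synthetic and the coefficients chosen here are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.7 * x + rng.normal(scale=1.0, size=x.size)   # noisy linear relationship

# Design matrix with an intercept column; solve for the least-squares coefficients.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # estimated intercept and slope, close to the true values (2.0, 0.7)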
Nonparametric statistics
Nonparametric statistics are values calculated from data in a way that is not based on parameterized families of probability distributions. They include both descriptive and inferential statistics. The typical parameters are the expectations, variance, etc. Unlike parametric statistics, nonparametric statistics make no assumptions about the probability distributions of the variables being assessed.
Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in "ordinal" data.
As non-parametric methods make fewer assumptions, their applicability is much wider than the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are more robust.
One drawback of non-parametric methods is that since they do not rely on assumptions, they are generally less powerful than their parametric counterparts. This low power is problematic because a common use of non-parametric methods is in settings where the sample size is small. Many parametric methods are proven to be the most powerful tests through methods such as the Neyman–Pearson lemma and the likelihood-ratio test.
Another justification for the use of non-parametric methods is simplicity. In certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding.
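As a small illustration of a rank-based non-parametric procedure, the following compares two samples of ordinal ratings with the Mann-Whitney U test from SciPy; the data are made up for the example.

from scipy import stats

group_a = [3, 4, 2, 5, 4, 3, 5]   # ordinal ratings from one group
group_b = [1, 2, 2, 3, 1, 2, 3]   # ratings from another group

# The test uses only the ranks of the observations, so it makes no assumption
# about the distribution that generated the ratings.
result = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(result.statistic, result.pvalue)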
Statistics, mathematics, and mathematical statistics
Mathematical statistics is a key subset of the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions.
Mathematicians and statisticians like Gauss, Laplace, and C. S. Peirce used decision theory with probability distributions and loss functions (or utility functions). The decision-theoretic approach to statistical inference was reinvigorated by Abraham Wald and his successors and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorics. But while statistical practice often relies on probability and decision theory, their application can be controversial.
See also
Asymptotic theory (statistics)
References
Further reading
Borovkov, A. A. (1999). Mathematical Statistics. CRC Press.
Virtual Laboratories in Probability and Statistics (Univ. of Ala.-Huntsville)
StatiBot, interactive online expert system on statistical tests.
Statistical theory
Actuarial science | Mathematical statistics | Mathematics | 2,056 |
27,157,933 | https://en.wikipedia.org/wiki/Nucleic%20acid%20secondary%20structure | Nucleic acid secondary structure is the basepairing interactions within a single nucleic acid polymer or between two polymers. It can be represented as a list of bases which are paired in a nucleic acid molecule.
The secondary structures of biological DNAs and RNAs tend to be different: biological DNA mostly exists as fully base paired double helices, while biological RNA is single stranded and often forms complex and intricate base-pairing interactions due to its increased ability to form hydrogen bonds stemming from the extra hydroxyl group in the ribose sugar.
In a non-biological context, secondary structure is a vital consideration in the nucleic acid design of nucleic acid structures for DNA nanotechnology and DNA computing, since the pattern of basepairing ultimately determines the overall structure of the molecules.
Fundamental concepts
Base pairing
In molecular biology, two nucleotides on opposite complementary DNA or RNA strands that are connected via hydrogen bonds are called a base pair (often abbreviated bp). In the canonical Watson-Crick base pairing, adenine (A) forms a base pair with thymine (T) and guanine (G) forms one with cytosine (C) in DNA. In RNA, thymine is replaced by uracil (U). Alternate hydrogen bonding patterns, such as the wobble base pair and Hoogsteen base pair, also occur—particularly in RNA—giving rise to complex and functional tertiary structures. Importantly, pairing is the mechanism by which codons on messenger RNA molecules are recognized by anticodons on transfer RNA during protein translation. Some DNA- or RNA-binding enzymes can recognize specific base pairing patterns that identify particular regulatory regions of genes.
Hydrogen bonding is the chemical mechanism that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. DNA with high GC-content is more stable than DNA with low GC-content, but contrary to popular belief, the hydrogen bonds do not stabilize the DNA significantly and stabilization is mainly due to stacking interactions.
The larger nucleobases, adenine and guanine, are members of a class of doubly ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of singly ringed chemical structures called pyrimidines. Purines are only complementary with pyrimidines: pyrimidine-pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine-purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. The only other possible pairings are GT and AC; these pairings are mismatches because the pattern of hydrogen donors and acceptors do not correspond. The GU wobble base pair, with two hydrogen bonds, does occur fairly often in RNA.
Nucleic acid hybridization
Hybridization is the process of complementary base pairs binding to form a double helix. Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes, or physical force. Melting occurs preferentially at certain points in the nucleic acid. T- and A-rich sequences are more easily melted than C- and G-rich regions. Particular base steps are also susceptible to DNA melting, particularly TA and TG base steps. These mechanical features are reflected by the use of sequences such as TATAA at the start of many genes to assist RNA polymerase in melting the DNA for transcription.
Strand separation by gentle heating, as used in PCR, is simple provided the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA-melting enzymes (helicases) to work concurrently with topoisomerases, which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase.
Secondary structure motifs
Nucleic acid secondary structure is generally divided into helices (contiguous base pairs), and various kinds of loops (unpaired nucleotides surrounded by helices). Frequently these elements, or combinations of them, are further classified into additional categories including, for example, tetraloops, pseudoknots, and stem-loops. Topological approaches can be used to categorize and compare complex structures that arise from combining these elements in various arrangements.
Double helix
The double helix is an important tertiary structure in nucleic acid molecules which is intimately connected with the molecule's secondary structure. A double helix is formed by regions of many consecutive base pairs.
The nucleic acid double helix is a spiral polymer, usually right-handed, containing two nucleotide strands which base pair together. A single turn of the helix constitutes about ten nucleotides, and contains a major groove and minor groove, the major groove being wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to DNA do so through the wider major groove. Many double-helical forms are possible; for DNA the three biologically relevant forms are A-DNA, B-DNA, and Z-DNA, while RNA double helices have structures similar to the A form of DNA.
Stem-loop structures
The secondary structure of nucleic acid molecules can often be uniquely decomposed into stems and loops. The stem-loop structure (also often referred to as a "hairpin"), in which a base-paired helix ends in a short unpaired loop, is extremely common and is a building block for larger structural motifs such as cloverleaf structures, which are four-helix junctions such as those found in transfer RNA. Internal loops (a short series of unpaired bases in a longer paired helix) and bulges (regions in which one strand of a helix has "extra" inserted bases with no counterparts in the opposite strand) are also frequent.
There are many secondary structure elements of functional importance to biological RNAs; some famous examples are the Rho-independent terminator stem-loops and the tRNA cloverleaf. Active research is on-going to determine the secondary structure of RNA molecules, with approaches including both experimental and computational methods (see also the List of RNA structure prediction software).
Pseudoknots
A pseudoknot is a nucleic acid secondary structure containing at least two stem-loop structures in which half of one stem is intercalated between the two halves of another stem. Pseudoknots fold into knot-shaped three-dimensional conformations but are not true topological knots. The base pairing in pseudoknots is not well nested; that is, base pairs occur that "overlap" one another in sequence position. This makes the presence of general pseudoknots in nucleic acid sequences impossible to predict by the standard method of dynamic programming, which uses a recursive scoring system to identify paired stems and consequently cannot detect non-nested base pairs with common algorithms. However, limited subclasses of pseudoknots can be predicted using modified dynamic programs.
Newer structure prediction techniques such as stochastic context-free grammars are also unable to consider pseudoknots.
Pseudoknots can form a variety of structures with catalytic activity and several important biological processes rely on RNA molecules that form pseudoknots. For example, the RNA component of the human telomerase contains a pseudoknot that is critical for its activity. The hepatitis delta virus ribozyme is a well known example of a catalytic RNA with a pseudoknot in its active site. Though DNA can also form pseudoknots, they are generally not present in standard physiological conditions.
Secondary structure prediction
Most methods for nucleic acid secondary structure prediction rely on a nearest neighbor thermodynamic model. A common method to determine the most probable structures given a sequence of nucleotides makes use of a dynamic programming algorithm that seeks to find structures with low free energy. Dynamic programming algorithms often forbid pseudoknots, or other cases in which base pairs are not fully nested, as considering these structures becomes computationally very expensive for even small nucleic acid molecules. Other methods, such as stochastic context-free grammars can also be used to predict nucleic acid secondary structure.
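A highly simplified sketch of the dynamic-programming idea described above, in the spirit of the Nussinov algorithm: instead of minimizing free energy it merely maximizes the number of nested complementary base pairs, and, as noted, it cannot represent pseudoknots. The minimum loop length and the example sequence are arbitrary choices.

def can_pair(a, b):
    """Watson-Crick pairs plus the GU wobble pair."""
    return {a, b} in ({"A", "U"}, {"G", "C"}, {"G", "U"})

def max_nested_pairs(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]           # dp[i][j]: max pairs within seq[i..j]
    for span in range(min_loop + 1, n):        # enforce a minimal hairpin loop size
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                # case: position j left unpaired
            for k in range(i, j - min_loop):   # case: j paired with some earlier k
                if can_pair(seq[k], seq[j]):
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_nested_pairs("GGGAAAUCC"))   # small example RNA sequence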
For many RNA molecules, the secondary structure is highly important to the correct function of the RNA — often more so than the actual sequence. This fact aids in the analysis of non-coding RNA sometimes termed "RNA genes". One application of bioinformatics uses predicted RNA secondary structures in searching a genome for noncoding but functional forms of RNA. For example, microRNAs have canonical long stem-loop structures interrupted by small internal loops.
RNA secondary structure applies in RNA splicing in certain species. In humans and other tetrapods, it has been shown that without the U2AF2 protein, the splicing process is inhibited. However, in zebrafish and other teleosts the RNA splicing process can still occur on certain genes in the absence of U2AF2. This may be because 10% of genes in zebrafish have alternating TG and AC base pairs at the 3' splice site (3'ss) and 5' splice site (5'ss) respectively on each intron, which alters the secondary structure of the RNA. This suggests that secondary structure of RNA can influence splicing, potentially without the use of proteins like U2AF2 that have been thought to be required for splicing to occur.
Secondary structure determination
RNA secondary structure can be determined from atomic coordinates (tertiary structure) obtained by X-ray crystallography, often deposited in the Protein Data Bank. Current methods include 3DNA/DSSR and MC-annotate.
See also
DNA nanotechnology
Molecular models of DNA
DiProDB. The database is designed to collect and analyse thermodynamic, structural and other dinucleotide properties.
RNA CoSSMos
References
External links
MDDNA: Structural Bioinformatics of DNA
Abalone — Commercial software for DNA modeling
DNAlive: a web interface to compute DNA physical properties. Also allows cross-linking of the results with the UCSC Genome browser and DNA dynamics.
DNA
Biophysics
Molecular structure
RNA | Nucleic acid secondary structure | Physics,Chemistry,Biology | 2,188 |
1,028,589 | https://en.wikipedia.org/wiki/Normal%20basis | In mathematics, specifically the algebraic theory of fields, a normal basis is a special kind of basis for Galois extensions of finite degree, characterised as forming a single orbit for the Galois group. The normal basis theorem states that any finite Galois extension of fields has a normal basis. In algebraic number theory, the study of the more refined question of the existence of a normal integral basis is part of Galois module theory.
Normal basis theorem
Let K/F be a Galois extension with Galois group G. The classical normal basis theorem states that there is an element β ∈ K such that the set of conjugates {g(β) : g ∈ G} forms a basis of K, considered as a vector space over F. That is, any element α ∈ K can be written uniquely as α = Σ_{g ∈ G} a_g g(β) for some elements a_g ∈ F.
A normal basis contrasts with a primitive element basis of the form {1, β, β², …, β^(n−1)}, where β ∈ K is an element whose minimal polynomial has degree n = [K : F].
Group representation point of view
A field extension K/F with Galois group G can be naturally viewed as a representation of the group G over the field F in which each automorphism is represented by itself. Representations of G over the field F can be viewed as left modules for the group algebra F[G]. Every homomorphism of left F[G]-modules φ : F[G] → K is of the form φ(Σ a_g g) = Σ a_g g(β) for some β ∈ K. Since {g : g ∈ G} is a linear basis of F[G] over F, it follows easily that φ is bijective iff β generates a normal basis of K over F. The normal basis theorem therefore amounts to the statement saying that if K/F is a finite Galois extension, then K ≅ F[G] as left F[G]-modules. In terms of representations of G over F, this means that K is isomorphic to the regular representation.
Case of finite fields
For finite fields this can be stated as follows: Let F = GF(q) denote the field of q elements, where q = p^m is a prime power, and let K = GF(q^n) denote its extension field of degree n. Here the Galois group is G = Gal(K/F) = {1, Φ, Φ², …, Φ^(n−1)}, a cyclic group generated by the q-power Frobenius automorphism Φ(α) = α^q, with Φ^n = 1. Then there exists an element β ∈ K such that
{β, Φ(β), Φ²(β), …, Φ^(n−1)(β)} = {β, β^q, β^(q²), …, β^(q^(n−1))}
is a basis of K over F.
Proof for finite fields
In case the Galois group is cyclic as above, generated by Φ with Φ^n = 1, the normal basis theorem follows from two basic facts. The first is the linear independence of characters: a multiplicative character is a mapping χ from a group H to a field K satisfying χ(h1 h2) = χ(h1)χ(h2); then any distinct characters χ1, …, χn are linearly independent in the K-vector space of mappings. We apply this to the Galois group automorphisms χi = Φ^i : K → K, thought of as mappings from the multiplicative group H = K^×. Now K ≅ F^n as an F-vector space, so we may consider Φ as an element of the matrix algebra Mn(F); since its powers 1, Φ, …, Φ^(n−1) are linearly independent (over K and a fortiori over F), its minimal polynomial must have degree at least n, i.e. it must be X^n − 1.
The second basic fact is the classification of finitely generated modules over a PID such as F[X]. Every such module M can be represented as M ≅ ⊕i F[X]/(fi), where the fi may be chosen so that they are monic polynomials or zero and fi+1 is a multiple of fi. The last polynomial fk is the monic polynomial of smallest degree annihilating the module, or zero if no such non-zero polynomial exists. In the first case dimF M = Σi deg fi, in the second case dimF M is infinite. In our case of cyclic G of size n generated by Φ we have an F-algebra isomorphism F[G] ≅ F[X]/(X^n − 1), where X corresponds to Φ, so every F[G]-module may be viewed as an F[X]-module with multiplication by X being multiplication by Φ. In case of K this means X·α = Φ(α), so the monic polynomial of smallest degree annihilating K is the minimal polynomial of Φ. Since K is a finite dimensional F-space, the representation above is possible with all fi non-zero. Since dimF K = n and the minimal polynomial of Φ has degree n, we can only have k = 1 with f1 = X^n − 1, and K ≅ F[X]/(X^n − 1) as F[X]-modules. (Note this is an isomorphism of F-linear spaces, but not of rings or F-algebras.) This gives the isomorphism of F[G]-modules K ≅ F[G] that we talked about above, and under it the basis 1, X, …, X^(n−1) on the right side corresponds to a normal basis β, Φ(β), …, Φ^(n−1)(β) of K on the left.
Note that this proof would also apply in the case of a cyclic Kummer extension.
Example
Consider the field K = GF(8) over F = GF(2), with Frobenius automorphism Φ(β) = β². The proof above clarifies the choice of normal bases in terms of the structure of K as a representation of G (or F[G]-module). The irreducible factorization
X³ − 1 = (X + 1)(X² + X + 1) over F
means we have a direct sum of F[G]-modules (by the Chinese remainder theorem):
K ≅ F[X]/(X³ − 1) ≅ F[X]/(X + 1) ⊕ F[X]/(X² + X + 1).
The first component is just F ⊂ K, while the second is isomorphic as an F[G]-module to the field of four elements acted on by the Frobenius x ↦ x². (Thus K ≅ GF(2) ⊕ GF(4) as F[G]-modules, but not as F-algebras.)
The elements β which can be used for a normal basis are precisely those outside either of the submodules, so that (Φ + 1)(β) ≠ 0 and (Φ² + Φ + 1)(β) ≠ 0. In terms of the G-orbits of K, which correspond to the irreducible factors of
t⁸ − t = t(t + 1)(t³ + t + 1)(t³ + t² + 1) over F,
the elements of F = GF(2) are the roots of t(t + 1), the nonzero elements of the second submodule are the roots of t³ + t + 1, while the normal basis, which in this case is unique, is given by the roots of the remaining factor t³ + t² + 1.
By contrast, for the extension field L = GF(16), in which the degree n = 4 is divisible by the characteristic p = 2, we have the F[G]-module isomorphism
L ≅ F[X]/(X⁴ − 1) = F[X]/(X + 1)⁴.
Here the operator Φ is not diagonalizable, the module L has nested submodules given by generalized eigenspaces of Φ, and the normal basis elements β are those outside the largest proper generalized eigenspace, namely the elements with (Φ + 1)³(β) ≠ 0.
Application to cryptography
The normal basis is frequently used in cryptographic applications based on the discrete logarithm problem, such as elliptic curve cryptography, since arithmetic using a normal basis is typically more computationally efficient than using other bases.
For example, in the field L = GF(2⁴) above, we may represent elements as bit-strings:
α = (a0, a1, a2, a3) = a0·β + a1·β² + a2·β⁴ + a3·β⁸,
where the coefficients ai are bits. Now we can square elements by doing a circular shift of the coefficient string, since squaring β⁸ gives β¹⁶ = β. This makes the normal basis especially attractive for cryptosystems that utilize frequent squaring.
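The rotation can be sketched directly on coordinate vectors. The helper below is hypothetical (not a standard library routine) and assumes the coordinates are listed in the order β, β², β⁴, β⁸; whether the resulting rotation reads as "left" or "right" depends purely on that ordering convention.

def square_in_normal_basis(bits):
    """Squaring in GF(2^n) given coordinates relative to a normal basis.

    bits[i] is the coefficient of β^(2^i); squaring sends β^(2^i) to β^(2^(i+1))
    and β^(2^(n-1)) back to β, so the coefficients simply rotate by one position.
    """
    return bits[-1:] + bits[:-1]

alpha = [1, 0, 1, 1]                      # a0·β + a2·β⁴ + a3·β⁸ in GF(2⁴), for example
print(square_in_normal_basis(alpha))      # coordinates of α², namely [1, 1, 0, 1]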
Proof for the case of infinite fields
Suppose K/F is a finite Galois extension of the infinite field F. Let [K : F] = n and Gal(K/F) = G = {σ1, …, σn}, where σ1 is the identity. By the primitive element theorem there exists α ∈ K such that K = F(α). Let f denote α's (monic) minimal polynomial over F; it is the irreducible degree n polynomial given by the formula f(X) = (X − σ1(α))(X − σ2(α))⋯(X − σn(α)).
Since f is separable (it has simple roots) we may define
In other words,
Note that and for . Next, define an matrix A of polynomials over K and a polynomial D by
Observe that , where k is determined by ; in particular iff . It follows that is the permutation matrix corresponding to the permutation of G which sends each to . (We denote by the matrix obtained by evaluating at .) Therefore, . We see that D is a non-zero polynomial, and therefore it has only a finite number of roots. Since we assumed F is infinite, we can find such that . Define
We claim that is a normal basis. We only have to show that are linearly independent over F, so suppose for some . Applying the automorphism yields for all i. In other words, . Since , we conclude that , which completes the proof.
It is tempting to take because . But this is impermissible because we used the fact that to conclude that for any F-automorphism and polynomial over the value of the polynomial at a equals .
Primitive normal basis
A primitive normal basis of an extension of finite fields E/F is a normal basis for E/F that is generated by a primitive element of E, that is, a generator of the multiplicative group E^×. (Note that this is a more restrictive definition of primitive element than that mentioned above after the general normal basis theorem: one requires powers of the element to produce every non-zero element of E, not merely a basis.) Lenstra and Schoof (1987) proved that every extension of finite fields possesses a primitive normal basis, the case when F is a prime field having been settled by Harold Davenport.
Free elements
If K/F is a Galois extension and x in K generates a normal basis over F, then x is free in K/F. If x has the property that for every subgroup H of the Galois group G, with fixed field K^H, x is free for K/K^H, then x is said to be completely free in K/F. Every Galois extension has a completely free element.
See also
Dual basis in a field extension
Polynomial basis
Zech's logarithm
References
Linear algebra
Field (mathematics)
Abstract algebra
Cryptography | Normal basis | Mathematics,Engineering | 1,705 |
36,900,213 | https://en.wikipedia.org/wiki/Tablet%20hardness%20testing | Tablet hardness testing is a laboratory technique used by the pharmaceutical industry to determine the breaking point and structural integrity of a tablet and find out how it changes "under conditions of storage, transportation, packaging and handling before usage"
The breaking point of a tablet is based on its shape. It is similar to friability testing, but they are not the same thing.
Tablet hardness testers first appeared in the 1930s. In the 1950s, the Strong-Cobb tester was introduced. It was patented by Robert Albrecht on July 21, 1953, and used an air pump. The tablet breaking force was based on arbitrary units referred to as Strong-Cobbs. The new tester gave readings that were inconsistent with those given by the older testers. Later, electro-mechanical testing machines were introduced. They often include mechanisms like motor drives, and the ability to send measurements to a computer or printer.
There are 2 main processes to test tablet hardness: compression testing and 3 point bend testing. For compression testing, the analyst generally aligns the tablet in a repeatable way, and the tablet is squeezed between a fixed and a moving jaw. The first machines continually applied force with a spring and screw thread until the tablet started to break. When the tablet fractured, the hardness was read with a sliding scale.
List of common hardness testers
There are several devices used to perform this task:
The Monsanto tester was developed 50 years ago. The design consists of "a barrel containing a compressible spring held between 2 plungers". The tablet is placed on the lower plunger, and the upper plunger is lowered onto it.
The Strong-Cobb tester forces an anvil against a stationary platform. Results are viewed from a hydraulic gauge. The results are very similar to that of the Monsanto tester.
The Pfizer tester compresses tablet between a holding anvil and a piston connected to a force-reading gauge when its plier-like handles are gripped.
The Erweka tester tests a tablet placed on the lower anvil and a weight moving along a rail transmits pressure slowly to the tablet.
The Dr.Schleuniger Pharmatron tester operates in a horizontal position. An electric motor drives an anvil to compress a tablet at a constant rate. The tablet is pushed against a stationary anvil until it fractures. A reading is taken from a scale indicator.
Kraemer Elektronik's tablet testing system was the first automatic tablet hardness testing system for auto-regulation at tablet presses, invented by German mechanical engineer Mr. Norbert Kraemer in Darmstadt, Germany. The tablets are separated by a patented feeder chute and moved on a horizontal starwheel through different testing stations. The Kraemer Elektronik automatic tablet testing system measures weight, thickness, diameter/length, width and hardness of tablets and capsules.
Units of measurement
According to the International System of Units, the units of measurement of tablet hardness mostly follow standards used in materials testing.
Kilogram (kg) – The kilogram is recognized by the SI system as the primary unit of mass.
Newton (N) – The newton is the SI unit of force and the standard for tablet hardness testing. 9.807 newtons correspond to 1 kilogram of force (at one g, Earth surface gravity).
Pound (lb) – Technically a unit of force but can also be used for mass under earth gravity. Sometimes used for tablet strength testing in North America, but it is not an SI unit. 1 kilogram = 2.204 pounds.
Kilopond (kp) – Not to be confused with a pound. A unit of force also called a kilogram of force. Still used today in some applications, but not recognized by the SI system. 1 kilopond = 1 kgf.
Strong-Cobb (SC) – An ad hoc unit of force which is a legacy of one of the first tablet hardness testing machines. Although the SC is arbitrarily based on the dial reading of the Strong-Cobb hardness tester, it was recognized as the international standard for tablet hardness from the 1950s until it was superseded by testers using SI units in the 1980s. 1 Strong-Cobb represented roughly 0.7 kilogram of force, or about 7 newtons (see the conversion sketch below). The Strong-Cobb has a very unusual name for a unit of measurement, since it is named after the company, Strong-Cobb Inc. The inventor of the hardness tester was Robert Albrecht, the plant engineer for the Strong-Cobb Company. He sold the patent to the company for $1.00.
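A small sketch of converting force readings between the units above; the Strong-Cobb factor used here is the approximate historical value quoted in the list, not an exact constant, and the helper is purely illustrative.

KGF_TO_NEWTON = 9.807    # 1 kilogram of force (kilopond) in newtons
LBF_TO_NEWTON = 4.448    # 1 pound of force in newtons
SC_TO_NEWTON = 7.0       # 1 Strong-Cobb, roughly, per the description above

def to_newtons(value, unit):
    factors = {"N": 1.0, "kp": KGF_TO_NEWTON, "lbf": LBF_TO_NEWTON, "SC": SC_TO_NEWTON}
    return value * factors[unit]

print(to_newtons(15, "kp"))   # a 15 kp hardness reading expressed in newtons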
Sources
Further reading
American Society for the Testing of Materials (ASTM), Designation: E4–07, 'Standard Practices for Force Verification of Testing Machines'.
Hardness tests
Measuring instruments
Laboratory techniques
Pharmacy | Tablet hardness testing | Chemistry,Materials_science,Technology,Engineering | 995 |
21,786,078 | https://en.wikipedia.org/wiki/Chlorine-37 | Chlorine-37 is one of the two stable isotopes of chlorine, the other being chlorine-35. Its nucleus contains 17 protons and 20 neutrons for a total of 37 nucleons. Chlorine-37 accounts for 24.23% of natural chlorine, with chlorine-35 accounting for the remaining 75.77%, giving chlorine atoms in bulk an apparent atomic weight of about 35.45.
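As a check on these figures, the bulk value follows from the abundance-weighted mean of the two isotopic masses (approximately 34.969 u for chlorine-35 and 36.966 u for chlorine-37): 0.7577 × 34.969 u + 0.2423 × 36.966 u ≈ 35.45 u.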
Remarkably, solar neutrinos were discovered by an experiment (Homestake Experiment) using a radiochemical method based on chlorine-37 transmutation.
Neutrino detection
One of the historically important radiochemical methods of solar neutrino detection is based on inverse electron capture triggered by the absorption of an electron neutrino. Chlorine-37 transmutes into argon-37 via the reaction
³⁷Cl + νe → ³⁷Ar + e⁻.
Argon-37 then decays via electron capture (half-life 35 d) into chlorine-37 via the reaction
³⁷Ar + e⁻ → ³⁷Cl + νe.
These last reactions involve Auger electrons of specific energies. The detection of these electrons confirms that a neutrino event occurred. Detection methods involve several hundred thousand liters of carbon tetrachloride (CCl4) or tetrachloroethylene (C2Cl4) stored in underground tanks.
Occurrence
The representative terrestrial abundance of chlorine-37 is 24.22(4)% of chlorine atoms, with a normal range of 24.14–24.36% of chlorine atoms. When measuring deviations in isotopic composition, the usual reference point is "Standard Mean Ocean Chloride" (SMOC), although a NIST Standard Reference Material (975a) also exists. SMOC is known to be around 24.219% chlorine-37 and to have an atomic weight of around 35.4525.
There is a known variation in the isotopic abundance of chlorine-37. This heavier isotope tends to be more prevalent in chloride minerals than in aqueous solutions such as seawater, although the isotopic composition of organochlorine compounds can vary in either direction from the SMOC standard in the range of several parts per thousand.
See also
Beta decay
Neutrino detection
Isotopic tracer
Isotopes of chlorine
References
Isotopes of chlorine | Chlorine-37 | Chemistry | 486 |
10,222,264 | https://en.wikipedia.org/wiki/Aircraft%20cabin | An aircraft cabin is the section of an aircraft in which passengers travel. Most modern commercial aircraft are pressurized, as cruising altitudes are high enough such that the surrounding atmosphere is too thin for passengers and crew to breathe.
In commercial air travel, particularly in airliners, cabins may be divided into several parts. These can include travel class sections in medium and large aircraft, areas for flight attendants, the galley, and storage for in-flight service. Seats are mostly arranged in rows and aisles. The higher the travel class, the more space is provided. Cabins of the different travel classes are often divided by curtains, sometimes called class dividers. Passengers are not usually allowed to visit higher travel class cabins in commercial flights.
Some aircraft cabins contain passenger entertainment systems. Short and medium haul cabins tend to have no or shared screens whereas long and ultra-long haul flights often contain personal screens.
Evolution
Business class is increasingly replacing first class: 70% of 777s delivered before 2008 had first-class cabins, while only 22% of new 777s and 787s delivered in 2017 had one.
Full-flat seats in business class rose from 65% of 777 deliveries in 2008 to nearly 100% of the 777s and 787s delivered in 2017, except for low-cost carriers, which have around 10% premium cabin on their widebodies.
The number of first-class seats has typically been halved over the past 5–10 years, from eight to four.
To differentiate themselves from business class, high-end first-class products have moved to full-height enclosures, as offered by Singapore Airlines, Emirates, and Etihad.
Business class became the equivalent of what first class was a few years ago.
In 2017, 80% of the 777s and 787s delivered had a separate premium economy with one or two fewer seats across than regular economy class.
In economy class, slimmer seats with composite frames and thinner upholstery can add legroom or allow more seating.
While ground-based or, more often, satellite internet connections are available at lower cost due to competition, only 25–30% of carriers outside the U.S. offer inflight connectivity.
LED lighting can support different scenarios like boarding, food service, shopping, branding or chronobiology through simulated sunset or sunrise.
First- and business-class are refurbished every 5–7 years compared to 6–10 years for economy.
A 337-seat cabin (36 business, 301 economy) in a 787-10 for Singapore Airlines costs $ million each.
Emirates invested over $15 million each to refurbish its 777-200LRs in a new two-class configuration, initially in 55 days and later in 35 days.
Mezzanine seating
In the mid-2000s, Formation Design Group proposed using the taller wide-body cabins to layer the bed and seat arrangements for higher density.
Revealed at Aircraft Interiors Expo 2012, Factorydesign devised a double-deck system of pods for 30% more density, between premium economy and business class.
In 2015, Airbus filed a patent for a double-deck business class cabin, to monetize the vertical space.
Cabin pressurization
Cabin pressurization is the active pumping of compressed air into the cabin of an aircraft in order to ensure the safety and comfort of the occupants. It becomes necessary whenever the aircraft reaches a certain altitude, since the natural atmospheric pressure would be too low to supply sufficient oxygen to the passengers. Without pressurization, one could suffer from altitude sickness including hypoxia.
If a pressurized aircraft suffers a pressurization failure above , it could be deemed an emergency. Should this situation occur, the aircraft should begin an emergency descent and oxygen masks should be activated for all occupants. In the majority of passenger aircraft, the passengers' oxygen masks are activated automatically if the cabin pressure falls below the atmospheric pressure equivalent of .
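To illustrate why pressurization is needed at cruise altitude, ambient pressure can be estimated with the barometric formula of the International Standard Atmosphere. The short Python sketch below is illustrative only; it assumes the ISA troposphere model (sea-level pressure 101.325 kPa, temperature 288.15 K, lapse rate 6.5 K/km) and typical altitudes rather than any particular aircraft's certified limits.

```python
# Estimate ambient pressure at altitude with the ISA troposphere model.
# Illustrative sketch only; the model is valid below about 11 km.
P0, T0 = 101325.0, 288.15                 # sea-level pressure (Pa), temperature (K)
L, g = 0.0065, 9.80665                    # lapse rate (K/m), gravity (m/s^2)
R, M = 8.3145, 0.0289644                  # gas constant, molar mass of air (kg/mol)

def pressure_pa(altitude_m: float) -> float:
    """ISA pressure in pascals at the given altitude in metres."""
    return P0 * (1 - L * altitude_m / T0) ** (g * M / (R * L))

for alt_ft in (8000, 35000):              # typical cabin altitude vs. cruise altitude
    p = pressure_pa(alt_ft * 0.3048)
    print(f"{alt_ft:>6} ft: {p / 1000:5.1f} kPa ({100 * p / P0:.0f}% of sea level)")
```

At a typical cruise altitude the ambient pressure is roughly a quarter of its sea-level value, which is why the cabin is kept at a much lower equivalent altitude.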
Travel class
First class
The first class section of an airplane is the class with the best service, and it is typically the highest priced. The services offered are superior to those in business class, and they are available on only a small number of long flights.
First class is characterized by a larger amount of space between seats (including seats that can be converted into beds), a personal TV set, high-quality food and drink, personalized service, privacy, and complimentary items for travelers (e.g. pajamas, shoes and toiletries). Passengers in this class have a separate check-in, access to the airline's first-class lounge, preferred boarding, or private transportation between the terminal and the plane. Due to its high cost, few airlines offer this service.
Business class
Business class is more expensive, but it also offers more amenities to travelers than the classes below it. These may include better food, wider entertainment options, more comfortable seats with more room to recline and more legroom, among others.
Premium economy class
Premium economy class is a travel class offered by some airlines in order to provide a better flying experience to the economy traveler, but for much less money than business class.
It is often limited to a few extras such as more legroom, as well as complimentary food and drinks.
On board Air Canada, Premium Economy comes with wider seats (3 inches wider on the Boeing 777-300, 2 inches wider on the Boeing 787), more recline (3 inches more than economy), a fold-down foot rest, an amenity kit, premium food and drinks on long-haul international flights, and much more legroom.
Economy class
Economy class is the airline travel class with the lowest ticket price, as the level of comfort is lower than that of the other classes. This class is primarily characterized by the short distance between each seat, and a smaller variety of food and entertainment.
VIP configuration
A VIP configuration of an aircraft has enclosed sections, separated from the seated passengers, for use by select passengers as office space, a meeting area and, notably, sleeping quarters.
The most notable is Air Force One, with a private sleeping area, office space and conference rooms for the president of the United States.
See also
Gaspers
Shirt-sleeve environment
Uncontrolled decompression
Wide-body aircraft interiors
References
External links
Rooms | Aircraft cabin | Engineering | 1,247 |
25,777,451 | https://en.wikipedia.org/wiki/Chloroplast%20DNA | Chloroplast DNA (cpDNA), also known as plastid DNA (ptDNA) is the DNA located in chloroplasts, which are photosynthetic organelles located within the cells of some eukaryotic organisms. Chloroplasts, like other types of plastid, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. The first complete chloroplast genome sequences were published in 1986, Nicotiana tabacum (tobacco) by Sugiura and colleagues and Marchantia polymorpha (liverwort) by Ozeki et al. Since then, tens of thousands of chloroplast genomes from various species have been sequenced.
Molecular structure
Chloroplast DNAs are circular, and are typically 120,000–170,000 base pairs long. They can have a contour length of around 30–60 micrometers, and have a mass of about 80–130 million daltons.
Most chloroplasts have their entire chloroplast genome combined into a single large ring, though those of dinophyte algae are a notable exception—their genome is broken up into about forty small plasmids, each 2,000–10,000 base pairs long. Each minicircle contains one to three genes, but blank plasmids, with no coding DNA, have also been found.
Chloroplast DNA has long been thought to have a circular structure, but some evidence suggests that chloroplast DNA more commonly takes a linear shape. Over 95% of the chloroplast DNA in corn chloroplasts has been observed to be in branched linear form rather than individual circles.
Inverted repeats
Many chloroplast DNAs contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC).
The inverted repeats vary widely in length, ranging from 4,000 to 25,000 base pairs each. Inverted repeats in plants tend to be at the upper end of this range, each being 20,000–25,000 base pairs long.
The inverted repeat regions usually contain three ribosomal RNA and two tRNA genes, but they can be expanded or reduced to contain as few as four or as many as over 150 genes.
While a given pair of inverted repeats are rarely completely identical, they are always very similar to each other, apparently resulting from concerted evolution.
The inverted repeat regions are highly conserved among land plants, and accumulate few mutations. Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceæ), suggesting that they predate the chloroplast, though some chloroplast DNAs like those of peas and a few red algae have since lost the inverted repeats. Others, like the red alga Porphyra flipped one of its inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast DNAs which have lost some of the inverted repeat segments tend to get rearranged more.
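As a rough computational illustration of what an inverted repeat is, the sketch below scans a toy sequence for a window whose reverse complement occurs elsewhere downstream. The sequence, window size and function names are invented for the example; real plastome annotation relies on dedicated alignment tools rather than this naive search.

```python
# Toy scan for inverted repeats: find windows whose reverse complement
# also occurs further along the sequence.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def find_inverted_repeats(genome: str, window: int = 20):
    """Yield (i, j) pairs where genome[i:i+window] reappears in
    reverse-complement orientation starting at position j."""
    for i in range(len(genome) - window + 1):
        probe = reverse_complement(genome[i:i + window])
        j = genome.find(probe, i + window)       # search only downstream
        if j != -1:
            yield i, j

repeat = "ATGGCCTTAGCGATCGTACG"                  # stands in for one IR copy
genome = "AAAA" + repeat + "CCCCGGGGTTTT" + reverse_complement(repeat) + "AAAA"
print(list(find_inverted_repeats(genome, window=len(repeat))))   # [(4, 36)]
```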
Nucleoids
Each chloroplast contains around 100 copies of its DNA in young leaves, declining to 15–20 copies in older leaves. They are usually packed into nucleoids which can contain several identical chloroplast DNA rings. Many nucleoids can be found in each chloroplast.
Though chloroplast DNA is not associated with true histones, in red algae, a histone-like chloroplast protein (HC) coded by the chloroplast DNA that tightly packs each chloroplast DNA ring into a nucleoid has been found.
In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of a chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma.
Gene content and plastid gene expression
More than 5000 chloroplast genomes have been sequenced and are accessible via the NCBI organelle genome database. The first chloroplast genomes were sequenced in 1986, from tobacco (Nicotiana tabacum) and liverwort (Marchantia polymorpha). Comparison of the gene sequences of the cyanobacteria Synechocystis to those of the chloroplast genome of Arabidopsis provided confirmation of the endosymbiotic origin of the chloroplast. It also demonstrated the significant extent of gene transfer from the cyanobacterial ancestor to the nuclear genome.
In most plant species, the chloroplast genome encodes approximately 120 genes. The genes primarily encode core components of the photosynthetic machinery and factors involved in their expression and assembly. Across species of land plants, the set of genes encoded by the chloroplast genome is fairly conserved. This includes four ribosomal RNAs, approximately 30 tRNAs, 21 ribosomal proteins, and 4 subunits of the plastid-encoded RNA polymerase complex that are involved in plastid gene expression. The large Rubisco subunit and 28 photosynthetic thylakoid proteins are encoded within the chloroplast genome.
Chloroplast genome reduction and gene transfer
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer.
As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes whereas cyanobacteria often have more than 1500 genes in their genome. The parasitic Pilostyles have even lost their plastid genes for tRNA. In contrast, there are only a few known instances where genes have been transferred to the chloroplast from various donors, including bacteria.
Endosymbiotic gene transfer is how we know about the lost chloroplasts in many chromalveolate lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provide evidence that the diatom ancestor (probably the ancestor of all chromalveolates too) had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants.
Proteins encoded by the chloroplast
Of the approximately three-thousand proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called retrograde signaling.
Protein synthesis
Protein synthesis within chloroplasts relies on an RNA polymerase coded by the chloroplast's own genome, which is related to RNA polymerases found in bacteria. Chloroplasts also contain a mysterious second RNA polymerase that is encoded by the plant's nuclear genome. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
RNA editing in plastids
RNA editing is the insertion, deletion, and substitution of nucleotides in an mRNA transcript prior to translation to protein. The highly oxidative environment inside chloroplasts increases the rate of mutation, so post-transcriptional repairs are needed to conserve functional sequences. The chloroplast editosome substitutes C→U and U→C at very specific locations on the transcript. This can change the codon for an amino acid or restore a non-functional pseudogene by adding an AUG start codon or removing a premature UAA stop codon.
The editosome recognizes and binds to cis sequence upstream of the editing site. The distance between the binding site and editing site varies by gene and proteins involved in the editosome. Hundreds of different PPR proteins from the nuclear genome are involved in the RNA editing process. These proteins consist of 35-mer repeated amino acids, the sequence of which determines the cis binding site for the edited transcript.
Basal land plants such as liverworts, mosses and ferns have hundreds of different editing sites while flowering plants typically have between thirty and forty. Parasitic plants such as Epifagus virginiana show a loss of RNA editing resulting in a loss of function for photosynthesis genes.
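A toy sketch of the C→U case described above, in which editing restores an AUG start codon, is shown below. The transcript and the edit position are invented for illustration; real edit sites are gene-specific and recognised by PPR proteins.

```python
# Toy illustration of plastid C->U RNA editing restoring a start codon.
def apply_edits(transcript: str, edit_sites: dict) -> str:
    """Apply single-base substitutions at the given 0-based positions."""
    bases = list(transcript)
    for pos, new_base in edit_sites.items():
        bases[pos] = new_base
    return "".join(bases)

pre_mrna = "ACGGCUAUCUAA"                  # begins ACG: no start codon yet
edited = apply_edits(pre_mrna, {1: "U"})   # C->U editing at position 1
print(edited)                              # AUGGCUAUCUAA, now begins with AUG
```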
DNA replication
Leading model of cpDNA replication
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing replication machinery to replicate the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination, the loss of an amino group, is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine (H). Hypoxanthine can bind to cytosine, and when the HC base pair is replicated, it becomes a GC (thus, an A → G base change).
In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
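A minimal sketch of how such an A → G gradient might be quantified is given below, assuming one has an ancestral-versus-derived alignment of a plastome region; the sequences, window size and simple per-window count are all simplifications made for illustration.

```python
# Count A->G changes per window between an ancestral and a derived sequence.
# A count rising toward one end suggests that end spent longer single-stranded,
# i.e. lies closer to where the replication fork opened.
def a_to_g_gradient(ancestral: str, derived: str, window: int = 10):
    assert len(ancestral) == len(derived)
    counts = []
    for start in range(0, len(ancestral), window):
        pairs = zip(ancestral[start:start + window], derived[start:start + window])
        counts.append(sum(1 for a, d in pairs if a == "A" and d == "G"))
    return counts

ancestral = "AATATTAAGA" "CATAAATCAG" "AAATAAAGAA"   # 30 bp toy alignment
derived   = "AATATTAAGA" "CATGAATCAG" "GAATGAAGGA"   # more A->G toward the right
print(a_to_g_gradient(ancestral, derived))           # [0, 1, 3]
```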
Alternative model of replication
One of the main competing models for cpDNA asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more still contain complex structures that scientists do not yet understand; however, the predominant view today is that most cpDNA is circular. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. This shortcoming is one of the biggest for the linear structure theory.
Protein targeting and import
The movement of so many chloroplast genes to the nucleus means that many chloroplast proteins that were supposed to be translated in the chloroplast are now synthesized in the cytoplasm. This means that these proteins must be directed back to the chloroplast, and imported through at least two chloroplast membranes.
Curiously, around half of the protein products of transferred genes aren't even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products get directed to the secretory pathway (though many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane, and therefore topologically outside of the cell, because to reach the chloroplast from the cytosol, you have to cross the cell membrane, just like if you were headed for the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway).
Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle.
Cytoplasmic translation and N-terminal transit sequences
Polypeptides, the precursors of proteins, are chains of amino acids. The two ends of a polypeptide are called the N-terminus, or amino end, and the C-terminus, or carboxyl end. For many (but not all) chloroplast proteins encoded by nuclear genes, cleavable transit peptides are added to the N-termini of the polypeptides, which are used to help direct the polypeptide to the chloroplast for import (N-terminal transit peptides are also used to direct polypeptides to plant mitochondria).
N-terminal transit sequences are also called presequences because they are located at the "front" end of a polypeptide—ribosomes synthesize polypeptides from the N-terminus to the C-terminus.
Chloroplast transit peptides exhibit huge variation in length and amino acid sequence. They can be from 20 to 150 amino acids long—an unusually long length, suggesting that transit peptides are actually collections of domains with different functions. Transit peptides tend to be positively charged, rich in hydroxylated amino acids such as serine, threonine, and proline, and poor in acidic amino acids like aspartic acid and glutamic acid. In an aqueous solution, the transit sequence forms a random coil.
Not all chloroplast proteins include a N-terminal cleavable transit peptide though. Some include the transit sequence within the functional part of the protein itself. A few have their transit sequence appended to their C-terminus instead. Most of the polypeptides that lack N-terminal targeting sequences are the ones that are sent to the outer chloroplast membrane, plus at least one sent to the inner chloroplast membrane.
Phosphorylation, chaperones, and transport
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, ATP energy can be used to phosphorylate, or add a phosphate group to many (but not all) of them in their transit sequences. Serine and threonine (both very common in chloroplast transit sequences—making up 20–30% of the sequence) are often the amino acids that accept the phosphate group. The enzyme that carries out the phosphorylation is specific for chloroplast polypeptides, and ignores ones meant for mitochondria or peroxisomes.
Phosphorylation changes the polypeptide's shape, making it easier for 14-3-3 proteins to attach to the polypeptide. In plants, 14-3-3 proteins only bind to chloroplast preproteins. It is also bound by the heat shock protein Hsp70 that keeps the polypeptide from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized and imported into the chloroplast.
The heat shock protein and the 14-3-3 proteins together form a cytosolic guidance complex that makes it easier for the chloroplast polypeptide to get imported into the chloroplast.
Alternatively, if a chloroplast preprotein's transit peptide is not phosphorylated, it can still attach to a heat shock protein or to Toc159. These complexes can bind to the TOC complex on the outer chloroplast membrane using GTP energy.
The translocon on the outer chloroplast membrane (TOC)
The TOC complex, or translocon on the outer chloroplast membrane, is a collection of proteins that imports preproteins across the outer chloroplast envelope. Five subunits of the TOC complex have been identified—two GTP-binding proteins Toc34 and Toc159, the protein import tunnel Toc75, plus the proteins Toc64 and Toc12.
The first three proteins form a core complex that consists of one Toc159, four to five Toc34s, and four Toc75s that form four holes in a disk 13 nanometers across. The whole core complex weighs about 500 kilodaltons. The other two proteins, Toc64 and Toc12, are associated with the core complex but are not part of it.
Toc34 and 33
Toc34 is an integral protein in the outer chloroplast membrane that's anchored into it by its hydrophobic C-terminal tail. Most of the protein, however, including its large guanosine triphosphate (GTP)-binding domain projects out into the stroma.
Toc34's job is to catch some chloroplast preproteins in the cytosol and hand them off to the rest of the TOC complex. When GTP, an energy molecule similar to ATP, attaches to Toc34, the protein becomes much more able to bind to many chloroplast preproteins in the cytosol. The chloroplast preprotein's presence causes Toc34 to break GTP into guanosine diphosphate (GDP) and inorganic phosphate. This loss of GTP makes the Toc34 protein release the chloroplast preprotein, handing it off to the next TOC protein. Toc34 then releases the depleted GDP molecule, probably with the help of an unknown GDP exchange factor. A domain of Toc159 might be the exchange factor that carries out the GDP removal. The Toc34 protein can then take up another molecule of GTP and begin the cycle again.
Toc34 can be turned off through phosphorylation. A protein kinase drifting around on the outer chloroplast membrane can use ATP to add a phosphate group to the Toc34 protein, preventing it from being able to receive another GTP molecule, inhibiting the protein's activity. This might provide a way to regulate protein import into chloroplasts.
Arabidopsis thaliana has two homologous proteins, AtToc33 and AtToc34 (The At stands for Arabidopsis thaliana), which are each about 60% identical in amino acid sequence to Toc34 in peas (called psToc34). AtToc33 is the most common in Arabidopsis, and it is the functional analogue of Toc34 because it can be turned off by phosphorylation. AtToc34 on the other hand cannot be phosphorylated.
Toc159
Toc159 is another GTP binding TOC subunit, like Toc34. Toc159 has three domains. At the N-terminal end is the A-domain, which is rich in acidic amino acids and takes up about half the protein length. The A-domain is often cleaved off, leaving an 86 kilodalton fragment called Toc86. In the middle is its GTP binding domain, which is very similar to the homologous GTP-binding domain in Toc34. At the C-terminal end is the hydrophilic M-domain, which anchors the protein to the outer chloroplast membrane.
Toc159 probably works a lot like Toc34, recognizing proteins in the cytosol using GTP. It can be regulated through phosphorylation, but by a different protein kinase than the one that phosphorylates Toc34. Its M-domain forms part of the tunnel that chloroplast preproteins travel through, and seems to provide the force that pushes preproteins through, using the energy from GTP.
Toc159 is not always found as part of the TOC complex—it has also been found dissolved in the cytosol. This suggests that it might act as a shuttle that finds chloroplast preproteins in the cytosol and carries them back to the TOC complex. There isn't a lot of direct evidence for this behavior though.
A family of Toc159 proteins, Toc159, Toc132, Toc120, and Toc90, has been found in Arabidopsis thaliana. They vary in the length of their A-domains, which is completely absent in Toc90. Toc132, Toc120, and Toc90 seem to have specialized functions in importing substrates such as nonphotosynthetic preproteins, and cannot replace Toc159.
Toc75
Toc75 is the most abundant protein on the outer chloroplast envelope. It is a transmembrane tube that forms most of the TOC pore itself. Toc75 is a β-barrel channel lined by 16 β-pleated sheets. The hole it forms is about 2.5 nanometers wide at the ends, and shrinks to about 1.4–1.6 nanometers in diameter at its narrowest point—wide enough to allow partially folded chloroplast preproteins to pass through.
Toc75 can also bind to chloroplast preproteins, but is a lot worse at this than Toc34 or Toc159.
Arabidopsis thaliana has multiple isoforms of Toc75 that are named by the chromosomal positions of the genes that code for them. AtToc75 III is the most abundant of these.
The translocon on the inner chloroplast membrane (TIC)
The TIC translocon, or translocon on the inner chloroplast membrane, is another protein complex that imports proteins across the inner chloroplast envelope. Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Like the TOC translocon, the TIC translocon has a large core complex surrounded by some loosely associated peripheral proteins like Tic110, Tic40, and Tic21.
The core complex weighs about one million daltons and contains Tic214, Tic100, Tic56, and Tic20 I, possibly three of each.
Tic20
Tic20 is an integral protein thought to have four transmembrane α-helices. It is found in the 1 million dalton TIC complex. Because it is similar to bacterial amino acid transporters and the mitochondrial import protein Tim17 (translocase on the inner mitochondrial membrane), it has been proposed to be part of the TIC import channel. There is no in vitro evidence for this though. In Arabidopsis thaliana, it is known that for about every five Toc75 proteins in the outer chloroplast membrane, there are two Tic20 I proteins (the main form of Tic20 in Arabidopsis) in the inner chloroplast membrane.
Unlike Tic214, Tic100, or Tic56, Tic20 has homologous relatives in cyanobacteria and nearly all chloroplast lineages, suggesting it evolved before the first chloroplast endosymbiosis. Tic214, Tic100, and Tic56 are unique to chloroplastidan chloroplasts, suggesting that they evolved later.
Tic214
Tic214 is another TIC core complex protein, named because it weighs just under 214 kilodaltons. It is 1786 amino acids long and is thought to have six transmembrane domains on its N-terminal end. Tic214 is notable for being coded for by chloroplast DNA, more specifically the first open reading frame ycf1. Tic214 and Tic20 together probably make up the part of the one million dalton TIC complex that spans the entire membrane. Tic20 is buried inside the complex while Tic214 is exposed on both sides of the inner chloroplast membrane.
Tic100
Tic100 is a nuclear-encoded protein that is 871 amino acids long. The 871 amino acids collectively weigh slightly less than 100 thousand daltons, and since the mature protein probably does not lose any amino acids when it is imported into the chloroplast (it has no cleavable transit peptide), it was named Tic100. Tic100 is found at the edges of the 1 million dalton complex on the side that faces the chloroplast intermembrane space.
Tic56
Tic56 is also a nuclear encoded protein. The preprotein its gene encodes is 527 amino acids long, weighing close to 62 thousand daltons; the mature form probably undergoes processing that trims it down to something that weighs 56 thousand daltons when it gets imported into the chloroplast. Tic56 is largely embedded inside the 1 million dalton complex.
Tic56 and Tic100 are highly conserved among land plants, but they don't resemble any protein whose function is known. Neither has any transmembrane domains.
See also
List of sequenced plastomes
Mitochondrial DNA
References
Cell anatomy
Chromosomes
DNA
Plant genes
Photosynthesis | Chloroplast DNA | Chemistry,Biology | 5,898 |
12,461,912 | https://en.wikipedia.org/wiki/C4H2O4 | {{DISPLAYTITLE:C4H2O4}}
The molecular formula C4H2O4 (molar mass: 114.06 g/mol, exact mass: 113.9953 u) may refer to:
Acetylenedicarboxylic acid, or butynedioic acid
Squaric acid, or quadratic acid
Molecular formulas | C4H2O4 | Physics,Chemistry | 78 |
691,626 | https://en.wikipedia.org/wiki/Irrationality | Irrationality is cognition, thinking, talking, or acting without rationality.
Irrationality often has a negative connotation, as thinking and actions that are less useful or more illogical than other more rational alternatives. The concept of irrationality is especially important in Albert Ellis's rational emotive behavior therapy, where it is characterized specifically as the tendency and leaning that humans have to act, emote and think in ways that are inflexible, unrealistic, absolutist and most importantly self-defeating and socially defeating and destructive.
However, irrationality is not always viewed as a negative. Much subject matter in literature can be seen as an expression of human longing for the irrational. The Romantics valued irrationality over what they perceived as the sterile, calculating and emotionless philosophy which they believed had been brought about by the Age of Enlightenment and the Industrial Revolution. The Dada and Surrealist art movements embraced irrationality as a means to "reject reason and logic". André Breton, for example, argued for a rejection of pure logic and reason, which are seen as responsible for many contemporary social problems.
See also
Absurdism
Irrationalism
Notes
References
Stuart Sutherland Irrationality: Why We Don't Think Straight, 1992, reissued 2007 by Pinter & Martin
Lisa Bortolotti, Irrationality, Cambridge, Polity Press, 2014
External links
Craig R. M. McKenzie. Rational models as theories – not standards – of behavior. Trends in Cognitive Sciences Vol.7 No.9 September 2003
REBT-CBT NET – Internet Guide to Rational Emotive Behavior Therapy
Reasoning
Human behavior
Error
Symptoms and signs of mental disorders | Irrationality | Biology | 331 |
67,179,878 | https://en.wikipedia.org/wiki/Karen%20Hao | Karen Hao is an American journalist and data scientist. Currently a contributing writer for The Atlantic and previously a foreign correspondent based in Hong Kong for The Wall Street Journal and senior artificial intelligence editor at the MIT Technology Review, she is best known for her coverage on AI research, technology ethics and the social impact of AI. Hao also co-produces the podcast In Machines We Trust and writes the newsletter The Algorithm.
Previously, she worked at Quartz as a tech reporter and data scientist and was an application engineer at the first startup to spin out of X Development. Hao's writing has also appeared in Mother Jones, Sierra Magazine, The New Republic, and other publications.
Early life and education
Hao graduated from The Lawrenceville School in 2011. She studied at the Massachusetts Institute of Technology, graduating with a B.S. in mechanical engineering and a minor in energy studies in 2015. She is a native speaker of both English and Mandarin Chinese.
Career
Hao is known in the technology world for her coverage of new AI research findings and their societal and ethical impacts. Her writing has spanned research and issues regarding big tech data privacy, misinformation, deepfakes, facial recognition, and AI healthcare tools.
In March 2021, Hao published a piece that uncovered previously unknown information about how attempts by different teams at Facebook to combat misinformation using machine learning were impeded by, and constantly at odds with, Facebook's drive to grow user engagement. Upon its release, leaders at Facebook including Mike Schroepfer and Yann LeCun immediately criticized the piece through Twitter responses. AI researchers and AI ethics experts Timnit Gebru and Margaret Mitchell responded in support of Hao's writing and advocated for broader change and improvement.
Hao also co-produces the podcast In Machines We Trust, which discusses the rise of AI with people developing, researching, and using AI technologies. The podcast won the 2020 Front Page Award in investigative reporting.
As a data scientist, Hao occasionally creates data visualizations that have been featured in her work at the MIT Technology Review and elsewhere. In 2018, her "What is AI?" flowchart visualization was exhibited as an installation at the Museum of Applied Arts in Vienna.
She has been an invited speaker at TEDxGateway, the United Nations Foundation, EmTech, WNPR, and many other conferences and podcasts. Her TEDx talk discussed the importance of democratizing how AI is built.
In March 2022, she was hired by The Wall Street Journal to cover China technology and society, while being based in Hong Kong.
Selected awards and honors
2019 Webby Award nominee for best newsletter, as a writer of The Algorithm
2021 Front Page Award in investigative reporting, as a co-producer for In Machines We Trust
2021 Ambies Award nominee for best knowledge and science podcast, as a co-producer for In Machines We Trust
2021 Webby Award nominee for best technology podcast, as a co-producer for In Machines We Trust
References
Living people
Massachusetts Institute of Technology alumni
21st-century American newspaper editors
21st-century American women journalists
American newspaper reporters and correspondents
American women journalists of Asian descent
American women non-fiction writers
American journalists of Asian descent
Artificial intelligence people
Ethics of science and technology
Year of birth missing (living people)
Lawrenceville School alumni
The Wall Street Journal people | Karen Hao | Technology | 676 |
48,779,942 | https://en.wikipedia.org/wiki/Security%20visualisation | Security visualisation is a subject that broadly covers aspects of big data, visualisation, human perception and security. Each day, more and more data is collected in the form of log files, and it is often meaningless if it is not analyzed thoroughly. Big data mining techniques such as MapReduce help narrow down the search for meaning in vast amounts of data. Data visualisation is a data analytics technique used to engage the human brain in finding patterns in data.
Recognition of patterns also leads to the identification of anomalous patterns. Security visualisation helps a security analyst identify imminent vulnerabilities and attacks in a network. Simple visualisations like bar charts and pie charts are naïve and unintuitive when it comes to big data, so special, customized visual techniques like the choropleth map and the hive plot are often preferred for effective communication of big data. The book Applied Security Visualisation is an in-depth study of the relationship between security and data visualisation.
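As a toy illustration of the map-reduce style of aggregation mentioned above, the sketch below counts events per source IP from raw log lines before any visualisation step. The log format, field positions and counts are invented for the example; a real deployment would distribute the map and reduce phases across a cluster.

```python
# Toy map/reduce-style aggregation: count log events per source IP.
from collections import Counter
from itertools import chain

logs = [
    "10.0.0.5 login_failed",
    "10.0.0.5 login_failed",
    "10.0.0.9 login_ok",
    "10.0.0.5 port_scan",
]

def map_phase(line):
    ip = line.split()[0]
    yield (ip, 1)                       # emit one key/value pair per record

def reduce_phase(pairs):
    totals = Counter()
    for ip, count in pairs:
        totals[ip] += count             # sum the values for each key
    return totals

pairs = chain.from_iterable(map_phase(line) for line in logs)
print(reduce_phase(pairs))              # Counter({'10.0.0.5': 3, '10.0.0.9': 1})
```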
Sophisticated visualisations
Choropleth
Choropleth is a visualization that depicts the intensity of a quantity through color shading. It can be useful in finding areas of interest through the variations in color and therefore a human readers attention will be drawn to the area that requires security attention. A Choropleth map is a geographical map in which the states or counties are shaded to depict region of interest.
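A minimal sketch of a security-flavoured choropleth using the plotly library is shown below; the countries and event counts are invented, and the exact options may differ between plotly versions.

```python
# Choropleth sketch: shade countries by the number of observed attack events.
import pandas as pd
import plotly.express as px

events = pd.DataFrame({
    "country": ["United States", "Germany", "Brazil", "India"],
    "attack_events": [1200, 340, 560, 890],   # invented counts
})

fig = px.choropleth(events, locations="country", locationmode="country names",
                    color="attack_events", color_continuous_scale="Reds",
                    title="Observed attack events by country")
fig.show()
```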
Hive plot
Computer networks are often troublesome to visualize because they end up looking complicated and difficult to understand. A force diagram used to depict a computer network often ends up looking like a ball of hair when the number of nodes is large, making force diagrams unsuitable for unorganised big data. A hive plot is considered an improvement on force-directed graph drawing that is especially suited for big data. Nodes are arranged along three or more axes and edges between nodes are drawn as Bézier curves.
Heatmap
A heatmap is a visual technique similar to the choropleth map. However, a heatmap is shaded with gradient colors, which are usually computed using a normalized heatmap function. These maps can be used to recognize areas that require attention through varying shades and patterns of color gradient.
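A small matplotlib sketch of such a heatmap is given below, shading failed-login counts by host and hour; the hosts and counts are random placeholders standing in for parsed log data.

```python
# Heatmap sketch: failed logins per host per hour, shaded by intensity.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hosts = ["web-1", "web-2", "db-1", "mail-1"]
counts = rng.poisson(lam=3, size=(len(hosts), 24))   # hosts x 24 hours
counts[2, 13:16] += 40                               # plant an anomaly on db-1

fig, ax = plt.subplots(figsize=(8, 2.5))
im = ax.imshow(counts, aspect="auto", cmap="hot")
ax.set_yticks(range(len(hosts)))
ax.set_yticklabels(hosts)
ax.set_xlabel("Hour of day")
fig.colorbar(im, ax=ax, label="Failed logins")
plt.show()
```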
ELISHA
ELISHA is a visual anomaly detection system. The tool aims at identifying multiple origin autonomous system (MOAS) conflicts in a Border Gateway Protocol network. A MOAS conflict is identified by changes in color of the connected nodes in a BGP network.
References
External links
Expert-interviews led analysis of EEVi — A model for effective visualization in cyber-security by Aneesha Sethi and Gary Wills. DOI:10.1109/VIZSEC.2017.8062195
EEVi – Framework for Evaluating the Effectiveness of Visualization in Cyber-Security by Aneesha Sethi, Federica Paci and Gary Wills
Big data | Security visualisation | Technology | 582 |
42,427,166 | https://en.wikipedia.org/wiki/Tomahawk%20%28software%29 | Tomahawk was a free, open-source cross-platform music player for Windows, macOS and Linux. An Android beta client version was launched in June 2016. It focuses on the aggregation of the user's music library across local and network collections as well as streaming services. The project was marked as abandoned by its authors on May 10, 2017.
About
Tomahawk has a familiar iTunes-like interface. The left column offers access to playlists, search history, favorite tracks, charts, and other categories.
Features
Tomahawk allows users to install plug-ins for several different music services. These include:
Spotify
YouTube
Jamendo
Grooveshark
Last.fm
SoundCloud
ownCloud
4shared
Dilandau
Official.fm
Ampache
Subsonic
Google Play Music
Beats Music
Beets
Rdio (currently Android only)
Deezer (currently Android only)
Toma.hk and Hatchet
In 2013, Tomahawk launched Toma.hk, a website that generates embeddable HTML code for songs and artists, allowing direct links to playable tracks online.
In March 2014, Tomahawk launched its cross-platform sync and social platform called "Hatchet" in beta. It provides users the ability to sync playlists and "loved" tracks across multiple devices. The service was planned to allow users to see what other users are listening to and share playlists through the Tomahawk application.
The last build was released in April 2015, after which progress stalled. In May 2017, developer Anton Romanov confirmed that the project is abandoned.
See also
List of Linux audio software
References
External links
Free audio software
Audio software
Linux media players
MacOS media players
Windows media players
Online music database clients
Audio player software that uses Qt | Tomahawk (software) | Engineering | 354 |
62,624,523 | https://en.wikipedia.org/wiki/Epichlo%C3%AB%20hordelymi | Epichloë hordelymi is a hybrid asexual species in the fungal genus Epichloë.
A systemic and seed-transmissible grass symbiont first described in 2013, Epichloë hordelymi is a natural allopolyploid of Epichloë bromicola and a strain in the Epichloë typhina complex.
Epichloë hordelymi is found in Europe, where it has been identified in the grass species Hordelymus europaeus.
References
hordelymi
Fungi described in 2013
Fungi of Europe
Fungus species | Epichloë hordelymi | Biology | 113 |
9,865,334 | https://en.wikipedia.org/wiki/Astron%20%28wristwatch%29 | The Astron wristwatch, formally known as the Seiko Quartz-Astron 35SQ, was the world's first "quartz clock" wristwatch. It is now registered on the List of IEEE Milestones as a key advance in electrical engineering.
History
The Astron was unveiled in Tokyo on December 25, 1969, after ten years of research and development at Suwa Seikosha (currently named Seiko Epson), a manufacturing company of the Seiko Group. Within one week 100 gold watches had been sold, at a retail price of 450,000 yen each (at the time, equivalent to the price of a medium-sized car). Essential elements included an XY-type quartz oscillator running at 8,192 Hz (8192 = 2¹³), a hybrid integrated circuit, and a phase-locked ultra-small stepping motor to turn its hands. According to Seiko, the Astron was accurate to ±5 seconds per month, or one minute per year, and its battery life was one year or longer.
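The power-of-two frequency is what keeps the electronics simple: a chain of divide-by-two stages reduces the crystal frequency to the 1 Hz pulses that drive the stepping motor, and thirteen halvings of 8,192 Hz give exactly 1 Hz. The snippet below merely illustrates that arithmetic; it describes the generic quartz-watch scheme rather than Seiko's documented circuit.

```python
# Thirteen divide-by-two stages take the 8,192 Hz crystal down to 1 Hz.
freq = 8192              # Hz, equal to 2**13
stages = 0
while freq > 1:
    freq //= 2           # each flip-flop stage halves the frequency
    stages += 1
print(stages, "stages ->", freq, "Hz")   # 13 stages -> 1 Hz
```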
Anniversaries
In March 2010, at the Baselworld watch fair and trade show in Switzerland, Seiko previewed a limited edition new version of the watch and related designs of the original Astron watch, commemorating the fortieth anniversary in December 2009 of the debut of the Astron watch.
Second generation
Seiko used the "Astron" trademark again as "Seiko Astron" when it released a satellite radio-wave solar-powered wristwatch using GPS satellites in 2012.
50th Anniversary Model
In 2019, Seiko released several limited edition Astron models to commemorate the 50th anniversary of the quartz Astron. Among them, the model produced in a limited edition of 50 pieces (3.8 million yen) mimics the original case design and has a rough engraving pattern by craftsmen belonging to Epson's "Micro Artist Workshop".
References
Further reading
"Seiko Quartz 35 SQ: The Seiko 35 SQ Astron was the first quartz watch on the market", Smithsonian Institution website.
Thompson, Joe, "1969: Seiko’s Breakout Year", WatchTime Magazine, December 20, 2009
External links
Products introduced in 1969
History of electronic engineering
Watch models
Seiko
Epson
Japanese inventions | Astron (wristwatch) | Engineering | 455 |
57,512,705 | https://en.wikipedia.org/wiki/Aissa%20Wade | Aissa Wade is a Professor of Mathematics at the Pennsylvania State University. She was the President of the African Institute for Mathematical Sciences centre in Senegal (from 2016 to 2018).
Early life and education
Wade was born in Dakar, Senegal. She studied mathematics at Cheikh Anta Diop University and graduated in 1993. She had to leave Senegal to earn a Ph.D. as there were no opportunities in Africa. Wade earned her Ph.D. at the University of Montpellier in 1996. Her thesis, "Normalisation formelle de structures de Poisson", considered symplectic geometry. Her doctoral advisor was Jean Paul Dufour.
Career
Wade became a postdoctoral researcher at the Abdus Salam International Centre for Theoretical Physics, where she worked on conformal Dirac structures. She held visiting faculty positions at University of North Carolina at Chapel Hill, African University of Science and Technology and Paul Sabatier University. Wade joined Pennsylvania State University and was appointed full professor in 2016.
She served as a managing editor of The African Diaspora Journal of Mathematics. She is editor of Afrika Mathematika. She is on the scientific committee of the NextEinstein forum, an initiative to connect science, society and policy in Africa. As the President of the African Institute for Mathematical Sciences, Wade was the first woman to hold this role. She has been awarded funding from the National Science Foundation to support the Senegal Workshop on Geometric Structures. She has been involved with American Association for the Advancement of Science activities to enhance African STEM research, including the provision of evidence-based metrics, case studies and policy recommendations. In 2017 Wade was named a fellow of the African Academy of Sciences.
Wade's accomplishments earned her recognition by Mathematically Gifted & Black, where she was featured as a Black History Month 2020 Honoree.
References
1967 births
Living people
Senegalese mathematicians
Fellows of the African Academy of Sciences
Women mathematicians
People from Dakar
Cheikh Anta Diop University alumni
University of Montpellier alumni
University of North Carolina at Chapel Hill faculty
Pennsylvania State University faculty
21st-century American mathematicians
Geometers | Aissa Wade | Mathematics | 419 |
30,278,857 | https://en.wikipedia.org/wiki/Vitexin%20%28data%20page%29 | This page provides supplementary chemical data on vitexin.
Material Safety Data Sheet
The handling of this chemical may require notable safety precautions. It is highly recommended that you obtain the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as eChemPortal, and follow its directions.
Sigma Aldrich MSDS from SDSdata.org
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Vitexin (data page) | Chemistry | 86 |
15,281,113 | https://en.wikipedia.org/wiki/Shield%20budding | Shield budding, also known as T-budding, is a technique of grafting to change varieties of fruit trees. Typically used in fruit tree propagation, it can also be used for many other kinds of nursery stock. An extremely sharp knife is necessary; specialty budding knives are on the market. A budding knife is a small knife with a type of spatula at the other end of the handle. The rootstock or stock plant may be cut off above the bud at budding, or one may wait until it is certain that the bud is growing.
Fruit tree budding is done when the bark "slips," i.e. the cambium is moist and actively growing. Rootstocks are young trees, either seedlings as Mazzard cherries for many cherry varieties, or clonal rootstocks (usually propagated by layering) when one wants highly consistent plants with well defined characteristics. The popular Malling-Merton series of rootstocks for apples was developed in England, and are used today for the majority of the commercial apple orchard trees.
T-budding is the most common style, whereby a T-shaped slit is made in the stock plant, and the knife is flexed from side to side in the lower slit to loosen up the bark. Scion wood is selected from the chosen variety, as young, actively growing shoots. Usually, buds at the tip, or at the older parts of the shoot are discarded, and only two to four buds are taken for use. The buds are in the leaf axils. They may be so tiny as to be almost unnoticeable.
Holding the petiole of the leaf as a handle, an oval of the main stem is sliced off, including the petiole and the bud. This is immediately slid into the T on the rootstock, before it can dry out. The joined bud and rootstock are held by a winding of rubber band, which will hold it until sealed, but the band will deteriorate in the sunlight so it soon breaks and does not pinch new growth, girdling the shoot.
The percentage of "take" of the buds depends on the natural compatibility of the stock and scion, the sharpness of the knife, and the skill of the budder; even the experts will have some buds die.
References
Plant reproduction | Shield budding | Biology | 482 |
2,113,111 | https://en.wikipedia.org/wiki/PerkinElmer | PerkinElmer, Inc., previously styled Perkin-Elmer, is an American global corporation that was founded in 1937 and originally focused on precision optics. Over the years it went into and out of several different businesses via acquisitions and divestitures; these included defense products, semiconductors, computer systems, and others. By the 21st century, PerkinElmer was focused in the business areas of diagnostics, life science research, food, environmental and industrial testing. Its capabilities include detection, imaging, informatics, and service. It produced analytical instruments, genetic testing and diagnostic tools, medical imaging components, software, instruments, and consumables for multiple end markets. PerkinElmer was part of the S&P 500 Index and operated in 190 countries.
Over its history, PerkinElmer has been split in two twice. In 1999, PerkinElmer merged with EG&G, with the ongoing Analytical Instruments Division of Perkin-Elmer keeping that name, while the life sciences division of the company became the separate PE Corporation. In 2022, a split of PerkinElmer resulted in one part, comprising applied, food and enterprise services businesses, being sold to the private equity firm New Mountain Capital for $2.45 billion and thus no longer being public but kept the PerkinElmer name. The other part, comprising life sciences and diagnostics businesses, remained public but required a new name, which in 2023 was announced as Revvity, Inc.
History
Founding
Richard Perkin was attending the Pratt Institute in Brooklyn to study chemical engineering, but left after a year to try his hand on Wall Street. Still interested in the sciences, he gave public lectures on various topics. Charles Elmer ran a firm that supplied court reporters and was nearing retirement when he attended one of Perkin's lectures on astronomy being held at the Brooklyn Institute of Arts and Sciences.
The two struck up a friendship over their shared interest in astronomy, and eventually came up with the idea of starting a firm to produce precision optics. Perkin raised US$15,000 from his relatives, while Elmer added US$5,000, and the firm was initially set up as a partnership on 19 April 1937. Initially, they worked from a small office in Manhattan, but soon opened a production facility in Jersey City. They incorporated the growing firm on 13 December 1939. A further move to Glenbrook in Connecticut in 1941 was quickly followed by another move to Norwalk, Connecticut, where the company remained until 2000. The opening of World War II led to significant expansion as the company produced optics for range finders, bombsights, and reconnaissance systems. This work led to the U.S. Navy awarding them the first "E" for Excellence award in 1942.
Perkin-Elmer retained a strong presence in the military field through the 1960s, and at the same time was significantly involved with OAO-3 (a 36-inch ultraviolet space telescope) and Skylab; its major contribution to the Apollo program was the sensor that saved the astronauts during the Apollo 13 failure. They were a primary supplier of the optical systems used in many reconnaissance platforms, first in aircraft and high-altitude balloons, and then in reconnaissance satellites. A significant advance was 1955's Transverse Panoramic Camera, which took images on wide frames that provided single-frame images from horizon to horizon from an aircraft flying at 40,000 ft altitude. Such systems remained a major part of the company's income, capped by the installation of laser retroreflectors on the Moon as part of the Apollo 11 mission.
Elmer died at age 83 in 1954, and the company began trading shares over the counter. The company was listed on the New York Stock Exchange on 13 December 1960. Perkin remained as president and CEO until June 1961, when Robert Lewis, previously of Argus Camera and Sylvania Electric Products, took over these roles. Perkin remained the chairman of the board until his death in 1969.
Semiconductor manufacturing
In 1967, the U.S. Air Force asked Perkin-Elmer to produce an all-optical "masking" system for semiconductor fabrication. Previous systems used a pattern, the "mask", which was pressed onto the surface of the silicon wafer as part of the photolithography process. Small bits of dirt or photoresist would stick the mask and ruin the patterning for subsequent chips, and it was not uncommon for the vast majority of the chips from a given wafer to malfunction. The Air Force, which by the late 1960s was highly reliant on integrated circuits, desired a more reliable system.
Perkin-Elmer responded with the Microprojector, which was essentially a large photocopier system. The mask was placed in a holder and never touched the surface of the chip. Instead, the image was projected onto the surface. Making this work required a complex 16-element lens system that focussed a narrow range of wavelengths of light onto the mask. The remainder of the light from the 1,000 watt mercury-vapor lamp was filtered out.
Harold Hemstreet was convinced that the concept could be simplified, and Abe Offner began the development of a system using mirrors instead of lenses, which did not suffer from the multispectral focussing problems of lenses. The result of this research was the Projection Scanning Aligner, or Micralign, which made chip making an assembly-line task and improved the number of working chips from perhaps 10% to 70% overnight. Chip prices plummeted as a result, with examples like the MOS 6502 selling for about US$20 while the previous generation of designs like the Motorola 6800 sold for around US$250.
The Micralign was so successful that Perkin-Elmer became the largest single vendor in the chip-making equipment market within three years. In spite of this success, the company had largely lost its position by the 1980s due to its late response to the introduction of the stepping aligner, which allowed a single small mask to be stepped across the wafer rather than requiring a single large mask covering the entire wafer. The company never regained its lead, and sold the division to The Silicon Valley Group.
Lab equipment
In the early 1990s, the company partnered with Cetus Corporation (and later Hoffmann-La Roche) to pioneer the polymerase chain reaction (PCR) equipment industry. An analytical-instruments business was also operated from 1954 to 2001 in Germany by Bodenseewerk Perkin-Elmer GmbH, located in Überlingen on Lake Constance, and in England by Perkin Elmer Ltd at Beaconsfield in Buckinghamshire.
Computer Systems Division
Perkin-Elmer was involved in computer manufacture for a time. The Perkin-Elmer Computer Systems Division was formed through the purchase of Interdata, Inc., an independent computer manufacturer, in 1973–1974 for some US$63 million. This merger made Perkin-Elmer's annual sales rise to over US$200 million. This was also known as Perkin-Elmer's Data Systems Group. The 32-bit computers were very similar to an IBM System/370, but ran the OS/32MT operating system.
The Computer Systems Division had a large presence in Monmouth County, New Jersey, with some 1,700 staff making it one of the county's largest private employers. Its plant in Oceanport had 800 employees alone. By the early-to-mid-1980s the computing group had sales of $259 million; while profitable, it had reduced visibility within the computing industry because it was owned by a diversified parent.
The Wollongong Group provided the commercial version of the Unix port to the Interdata 7/32 hardware, known as Edition 7 Unix. The port was originally done by the University of Wollongong in New South Wales, Australia, and was the first UNIX port to hardware other than the Digital Equipment Corporation PDP family. By 1982, the Wollongong Group Edition 7 Unix and Programmer's Workbench (PWB) were available on models such as the Perkin-Elmer 3210 and 3240 minicomputers.
In 1985, the computing division of Perkin-Elmer was spun off as Concurrent Computer Corporation, with the goal of giving it and the parallel-processing product a clearer identity within the computer industry. At first, the new company was a wholly owned subsidiary of Perkin-Elmer, with the intention of offering a minority stake in a public stock sale. This was done in February 1986, with Perkin-Elmer retaining an 82 percent stake in Concurrent. In 1988, there was a merger between Concurrent Computer Corporation and MASSCOMP; as part of the deal, Perkin-Elmer's share in Concurrent was bought out. At that point, Perkin-Elmer said it had completed its multi-year process of exiting the computer market, allowing it to focus on its primary business segments.
1999
Modern PerkinElmer traces its history back to a merger between divisions of what had been two S&P 500 companies, EG&G Inc. of Wellesley, Massachusetts and Perkin-Elmer of Norwalk, Connecticut. On May 28, 1999, the non-government side of EG&G Inc. purchased the Analytical Instruments Division of Perkin-Elmer, its traditional business segment, for US$425 million, also assuming the Perkin-Elmer name and forming the new PerkinElmer company, with new officers and a new board of directors. At the time, EG&G made products for diverse industries including automotive, medical, aerospace and photography.
The old Perkin-Elmer Board of Directors and Officers remained at that reorganized company under its new name, PE Corporation. It had been the Life Sciences division of Perkin-Elmer, and its two component tracking-stock business groups, Celera Genomics and PE Biosystems, were centrally involved in the highest-profile biotechnology events of the decade, the intense race against the Human Genome Project consortium, which then resulted in the genomics segment of the technology bubble. Perkin-Elmer purchased the Boston operations of NEN Life Sciences in 2001.
Recently
In 1992, the company merged with Applied Biosystems. In 1997 they merged with PerSeptive Biosystems. On July 14, 1999, the new analytical instruments maker PerkinElmer cut 350 jobs, or 12%, in its cost reduction reorganization. In 2006, PerkinElmer sold off the Fluid Sciences division for approximately US$400 million; the aim of the selloff was to increase the strategic focus on its higher-growth health sciences and photonic markets. Following on from the selloff, a number of small businesses were acquired, including Spectral Genomics, Improvision, Evotec-Technologies, Euroscreen, ViaCell, and Avalon Instruments. The brand "Evotec-Technologies" remains the property of Evotec, the former owner company. PerkinElmer had a license to use the brand until the end of year 2007.
PerkinElmer has continued to expand its interest in medicine with the acquisition of clinical laboratories. In July 2006, it acquired NTD Labs, located on Long Island, New York. The laboratory specializes in prenatal screening during the first trimester of pregnancy. In 2007, it purchased ViaCell, Inc. for US$300 million, which included its offices in Boston and a cord blood storage facility in Kentucky near Cincinnati. The company was renamed ViaCord.
In 2001, PerkinElmer acquired Packard Bioscience Inc from its majority shareholder, Dick McKernen. The acquisition also came with Agincourt Technologies Inc and consolidated PerkinElmer's position in laboratory robotics, in particular liquid-handling robots, which were to prove essential for the high-throughput sequencing needed for the Human Genome Project.
In March 2008, PerkinElmer purchased Pediatrix Screening (formerly Neo Gen Screening), a laboratory located in Bridgeville, Pennsylvania specializing in screening newborns for various inborn errors of metabolism such as phenylketonuria, hypothyroidism, and sickle-cell disease. It renamed the laboratory PerkinElmer Genetics, Inc.
In May 2011, PerkinElmer announced the signature of an agreement to acquire CambridgeSoft, and the successful acquisition of ArtusLabs.
In September 2011, PerkinElmer bought Caliper Life Sciences for US$600 million.
In December 2014 PerkinElmer acquired Perten Instruments for US$266 million to expand in food testing.
In January 2016, PerkinElmer acquired Swedish firm Vanadis Diagnostics.
In February 2016 PerkinElmer acquired Delta Instruments.
In January 2017, the company announced it would acquire the Indian in vitro diagnostic company, Tulip Diagnostics. In May 2017, the company acquired Euroimmun Medical Laboratory Diagnostics for approximately US$1.3 billion.
In 2018, the company acquired Australian biotech company, RHS Ltd., Chinese manufacturer of analytical instruments, Shanghai Spectrum Instruments Co. Ltd., and France-based company Cisbio Bioassays, which specializes in diagnostics and drug discovery solutions.
In November 2020, PerkinElmer announced it would acquire Horizon Discovery Group for around US$383 million.
In March 2021, PerkinElmer announced that the company has completed its acquisition of Oxford Immunotec Global PLC (Oxford Immunotec). In May of the same year, the business announced it would purchase Nexcelom Bioscience for $260 million and Immunodiagnostic Systems Holdings PLC for $155 million. In June the company announced it would acquire SIRION Biotech, a specialist in viral vector gene delivery methods. In July the business announced it would acquire BioLegend for $5.25 billion.
Acquisition history
PerkinElmer (Est. 1935, modern company formed from EG&G Inc. purchase of Perkin-Elmer, Analytical Instruments Division)
Applied Biosystems (Merged 1992)
PerSeptive Biosystems. (Acq. 1997)
Spectral Genomics
Improvision
Evotec-Technologies
Euroscreen
ViaCell
Avalon Instruments
Packard Bioscience Inc (Acq. 2001)
NTD Labs (Acq. 2006)
ViaCell, Inc. (Acq. 2007)
Pediatrix Screening (Acq. 2008)
CambridgeSoft (Acq. 2011)
ArtusLabs (Acq. 2011)
Caliper Life Sciences (Acq. 2011)
Zymark (Acq. 2003)
NovaScreen Biosciences Corporation (Acq. 2005)
Xenogen Corporation (Acq. 2006)
Xenogen Biosciences
Cambridge Research & Instrumentation Inc. (Acq. 2010)
Perten Instruments (Acq. 2014)
Vanadis Diagnostics (Acq. 2016)
Delta Instruments (Acq. 2016)
Tulip Diagnostics (Acq. 2017)
Euroimmun Medical Laboratory Diagnostics (Acq. 2017)
RHS Ltd (Acq. 2018)
Shanghai Spectrum Instruments Co. Ltd (Acq. 2018)
Cisbio Bioassays (Acq. 2018)
Horizon Discovery Group (Acq. 2020)
Oxford Immunotec Global PLC (Acq. 2021)
Nexcelom Bioscience (Acq. 2021)
Immunodiagnostic Systems Holdings PLC (Acq. 2021)
SIRION Biotech (Acq. 2021)
BioLegend (Acq. 2021)
BioLegend Japan KK
BioLegend UK Ltd
BioLegend GmbH
Programs
Hubble optics project
Perkin-Elmer's Danbury Optical System unit was commissioned to build the optical components of the Hubble Space Telescope. Construction of the main mirror began in 1979 and was completed in 1981. The polishing process ran over budget and behind schedule, producing significant friction with NASA. Due to a miscalibrated null corrector, the primary mirror was also found to have a significant spherical aberration after reaching orbit on STS-31. Perkin-Elmer's own calculations and measurements revealed the primary mirror's surface discrepancies, but the company chose to withhold that data from NASA. A NASA investigation heavily criticized Perkin-Elmer for management failings, disregarding written quality guidelines, and ignoring test data that revealed the miscalibration. Corrective optics were installed on the telescope during the first Hubble service and repair mission, STS-61. The correction package, the Corrective Optics Space Telescope Axial Replacement (COSTAR), placed corrective mirrors in the light paths of the remaining instruments and replaced an existing instrument; the aberration of the primary mirror itself remained uncorrected.
The company agreed to pay US$15 million, essentially forgoing its fees in polishing the mirror, to avoid a threatened liability lawsuit under the False Claims Act by the Federal government. Hughes Aircraft, which acquired the Danbury Optical System unit one month after the launch of the telescope, paid US$10 million. The Justice Department asserted that the companies should have known about the flawed testing. The trade group Aerospace Industries Association protested amid concerns in the aerospace industry that companies might be held liable for failed equipment.
KH-9 Hexagon
Perkin-Elmer built the optical systems for the KH-9 Hexagon series of spy satellites at a facility in Danbury, Connecticut.
References
External links
PerkinElmer Announces New Business Alignment Focused on Improving Human and Environmental Health
SEC filings for PerkinElmer, Inc.
Photographs from the Perkin-Elmer-Applera Collection Science History Institute
Digital Collections (Extensive collection of print photographs and slides depicting the staff, facilities, and instrumentation of the Perkin-Elmer Corporation, predominantly dating from the 1960s and 1970s)
Companies formerly listed on the New York Stock Exchange
Design companies established in 1931
Technology companies of the United States
Life science companies based in Massachusetts
Instrument-making corporations
Companies based in Waltham, Massachusetts
Technology companies established in 1931
Defunct computer companies of the United States
Defunct computer hardware companies
Optics manufacturing companies of the United States | PerkinElmer | Biology | 3,679 |
194,085 | https://en.wikipedia.org/wiki/Sleeping%20Beauty | Sleeping Beauty (French: La Belle au bois dormant, or The Beauty Sleeping in the Wood; German: Dornröschen, or Little Briar Rose), also titled in English as The Sleeping Beauty in the Woods, is a fairy tale about a princess cursed by an evil fairy to sleep for a hundred years before being awakened by a handsome prince. A good fairy, knowing the princess would be frightened if alone when she wakes, uses her wand to put every living person and animal in the palace and forest asleep, to awaken when the princess does.
The earliest known version of the tale is found in the French narrative Perceforest, written between 1330 and 1344. Another was the Catalan poem Frayre de Joy e Sor de Paser. Giambattista Basile wrote another, "Sun, Moon, and Talia", for his collection Pentamerone, published posthumously in 1634–36 and adapted by Charles Perrault in Histoires ou contes du temps passé in 1697. The version collected and printed by the Brothers Grimm was orally transmitted from the Perrault version, while including its own attributes such as the thorny rose hedge and the curse.
The Aarne-Thompson classification system for fairy tales lists Sleeping Beauty as a Type 410: it includes a princess who is magically forced into sleep and later woken, reversing the magic. The fairy tale has been adapted countless times throughout history and retold by modern storytellers across various media.
Origin
Early contributions to the tale include the medieval courtly romance Perceforest. In this tale, a princess named Zellandine falls in love with a man named Troylus. Her father sends him to perform tasks to prove himself worthy of her, and while he is gone, Zellandine falls into an enchanted sleep. Troylus finds her, and rapes her in her sleep. They conceive and when their child is born, the child draws from her finger the flax that caused her sleep. She realizes from the ring Troylus left her that he was the father, and Troylus later returns to marry her. Another early literary predecessor is a Provençal versified novel.
The second part of the Sleeping Beauty tale, in which the princess and her children are almost put to death but instead are hidden, may have been influenced by Genevieve of Brabant. Even earlier influences come from the story of the sleeping Brynhild in the Volsunga saga and the tribulations of saintly female martyrs in early Christian hagiography conventions. Following these early renditions, the tale was first published by Italian poet Giambattista Basile who lived from 1575 to 1632.
Plot
The folktale begins with a princess whose parents are told by a wicked fairy that their daughter will die when she pricks her finger on a particular item. In Basile's version, the princess pricks her finger on a piece of flax. In Perrault's and the Grimm Brothers' versions, the item is a spindle. The parents rid the kingdom of these items in the hopes of protecting their daughter, but the prophecy is fulfilled regardless. Instead of dying, as was foretold, the princess falls into a deep sleep. After some time, she is found by a prince and is awakened.
In Giambattista Basile's version of Sleeping Beauty, Sun, Moon, and Talia, the sleeping beauty, Talia, falls into a deep sleep after getting a splinter of flax in her finger. She is discovered in her palace by a wandering king, who "carrie[s] her to a bed, where he gather[s] the first fruits of love." He abandons her there after the assault and she later gives birth to twins while still unconscious.
According to Maria Tatar, there are versions of the story that include a second part to the narrative that details the couple's troubles after their union; some folklorists believe the two parts were originally separate tales.
The second part begins after the prince and princess have had children. Through the course of the tale, the princess and her children are introduced in some way to another woman from the prince's life. This other woman is not fond of the prince's new family, and calls a cook to kill the children and serve them for dinner. Instead of obeying, the cook hides the children and serves livestock. Next, the other woman orders the cook to kill the princess. Before this can happen, the other woman's true nature is revealed to the prince and then she is subjected to the very death that she had planned for the princess. The princess, prince, and their children live happily ever after.
Basile's narrative
In Giambattista Basile's dark version of Sleeping Beauty, Sun, Moon, and Talia, the sleeping beauty is named Talia. By asking wise men and astrologers to predict her future after her birth, her father, who is a great Lord, learns that Talia will be in danger from a splinter of flax. Talia, now grown, sees an old woman spinning outside her window. Intrigued by the sight of the twirling spindle, Talia invites the woman over and takes the distaff from her hand to stretch the flax. Tragically, the splinter of flax gets embedded under her nail, and she is put to sleep. After Talia falls asleep, she is seated on a velvet throne and her father, to forget the misery of what he thinks is her death, closes the doors and abandons the house forever. One day, while a king is walking by, one of his falcons flies into the house. The king knocks, hoping to be let in by someone, but no one answers, and he decides to climb in with a ladder. He finds Talia alive but unconscious, and impregnates her. Afterwards, he leaves her in bed and goes back to his kingdom. Though Talia is unconscious, she gives birth to twins, one of whom keeps sucking her finger. Talia awakens because the twin has sucked out the flax that got stuck in her finger. When she wakes up, she discovers that she is a mother and has no idea what happened to her. One day, the king decides he wants to go see Talia again. He goes back to the palace to find her awake and a mother to his twins. He informs her of who he is, what has happened, and they end up bonding. After a few days, the king has to leave to go back to his realm but promises Talia that he will return to take her to his kingdom.
When he arrives back in his kingdom, his wife hears him saying "Talia, Sun, and Moon" in his sleep. She bribes and threatens the king's secretary to tell her what is going on. After the queen learns the truth, she pretends she is the king and writes to Talia asking her to send the twins because he wants to see them. Talia sends her twins to the "king" and the queen tells the cook to kill the twins and make dishes out of them. She wants to feed the king his children; instead, the cook takes the twins to his wife and hides them. He then cooks two lambs and serves them as if they were the twins. Every time the king mentions how good the food is, the queen replies, "Eat, eat, you are eating of your own." Later, the queen invites Talia to the kingdom and is going to burn her alive, but the king appears and finds out what's going on with his children and Talia. He then orders that his wife be burned along with those who betrayed him. Since the cook actually did not obey the queen, the king thanks the cook for saving his children by giving him rewards. The story ends with the king marrying Talia and living happily ever after.
Perrault's narrative
Perrault's narrative is written in two parts, which some folklorists believe were originally separate tales, as they were in the Brothers Grimm's version, and were later joined together by Giambattista Basile and once more by Perrault. According to folklore editors Martin Hallett and Barbara Karasek, Perrault's tale is a much more subtle and pared down version than Basile's story in terms of the more immoral details. An example of this is depicted in Perrault's tale by the prince's choice to instigate no physical interaction with the sleeping princess when he discovers her.
At the christening of a king and queen's long-wished-for child, seven good fairies are invited to be the infant Princess's godmothers and give her gifts. The seven fairies attend the banquet at the palace and each is given a golden box containing golden utensils adorned with diamonds and rubies. Soon after, an old fairy enters the palace, overlooked because she has not left her tower in fifty years and everyone believed her to be cursed or dead. Nevertheless, the eighth fairy is seated and given a box of ordinary utensils. When she hears the eighth fairy muttering some threats, the seventh, fearing that the uninvited guest will harm the Princess, hides herself behind some curtains, so she can be the last to give a gift.
Six of the invited fairies offer their gifts of pure beauty, wit, grace, dance, song, and musical talent to the infant Princess. The eighth fairy, who is very angry about not being invited, curses the infant Princess so that she will one day prick her finger on a spindle of a spinning wheel and die. The seventh fairy then offers her gift: an attempt to reverse the evil fairy's curse, but she can only do so partially. Instead of dying, the Princess will fall into a deep sleep for 100 years and be awakened by a king's son.
The King then orders all spinning wheels in the kingdom banned and destroyed in an attempt to avert the eighth fairy's curse on his daughter. Fifteen or sixteen years pass and one day, when the king and queen are away, the Princess wanders through the palace rooms and comes upon an old woman (implied to be the evil fairy in disguise), spinning with her spindle. The Princess, who has never seen a spinning wheel before, asks the old woman if she can try it. The curse is fulfilled when the princess pricks her finger on the spindle and instantly falls into a deep sleep. The old woman cries for help and attempts are made to revive the princess. The king attributes this to fate and has the Princess carried to the finest room in the palace and placed upon a bed of gold and silver embroidered fabric. The seventh fairy arrives in her dragon-drawn chariot. Having great powers of foresight, the fairy sees that the Princess will awaken to distress when she finds herself alone, so the fairy puts everyone in the castle, except the King and Queen, to sleep. The King and Queen kiss their daughter goodbye and leave the castle to ban others from disturbing her, but the good fairy summons a forest of trees, brambles and thorns to spring up around the place, shielding it from the outside world.
A hundred years pass and a prince from another royal family spies the hidden castle during a hunting expedition. His attendants tell him differing stories regarding the castle until an old man recounts his father's words: within the castle lies a very beautiful princess who is doomed to sleep for a hundred years until a king's son comes and awakens her. The prince then braves the tall trees, brambles and thorns which part at his approach, and enters the castle. He passes the sleeping castle folk and comes across the chamber where the Princess lies asleep on the bed. Struck by the radiant beauty before him, he falls on his knees before her. The spell is broken, the princess awakens and bestows upon the prince a look "more tender than a first glance might seem to warrant" (in Perrault's original French tale, the prince does not kiss the princess to wake her up) then converses with the prince for a long time. Meanwhile, the servants of the castle awaken and go about their business. The prince and princess are later married by the chaplain in the castle chapel.
After marrying the Sleeping Beauty in secret, the Prince visits her for four years and she bears him two children, unbeknownst to his mother, who is an ogre. When his father, the King, dies, the Prince ascends the throne and he brings his wife, who is now twenty years old, and their two children - a four-year-old daughter named Morning (Aurore or Dawn in the original French) and a three-year-old son named Day (Jour in the original French) - to his kingdom.
One day, the new King must go to war against his neighbor, Emperor Contalabutte, and leaves his mother to govern the kingdom and look after his family. After her son leaves, the Ogress Queen Mother sends her daughter-in-law to a house secluded in the woods and orders her cook to prepare Morning with Sauce Robert for dinner. The kind-hearted cook substitutes a lamb for the princess, which satisfies the Queen Mother. She then demands Day, but the cook this time substitutes a kid for the prince, which also satisfies the Queen Mother. When the Ogress demands that he serve up the Sleeping Beauty, the latter substitutes a hind prepared with Sauce Robert, satisfying the Ogress, and secretly reuniting the young Queen with her children, who have been hidden by the cook's wife and maid. However, the Queen Mother soon discovers the cook's trick and she prepares a tub in the courtyard filled with vipers and other noxious creatures. The King returns home unexpectedly and the Ogress, her true nature having been exposed, throws herself into the tub and is fully consumed by the creatures. The King, young Queen, and children then live happily ever after.
Brothers Grimm's version
The Brothers Grimm included a variant of Sleeping Beauty, Little Briar Rose, in the first volume of Children's and Household Tales (published 1812). Their version ends when the prince arrives to wake Sleeping Beauty (named Rosamund) with a kiss and does not include the second part found in Basile's and Perrault's versions. The brothers considered rejecting the story on the grounds that it was derived from Perrault's version, but the presence of the Brynhild tale convinced them to include it as an authentically German tale. Their decision was notable because in none of the Teutonic myths, meaning the Poetic and Prose Eddas and the Volsunga saga, are the sleepers awakened with a kiss, a fact Jacob Grimm would have known since he wrote an encyclopedic volume on German mythology. Theirs is the only known German variant of the tale, and Perrault's influence is almost certain. In the original Brothers Grimm's version, the fairies are instead wise women.
The Brothers Grimm also included, in the first edition of their tales, a fragmentary fairy tale, "The Evil Mother-in-law". This story begins with the heroine, a married mother of two children, and her mother-in-law, who attempts to eat her and the children. The heroine suggests an animal be substituted in the dish, and the story ends with the heroine's worry that she cannot keep her children from crying and getting the mother-in-law's attention. Like many German tales showing French influence, it appeared in no subsequent edition.
Variations
The princess's name has varied from one adaptation to the other. In Sun, Moon, and Talia, she is named Talia (Sun and Moon being her twin children). She has no name in Perrault's story but her daughter is called "Aurore". The Brothers Grimm named her "Briar Rose" in their first collection. However, some translations of the Grimms' tale give the princess the name "Rosamond". Tchaikovsky's ballet and Disney's version named her Princess Aurora; however, in the Disney version, she is also called "Briar Rose" in her childhood, when she is being raised incognito by the good fairies.
Besides Sun, Moon, and Talia, Basile included another variant of this Aarne-Thompson type, The Young Slave, in his book, The Pentamerone. The Grimms also included a second, more distantly related one titled The Glass Coffin.
Italo Calvino included a variant in Italian Folktales, Sleeping Beauty and Her Children. In his version, the cause of the princess's sleep is a wish by her mother. As in Pentamerone, the prince rapes her in her sleep and her children are born. Calvino retains the element that the woman who tries to kill the children is the king's mother, not his wife, but adds that she does not want to eat them herself, and instead serves them to the king. His version came from Calabria, but he noted that all Italian versions closely followed Basile's.
In his More English Fairy Tales, Joseph Jacobs noted that the figure of the Sleeping Beauty was in common between this tale and the Romani tale The King of England and his Three Sons.
The hostility of the king's mother to his new bride is repeated in the fairy tale The Six Swans, and also features in The Twelve Wild Ducks, where the mother is modified to be the king's stepmother. However, these tales omit the attempted cannibalism.
Russian Romantic writer Vasily Zhukovsky wrote a versified work based on the theme of the princess cursed into a long sleep in his poem "Спящая царевна" (), published in 1832.
Interpretations
According to Maria Tatar, the Sleeping Beauty tale has been disparaged by modern-day feminists who consider the protagonist to have no agency and find her passivity to be offensive; some feminists have even argued for people to stop telling the story altogether.
Disney has received criticism for depicting both Cinderella and the Sleeping Beauty princess as "naïve and malleable" characters. Time Out dismissed the princess as a "delicate" and "vapid" character. Sonia Saraiya of Jezebel echoed this sentiment, criticizing the princess for lacking "interesting qualities" and ranking her as Disney's least feminist princess. Similarly, Bustle also ranked the princess as the least feminist Disney Princess, with author Chelsea Mize expounding, "Aurora literally sleeps for like three quarters of the movie… Aurora just straight-up has no agency, and really isn't doing much in the way of feminine progress." Leigh Butler of Tor.com went on to defend the character, writing, "Aurora’s cipher-ness in Sleeping Beauty would be infuriating if she were the only female character in it, but the presence of the Fairies and Maleficent allow her to be what she is without it being a subconscious statement on what all women are." Similarly, Refinery29 ranked Princess Aurora the fourth most feminist Disney Princess because, "Her aunts have essentially raised her in a place where women run the game." Despite being featured prominently in Disney merchandise, "Aurora has become an oft-forgotten princess", and her popularity pales in comparison to those of Cinderella and Snow White.
An example of the cosmic interpretation of the tale given by the nineteenth century solar mythologist school appears in John Fiske's Myths and Myth-Makers: “It is perhaps less obvious that winter should be so frequently symbolized as a thorn or sharp instrument... Sigurd is slain by a thorn, and Balder by a sharp sprig of mistletoe; and in the myth of the Sleeping Beauty, the earth-goddess sinks into her long winter sleep when pricked by the point of the spindle. In her cosmic palace, all is locked in icy repose, naught thriving save the ivy which defies the cold, until the kiss of the golden-haired sun-god reawakens life and activity.”
Media
"Sleeping Beauty" has been popular for many fairytale fantasy retellings. Some examples are listed below:
In film and television
La belle au Bois-Dormant (1902), a French silent film directed by Lucien Nonguet and Ferdinand Zecca.
La Belle au bois dormant (1908), a French silent film directed by Lucien Nonguet and Albert Capellani.
Dornröschen (1917), a German silent film directed by Paul Leni.
Dornröschen (1929), a German silent film directed by Dorothy Douglas.
Dornröschen (1936), a German film directed by Alf Zengerling.
Dornröschen (1941), a German stop-motion short directed by Ferdinand Diehl.
The Sleeping Princess (1939), a Walter Lantz Productions animated short parodying the original fairy tale.
A loose adaptation can be seen in a scene from the propaganda cartoon Education for Death, where Sleeping Beauty is a valkyrie representing Nazi Germany, and where the prince is replaced with Fuehrer Adolf Hitler in knights' armor. The short also parodies Richard Wagner's opera Siegfried.
Prinsessa Ruusunen (1949), a Finnish film directed by Edvin Laine and scored with Erkki Melartin's incidental music from 1912.
Dornröschen (1955), a West German film directed by Fritz Genschow.
Shirley Temple's Storybook (1958), episode The Sleeping Beauty, directed by Mitchell Leisen and starring Anne Helm, Judith Evelyn and Alexander Scourby.
Sleeping Beauty (1959), a Walt Disney animated film based on both Charles Perrault and the Brothers Grimm's versions. Featuring the original voices of Mary Costa as Princess Aurora, the Sleeping Beauty and Eleanor Audley as Maleficent.
Sleeping Beauty (Спящая красавица [Spjáščaja krasávica]) (1964), a filmed version of the ballet produced by the Kirov Ballet along with Lenfilm studios, starring Alla Sizova as Princess Aurora.
Dornröschen (1971), an East German film directed by Walter Beck.
Festival of Family Classics (1972–73), episode Sleeping Beauty, produced by Rankin/Bass and animated by Mushi Production.
Some Call It Loving (also known as Sleeping Beauty) (1973), directed by James B. Harris and starring Zalman King, Carol White, Tisa Farrow, and Richard Pryor, based on a short story by John Collier.
Manga Fairy Tales of the World (1976–79), 10-minute adaptation.
Jak se budí princezny (1978), a Czechoslovakian film directed by Václav Vorlíček.
World Famous Fairy Tale Series (Sekai meisaku dōwa) (1975–83) has a 9-minute adaptation, later reused in the U.S. edit of My Favorite Fairy Tales.
Faerie Tale Theatre (1983), episode Sleeping Beauty, directed by Jeremy Kagan and starring Christopher Reeve, Bernadette Peters and Beverly D'Angelo.
Goldilocks and the Three Bears/Rumpelstiltskin/Little Red Riding Hood/Sleeping Beauty (1984), direct-to-video featurette by Lee Mendelson Film Productions.
A 1986 episode of Brummkreisel had Kunibert (Hans-Joachim Leschnitz) demanding that he and his friends Achim (Joachim Kaps), Hops and Mops enact the story of Sleeping Beauty. Achim first compromises by incorporating Sleeping Beauty into his lesson about days of the week, and then finally he allows Kunibert to have his way; Hops played the princess, Kunibert played the prince, Mops played the wicked fairy and Achim played the brambles.
Sleeping Beauty (1987), a direct-to-television musical film directed by David Irving.
An episode of the series Grimm's Fairy Tale Classics (1987–89) is dedicated to Princess Briar Rose.
The Legend of Sleeping Brittany (1989), an episode of Alvin & the Chipmunks based on the fairy tale.
Briar-Rose or The Sleeping Beauty (1990), a Japanese/Czechoslovakian stop-motion animated featurette directed by Kihachiro Kawamoto.
Britannica's Tales Around the World (1990–91), features three variations of the story.
Sleeping Beauty (1991), a direct-to-video animated featurette produced by American Film Investment Corporation.
World Fairy Tale Series (Anime sekai no dōwa) (1995), anime television anthology produced by Toei Animation, has half-hour adaptation.
Sleeping Beauty (1995), a Japanese-American direct-to-video animated film by Jetlag Productions.
Wolves, Witches and Giants (1995–99), episode Sleeping Beauty, season 1 episode 12.
Happily Ever After: Fairy Tales for Every Child (1995), episode Sleeping Beauty, the classic story is told with a Hispanic cast, when Rosita is cast into a long sleep by Evelina, and later awakened by Prince Luis.
The Legend of Sleeping Beauty (La leggenda della bella addormentata) (1998), an Italian television series of 26 episodes, distributed by Mondo tv.
The Triplets (Les tres bessones/Las tres mellizas) (1997–2003), catalan animated series, season 1 episode 19.
Simsala Grimm (1999–2010), episode 9 of season 2.
Bellas durmientes (Sleeping Beauties) (2001), directed by Eloy Lozano, adapted from the Yasunari Kawabata novel.
Hello Kitty's Animation Theater (2001), features a 16-minute adaptation.
Shrek the Third (2007), a DreamWorks animated film directed by Chris Miller.
Dornröschen (2009), a German made-for-television film directed by Oliver Dieckmann starring Lotte Flack, François Goeske and Hannelore Elsner.
La belle endormie (The Sleeping Beauty) (2010), a film by Catherine Breillat.
Sleeping Beauty (2011), directed by Julia Leigh and starring Emily Browning, about a young girl who takes a sleeping potion and lets men have their way with her to earn extra money.
Once Upon a Time (2011), an ABC TV show with Sarah Bolger as Aurora and Julian Morris as Philip.
Sleeping Beauty (2014), a film by Rene Perez.
Sleeping Beauty (2014), a film by Casper Van Dien.
Maleficent (2014), a live-action reimagining of the Walt Disney movie starring Angelina Jolie as Maleficent and Elle Fanning as Princess Aurora.
Ever After High, episode Briar Beauty (2015), an animated Netflix series.
The Curse of Sleeping Beauty (2016), an American horror film directed by Pearry Reginald Teo.
Archie Campbell satirized the story with "Beeping Sleauty" in several Hee Haw television episodes.
Maleficent: Mistress of Evil (2019), the sequel to Maleficent (2014).
Avengers Grimm (2015) portrays an adult Sleeping Beauty with superpowers.
Sleeping Beauty is a main character of the "Neverafter" season of the tabletop role-playing game show Dimension 20. She is played by Siobhan Thompson. In this adaptation, she goes by the name Rosamund du Prix (2022–2023).
Awakening Sleeping Beauty (2025), an upcoming horror re-telling of the story set in The Twisted Childhood Universe, directed by Louisa Warren and starring Lora Hristova, Robbie Taylor and Charlotte Coleman as Princess Thalia, the Prince and Queen Primrose, respectively, along with Lila Lasso, Leah Glater, Sophie Rankin, Judy Tcherniak and Danielle Scott.
In literature
Sleeping Beauty (1830) and The Day-Dream (1842), two poems based on Sleeping Beauty by Alfred, Lord Tennyson.
The Rose and the Ring (1854), a satirical fantasy by William Makepeace Thackeray.
The Sleeping Beauty (1919), a poem by Mary Carolyn Davies about a failed hero who did not waken the princess, but died in the enchanted briars surrounding her palace.
The Sleeping Beauty (1920), a retelling of the fairy tale by Charles Evans, with illustrations by Arthur Rackham.
Briar Rose (Sleeping Beauty) (1971), a poem by Anne Sexton in her collection Transformations (1971), in which she re-envisions sixteen of the Grimm's Fairy Tales.
The Sleeping Beauty Quartet (1983–2015), four erotic novels written by Anne Rice under the pen name A.N. Roquelaure, set in a medieval fantasy world and loosely based on the fairy tale.
Beauty (1992), a novel by Sheri S. Tepper.
Briar Rose (1992), a novel by Jane Yolen.
Enchantment (1999), a novel by Orson Scott Card based on the Russian version of Sleeping Beauty.
Spindle's End (2000), a novel by Robin McKinley.
Clementine (2001), a novel by Sophie Masson.
Waking Rose (2007), novel by Regina Doman.
A Kiss in Time (2009), a novel by Alex Flinn.
The Sleeper and the Spindle (2012), a novel by Neil Gaiman.
The Gates of Sleep (2012), a novel by Mercedes Lackey from the Elemental Masters series set in Edwardian England.
Sleeping Beauty: The One Who Took the Really Long Nap (2018), a novel by Wendy Mass and the second book in the Twice Upon a Time series features a princess named Rose who pricks her finger and falls asleep for 100 years.
The Sleepless Beauty (2019), a novel by Rajesh Talwar setting the story in a small kingdom in the Himalayas.
Lava Red Feather Blue (2021), a novel by Molly Ringle involving a male/male twist on the Sleeping Beauty story.
Malice (2021), a novel by Heather Walter told by the Maleficent character's (Alyce's) POV and involving a woman/woman love story.
Misrule (2022), a novel by Heather Walter and sequel to Malice.
Immortality, a poem by Lisel Mueller in her Pulitzer Prize-winning book Alive Together.
In music
La Belle au Bois Dormant (1825), an opera by Michele Carafa.
La belle au bois dormant (1829), a ballet in four acts with book by Eugène Scribe, composed by Ferdinand Hérold and choreographed by Jean-Louis Aumer.
The Sleeping Beauty (1890), a ballet by Tchaikovsky.
Dornröschen (1902), an opera by Engelbert Humperdinck.
Pavane de la Belle au bois dormant (1910), the first movement of Ravel's Ma mère l'Oye.
The Sleeping Beauty (1992), song on album Clouds by the Swedish band Tiamat.
Sleeping Beauty Wakes (2008), an album by the American musical trio GrooveLily.
There Was A Princess Long Ago, a common nursery rhyme or singing game, typically sung while standing in a circle with accompanying actions, which retells the story of Sleeping Beauty in a summarised song.
Sleeping Beauty The Musical (2019), a two act musical with book and lyrics by Ian Curran and music by Simon Hanson and Peter Vint.
Hex (2021) is a musical with book by Tanya Ronder, music by Jim Fortune and lyrics by Rufus Norris that opened at the Royal National Theatre in December 2021.
In video games
Kingdom Hearts is a video game in which Maleficent is one of the main antagonists and Aurora is one of the Princesses of Heart together with the other Disney princesses.
Little Briar Rose (2019) is a point-and-click adventure inspired by the Brothers Grimm's version of the fairy tale.
SINoALICE (2017) is a mobile gacha game which features Sleeping Beauty as one of the main player characters. She has her own dark story-line which follows her unending desire to sleep, and crosses over with the other fairy-tale characters featured in the game.
Video game series Dark Parables adapted the tale as the plot of its first game, Curse of Briar Rose (2010).
See also
The Glass Coffin
Princess Aubergine
Rip Van Winkle
Snow White
The Sleeping Prince (fairy tale)
Sleeping Beauty problem, a mathematical puzzle based on the fairy tale
Notes
References
Further reading
Artal, Susana. "Bellas durmientes en el siglo XIV". In: Montevideana 10. Universidad de la Republica, Linardi y Risso. 2019. pp. 321–336.
Starostina, Aglaia. "Chinese Medieval Versions of Sleeping Beauty". In: Fabula, vol. 52, no. 3-4, 2012, pp. 189–206. https://doi.org/10.1515/fabula-2011-0017
de Vries, Jan. "Dornröschen". In: Fabula 2, no. 1 (1959): 110–121. https://doi.org/10.1515/fabl.1959.2.1.110
External links
Sleeping beauty in the woods, by Perrault, 1870 illustrated scanned book via Internet Archive
The Stalk of Flax adapted by Amy Friedman and Meredith Johnson
Illustrations of the fairy tale
A painting by John Wood, engraved by F. Bacon and with a poetical illustration by Letitia Elizabeth Landon in the Forget Me Not annual, 1837.
14th-century literature
Grimms' Fairy Tales
Female characters in fairy tales
European fairy tales
Fairy tales about fairies
Fairy tales about princes
Fairy tales about princesses
French fairy tales
Fictional princesses
Witchcraft in fairy tales
Works by Charles Perrault
European folklore characters
Textiles in folklore
Fiction about rape
Fiction about curses
Sleep in mythology and folklore
Fiction about uxoricide
ATU 400-459
Brunhild
Works about kissing | Sleeping Beauty | Biology | 7,033 |
44,459,690 | https://en.wikipedia.org/wiki/3-Hydroxyphenazepam | 3-Hydroxyphenazepam is a benzodiazepine with hypnotic, sedative, anxiolytic, and anticonvulsant properties. It is an active metabolite of phenazepam, as well as the active metabolite of the benzodiazepine prodrug cinazepam. Relative to phenazepam, 3-hydroxyphenazepam has diminished myorelaxant properties, but is about equivalent in most other regards. Like other benzodiazepines, 3-hydroxyphenazepam behaves as a positive allosteric modulator of the benzodiazepine site of the GABAA receptor with an EC50 value of 10.3 nM. It has been sold as a designer drug.
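As a rough illustration of what the quoted EC50 figure means, the sketch below evaluates a standard Hill-type concentration-response curve; the Hill coefficient of 1 is an assumed placeholder for illustration only, not a value reported for this compound.

```python
# Illustrative Hill-equation sketch of a concentration-response relationship.
# EC50 = 10.3 nM is the value quoted above for modulation of the GABA-A
# benzodiazepine site; the Hill coefficient n = 1 is an assumption made only
# for this illustration.

EC50_NM = 10.3
N_HILL = 1

def fractional_effect(conc_nm: float) -> float:
    """Fraction of the maximal modulatory effect at a given concentration (nM)."""
    return conc_nm ** N_HILL / (EC50_NM ** N_HILL + conc_nm ** N_HILL)

for conc in (1.0, 10.3, 100.0):
    print(f"{conc:6.1f} nM -> {fractional_effect(conc):.2f} of maximal effect")
# By definition, the response at the EC50 itself is 50% of maximal.
```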
See also
Lorazepam, licensed medication
Nifoxipam
Nitemazepam
References
Hypnotics
Anticonvulsants
Sedatives
Anxiolytics
Benzodiazepines
2-Chlorophenyl compounds
Bromoarenes
Human drug metabolites
Designer drugs
Lactims | 3-Hydroxyphenazepam | Chemistry,Biology | 237 |
20,198,736 | https://en.wikipedia.org/wiki/HR%208799 | HR 8799 is a roughly 30 million-year-old main-sequence star in the constellation of Pegasus. It has roughly 1.5 times the Sun's mass and 4.9 times its luminosity. It is part of a system that also contains a debris disk and at least four massive planets. These planets were the first exoplanets whose orbital motion was confirmed by direct imaging. The star is a Gamma Doradus variable: its luminosity changes because of non-radial pulsations of its surface. The star is also classified as a Lambda Boötis star, which means its surface layers are depleted in iron peak elements. It is the only known star which is simultaneously a Gamma Doradus variable, a Lambda Boötis type, and a Vega-like star (a star with excess infrared emission caused by a circumstellar disk).
Location
HR 8799 is a star that is visible to the naked eye. It has an apparent magnitude of 5.96 and is located inside the western edge of the Great Square of Pegasus, almost exactly halfway between Beta and Alpha Pegasi. The designation HR 8799 is the star's entry number in the Bright Star Catalogue.
Stellar properties
The star HR 8799 is a member of the Lambda Boötis (λ Boo) class, a group of peculiar stars with an unusual lack of "metals" (elements heavier than hydrogen and helium) in their upper atmosphere. Because of this special status, stars like HR 8799 have a very complex spectral type. The luminosity profile of the Balmer lines in the star's spectrum, as well as the star's effective temperature, best match the typical properties of an F0 V star. However, the strength of the calcium II K absorption line and the other metallic lines are more like those of an A5 V star. The star's spectral type therefore combines both classifications.
Age determination of this star shows some variation based on the method used. Statistically, for stars hosting a debris disk, the luminosity of this star suggests an age of about 20–150 million years. Comparison with stars having similar motion through space gives an age in the range 30–160 million years. Given the star's position on the Hertzsprung–Russell diagram of luminosity versus temperature, it has an estimated age in the range of 30–1,128 million years. Lambda Boötis stars like this are generally young, with a mean age of a billion years. More accurately, asteroseismology also suggests an age of approximately a billion years. However, this is disputed, because such an old age would require the planets to be brown dwarfs in order to fit the cooling models, and brown dwarfs would not be stable in such a configuration. The best-accepted value for the age of HR 8799 is 30 million years, consistent with its membership in the Columba association co-moving group of stars.
Earlier analysis of the star's spectrum reveals that it has a slight overabundance of carbon and oxygen compared to the Sun (by approximately 30% and 10% respectively). While some Lambda Boötis stars have sulfur abundances similar to that of the Sun, this is not the case for HR 8799; the sulfur abundance is only around 35% of the solar level. The star is also poor in elements heavier than sodium: for example, the iron abundance is only 28% of the solar iron abundance. Asteroseismic observations of other pulsating Lambda Boötis stars suggest that the peculiar abundance patterns of these stars are confined to the surface only: the bulk composition is likely more normal. This may indicate that the observed element abundances are the result of the accretion of metal-poor gas from the environment around the star.
In 2020, spectral analysis utilizing multiple data sources detected an inconsistency in prior data and concluded that the star's carbon and oxygen abundances are the same as or slightly higher than solar values. The iron abundance was updated to 30% of the solar value.
Asteroseismic analysis using spectroscopic data indicates that the rotational inclination of the star is constrained to be greater than or approximately equal to 40°. This contrasts with the planets' orbits, which appear to share roughly the same plane at a lower inclination. Hence, there may be an unexplained misalignment between the rotation of the star and the orbits of its planets. Observation of this star with the Chandra X-ray Observatory indicates that it has a weak level of magnetic activity, but the X-ray activity is much higher than that of an A-type star like Altair. This suggests that the internal structure of the star more closely resembles that of an F0 star. The temperature of the stellar corona is about 3.0 million K.
Planetary system
On 13 November 2008, Christian Marois of the National Research Council of Canada's Herzberg Institute of Astrophysics and his team announced they had directly observed three planets orbiting the star with the Keck and Gemini telescopes in Hawaii, in both cases employing adaptive optics to make observations in the infrared. A precovery observation of the outer three planets was later found in infrared images obtained in 1998 by the Hubble Space Telescope's NICMOS instrument, after a newly developed image-processing technique was applied. Further observations in 2009–2010 revealed the fourth giant planet orbiting inside the first three planets at a projected separation of just under 15 AU, which has been confirmed by multiple studies.
The outer planet orbits are inside a dusty disk like the Solar System's Kuiper belt. It is one of the most massive disks known around any star within 300 light-years of Earth, and there is room in the inner system for terrestrial planets. There is an additional debris disk just inside the orbit of the innermost planet.
The orbital radii of planets e, d, c, and b are 2–3 times those of Jupiter, Saturn, Uranus, and Neptune, respectively. Because of the inverse-square law relating radiation intensity to distance from the source, and because HR 8799 is more luminous than the Sun, comparable radiation intensities occur at greater distances from HR 8799 than from the Sun; the upshot is that corresponding planets in the two systems receive similar amounts of stellar radiation.
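A minimal worked sketch of this scaling, using only the 4.9 solar luminosities quoted above (the final ratio is a rounded illustration, not a new measurement):

```latex
% Stellar flux obeys the inverse-square law F = L / (4 \pi d^2).
% Setting the flux from HR 8799 at distance d_star equal to the flux
% from the Sun at distance d_sun gives the "equal-irradiance" scaling:
\[
  \frac{L_{\star}}{4\pi d_{\star}^{2}} = \frac{L_{\odot}}{4\pi d_{\odot}^{2}}
  \quad\Longrightarrow\quad
  \frac{d_{\star}}{d_{\odot}} = \sqrt{\frac{L_{\star}}{L_{\odot}}}
  = \sqrt{4.9} \approx 2.2,
\]
% consistent with the planets orbiting roughly 2-3 times farther out than
% their Solar System counterparts while receiving comparable irradiation.
```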
These objects are near the upper mass limit for classification as planets; if they exceeded 13 Jupiter masses, they would be capable of deuterium fusion in their interiors and thus qualify as brown dwarfs under the definition of these terms used by the IAU's Working Group on Extrasolar Planets. If the mass estimates are correct, the HR 8799 system is the first multiple-planet extrasolar system to be directly imaged. The orbital motion of the planets is in an anticlockwise direction and was confirmed via multiple observations dating back to 1998. The system is more likely to be stable if the planets e, d, and c are in a 4:2:1 resonance, which would imply that the orbit of planet d has an eccentricity exceeding 0.04 in order to match the observational constraints. Planetary systems with the best-fit masses from evolutionary models would be stable if the outer three planets are in a 1:2:4 orbital resonance (similar to the Laplace resonance between Jupiter's inner three Galilean satellites, Io, Europa, and Ganymede, and between three of the planets in the Gliese 876 system). However, it is disputed whether planet b is in resonance with the other three planets. According to dynamical simulations, the HR 8799 planetary system may even be an extrasolar system with a 1:2:4:8 multiple resonance. The four young planets are still glowing red-hot from the heat of their formation; they are currently larger than Jupiter, and over time they will cool and shrink to 0.8–1.0 Jupiter radii.
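A short sketch of how such a period resonance translates into orbital spacing, under the simplifying assumption of circular two-body Keplerian orbits (Kepler's third law gives a ∝ P^(2/3)); the exact 1:2:4:8 ratios are idealised, since the real orbits are only approximately resonant:

```python
# Map an idealised 1:2:4:8 period resonance (planets e, d, c, b) onto relative
# orbital sizes using Kepler's third law: P^2 is proportional to a^3, so
# a is proportional to P**(2/3). Semi-major axes are expressed relative to planet e.

period_ratios = {"e": 1, "d": 2, "c": 4, "b": 8}

for planet, p in period_ratios.items():
    a_relative = p ** (2 / 3)
    print(f"planet {planet}: period ratio {p}, relative semi-major axis {a_relative:.2f}")

# Prints 1.00, 1.59, 2.52 and 4.00: each planet in the chain sits roughly
# 1.6 times farther out than the previous one.
```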
The broadband photometry of planets b, c and d has shown that there may be significant clouds in their atmospheres, while the infrared spectroscopy of planets b and c points to non-equilibrium CO/CH4 chemistry. Near-infrared observations with the Project 1640 integral field spectrograph on the Palomar Observatory have shown that compositions between the four planets vary significantly. This is a surprise since the planets presumably formed in the same way from the same disk and have similar luminosities.
An additional planet candidate was found in JWST Cycle 1 observations with NIRCam, 5 arcseconds south of HR 8799. Follow-up observations with NIRCam are planned to confirm or reject this candidate.
Planet spectra
A number of studies have used the spectra of HR 8799's planets to determine their chemical compositions and constrain their formation scenarios. The first spectroscopic study of planet b (performed at near-infrared wavelengths) detected strong water absorption and hints of methane absorption. Subsequently, weak methane and carbon monoxide absorption in this planet's atmosphere was also detected, indicating efficient vertical mixing of the atmosphere and a disequilibrium CO/CH4 ratio at the photosphere. Compared to models of planetary atmospheres, this first spectrum of planet b is best matched by a model of enhanced metallicity (about 10 times the metallicity of the Sun), which may support the notion that this planet formed through core accretion.
The first simultaneous spectra of all four known planets in the HR 8799 system were obtained in 2012 using the Project 1640 instrument at Palomar Observatory. The near-infrared spectra from this instrument confirmed the red colors of all four planets and are best matched by models of planetary atmospheres that include clouds. Though these spectra do not directly correspond to any known astrophysical objects, some of the planet spectra demonstrate similarities with L- and T-type brown dwarfs and the night-side spectrum of Saturn. The implications of the simultaneous spectra of all four planets obtained with Project 1640 are summarized as follows: Planet b contains ammonia and/or acetylene as well as carbon dioxide, but has little methane; planet c contains ammonia, perhaps some acetylene but neither carbon dioxide nor substantial methane; planet d contains acetylene, methane, and carbon dioxide but ammonia is not definitively detected; planet e contains methane and acetylene but no ammonia or carbon dioxide. The spectrum of planet e is similar to a reddened spectrum of Saturn.
Moderate-resolution near-infrared spectroscopy, obtained with the Keck telescope, definitively detected carbon monoxide and water absorption lines in the atmosphere of planet c. The carbon-to-oxygen ratio, which is thought to be a good indicator of the formation history for giant planets, was measured for planet c to be slightly greater than that of the host star HR 8799. The enhanced carbon-to-oxygen ratio and depleted levels of carbon and oxygen in planet c favor a history in which the planet formed through core accretion. However, it is important to note that conclusions about the formation history of a planet based solely on its composition may be inaccurate if the planet has undergone significant migration, chemical evolution, or core dredging. Later, in November 2018, researchers confirmed the existence of water and the absence of methane in the planet's atmosphere using high-resolution spectroscopy with near-infrared adaptive optics (NIRSPAO) at the Keck Observatory.
The red colors of the planets may be explained by the presence of iron and silicate atmospheric clouds, while their low surface gravities might explain the strong disequilibrium concentrations of carbon monoxide and the lack of strong methane absorption.
Debris disk
In January 2009 the Spitzer Space Telescope obtained images of the debris disk around HR 8799. Three components of the debris disk were distinguished:
Warm dust (T ≈ 150 K) orbiting within the innermost planet (e). The inner and outer edges of this belt are close to 4:1 and 2:1 resonances with the planet.
A broad zone of cold dust (T ≈ 45 K) with a sharp inner edge orbiting just outside the outermost planet (b). The inner edge of this belt is approximately in 3:2 resonance with said planet, similar to Neptune and the Kuiper belt.
A dramatic halo of small grains originating in the cold dust component.
The halo is unusual and implies a high level of dynamic activity which is likely due to gravitational stirring by the massive planets. The Spitzer team says that collisions are likely occurring among bodies similar to those in the Kuiper Belt and that the three large planets may not yet have settled into their final, stable orbits.
In the Spitzer image, the bright, yellow-white portions of the dust cloud come from the outer cold disk. The huge extended dust halo, seen in orange-red, has a diameter of ≈ 2,000 AU. The diameter of Pluto's orbit (≈ 80 AU) is shown for reference as a dot in the centre.
This disk is so thick that it threatens the young system's stability.
The disk was first resolved with ALMA in 2016 and was imaged again in 2018. These later observations were more detailed and were studied by a team of astronomers, who found that the disk has a smooth inner edge and a smooth outer edge. They also observed a possible inner dust belt. This inner belt was confirmed with MIRI observations, which measured a radius of 15 AU for the inner disk.
Vortex Coronagraph: Testbed for high-contrast imaging technology
Up until the year 2010, telescopes could only directly image exoplanets under exceptional circumstances. Specifically, it is easier to obtain images when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot, so that it emits intense infrared radiation. However, in 2010 a team from NASA's Jet Propulsion Laboratory demonstrated that a vortex coronagraph could enable small telescopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets using just a 1.5 m portion of the Hale Telescope.
NICMOS images
In 2009, an old NICMOS image was processed to show a predicted exoplanet around HR 8799. In 2011, three further exoplanets were rendered viewable in a NICMOS image taken in 1998, using advanced data processing. The image allows the planets' orbits to be better characterised, since they take many decades to orbit their host star.
Search for radio emissions
Starting in 2010, astronomers searched for radio emissions from the exoplanets orbiting HR 8799 using the radio telescope at Arecibo Observatory. Despite the large masses, warm temperatures, and brown dwarf-like luminosities, they failed to detect any emissions at 5 GHz down to a flux density detection threshold of 1.0 mJy.
See also
List of exoplanets
Direct imaging of extrasolar planets
Notes
References
External links
A-type main-sequence stars
Gamma Doradus variables
Pegasus (constellation)
218396
114189
8799
Pegasi, V342
Planetary systems with four confirmed planets
Lambda Boötis stars
Circumstellar disks
Durchmusterung objects | HR 8799 | Astronomy | 3,052 |
43,507,282 | https://en.wikipedia.org/wiki/Quantification%20%28science%29 | In mathematics and empirical science, quantification (or quantitation) is the act of counting and measuring that maps human sense observations and experiences into quantities. Quantification in this sense is fundamental to the scientific method.
Natural science
Some measure of the undisputed general importance of quantification in the natural sciences can be gleaned from the following comments:
"these are mere facts, but they are quantitative facts and the basis of science."
It seems to be held as universally true that "the foundation of quantification is measurement."
There is little doubt that "quantification provided a basis for the objectivity of science."
In ancient times, "musicians and artists ... rejected quantification, but merchants, by definition, quantified their affairs, in order to survive, made them visible on parchment and paper."
Any reasonable "comparison between Aristotle and Galileo shows clearly that there can be no unique lawfulness discovered without detailed quantification."
Even today, "universities use imperfect instruments called 'exams' to indirectly quantify something they call knowledge."
This meaning of quantification comes under the heading of pragmatics.
In some instances in the natural sciences a seemingly intangible concept may be quantified by creating a scale—for example, a pain scale in medical research, or a discomfort scale at the intersection of meteorology and human physiology such as the heat index measuring the combined perceived effect of heat and humidity, or the wind chill factor measuring the combined perceived effects of cold and wind.
Social sciences
In the social sciences, quantification is an integral part of economics and psychology. Both disciplines gather data – economics by empirical observation and psychology by experimentation – and both use statistical techniques such as regression analysis to draw conclusions from it.
In some instances a seemingly intangible property may be quantified by asking subjects to rate something on a scale—for example, a happiness scale or a quality-of-life scale—or by the construction of a scale by the researcher, as with the index of economic freedom. In other cases, an unobservable variable may be quantified by replacing it with a proxy variable with which it is highly correlated—for example, per capita gross domestic product is often used as a proxy for standard of living or quality of life.
Frequently in the use of regression, the presence or absence of a trait is quantified by employing a dummy variable, which takes on the value 1 in the presence of the trait or the value 0 in the absence of the trait.
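A minimal sketch (with invented data, not from the source) of how such a dummy variable enters an ordinary least squares regression; the fitted coefficient on the 0/1 indicator estimates the shift in the outcome associated with the trait:

```python
import numpy as np

# Invented outcome (e.g., wage) and a continuous predictor (e.g., years of schooling)
y = np.array([10.0, 12.0, 15.0, 11.0, 18.0, 20.0])
schooling = np.array([8.0, 10.0, 12.0, 9.0, 14.0, 16.0])
# Dummy variable: 1 if the trait is present, 0 otherwise
trait = np.array([0, 0, 1, 0, 1, 1])

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(schooling), schooling, trait])

# Ordinary least squares via numpy
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope, trait_effect = coef
print(f"intercept={intercept:.2f}, slope={slope:.2f}, trait effect={trait_effect:.2f}")
```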
Quantitative linguistics is an area of linguistics that relies on quantification. For example, indices of grammaticalization of morphemes, such as phonological shortness, dependence on surroundings, and fusion with the verb, have been developed and found to be significantly correlated across languages with stage of evolution of function of the morpheme.
Hard versus soft science
The ease of quantification is one of the features used to distinguish hard and soft sciences from each other. Scientists often consider hard sciences to be more scientific or rigorous, but this is disputed by social scientists who maintain that appropriate rigor includes the qualitative evaluation of the broader contexts of qualitative data. In some social sciences such as sociology, quantitative data are difficult to obtain, either because laboratory conditions are not present or because the issues involved are conceptual but not directly quantifiable. Thus in these cases qualitative methods are preferred.
See also
Calibration
Internal standard
Isotope dilution
Physical quantity
Quantitative analysis (chemistry)
Standard addition
References
Further reading
Crosby, Alfred W. (1996) The Measure of Reality: Quantification and Western Society, 1250–1600. Cambridge University Press.
Wiese, Heike, 2003. Numbers, Language, and the Human Mind. Cambridge University Press. .
Philosophy of science
Analytical chemistry | Quantification (science) | Chemistry,Mathematics | 782 |
42,842,335 | https://en.wikipedia.org/wiki/Norwegian%20Institute%20for%20Water%20Research | The Norwegian Institute for Water Research (NIVA) is an environmental research organisation which researches, monitors, assesses and studies freshwater, coastal and marine environments and environmental technology.
Services and research
NIVA's areas of work include environmental contaminants, biodiversity and climate related issues. NIVA's research reports can be accessed through BIBSYS, and all reports from 1956 until 2015 are available for download. Some of the more widely read articles are also made available by Sciencenordic.com and forskning.no (articles in Norwegian only). The institute has twelve sections, led by research managers: Aquaculture, Biodiversity, Innovation, International projects and cooperation, Chemicals, Effects of climate change, Laboratory services, Environmental contaminants, Environmental monitoring, Environmental technology and Measures against pollution.
In 2012 NIVA was in the news after its scientists developed a method of studying drug use in cities through analysis of sewage.
Organisation
NIVA was founded in 1958 and, as of 2015, is a non-profit research foundation. Its board is appointed by the Norwegian Ministry of the Environment, the Research Council of Norway and its employees. NIVA's Head Office is at the Oslo Innovation Centre in Oslo, with regional offices in Bergen, Grimstad and Hamar, as well as a large scale research facility in the Oslo Fjord. The organization is certified with ISO9001, and laboratory activities are accredited in accordance with ISO17025.
NIVA has about 200 employees. Two-thirds of these have educational backgrounds in water sciences and more than half work in research.
Subsidiaries
NIVA has several wholly and partly owned subsidiaries:
Akvaplan-niva AS is a research and consultancy firm in the fields of aquaculture, marine biology and freshwater biology. The company offers services related to environmental contaminants, biology, oceanography, chemistry and geology. Akvaplan-niva was founded in 1984 and is located at the Fram Centre in Tromsø. The company is wholly owned by NIVA.
AquaBiota Water Research AB is NIVA's subsidiary in Sweden. The company uses geographical information systems (GIS) to analyse and model biological and oceanographic phenomena. Around half of AquaBiota's business involves basic research; the other half consists of projects for government agencies and industry in Sweden and internationally.
BallastTech-NIVA AS was one of the first companies in the world to establish a full-scale centre for land-based testing of equipment for the treatment of ballast water in accordance with the International Maritime Organization (IMO). The testing centre is located at NIVA's marine research station at Solbergstrand, south of Oslo. BallastTech-NIVA AS is a wholly owned subsidiary of NIVA-tech AS.
NIVA Chile SA is owned 50% by NIVA and 50% by NIVA-tech AS. The organisation's primary focus is to provide advice about water-quality and -treatment to the Chilean aquaculture industry.
NIVA-tech AS works with commercial applications for NIVA's competence, services, products and technology.
NIVA China conducts research and development on environmental challenges in China.
References
External links
Official home page
NIVA at BIBSYS Open Access
Environmental research institutes
Research institutes in Norway
Water organizations
Water supply and sanitation in Norway
Research institutes established in 1958
Non-profit organisations based in Norway
Organisations based in Oslo
1958 establishments in Norway | Norwegian Institute for Water Research | Environmental_science | 689 |
71,710,152 | https://en.wikipedia.org/wiki/TESS%20Hunt%20for%20Young%20and%20Maturing%20Exoplanets | TESS Hunt for Young and Maturing Exoplanets (THYME) is an exoplanet search project. The researchers of the THYME collaboration are mainly from the United States and search for young exoplanets using data from the Transiting Exoplanet Survey Satellite (TESS). The new discoveries should help to understand the early evolution of exoplanets. As of March 2023 the collaboration produced 9 papers announcing the discovery of exoplanets.
Paper number 8 adapted the backronym to "Transit Hunt for Young and Maturing Exoplanets", because it used data from the Kepler space telescope.
List of discoveries
References
Exoplanet search projects | TESS Hunt for Young and Maturing Exoplanets | Astronomy | 141 |
1,432,429 | https://en.wikipedia.org/wiki/JP%20Aerospace | JP Aerospace is an American company that aims to achieve affordable access to space. Their main activities include high-atmospheric lighter-than-air flights carrying cameras or miniature experiments called PongSats and minicubes. They are also engaged in an Airship to Orbit project.
JP Aerospace was founded by John Marchel Powell, familiarly known as "JP", with Michael Stucky and Scott Mayo. JP Aerospace specializes in lighter-than-air flight, with the stated aim of achieving cheap access to space.
Balloon flights
An early suborbital space launch attempt using a rockoon (balloon-launched high power rocket) at the Black Rock Desert in northwestern Nevada in May 1999 was unsuccessful. The event was covered by CNN. The CATS Prize expired without being awarded in November 2000.
In the early 21st century they developed a V-shaped high-altitude airship under a U.S. Air Force initiative to provide the rapid launch of battlefield communication and monitoring systems.
Since then, JP Aerospace has launched several balloons into the upper atmosphere, carrying mixed payloads for research students and media companies. Media clients have included The Discovery Channel, National Geographic, and Toshiba's 2009 television commercial Space Chair. On October 22, 2011, a twin-balloon utility airship is claimed to have set an altitude record of 95,085 feet (ca. 28,982 m).
A PongSat is a small experiment housed in a table tennis or ping-pong ball. A MiniCube is slightly larger. JP Aerospace claim to have carried many hundreds or thousands of student PongSat projects to a near-space environment at low cost. The flights are typically crowdfunded.
Commercial flights, typically carrying cameras, have been made for a number of media organizations.
Airships
JP Aerospace obtained a contract for development of military communication and surveillance airships designed to hover over battlefields at altitudes too high for conventional anti-aircraft systems. A prototype was completed in 2005 but was damaged while being prepared for flight and the contract was ended.
Other vehicles are still under development, and JP aerospace has subsequently flown several aerostats as testbeds for ATO hardware and techniques.
The JP Aerospace Twin Balloons Airship is an unmanned airship comprising two balloon envelopes side by side, with twin electric-powered propellers mounted midway along the connecting boom. On October 22, 2011 it is claimed to have flown to 95,085 feet (ca. 28,982 m), nearly 4 miles higher than any airship before.
Airship to Orbit project
JP Aerospace is developing technology intended to launch airships into orbit.
The proposed system employs three separate airship stages to reach orbit. Multiple vehicles are needed because an airship made strong enough to survive the dense and turbulent lower atmosphere would be too small and heavy to lift payloads high enough. An orbital airship must be much larger and with thinner walls to maintain its buoyancy-to-weight ratio. The three stages are; the Ascender ground launcher, the Dark Sky permanent sky station, and the Orbital Ascender spacecraft. A fourth airship design similar to the record-breaking Tandem but based on air-filled beams will be required for the assembly of Dark Sky Station.
Because of the thin atmosphere at such high altitudes, a very large volume and/or very strong but lightweight materials are required to carry a useful payload. The ISAS BU60-1 scientific balloon holds the world altitude record for an unmanned balloon as of 2009, at 53.0 km. The average density of BU60-1 over its gross volume was 0.00066 kg per cubic metre. To fly higher, this must be significantly improved.
Ascender
The Ascender airship would operate between the ground and the Dark Sky Station at 140,000 feet (ca. 42,672 m). A long, V-shaped planform with an airfoil profile would provide aerodynamic lift to supplement the airship's inherent buoyancy, with the craft driven by propellers designed to operate in a near vacuum. The Ascender would be larger than any airship yet built, but would be dwarfed by the later stages. It would be operated by a crew of three.
JP aerospace has developed two large-scale test models, the Ascender 90 and the Ascender 175. The number denotes the length of the airship in feet (ca. 27.4 m and 53.3 m). More recent airships have reverted to being named in sequence.
Dark Sky Station
The Dark Sky Station would be a permanent floating structure, remaining at 140,000 feet (ca. 42,672 m). It provides an intermediate stage allowing transfer of cargo or personnel between the Ascender stage and the orbital stage. It would also serve as the construction facility for the orbital component, which would be too fragile to travel lower.
The station could also be used as a relay station for telecommunications due to its high altitude.
Orbital Ascender
The Orbital Ascender airship would be the final flight stage from the station to orbit. It would initially rise as a lighter-than-air craft from the station at 140,000 feet to 180,000 feet (ca. 42,672 m to 54,864 m). The orbiter would have to be over a mile long to gain enough buoyancy.
At 180,000 ft it would accelerate forwards using lightweight, low power ion propulsion, enabling it to rise further with additional aerodynamic lift. This would be powered by solar panels which cover most of the upper surface of the airship. The V-shaped planform and airfoil profile would allow hypersonic flight by 200,000 feet, increasing to orbital speed (above Mach 20).
If hit by a meteorite or space debris, the craft would suffer little damage because the inner cells are "zero pressure balloons": "There is no difference in pressure to create a bursting force. All a meteorite would do is to make a hole. The gas would leak out staggeringly slowly..." (page 112). They also say (page 109) that "By losing velocity before it reaches the lower thicker atmosphere, the reentry temperatures are radically lower.... This makes reentry as safe as the climb to orbit". The skin would be made of nylon rip-stop polyethylene (page 111). On re-entry the orbital airship slows down at a very high altitude because it has such low mass with such a large cross section presented to the atmosphere (a low ballistic coefficient).
See also
Near space
NewSpace
Orbital Launch Proposals
Space Fellowship
References
External links
JP Aerospace
JP Aerospace Rockoons
Stew Magnusson (2003), Air Force Explores Balloon-Assisted Launches, Space.com Space News, Imaginova Trade Publishing
Space organizations
Space program of the United States | JP Aerospace | Astronomy | 1,380 |
11,849,133 | https://en.wikipedia.org/wiki/Berkovich%20tip | A Berkovich tip is a type of nanoindenter tip used for testing the indentation hardness of a material. It is a three-sided pyramid which is geometrically self-similar. The Berkovich tip now in common use has a very flat profile, with a total included angle of 142.3° and a half angle of 65.27°, measured from the axis to one of the pyramid flats. This Berkovich tip has the same projected area-to-depth ratio as a Vickers indenter. The original tip shape was invented by Russian scientist E. S. Berkovich in the USSR around 1950; it has a half angle of 65.03°.
As it is three-sided, it is easier to grind these tips to a sharp point and so is more readily employed for nanoindentation tests. It is typically used to measure bulk materials and films greater than thick.
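A small illustration of the area-to-depth relationship, assuming the standard nanoindentation convention for an ideal three-sided pyramid, A(h) = 3√3·tan²θ·h², where θ is the half angle; this formula is not stated in the text above and is used here as an assumption:

```python
import math

# Ideal projected contact area of a three-sided pyramidal indenter as a
# function of contact depth h, with theta the half angle measured from the
# axis to a face (standard nanoindentation convention, assumed here).
def projected_area(h, half_angle_deg=65.27):
    theta = math.radians(half_angle_deg)
    return 3 * math.sqrt(3) * math.tan(theta) ** 2 * h ** 2

h = 100e-9  # 100 nm contact depth
print(projected_area(h))            # area in m^2 for a 100 nm deep contact
print(projected_area(1.0))          # ~24.5: the area-to-depth-squared ratio
print(projected_area(1.0, 65.03))   # ~24.0 for the original Berkovich geometry
```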
References
Hardness tests
Soviet inventions
Russian inventions | Berkovich tip | Materials_science | 189 |
28,299,952 | https://en.wikipedia.org/wiki/H%20topology | In algebraic geometry, the h topology is a Grothendieck topology introduced by Vladimir Voevodsky to study the homology of schemes. It combines several good properties possessed by its related "sub"topologies, such as the qfh and cdh topologies. It has subsequently been used by Beilinson to study p-adic Hodge theory, in Bhatt and Scholze's work on projectivity of the affine Grassmanian, Huber and Jörder's study of differential forms, etc.
Definition
Voevodsky defined the h topology to be the topology associated to finite families {p_i : U_i → X} of morphisms of finite type such that the induced morphism ⊔_i U_i → X is a universal topological epimorphism (i.e., a set of points in the target is an open subset if and only if its preimage is open, and any base change also has this property). Voevodsky worked with this topology exclusively on categories of schemes of finite type over a Noetherian base scheme S.
Bhatt-Scholze define the h topology on the category of schemes of finite presentation over a qcqs base scheme to be generated by v-covers of finite presentation. They show (generalising results of Voevodsky) that the h topology is generated by:
fppf-coverings, and
families of the form {f : X′ → X, i : Z → X} where
f : X′ → X is a proper morphism of finite presentation,
i : Z → X is a closed immersion of finite presentation, and
f is an isomorphism over X ∖ Z (such a pair is called an abstract blowup).
Note that X′ = ∅ is allowed in an abstract blowup, in which case Z → X is a nilimmersion of finite presentation; the covering condition is displayed below.
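For convenience, the abstract blowup coverings used in the generating set above can be written out with the isomorphism condition made explicit (this display is a reconstruction, not part of the source text):

```latex
% Abstract blowup covering of X: a proper, finitely presented f and a
% finitely presented closed immersion i such that f is an isomorphism
% away from Z.
\[
\{\, f : X' \to X ,\; i : Z \hookrightarrow X \,\},
\qquad
f^{-1}(X \setminus Z) \;\xrightarrow{\;\sim\;}\; X \setminus Z .
\]
```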
Examples
The h-topology is not subcanonical, so representable presheaves are almost never h-sheaves. However, the h-sheafification of representable sheaves are interesting and useful objects; while presheaves of relative cycles are not representable, their associated h-sheaves are representable in the sense that there exists a disjoint union of quasi-projective schemes whose h-sheafifications agree with these h-sheaves of relative cycles.
Any h-sheaf F in positive characteristic satisfies F(X) ≅ F(X_perf), where we interpret X_perf as the colimit over the Frobenii (if the Frobenius is of finite presentation, and if not, use an analogous colimit consisting of morphisms of finite presentation). In fact, (in positive characteristic) the h-sheafification of the structure sheaf is given by O_h(X) = O(X_perf). So the structure sheaf "is an h-sheaf on the category of perfect schemes" (although this sentence doesn't really make sense mathematically since morphisms between perfect schemes are almost never of finite presentation). In characteristic zero similar results hold with perfection replaced by semi-normalisation.
Huber-Jörder study the h-sheafification Ω^n_h of the presheaf of Kähler differentials on categories of schemes of finite type over a characteristic zero base field k. They show that if X is smooth, then Ω^n_h(X) ≅ Ω^n(X), and for various nice non-smooth X, the sheaf Ω^n_h recovers objects such as reflexive differentials and torsion-free differentials. Since the Frobenius is an h-covering, in positive characteristic we get Ω^n_h = 0 for n > 0, but analogous results are true if we replace the h-topology with the cdh-topology.
By the Nullstellensatz, a morphism of finite presentation towards the spectrum of a field admits a section up to finite extension. That is, for Y → Spec(K) there exists a finite field extension L/K and a factorisation Spec(L) → Y → Spec(K). Consequently, for any presheaf F and field K we have F_h(K) = F_ét(K), where F_h, resp. F_ét, denotes the h-sheafification, resp. étale sheafification.
Properties
As mentioned above, in positive characteristic, any h-sheaf F satisfies F(X) ≅ F(X_perf). In characteristic zero, we have F(X) ≅ F(X^sn) where X^sn is the semi-normalisation (the scheme with the same underlying topological space, but the structure sheaf is replaced with its termwise seminormalisation).
Since the h-topology is finer than the Zariski topology, every scheme admits an h-covering by affine schemes.
Using abstract blowups and Noetherian induction, if is a field admitting resolution of singularities (e.g., a characteristic zero field) then any scheme of finite type over admits an h-covering by smooth -schemes. More generally, in any situation where de Jong's theorem on alterations is valid we can find h-coverings by regular schemes.
Since finite morphisms are h-coverings, algebraic correspondences are finite sums of morphisms.
cdh topology
The cdh topology on the category of schemes of finite presentation over a qcqs base scheme is generated by:
Nisnevich coverings, and
families of the form {f : X′ → X, i : Z → X} where
f : X′ → X is a proper morphism of finite presentation,
i : Z → X is a closed immersion of finite presentation, and
f is an isomorphism over X ∖ Z.
It is the universal topology with a "good" theory of compact supports.
The cd stands for completely decomposed (in the same sense it is used for the Nisnevich topology). As mentioned in the examples section, over a field admitting resolution of singularities, any variety admits a cdh-covering by smooth varieties. This topology is heavily used in the study of Voevodsky motives with integral coefficients (with rational coefficients the h-topology together with de Jong alterations is used).
Since the Frobenius is not a cdh-covering, the cdh-topology is also a useful replacement for the h-topology in the study of differentials in positive characteristic.
Rather confusingly, there are completely decomposed h-coverings, which are not cdh-coverings, for example the completely decomposed family of flat morphisms .
Relation to v-topology and arc-topology
The v-topology (or universally subtrusive topology) is equivalent to the h topology on the category Sch^ft_S of schemes of finite type over a Noetherian base scheme S. Indeed, a morphism in Sch^ft_S is universally subtrusive if and only if it is universally submersive. In other words, the v-topology and the h-topology agree on Sch^ft_S.
More generally, on the category of all qcqs schemes, neither the v- nor the h-topology is finer than the other: there are v-covers which are not h-covers (e.g., ) and h-covers which are not v-covers (e.g., where R is a valuation ring of rank 2 and is the non-open, non-closed prime ).
However, we could define an h-analogue of the fpqc topology by saying that an hqc-covering is a family {p_i : U_i → X} such that for each affine open V ⊆ X there exists a finite set K, a map k ↦ i(k), and affine opens V_k ⊆ U_{i(k)} such that ⊔_{k∈K} V_k → V is universally submersive (with no finiteness conditions). Then every v-covering is an hqc-covering.
Indeed, any subtrusive morphism is submersive (this is an easy exercise using ).
By a theorem of Rydh, for a map f : X → Y of qcqs schemes with Y Noetherian, f is a v-cover if and only if it is an arc-cover (for the statement in this form see ). That is, in the Noetherian setting everything said above for the v-topology is valid for the arc-topology.
Notes
Further reading
Algebraic geometry | H topology | Mathematics | 1,514 |
60,290,317 | https://en.wikipedia.org/wiki/Neural%20dust | Neural dust is a hypothetical class of nanometer-sized devices operated as wirelessly powered nerve sensors; it is a type of brain–computer interface. The sensors may be used to study, monitor, or control the nerves and muscles and to remotely monitor neural activity. In practice, a medical treatment could introduce thousands of neural dust devices into human brains. The term is derived from "smart dust", as the sensors used as neural dust may also be defined by this concept.
Background
The design for neural dust was first proposed in a 2011 presentation by Jan Rabaey from the University of California, Berkeley Wireless Research Center and was subsequently demonstrated by graduate students in his lab. While the history of BCI begins with the invention of the electroencephalogram by Hans Berger in 1924, the term did not appear in scientific literature until the 1970s. The hallmark research of the field came from the University of California, Los Angeles (UCLA), following a research grant from the National Science Foundation.
While neural dust does fall under the category of BCI, it also could be used in the field of neuroprosthetics (also known as neural prosthetics). While the terms can sometimes be used interchangeably, the main difference is that while BCI generally interface neural activity directly to a computer, neuroprosthetics tend to connect activity in the central nervous system to a device meant to replace the function of a missing or impaired body part.
Function
Components
The principal components of the neural dust system include the sensor nodes (neural dust), which aim to be in the 10-100 μm3 scale, and a sub-cranial interrogator, which would sit below the dura mater and would provide both power and a communication link to the neural dust.
Neural dust sensors can use a multitude of mechanisms for powering and communication, including traditional RF as well as ultrasonics. An ultrasound-based neural dust mote consists of a pair of recording electrodes, a custom transistor, and a piezoelectric sensor. The piezoelectric crystal is capable of recording brain activity from the extracellular space and converting it into an electrical signal.
Data and Power Transfer
While many forms of BCI exist, neural dust is in a class of its own due to its size and wireless capability. While electromagnetic waves (such as radio frequencies) can be used to interact with neural dust or other wireless neural sensors, the use of ultrasound offers reduced attenuation in the tissue. This results in higher implantation depths (and therefore easier communication with the sub-cranial communicator), as well as a reduction of energy being distributed into the body's tissues due to scattering or absorption. This excess energy would take the form of heat, which could cause damage to the surrounding tissue if temperatures rose too high. Theoretically, ultrasound would allow smaller sensor nodes, allowing for sizes less than 100 μm, however, many practical and scalability challenges remain.
Backscatter Communication
Due to the extremely small size of the neural dust sensors, it would be impractical and nearly impossible to create a functional transmitter in the sensor itself. Thus backscatter communication, adopted from radio frequency identification (RFID) technologies, is employed. In RFID passive, battery-less tags are capable of absorbing and reflecting radio frequency (RF) energy when in close proximity to a RF interrogator, which is a device that transmits RF energy. As they reflect the RF energy back to the interrogator, they are capable of modulating the frequency, and in doing so, encoding information. Neural dust employs this method by having the sub-dural communicator send out a pulse (either RF or ultrasound) that is then reflected by the neural dust sensors.
While neural dust can use a traditional amplifier to sense action potentials, in the case of an ultrasound-based neural dust sensor, a piezoelectric crystal can also be used to measure from its location in the extracellular space. The ultrasound energy reflected back to the interrogator would be modulated in a way that would communicate the recorded activity. In one proposed model of the neural dust sensor, the transistor model allowed for a method of separating between local field potentials and action potential "spikes", which would allow for a greatly diversified wealth of data acquirable from the recordings.
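The following is a purely conceptual sketch, not taken from any published neural dust design, of how amplitude-modulated backscatter can encode a sensed voltage: the echo amplitude is shifted in proportion to the measured extracellular potential, and the interrogator inverts that relation. The baseline amplitude, sensitivity and voltage values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

baseline = 1.0          # echo amplitude for zero sensed voltage (arbitrary units, assumed)
sensitivity = 0.05      # fractional amplitude change per millivolt (assumed)

def backscattered_amplitude(voltage_mv, noise_std=0.002):
    """Echo amplitude returned to the interrogator for a given sensed voltage."""
    return baseline * (1.0 + sensitivity * voltage_mv) + rng.normal(0.0, noise_std)

def decode(amplitude):
    """Interrogator-side estimate of the sensed voltage from one echo."""
    return (amplitude / baseline - 1.0) / sensitivity

true_voltages = np.array([0.0, 0.2, -0.1, 0.5])       # mV, invented spike-like values
echoes = np.array([backscattered_amplitude(v) for v in true_voltages])
estimates = np.array([decode(a) for a in echoes])
print(np.round(estimates, 3))   # noisy but close to the true voltages
```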
Clinical and health applications
Neural prosthetics
Some examples of neural prostheses include cochlear implants that can aid in restoring hearing, artificial silicon retina microchips that have shown to be effective in treating retinal degeneration from retinitis pigmentosa, and even motor prostheses that could offer the capability for movement in those affected with quadriplegia or disorders like amyotrophic lateral sclerosis. The use of neural dust in conjunction with motor prostheses could allow for a much finer control of movement.
Electrostimulation
While methods of electrical stimulation of nerves and brain tissue have already been employed for some time, the size and wireless nature of neural dust allows for advancement in clinical applications of the technique. Importantly, because traditional methods of neurostimulation and certain forms of nerve stimulation such as spinal cord stimulation use implanted electrodes that remain connected to wires, the risk of infection and scarring is high. While these risks are not a factor in the use of neural dust, the challenge of applying sufficient electrical current to the sensor node, is still present.
Sleep Apnea
Electrostimulation devices have already shown some efficacy in treating Obstructive Sleep Apnea (OSA). Researchers that used a surgically implanted electrostimulation device on patients with severe OSA found significant improvement over a 12-month period of treatment with the device. Stimulation of the phrenic nerve has also been shown to be effective in reducing central sleep apnea.
Bladder Control in Paraplegics
Electrical stimulation devices have been effective in allowing spinal cord injury patients to have improved ability to urinate and defecate by using radio-linked implants to stimulate the sacral anterior root area of the spine.
Epilepsy
Electrical stimulation therapy in patients with epilepsy has been a well established procedure for some time, being traced back to as early as the 1950s. A paramount goal of the American Epilepsy Society is the continued development of automated brain electrical stimulation (also known as contingent, or closed loop stimulation), which provides seizure-halting electrical stimulation based on brain patterns that indicate a seizure is about to happen. This provides a much better treatment of the disorder than stimulation that is based on an estimate of when the seizure might occur. While vagal nerve stimulation is often a target area for treatment of epileptic seizures, there has been research into the efficacy of stimulation in the hippocampus, thalamus, and subthalamic nucleus. Closed-loop cortical neuromodulation has also been investigated as a treatment technique for Parkinson's disease
References
Brain–computer interface
Human–computer interaction
Neural engineering
Neuroprosthetics | Neural dust | Engineering | 1,443 |
740,817 | https://en.wikipedia.org/wiki/Mathematical%20coincidence | A mathematical coincidence is said to occur when two expressions with no direct relationship show a near-equality which has no apparent theoretical explanation.
For example, there is a near-equality close to the round number 1000 between powers of 2 and powers of 10: 2^10 = 1024 ≈ 1000 = 10^3.
Some mathematical coincidences are used in engineering when one expression is taken as an approximation of another.
Introduction
A mathematical coincidence often involves an integer, and the surprising feature is the fact that a real number arising in some context is considered by some standard as a "close" approximation to a small integer or to a multiple or power of ten, or more generally, to a rational number with a small denominator. Other kinds of mathematical coincidences, such as integers simultaneously satisfying multiple seemingly unrelated criteria or coincidences regarding units of measurement, may also be considered. In the class of those coincidences that are of a purely mathematical sort, some simply result from sometimes very deep mathematical facts, while others appear to come 'out of the blue'.
Given the countably infinite number of ways of forming mathematical expressions using a finite number of symbols, the number of symbols used and the precision of approximate equality might be the most obvious way to assess mathematical coincidences; but there is no standard, and the strong law of small numbers is the sort of thing one has to appeal to with no formal opposing mathematical guidance. Beyond this, some sense of mathematical aesthetics could be invoked to adjudicate the value of a mathematical coincidence, and there are in fact exceptional cases of true mathematical significance (see Ramanujan's constant below, which made it into print some years ago as a scientific April Fools' joke). All in all, though, they are generally to be considered for their curiosity value, or perhaps to encourage new mathematical learners at an elementary level.
Some examples
Rational approximants
Sometimes simple rational approximations are exceptionally close to interesting irrational values. These are explainable in terms of large terms in the continued fraction representation of the irrational value, but further insight into why such improbably large terms occur is often not available.
Rational approximants (convergents of continued fractions) to ratios of logs of different numbers are often invoked as well, making coincidences between the powers of those numbers.
Many other coincidences are combinations of numbers that put them into the form that such rational approximants provide close relationships.
Concerning π
The second convergent of π, [3; 7] = 22/7 = 3.1428..., was known to Archimedes, and is correct to about 0.04%. The fourth convergent of π, [3; 7, 15, 1] = 355/113 = 3.1415929..., found by Zu Chongzhi, is correct to six decimal places; this high accuracy comes about because π has an unusually large next term in its continued fraction representation: π = [3; 7, 15, 1, 292, ...] (a routine for computing these convergents is sketched at the end of this subsection).
A coincidence involving π and the golden ratio φ is given by π ≈ 4/√φ = 3.1446.... Consequently, the square on the middle-sized edge of a Kepler triangle is similar in perimeter to its circumcircle. Some believe one or the other of these coincidences is to be found in the Great Pyramid of Giza, but it is highly improbable that this was intentional.
There is a sequence of six nines in pi beginning at the 762nd decimal place of its decimal representation. For a randomly chosen normal number, the probability of a particular sequence of six consecutive digits—of any type, not just a repeating one—to appear this early is 0.08%. Pi is conjectured, but not known, to be a normal number.
The first Feigenbaum constant is approximately equal to , with an error of 0.0015%.
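The convergents quoted above can be generated directly from the known partial quotients of π; the following sketch uses the standard continued fraction recurrence and reproduces 22/7 and 355/113 (using the listed partial quotients avoids floating-point issues in extracting them):

```python
from fractions import Fraction

# First few partial quotients of pi: [3; 7, 15, 1, 292, ...]
partial_quotients = [3, 7, 15, 1, 292]

def convergents(quotients):
    h_prev, h = 1, quotients[0]          # numerators
    k_prev, k = 0, 1                     # denominators
    yield Fraction(h, k)
    for a in quotients[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield Fraction(h, k)

for c in convergents(partial_quotients):
    print(c, float(c))
# 3, 22/7, 333/106, 355/113, 103993/33102
```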
Concerning base 2
The coincidence 2^10 = 1024 ≈ 1000 = 10^3, correct to 2.4%, relates to the rational approximation log10(2) ≈ 3/10, or 2 ≈ 10^(3/10) to within 0.3%. This relationship is used in engineering, for example to approximate a factor of two in power as 3 dB (actual is 3.0103 dB – see Half-power point), or to relate a kibibyte to a kilobyte; see binary prefix. The same numerical coincidence is responsible for the near equality between one third of an octave and one tenth of a decade.
The same coincidence can also be expressed as 2^7 = 128 ≈ 125 = 5^3 (eliminating the common factor of 2^3, so also correct to 2.4%), which corresponds to the rational approximation log5(2) ≈ 3/7, or 2 ≈ 5^(3/7) (also to within 0.4%). This is invoked in preferred numbers in engineering, such as shutter speed settings on cameras, as approximations to powers of two (128, 256, 512) in the sequence of speeds 125, 250, 500, etc., and in the original Who Wants to Be a Millionaire? game show in the question values ...£16,000, £32,000, £64,000, £125,000, £250,000,...
Concerning musical intervals
In music, the distances between notes (intervals) are measured as ratios of their frequencies, with near-rational ratios often sounding harmonious. In western twelve-tone equal temperament, the ratio between consecutive note frequencies is 2^(1/12) ≈ 1.05946.
The coincidence 2^19 ≈ 3^12, from which 2^(7/12) ≈ 3/2, closely relates the interval of 7 semitones in equal temperament to a perfect fifth of just intonation: 2^(7/12) = 1.4983... ≈ 1.5, correct to about 0.1%. The just fifth is the basis of Pythagorean tuning; the difference between twelve just fifths and seven octaves is the Pythagorean comma.
The coincidence 3^4 = 81 ≈ 80 = 2^4 × 5 permitted the development of meantone temperament, in which just perfect fifths (ratio 3/2) and major thirds (5/4) are "tempered" so that four 3/2's is approximately equal to 5, or a major third up two octaves. The difference (81/80) between these stacks of intervals is the syntonic comma.
The coincidence leads to the rational version of 12-TET, as noted by Johann Kirnberger.
The coincidence leads to the rational version of quarter-comma meantone temperament.
The coincidence of powers of 2, above, leads to the approximation that three major thirds concatenate to an octave, (5/4)^3 = 125/64 ≈ 2. This and similar approximations in music are called dieses.
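A brief numerical check of the interval coincidences above (a sketch, not from the source; printed values are approximate):

```python
# Equal-tempered fifth versus the just fifth 3/2 (about 0.11% apart)
tempered_fifth = 2 ** (7 / 12)
print(tempered_fifth, 3 / 2)                 # 1.4983... vs 1.5

# Pythagorean comma: twelve just fifths versus seven octaves
pythagorean_comma = (3 / 2) ** 12 / 2 ** 7
print(pythagorean_comma)                     # 1.01364...

# Syntonic comma: four just fifths versus a major third two octaves up
syntonic_comma = (3 / 2) ** 4 / 5
print(syntonic_comma)                        # 81/80 = 1.0125

# Three stacked just major thirds fall slightly short of the octave 2
three_major_thirds = (5 / 4) ** 3
print(three_major_thirds)                    # 1.953125
```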
Numerical expressions
Concerning powers of π
π^2 ≈ 9.87 ≈ 10, correct to about 1.32%. This can be understood in terms of the formula for the zeta function ζ(2) = π^2/6. This coincidence was used in the design of slide rules, where the "folded" scales are folded on π rather than √10, because π is a more useful number and has the effect of folding the scales in about the same place.
correct to about 0.086%.
correct to 4 parts per million.
correct to 0.02%.
correct to about 0.002% and can be seen as a combination of the above coincidences.
π^4 ≈ 2143/22, or π ≈ (2143/22)^(1/4), accurate to 8 decimal places (due to Ramanujan: Quarterly Journal of Mathematics, XLV, 1914, pp. 350–372). Ramanujan states that this "curious approximation" to was "obtained empirically" and has no connection with the theory developed in the remainder of the paper.
Some near-equivalences, which hold to a high degree of accuracy, are not actually coincidences. For example,
The two sides of this expression differ only after the 42nd decimal place; this is not a coincidence.
Containing both π and e
to 4 digits, where γ is the Euler–Mascheroni constant.
, to about 7 decimal places. Equivalently, .
, to about 4 decimal places.
, to about 9 decimal places.
e^π − π = 19.999099979... ≈ 20, to about 4 decimal places (Conway, Sloane, Plouffe, 1988); this is equivalent to (π + 20)e^(−π) ≈ 1. Once considered a textbook example of a mathematical coincidence, the fact that e^π − π is close to 20 is itself not a coincidence, although the approximation is an order of magnitude closer than would be expected. No explanation for the near-identity was known until 2023. It is a consequence of the infinite sum ∑_{k≥1} (8πk^2 − 2)e^(−πk^2) = 1 resulting from the Jacobian theta functional identity. The first term of the sum is by far the largest, which gives the approximation (8π − 2)e^(−π) ≈ 1, or e^π ≈ 8π − 2. Using the estimate 7π ≈ 22 then gives e^π − π ≈ 20.
, within 4 parts per million.
, to about 5 decimal places. That is, , within 0.0002%.
, within 0.02%.
. In fact, this generalizes to the approximate identity which can be explained by the Jacobian theta functional identity.
Ramanujan's constant: e^(π√163) = 262537412640768743.9999999999992..., within 10^−12 of the integer 640320^3 + 744, discovered in 1859 by Charles Hermite (see the numerical check at the end of this subsection). This very close approximation is not a typical sort of accidental mathematical coincidence, where no mathematical explanation is known or expected to exist (as is the case for most). It is a consequence of the fact that 163 is a Heegner number.
There are several integers () such that for some integer n, or equivalently for the same These are not strictly coincidental because they are related to both Ramanujan's constant above and the Heegner numbers. For example, so these integers k are near-squares or near-cubes and note the consistent forms for n = 18, 22, 37,
with the last accurate to 14 or 15 decimal places.
is almost an integer, to about 8th decimal place.
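Two of the near-identities above can be checked to many digits with an arbitrary-precision library; the following sketch uses mpmath (the choice of library is ours, not the source's):

```python
from mpmath import mp, exp, pi, sqrt, mpf

mp.dps = 40  # 40 significant digits

print(exp(pi) - pi)            # 19.99909997918947..., close to 20
print(exp(pi * sqrt(163)))     # Ramanujan's constant; within 10**-12 of an integer
print(mpf(640320) ** 3 + 744)  # the nearby integer 262537412640768744
```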
Other numerical curiosities
In a discussion of the birthday problem, the number occurs, which is "amusingly" equal to to 4 digits.
, the product of three Mersenne primes.
(1 × 2 × 3 × 4 × 5 × 6)^(1/6) = 720^(1/6), the geometric mean of the first 6 natural numbers, is approximately 2.99; that is, 6! = 720 ≈ 729 = 3^6.
The sixth harmonic number, 49/20 = 2.45, is approximately √6 (2.449489...) to within 5.2 × 10^−4.
, within . Equivalently, , within 2.2 × 10−5.
Decimal coincidences
3435 = 3^3 + 4^4 + 3^3 + 5^5, making 3435 the only non-trivial Münchhausen number in base 10 (excluding 0 and 1). If one adopts the convention that 0^0 = 0, however, then 438579088 is another Münchhausen number.
145 = 1! + 4! + 5! and 40585 = 4! + 0! + 5! + 8! + 5! are the only non-trivial factorions in base 10 (excluding 1 and 2).
16/64 = 1/4, 19/95 = 1/5, 26/65 = 2/5, and 49/98 = 4/8. If the end results of these four anomalous cancellations are multiplied, their product reduces to exactly 1/100.
, , and . (In a similar vein, .)
127 = −1 + 2^7, making 127 the smallest nice Friedman number. A similar example is 343 = (3 + 4)^3.
153 = 1^3 + 5^3 + 3^3, 370 = 3^3 + 7^3 + 0^3, 371 = 3^3 + 7^3 + 1^3, and 407 = 4^3 + 0^3 + 7^3 are all narcissistic numbers.
5882353 = 588^2 + 2353^2, a prime number. The fraction 1/17 also produces 0.05882353 when rounded to 8 digits.
. The largest number with this pattern is .
13532385396179 = 13 × 53^2 × 3853 × 96179. This number, found in 2017, answers a question by John Conway whether the digits of a composite number could be the same as its prime factorization.
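The digit curiosities above are easy to verify mechanically; the following sketch (helper names are our own) checks the Münchhausen, factorion and narcissistic examples and the product of the four anomalous cancellations:

```python
from math import factorial
from fractions import Fraction

def digits(n):
    return [int(d) for d in str(n)]

def is_muenchhausen(n):
    return n == sum(d ** d for d in digits(n))   # note: Python evaluates 0**0 as 1

def is_factorion(n):
    return n == sum(factorial(d) for d in digits(n))

def is_narcissistic(n):
    ds = digits(n)
    return n == sum(d ** len(ds) for d in ds)

print(is_muenchhausen(3435))                                  # True
print(is_factorion(145), is_factorion(40585))                 # True True
print(all(is_narcissistic(m) for m in (153, 370, 371, 407)))  # True
# The four anomalous cancellations multiply to exactly 1/100:
print(Fraction(16, 64) * Fraction(19, 95) * Fraction(26, 65) * Fraction(49, 98))
```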
Numerical coincidences in numbers from the physical world
Speed of light
The speed of light is (by definition) exactly 299,792,458 m/s, extremely close to 3 × 10^8 m/s (300,000 km/s). This is a pure coincidence, as the metre was originally defined as 1/10,000,000 of the distance between the Earth's pole and equator along the surface at sea level, and the Earth's circumference just happens to be about 2/15 of a light-second. It is also roughly equal to one foot per nanosecond (the actual number is 0.9836 ft/ns).
Angular diameters of the Sun and the Moon
As seen from Earth, the angular diameter of the Sun varies between 31′27″ and 32′32″, while that of the Moon is between 29′20″ and 34′6″. The fact that the intervals overlap (the former interval is contained in the latter) is a coincidence, and has implications for the types of solar eclipses that can be observed from Earth.
Gravitational acceleration
While not constant but varying depending on latitude and altitude, the numerical value of the acceleration caused by Earth's gravity on the surface lies between 9.74 and 9.87 m/s2, which is quite close to 10. This means that as a result of Newton's second law, the weight of a kilogram of mass on Earth's surface corresponds roughly to 10 newtons of force exerted on an object.
This is related to the aforementioned coincidence that the square of pi is close to 10. One of the early definitions of the metre was the length of a pendulum whose half swing had a period equal to one second. Since the period of the full swing of a pendulum is approximated by the equation below, algebra shows that if this definition was maintained, gravitational acceleration measured in metres per second per second would be exactly equal to π^2.
T ≈ 2π√(L/g)
The upper limit of gravity on Earth's surface (9.87 m/s²) is equal to π² m/s² to four significant figures. It is approximately 0.6% greater than standard gravity (9.80665 m/s²).
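A short numerical check of this relation (the one-metre length and two-second full period correspond to the historical "seconds pendulum" definition discussed above):

```python
import math

# If a pendulum of length L = 1 m had a half swing of exactly one second
# (full period T = 2 s), then T = 2*pi*sqrt(L/g) forces g = pi**2 m/s^2.
L = 1.0   # metre
T = 2.0   # seconds
g = 4 * math.pi ** 2 * L / T ** 2
print(g)                              # 9.8696... = pi**2
print(abs(g - 9.80665) / 9.80665)     # ~0.64% above standard gravity
```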
Rydberg constant
The Rydberg constant, when multiplied by the speed of light and expressed as a frequency, is close to 3.29 × 10^15 Hz:
R∞ c = 3.2898419603 × 10^15 Hz
This is also approximately the number of feet in one metre: 1 m ≈ 3.28 ft.
US customary to metric conversions
As discovered by Randall Munroe, a cubic mile is close to 4π/3 cubic kilometres (within 0.5%). This means that a sphere with radius n kilometres has almost exactly the same volume as a cube with side length n miles.
The ratio of a mile to a kilometre is approximately the Golden ratio. As a consequence, a Fibonacci number of miles is approximately the next Fibonacci number of kilometres.
The ratio of a mile to a kilometre is also very close to ln 5 ≈ 1.609438 (within 0.006%). That is, 5^m ≈ e^k, where m is the number of miles, k is the number of kilometres and e is Euler's number (see the sketch at the end of this subsection).
A density of one ounce per cubic foot is very close to one kilogram per cubic metre: 1 oz/ft3 = 1 oz × 0.028349523125 kg/oz / (1 ft × 0.3048 m/ft)3 ≈ 1.0012 kg/m3.
The ratio between one troy ounce and one gram is approximately .
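A short numerical check of the unit coincidences in this subsection (the mile-to-kilometre factor is the exact international definition; everything else is computed from it):

```python
import math

MILE_KM = 1.609344                      # exact international mile in kilometres

print(MILE_KM ** 3, 4 * math.pi / 3)    # cubic mile in km^3 vs 4*pi/3 (within ~0.5%)
print(MILE_KM, (1 + math.sqrt(5)) / 2)  # vs the golden ratio 1.618...
print(MILE_KM, math.log(5))             # vs ln 5 = 1.60944 (within ~0.006%)

# Consecutive Fibonacci numbers approximate mile-to-kilometre conversion:
fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
for miles, km in zip(fib, fib[1:]):
    print(miles, "mi ≈", round(miles * MILE_KM, 1), "km  (next Fibonacci:", km, ")")
```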
Fine-structure constant
The fine-structure constant is close to, and was once conjectured to be precisely equal to, 1/137. Its CODATA recommended value is
α = 1/137.035999 ≈ 0.0072973526
α is a dimensionless physical constant, so this coincidence is not an artifact of the system of units being used.
Earth's Solar Orbit
The number of seconds in one year, based on the Gregorian calendar, can be calculated by: 365.2425 days × 24 hours/day × 60 minutes/hour × 60 seconds/minute = 31,556,952 seconds.
This value can be approximated by π × 10^7, or 31,415,926.54, with less than one percent of an error: 31,556,952 ≈ π × 10^7.
See also
Almost integer
Anthropic principle
Birthday problem
Exceptional isomorphism
Narcissistic number
Sophomore's dream
Strong law of small numbers
Experimental mathematics
Koide formula
References
External links
V. Lyovshin, Magistr rasseyannykh nauk [Master of Absent-Minded Sciences], Moscow: Detskaya Literatura, 1970, 256 pp.
Davis, Philip J. - Are There Coincidences in Mathematics - American Mathematical Monthly, vol. 84 no. 5, 1981.
Hardy, G. H. – A Mathematician's Apology. – New York: Cambridge University Press, 1993, ()
Various mathematical coincidences in the "Science & Math" section of futilitycloset.com
Press, W. H., "Seemingly Remarkable Mathematical Coincidences Are Easy to Generate"
coincidence
Recreational mathematics | Mathematical coincidence | Mathematics | 3,139 |
660,941 | https://en.wikipedia.org/wiki/XSL%20attack | In cryptography, the eXtended Sparse Linearization (XSL) attack is a method of cryptanalysis for block ciphers. The attack was first published in 2002 by researchers Nicolas Courtois and Josef Pieprzyk. It has caused some controversy as it was claimed to have the potential to break the Advanced Encryption Standard (AES) cipher, also known as Rijndael, faster than an exhaustive search. Since AES is already widely used in commerce and government for the transmission of secret information, finding a technique that can shorten the amount of time it takes to retrieve the secret message without having the key could have wide implications.
The method has a high work-factor, which unless lessened, means the technique does not reduce the effort to break AES in comparison to an exhaustive search. Therefore, it does not affect the real-world security of block ciphers in the near future. Nonetheless, the attack has caused some experts to express greater unease at the algebraic simplicity of the current AES.
In overview, the XSL attack relies on first analyzing the internals of a cipher and deriving a set of quadratic simultaneous equations. These systems of equations are typically very large, for example 8,000 equations with 1,600 variables for the 128-bit AES. Several methods for solving such systems are known. In the XSL attack, a specialized algorithm, termed eXtended Sparse Linearization, is then applied to solve these equations and recover the key.
The attack is notable for requiring only a handful of known plaintexts to perform; previous methods of cryptanalysis, such as linear and differential cryptanalysis, often require unrealistically large numbers of known or chosen plaintexts.
Solving multivariate quadratic equations
Solving multivariate quadratic equations (MQ) over a finite set of numbers is an NP-hard problem (in the general case) with several applications in cryptography. The XSL attack requires an efficient algorithm for tackling MQ. In 1999, Kipnis and Shamir showed that a particular public key algorithm, known as the Hidden Field Equations scheme (HFE), could be reduced to an overdetermined system of quadratic equations (more equations than unknowns). One technique for solving such systems is linearization, which involves replacing each quadratic term with an independent variable and solving the resultant linear system using an algorithm such as Gaussian elimination. To succeed, linearization requires enough linearly independent equations (approximately as many as the number of terms). However, for the cryptanalysis of HFE there were too few equations, so Kipnis and Shamir proposed re-linearization, a technique where extra non-linear equations are added after linearization, and the resultant system is solved by a second application of linearization. Re-linearization proved general enough to be applicable to other schemes.
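As an illustration of the linearization step described above (not of the XSL algorithm itself, which multiplies equations by carefully chosen monomials), the following sketch solves a small invented system of quadratic equations over GF(2): each quadratic monomial is replaced by a fresh variable, the resulting linear system is solved by Gaussian elimination, and the candidate solution is checked against the original quadratic constraints. The equations are toy ones chosen to be solvable, not derived from any cipher.

```python
# Unknown ordering after linearization: x1, x2, x3, y12, y13, y23,
# where y_ij stands in for the quadratic monomial x_i * x_j.
# Each row is (coefficients over GF(2), constant term).
rows = [
    ([1, 0, 0, 1, 0, 0], 1),   # x1 + x1*x2       = 1
    ([0, 1, 0, 0, 1, 0], 1),   # x2 + x1*x3       = 1
    ([0, 0, 1, 0, 0, 1], 1),   # x3 + x2*x3       = 1
    ([0, 0, 0, 1, 1, 0], 1),   # x1*x2 + x1*x3    = 1
    ([1, 0, 0, 0, 0, 1], 1),   # x1 + x2*x3       = 1
    ([0, 1, 1, 1, 0, 0], 1),   # x2 + x3 + x1*x2  = 1
]

def solve_gf2(rows, n):
    """Gaussian elimination over GF(2); assumes the system has a unique solution."""
    aug = [coeffs[:] + [rhs] for coeffs, rhs in rows]
    pivot_row = 0
    for col in range(n):
        pivot = next((r for r in range(pivot_row, len(aug)) if aug[r][col]), None)
        if pivot is None:
            continue
        aug[pivot_row], aug[pivot] = aug[pivot], aug[pivot_row]
        for r in range(len(aug)):
            if r != pivot_row and aug[r][col]:
                aug[r] = [a ^ b for a, b in zip(aug[r], aug[pivot_row])]
        pivot_row += 1
    solution = [0] * n
    for r in range(pivot_row):
        lead = next(c for c in range(n) if aug[r][c])
        solution[lead] = aug[r][n]
    return solution

x1, x2, x3, y12, y13, y23 = solve_gf2(rows, 6)
print("x =", (x1, x2, x3))
# Consistency check: each y variable must equal the product it replaced.
print((y12, y13, y23) == (x1 & x2, x1 & x3, x2 & x3))
```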
In 2000, Courtois et al. proposed an improved algorithm for MQ known as XL (for eXtended Linearization), which increases the number of equations by multiplying them with all monomials of a certain degree. Complexity estimates showed that the XL attack would not work against the equations derived from block ciphers such as AES. However, the systems of equations produced had a special structure, and the XSL algorithm was developed as a refinement of XL which could take advantage of this structure. In XSL, the equations are multiplied only by carefully selected monomials, and several variants have been proposed.
Research into the efficiency of XL and its derivative algorithms remains ongoing (Yang and Chen, 2004).
Application to block ciphers
Courtois and Pieprzyk (2002) observed that AES (Rijndael) and partially also Serpent could be expressed as a system of quadratic equations. The variables represent not just the plaintext, ciphertext and key bits, but also various intermediate values within the algorithm. The S-box of AES appears to be especially vulnerable to this type of analysis, as it is based on the algebraically simple inverse function. Subsequently, other ciphers have been studied to see what systems of equations can be produced (Biryukov and De Cannière, 2003), including Camellia, KHAZAD, MISTY1 and KASUMI. Unlike other forms of cryptanalysis, such as differential and linear cryptanalysis, only one or two (in the case of a 128 bit block size and a 256 bit key size) known plaintexts are required.
The XSL algorithm is tailored to solve the type of equation systems that are produced. Courtois and Pieprzyk estimate that an "optimistic evaluation shows that the XSL attack might be able to break Rijndael [with] 256 bits and Serpent for key lengths [of] 192 and 256 bits." Their analysis, however, is not universally accepted. For example:
At the AES 4 Conference, Bonn 2004, one of the inventors of Rijndael, Vincent Rijmen, commented, "The XSL attack is not an attack. It is a dream." Courtois promptly answered, "XSL may be a dream. It may also be a very bad dream and turn into a nightmare." However, no later paper and no actions by the NSA or NIST have given any support to this remark by Courtois.
In 2003, Murphy and Robshaw discovered an alternative description of AES, embedding it in a larger cipher called "BES", which can be described using very simple operations over a single field, GF(28). An XSL attack mounted on this system yields a simpler set of equations which would break AES with complexity of around 2100, if the Courtois and Pieprzyk analysis is correct. In 2005 Cid and Leurent gave evidence that, in its proposed form, the XSL algorithm does not provide an efficient method for solving the AES system of equations; however Courtois disputed their findings. At FSE 2007, Chu-Wee Lim and Khoongming Khoo showed that the XSL attack was worse than brute force on BES.
Even if XSL works against some modern algorithms, the attack currently poses little danger in terms of practical security. Like many modern cryptanalytic results, it would be a so-called "certificational weakness": while faster than a brute force attack, the resources required are still huge, and it is very unlikely that real-world systems could be compromised by using it. Future improvements could increase the practicality of an attack, however. Because this type of attack is new and unexpected, some cryptographers have expressed unease at the algebraic simplicity of ciphers like Rijndael. Bruce Schneier and Niels Ferguson write, "We have one criticism of AES: we don't quite trust the security… What concerns us the most about AES is its simple algebraic structure… No other block cipher we know of has such a simple algebraic representation. We have no idea whether this leads to an attack or not, but not knowing is reason enough to be skeptical about the use of AES." (Practical Cryptography, 2003, pp. 56–57)
References
S. Murphy, M. Robshaw Comments on the Security of the AES and the XSL Technique.
External links
Courtois' page on AES
"Quadratic Cryptanalysis", an explanation of the XSL attack by J. J. G. Savard
"AES is NOT broken" by T. Moh
Courtois and Pieprzyk paper on ePrint
Commentary in the Crypto-gram newsletter: , , .
An overview of AES and XSL
Cryptographic attacks | XSL attack | Technology | 1,567 |
275,126 | https://en.wikipedia.org/wiki/Carl%20N%C3%A4geli | Carl Wilhelm von Nägeli (26 or 27 March 1817 – 10 May 1891) was a Swiss botanist. He studied cell division and pollination but became known as the man who discouraged Gregor Mendel from further work on genetics. He rejected natural selection as a mechanism of evolution, favouring orthogenesis driven by a supposed "inner perfecting principle".
Birth and education
Nägeli was born in Kilchberg near Zürich, where he studied medicine at the University of Zürich. From 1839, he studied botany under A. P. de Candolle at Geneva, and graduated with a botanical thesis at Zürich in 1840. His attention having been directed by Matthias Jakob Schleiden, then professor of botany at Jena, to the microscopical study of plants, he engaged more particularly in that branch of research. He also coined the term "meristematic tissue" in 1858.
Academic career
Soon after graduation he became Privatdozent and subsequently professor extraordinary, in the University of Zürich; later he was called to fill the chair of botany at the University of Freiburg; and in 1857 he was promoted to Munich, where he remained as professor until his death.
Contributions
It was thought that Nägeli had first observed cell division during the formation of pollen, in 1842. However, this is disputed by Henry Harris, who writes: "What Nägeli saw and did not see in plant material at about the same time [as Robert Remak] is somewhat obscure... I conclude... that, unlike Remak, he did not observe nuclear division... it is clear that Nägeli did not in 1844 have any idea of the importance of the nucleus in the life of the cell."
In 1857, Nägeli first described microsporidia, the causative agent of pebrine disease in silkworms, which has historically devastated the silk industry in Europe.
Among his other contributions to science were a series of papers in the Zeitschrift für wissenschaftliche Botanik (1844–1846); Die neueren Algensysteme (1847); Gattungen einzelliger Algen (1849); Pflanzenphysiologische Untersuchungen (1855–1858), with Carl Eduard Cramer; Beiträge zur wissenschaftlichen Botanik (1858–1868); a number of papers contributed to the Royal Bavarian Academy of Sciences, forming three volumes of Botanische Mitteilungen (1861–1881); and, finally, his volume, Mechanisch-physiologische Theorie der Abstammungslehre, published in 1884. However, perhaps Nägeli is best known nowadays for his unproductive correspondence (1866–1873) with Gregor Mendel concerning the latter's celebrated work on Pisum sativum, the garden pea.
The writer Simon Mawer, in his book Gregor Mendel: planting the seeds of genetics (2006), gives an account of Nägeli's correspondence with Mendel, underlining that, at the time Nägeli was writing to the friar from Moravia, Nägeli "must have been preparing his great work entitled A mechanico-physiological theory of organic evolution (published in 1884, the year of Mendel's death) in which he proposes the concept of the 'idioplasm' as the hypothetical transmitter of inherited characters". Mawer notes that, in this Nägeli book, there is not a single mention of the work of Gregor Mendel. That prompted him to write: "We can forgive von Nägeli for being obtuse and supercilious. We can forgive him for being ignorant, a scientist of his time who did not really have the equipment to understand the significance of what Mendel had done despite the fact that he (von Nägeli) speculated extensively about inheritance. But omitting an account of Mendel's work from his book is, perhaps, unforgivable."
Nägeli and Hugo von Mohl were the first scientists to distinguish the plant cell wall from the inner contents, which was named the protoplasm in 1846. Nägeli believed that cells receive their hereditary characters from a part of the protoplasm which he called the idioplasma. Nägeli was an advocate of orthogenesis and an opponent of Darwinism. He developed an "inner perfecting principle" which he believed directed evolution. He wrote that many evolutionary developments were nonadaptive and variation was internally programmed.
Nägeli also coined the terms 'Meristem', 'Xylem' and 'Phloem' (all in 1858) while he and Hofmeister gave the 'Apical Cell Theory' (1846) which aimed to explain origin and functioning of the shoot apical meristem in plants.
Works
See also
University of Freiburg Faculty of Biology
Notes
External links
Short biography and bibliography in the Virtual Laboratory of the Max Planck Institute for the History of Science
Biography and work (in German)
Entire facsimile text of "Mechanisch-physiologische Theorie der Abstammungslehre"
1817 births
1891 deaths
Phycologists
Botanists with author abbreviations
People from Horgen District
Swiss mycologists
Swiss nobility
Academic staff of ETH Zurich
University of Zurich alumni
Academic staff of the University of Zurich
University of Geneva alumni
Academic staff of the University of Freiburg
Academic staff of the Ludwig Maximilian University of Munich
Orthogenesis
Foreign members of the Royal Society
19th-century Swiss botanists
Members of the Göttingen Academy of Sciences and Humanities | Carl Nägeli | Biology | 1,151 |
7,173,874 | https://en.wikipedia.org/wiki/Ecophysiology | Ecophysiology (from Greek , oikos, "house(hold)"; , physis, "nature, origin"; and , -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym.
Plants
Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis.
In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions.
Light
Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is described by the light response curve of net photosynthesis (PI curve), whose shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities: the inclined asymptote has a positive slope representing the efficiency of light use, and is called the quantum efficiency; the x-intercept is the light intensity at which biochemical assimilation (gross assimilation) balances leaf respiration so that the net CO2 exchange of the leaf is zero, called the light compensation point; and the horizontal asymptote represents the maximum assimilation rate. Sometimes, after reaching the maximum, assimilation declines owing to processes collectively known as photoinhibition.
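A minimal sketch of this model is given below, using the non-rectangular hyperbola commonly fitted to such curves; the parameter values are illustrative assumptions rather than measurements for any particular species.

```python
import numpy as np

def net_photosynthesis(I, phi=0.05, A_max=20.0, theta=0.9, R_d=1.0):
    """Non-rectangular hyperbola model of the light response curve.

    I      -- irradiance (umol photons m-2 s-1)
    phi    -- apparent quantum efficiency (initial slope of the curve)
    A_max  -- light-saturated gross assimilation rate (horizontal asymptote)
    theta  -- curvature factor (0 < theta <= 1)
    R_d    -- dark (leaf) respiration rate
    Returns the net CO2 assimilation rate; it is negative in the dark and
    crosses zero at the light compensation point.
    """
    b = phi * I + A_max
    A_gross = (b - np.sqrt(b**2 - 4.0 * theta * phi * I * A_max)) / (2.0 * theta)
    return A_gross - R_d

irradiance = np.linspace(0, 2000, 9)   # from darkness to full sunlight
print(net_photosynthesis(irradiance))
```

Plotting the returned values against irradiance reproduces the three diagnostic quantities: the initial slope (quantum efficiency), the zero crossing (light compensation point) and the plateau (maximum assimilation rate).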
As with most abiotic factors, light intensity (irradiance) can be both suboptimal and excessive. Suboptimal light (shade) typically occurs at the base of a plant canopy or in an understory environment. Shade tolerant plants have a range of adaptations to help them survive the altered quantity and quality of light typical of shade environments.
Excess light occurs at the top of canopies and on open ground when cloud cover is low and the sun's zenith angle is low, typically this occurs in the tropics and at high altitudes. Excess light incident on a leaf can result in photoinhibition and photodestruction. Plants adapted to high light environments have a range of adaptations to avoid or dissipate the excess light energy, as well as mechanisms that reduce the amount of injury caused.
Light intensity is also an important component in determining the temperature of plant organs (energy budget).
Temperature
In response to extremes of temperature, plants can produce various proteins. These protect them from the damaging effects of ice formation and falling rates of enzyme catalysis at low temperatures, and from enzyme denaturation and increased photorespiration at high temperatures. As temperatures fall, production of antifreeze proteins and dehydrins increases. As temperatures rise, production of heat shock proteins increases. Metabolic imbalances associated with temperature extremes result in the build-up of reactive oxygen species, which can be countered by antioxidant systems. Cell membranes are also affected by changes in temperature, which can cause the membrane to lose its fluid properties and become a gel in cold conditions or to become leaky in hot conditions. This can affect the movement of compounds across the membrane. To prevent these changes, plants can change the composition of their membranes. In cold conditions, more unsaturated fatty acids are placed in the membrane and in hot conditions, more saturated fatty acids are inserted.
Plants can avoid overheating by minimising the amount of sunlight absorbed and by enhancing the cooling effects of wind and transpiration. Plants can reduce light absorption using reflective leaf hairs, scales, and waxes. These features are so common in warm dry regions that these habitats can be seen to form a 'silvery landscape' as the light scatters off the canopies. Some species, such as Macroptilium purpureum, can move their leaves throughout the day so that they are always orientated to avoid the sun (paraheliotropism). Knowledge of these mechanisms has been key to breeding for heat stress tolerance in agricultural plants.
Plants can avoid the full impact of low temperatures by altering their microclimate. For example, Raoulia plants found in the uplands of New Zealand are said to resemble 'vegetable sheep' as they form tight cushion-like clumps to insulate the most vulnerable plant parts and shield them from cooling winds. The same principle has been applied in agriculture by using plastic mulch to insulate the growing points of crops in cool climates in order to boost plant growth.
Water
Too much or too little water can damage plants. If there is too little water then tissues will dehydrate and the plant may die. If the soil becomes waterlogged then the soil will become anoxic (low in oxygen), which can kill the roots of the plant.
The ability of plants to access water depends on the structure of their roots and on the water potential of the root cells. When soil water content is low, plants can alter their water potential to maintain a flow of water into the roots and up to the leaves (Soil plant atmosphere continuum). This remarkable mechanism allows plants to lift water as high as 120 m by harnessing the gradient created by transpiration from the leaves.
In very dry soil, plants close their stomata to reduce transpiration and prevent water loss. The closing of the stomata is often mediated by chemical signals from the root (i.e., abscisic acid). In irrigated fields, the fact that plants close their stomata in response to drying of the roots can be exploited to 'trick' plants into using less water without reducing yields (see partial rootzone drying). The use of this technique was largely developed by Dr Peter Dry and colleagues in Australia.
If drought continues, the plant tissues will dehydrate, resulting in a loss of turgor pressure that is visible as wilting. As well as closing their stomata, most plants can also respond to drought by altering their water potential (osmotic adjustment) and increasing root growth. Plants that are adapted to dry environments (Xerophytes) have a range of more specialized mechanisms to maintain water and/or protect tissues when desiccation occurs.
Waterlogging reduces the supply of oxygen to the roots and can kill a plant within days. Plants cannot avoid waterlogging, but many species overcome the lack of oxygen in the soil by transporting oxygen to the root from tissues that are not submerged. Species that are tolerant of waterlogging develop specialised roots near the soil surface and aerenchyma to allow the diffusion of oxygen from the shoot to the root. Roots that are not killed outright may also switch to less oxygen-hungry forms of cellular respiration. Species that are frequently submerged have evolved more elaborate mechanisms that maintain root oxygen levels, such as the aerial roots seen in mangrove forests.
However, for many terminally overwatered houseplants, the initial symptoms of waterlogging can resemble those due to drought. This is particularly true for flood-sensitive plants that show drooping of their leaves due to epinasty (rather than wilting).
CO2 concentration
CO2 is vital for plant growth, as it is the substrate for photosynthesis. Plants take in CO2 through stomatal pores on their leaves. At the same time as CO2 enters the stomata, moisture escapes. This trade-off between CO2 gain and water loss is central to plant productivity. The trade-off is all the more critical as Rubisco, the enzyme used to capture CO2, is efficient only when there is a high concentration of CO2 in the leaf. Some plants overcome this difficulty by concentrating CO2 within their leaves using C4 carbon fixation or Crassulacean acid metabolism. However, most species use C3 carbon fixation and must open their stomata to take in CO2 whenever photosynthesis is taking place.
The concentration of CO2 in the atmosphere is rising due to deforestation and the combustion of fossil fuels. This would be expected to increase the efficiency of photosynthesis and possibly increase the overall rate of plant growth. This possibility has attracted considerable interest in recent years, as an increased rate of plant growth could absorb some of the excess CO2 and reduce the rate of global warming. Extensive experiments growing plants under elevated CO2 using Free-Air Concentration Enrichment have shown that photosynthetic efficiency does indeed increase. Plant growth rates also increase, by an average of 17% for above-ground tissue and 30% for below-ground tissue. However, detrimental impacts of global warming, such as increased instances of heat and drought stress, mean that the overall effect is likely to be a reduction in plant productivity. Reduced plant productivity would be expected to accelerate the rate of global warming. Overall, these observations point to the importance of avoiding further increases in atmospheric CO2 rather than risking runaway climate change.
Wind
Wind has three very different effects on plants.
It affects the exchanges of mass (water evaporation, CO2) and of energy (heat) between the plant and the atmosphere by renewing the air at the contact with the leaves (convection).
It is sensed as a signal driving a wind-acclimation syndrome by the plant known as thigmomorphogenesis, leading to modified growth and development and eventually to wind hardening.
Its drag force can damage the plant (leaf abrasion, wind ruptures in branches and stems, windthrow and toppling in trees, and lodging in crops).
Exchange of mass and energy
Wind influences the way leaves regulate moisture, heat, and carbon dioxide. When no wind is present, a layer of still air builds up around each leaf. This is known as the boundary layer and in effect insulates the leaf from the environment, providing an atmosphere rich in moisture and less prone to convective heating or cooling. As wind speed increases, the leaf environment becomes more closely linked to the surrounding environment. It may become difficult for the plant to retain moisture as it is exposed to dry air. On the other hand, a moderately high wind allows the plant to cool its leaves more easily when exposed to full sunlight. Plants are not entirely passive in their interaction with wind. Plants can make their leaves less vulnerable to changes in wind speed, by coating their leaves in fine hairs (trichomes) to break up the airflow and increase the boundary layer. In fact, leaf and canopy dimensions are often finely controlled to manipulate the boundary layer depending on the prevailing environmental conditions.
Acclimation
Plants can sense the wind through the deformation of their tissues. This signal inhibits the elongation and stimulates the radial expansion of their shoots, while increasing the development of their root system. This syndrome of responses, known as thigmomorphogenesis, results in shorter, stockier plants with strengthened stems, as well as improved anchorage. It was once believed that this occurs mostly in very windy areas, but it has been found to happen even in areas with moderate winds, so that wind-induced signals are now recognized as a major ecological factor.
Trees have a particularly well-developed capacity to reinforce their trunks when exposed to wind. From the practical side, this realisation prompted arboriculturalists in the UK in the 1960s to move away from the practice of staking young amenity trees to offer artificial support.
Wind damage
Wind can damage most of the organs of the plants. Leaf abrasion (due to the rubbing of leaves and branches or to the effect of airborne particles such as sand) and leaf or branch breakage are rather common phenomena that plants have to accommodate. In the more extreme cases, plants can be mortally damaged or uprooted by wind. This has been a major selective pressure acting on terrestrial plants. Nowadays, it is one of the major threats to agriculture and forestry, even in temperate zones. It is worse for agriculture in hurricane-prone regions, such as the banana-growing Windward Islands in the Caribbean.
When this type of disturbance occurs in natural systems, the only solution is to ensure that there is an adequate stock of seeds or seedlings to quickly take the place of the mature plants that have been lost, although in many cases a successional stage will be needed before the ecosystem can be restored to its former state.
Animals
Humans
The environment can have major influences on human physiology. Environmental effects on human physiology are numerous; one of the most carefully studied effects is the alterations in thermoregulation in the body due to outside stresses. This is necessary because in order for enzymes to function, blood to flow, and for various body organs to operate, temperature must remain at consistent, balanced levels.
Thermoregulation
To achieve this, the body regulates three main things to maintain a constant, normal body temperature:
Heat transfer to the epidermis
The rate of evaporation
The rate of heat production
The hypothalamus plays an important role in thermoregulation. It connects to thermal receptors in the dermis, and detects changes in surrounding blood to determine whether to stimulate internal heat production or to stimulate evaporation.
There are two main types of stresses that can be experienced due to extreme environmental temperatures: heat stress and cold stress.
Heat stress is physiologically combated in four ways: radiation, conduction, convection, and evaporation. Cold stress is physiologically combated by shivering, accumulation of body fat, circulatory adaptations (that provide an efficient transfer of heat to the epidermis), and increased blood flow to the extremities.
There is one part of the body fully equipped to deal with cold stress. The respiratory system protects itself against damage by warming the incoming air to 80-90 degrees Fahrenheit before it reaches the bronchi. This means that not even the most frigid of temperatures can damage the respiratory tract.
In both types of temperature-related stress, it is important to remain well-hydrated. Hydration reduces cardiovascular strain, enhances the ability of energy processes to occur, and reduces feelings of exhaustion.
Altitude
Extreme temperatures are not the only obstacles that humans face. High altitudes also pose serious physiological challenges on the body. Some of these effects are reduced arterial oxygen partial pressure, the rebalancing of the acid-base content in body fluids, increased hemoglobin, increased RBC synthesis, enhanced circulation, and increased levels of the glycolysis byproduct 2,3 diphosphoglycerate, which promotes off-loading of O2 by hemoglobin in the hypoxic tissues.
Environmental factors can play a huge role in the human body's fight for homeostasis. However, humans have found ways to adapt, both physiologically and tangibly.
Scientists
George A. Bartholomew (1919–2006) was a founder of animal physiological ecology. He served on the faculty at UCLA from 1947 to 1989, and almost 1,200 individuals can trace their academic lineages to him. Knut Schmidt-Nielsen (1915–2007) was also an important contributor to this specific scientific field as well as comparative physiology.
Hermann Rahn (1912–1990) was an early leader in the field of environmental physiology. Starting out in the field of zoology with a Ph.D. from University of Rochester (1933), Rahn began teaching physiology at the University of Rochester in 1941. It is there that he partnered with Wallace O. Fenn to publish A Graphical Analysis of the Respiratory Gas Exchange in 1955. This paper included the landmark O2-CO2 diagram, which formed the basis for much of Rahn's future work. Rahn's research into applications of this diagram led to the development of aerospace medicine and advancements in hyperbaric breathing and high-altitude respiration. Rahn later joined the University at Buffalo in 1956 as the Lawrence D. Bell Professor and Chairman of the Department of Physiology. As Chairman, Rahn surrounded himself with outstanding faculty and made the University an international research center in environmental physiology.
See also
Comparative physiology
Evolutionary physiology
Ecology
Phylogenetic comparative methods
Plant physiology
Raymond B. Huey
Theodore Garland, Jr.
Tyrone Hayes
References
Further reading
Spicer, J. I., and K. J. Gaston. 1999. Physiological diversity and its ecological implications. Blackwell Science, Oxford, U.K. x + 241 pp.
. Definitions and Opinions by: G. A. Bartholomew, A. F. Bennett, W. D. Billings, B. F. Chabot, D. M. Gates, B. Heinrich, R. B. Huey, D. H. Janzen, J. R. King, P. A. McClure, B. K. McNab, P. C. Miller, P. S. Nobel, B. R. Strain.
Subfields of ecology
Physiology
Animal physiology
Plant physiology
Ecology terminology
Animal ecology
Plant ecology
Articles containing video clips | Ecophysiology | Biology | 3,590 |
12,799,573 | https://en.wikipedia.org/wiki/Cambridge%20Structural%20Database | The Cambridge Structural Database (CSD) is both a repository and a validated and curated resource for the three-dimensional structural data of molecules generally containing at least carbon and hydrogen, comprising a wide range of organic, metal-organic and organometallic molecules. The specific entries are complementary to the other crystallographic databases such as the Protein Data Bank (PDB), Inorganic Crystal Structure Database and International Centre for Diffraction Data. The data, typically obtained by X-ray crystallography and less frequently by electron diffraction or neutron diffraction, and submitted by crystallographers and chemists from around the world, are freely accessible (as deposited by authors) on the Internet via the CSD's parent organization's website (CCDC, Repository). The CSD is overseen by the not-for-profit incorporated company called the Cambridge Crystallographic Data Centre, CCDC.
The CSD is a widely used repository for small-molecule organic and metal-organic crystal structures for scientists. Structures deposited with Cambridge Crystallographic Data Centre (CCDC) are publicly available for download at the point of publication or at consent from the depositor. They are also scientifically enriched and included in the database used by software offered by the centre. Targeted subsets of the CSD are also freely available to support teaching and other activities.
History
The CCDC grew out of the activities of the crystallography group led by Olga Kennard OBE FRS in the Department of Organic, Inorganic and Theoretical Chemistry of the University of Cambridge. From 1965, the group began to collect published bibliographic, chemical and crystal structure data for all small molecules studied by X-ray or neutron diffraction. With the rapid developments in computing taking place at this time, this collection was encoded in electronic form and became known as the Cambridge Structural Database (CSD).
The CSD was one of the first numerical scientific databases to begin operations anywhere in the world, and received academic grants from the UK Office for Scientific and Technical Information and then from the UK Science and Engineering Research Council. These funds, together with subventions from National Affiliated Centres, enabled the development of the CSD and its associated software during the 1970s and 1980s. The first releases of the CSD System to the United States, Italy and Japan occurred in the early 1970s. By the early 1980s the CSD System was being distributed in more than 30 countries. As of 2014, the CSD System was distributed to academics in 70 countries.
During the 1980s, interest in the CSD System from pharmaceutical and agrochemicals companies increased significantly. This led to the establishment of the Cambridge Crystallographic Data Centre (CCDC) as an independent company in 1987, with the legal status of a non-profit charitable institution, and with its operations overseen by an international board of governors. The CCDC moved into purpose-built premises on the site of the University Department of Chemistry in 1992.
Kennard retired as Director in 1997 and was succeeded by David Hartley (1997-2002) and Frank Allen (2002-2008). Colin Groom was appointed as executive director from 1 October 2008 to September 2017. Most recently, Juergen Harter was appointed CEO in June 2018.
CCDC software products diversified to the use of crystallographic data in applications in the life sciences and crystallography. Much of this software development and marketing is carried out by CCDC Software Limited (founded in 1998), a wholly owned subsidiary which covenants all of its profits back to the CCDC.
Although the CCDC is a self-administering organization, it retains close links with the University of Cambridge, and is a University Partner Institution that is qualified to train postgraduate students for higher degrees (PhD, MPhil).
The CCDC established applications and support operations in the US in October 2013, initially at Rutgers, the State University of New Jersey, where it is co-located with the RCSB Protein Data Bank.
Contents
The CSD is updated with about 50,000 new structures each year, and with improvements to existing entries. Entries (structures) in the repository are released for public access as soon as the corresponding entry has appeared in the peer-reviewed scientific literature. Meanwhile, data can also be deposited and published directly through the CSD without an accompanying scientific article as what is known as a CSD Communication.
Periodically, general statistics about the breadth of CSD holdings are reported, for example in the January 2014 report. The summary statistics are as follows:
As of January 2019, the top 25 scientific journals in terms of publication of structures in the CSD repository were:
1. structures were reported in Inorg. Chem.
2. structures were reported in Dalton & J. Chem. Soc., Dalton Trans.
3. structures were reported in Organometallics
4. structures were reported in J. Am. Chem. Soc.
5. structures were reported in Acta Crystallogr. Sect. E
6. structures were reported in Chem. Eur. J.
7. structures were reported in J. Organomet. Chem.
8. structures were reported in Angew. Chem. Int. Ed.
9. structures were reported in Inorg. Chim. Acta
10. structures were reported in Chem. Commun. & J. Chem. Soc.
11. structures were reported in CSD Communications
12. structures were reported in Acta Crystallogr. Sect. C
13. structures were reported in Polyhedron
14. structures were reported in Eur. J. Inorg. Chem.
15. structures were reported in J. Org. Chem.
16. structures were reported in Cryst. Growth Des.
17. structures were reported in CrystEngComm
18. structures were reported in Organic Letters
19. structures were reported in Z. Anorg. Allg. Chem.
20. structures were reported in Acta Crystallogr. Sect. B
21. structures were reported in Tetrahedron
structures were reported as Private Communication to the CSD
22. structures were reported in J. Mol. Struct.
23. structures were reported in Tetrahedron Lett.
24. structures were reported in Eur. J. Org. Chem.
25. structures were reported in New Journal of Chemistry
These 25 journals account for 704,541 of the 996,193 structures in the CSD, or 70.7%.
These data show that most structures are determined by X-ray diffraction, with less than 1% of structures being determined by neutron diffraction or powder diffraction. The number of error-free coordinates was taken as a percentage of structures for which 3D coordinates are present in the CSD.
The significance of the structure factor files, mentioned above, is that, for CSD structures determined by X-ray diffraction that have a structure factor file, a crystallographer can verify the interpretation of the observed measurements.
Growth trend
Historically, the number of structures in the CSD has grown at an approximately exponential rate passing the 25,000 structures milestone in 1977, the 50,000 structures milestone in 1983, the 125,000 structures milestone in 1992, the 250,000 structures milestone in 2001, the 500,000 structures milestone in 2009, and the 1,000,000 structures milestone on June 8, 2019. The one millionth structure added to CSD is the crystal structure of 1-(7,9-diacetyl-11-methyl-6H-azepino[1,2-a]indol-6-yl)propan-2-one.
Note: data for 1923-1964 are aggregated together in the last line of the table.
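As a rough check on the quoted milestones, the implied growth rate between successive milestones can be computed directly from the figures given above; the short sketch below uses only those numbers.

```python
import math

# Milestones quoted above: (year, number of structures in the CSD)
milestones = [(1977, 25_000), (1983, 50_000), (1992, 125_000),
              (2001, 250_000), (2009, 500_000), (2019, 1_000_000)]

for (y0, n0), (y1, n1) in zip(milestones, milestones[1:]):
    rate = math.log(n1 / n0) / (y1 - y0)   # continuous growth rate per year
    doubling = math.log(2) / rate          # implied doubling time in years
    print(f"{y0}-{y1}: {100 * rate:.1f}%/yr, doubling every {doubling:.1f} yr")
```

The implied doubling time stays between roughly six and ten years throughout, consistent with the "approximately exponential" description.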
File format
The primary file format for CSD structure deposition, adopted around 1991, is the "Crystallographic Information file" format, CIF.
The deposited CSD files can be downloaded in the CIF format. The validated and curated CSD files can be exported in a wide range of formats, including CIF, MOL, Mol2, PDB, SHELX and XMol, using tools in the CSD System.
The CCDC uses two different codes to distinguish between the deposited dataset and the curated CSD entry. For example, one specific ‘CSD Communication’ of an organic molecule was deposited with the CCDC and assigned the deposition number 'CCDC-991327.' This allows free public access to the data as deposited. From the deposited data, selected information is extracted to prepare the validated and curated CSD entry which was assigned the refcode 'MITGUT'. As a part of the curation process, CCDC also applies an algorithm, DeCIFer, to help the editors assign chemistry to structures when those representations (e.g. bond types and charge assignments etc.) are missing from the original CIF files submitted. The validated and curated entry is included in the CSD System and WebCSD distributions, with availability restricted to those making appropriate contributions.
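Because a deposited CIF file is plain text made up of '_tag value' data items, basic information can be extracted without specialised software. The sketch below is a minimal example; the file name is hypothetical, and loop_ constructs and multi-block files are ignored.

```python
def read_cell_parameters(path):
    """Pull the unit-cell data items from a simple, single-block CIF file."""
    wanted = {"_cell_length_a", "_cell_length_b", "_cell_length_c",
              "_cell_angle_alpha", "_cell_angle_beta", "_cell_angle_gamma"}
    cell = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2 and parts[0] in wanted:
                # Values such as '11.234(5)' carry the standard uncertainty
                # in parentheses; they are kept here as strings.
                cell[parts[0]] = parts[1]
    return cell

# print(read_cell_parameters("deposited_structure.cif"))  # hypothetical file name
```

For anything beyond this (symmetry operations, atom-site loops, multi-block files), a dedicated CIF parser or the CSD System tools would be used instead.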
Viewing the data
Each data set in CSD can be openly viewed and retrieved using the free Access Structure service. Through this web-browser based service, users can view the data set in 2D and 3D, obtain some basic information about the structure, and download the deposited data set. More advanced search functions and curated information are available through the subscription based CSD system.
Besides using the CSD system, the structure files may be viewed using one of several open source computer programs such as Jmol. Other free, but not open source, programs include MDL Chime, Pymol, UCSF Chimera, Rasmol and WINGX; the CCDC also provides a free version of its visualization program, Mercury.
Starting in 2015, Mercury from CCDC also provides the functionality to generate 3D-print-ready files from structures in the CSD.
See also
Crystallographic database
Mercury
Protein structure
References
External links
The Cambridge Crystallographic Data Centre (CCDC) — parent site to CSD
Biological databases
Chemical databases
Chemical industry in the United Kingdom
Crystallographic databases
Databases in the United Kingdom
Science and technology in Cambridgeshire | Cambridge Structural Database | Chemistry,Materials_science,Biology | 2,069 |
74,046,482 | https://en.wikipedia.org/wiki/List%20of%20Oceanian%20countries%20by%20life%20expectancy | This is a list of Oceanian countries by life expectancy at birth.
United Nations (2023)
Estimates by the analytical agency of the UN.
UN: Estimate of life expectancy for various ages in 2023
UN: Change of life expectancy from 2019 to 2023
World Bank Group (2022)
Estimates by the World Bank Group for 2022. The data are filtered according to the list of countries in Oceania. The values in the World Bank Group tables are rounded, while all calculations here are based on raw data, so rounding can produce apparent inconsistencies of 0.01 year in places.
In 2014, some of the world's leading countries had a local peak in life expectancy, so that year is chosen for comparison with 2019 and 2022.
WHO (2019)
Estimates by the World Health Organization for 2019.
Charts
See also
References
Life expectancy
Oceania | List of Oceanian countries by life expectancy | Biology | 192 |
60,529,141 | https://en.wikipedia.org/wiki/Richard%20Yost | Richard A Yost (born 31 May 1953 in Martins Ferry) is an American scientist and University Professor Emeritus at the University of Florida. He is best known for his work inventing the triple quadrupole mass spectrometer. Yost received his BS degree in chemistry in 1974 from the University of Arizona, having performed undergraduate research in chromatography with Mike Burke and his PhD degree in Analytical Chemistry in 1979 from Michigan State University, having performed graduate research with Chris Enke.
He joined the faculty of the University of Florida after his graduate research.
He served as director of the Southeast Center for Integrated Metabolomics (SECIM) and of NIH's Metabolomics Consortium Coordinating Center (M3C). He is also a Professor of Pathology at both the University of Florida and the University of Utah/ARUP.
His professional activities have focused on research and teaching in analytical mass spectrometry, particularly tandem mass spectrometry (MS/MS). His group's research has reflected a unique balance between instrumentation development, fundamental studies, and applications in analytical chemistry. His group has led in the application of novel mass spectrometric methods and techniques to areas such as metabolomics, clinical, biomedical, pharmaceutical, environmental, petrochemical, and forensic chemistry. Yost has supervised the research of over 120 graduate students during the past 44 years, graduating over 100 PhDs from his group. He has served as PI or Co-PI on grants and contracts totaling over $65M of funding. Research in the group has led to over 240 publications and 25 patents.
He is one of the inventors of the triple quadrupole mass spectrometer. He won the ASMS Award for Distinguished Contribution to Mass Spectrometry along with Chris Enke in 1993. His research has also been recognized with the 2018 MSACL Award for Distinguished Contribution to Clinical Mass Spectrometry, and the 2019 CPSA Distinguished Analytical Scientist Award. In 2019 he was named the Florida Academy of Sciences Medalist, and was inducted into the Florida Inventors Hall of Fame.
His research interests lie in mass spectrometric instrumentation and applications in analytical chemistry. This includes the development of new mass spectrometric and ion mobility instrumentation and techniques and their application to biology and chemistry.
Yost has served on the Florida Board of Governors (Regents) and the University of Florida Board of Trustees. In 2019 he received the Distinguished Eagle Scout Award. Yost also served as the president of the American Society for Mass Spectrometry.
References
University of Arizona alumni
Living people
Mass spectrometrists
University of Florida faculty
1953 births
American chemists | Richard Yost | Physics,Chemistry | 537 |
22,529,331 | https://en.wikipedia.org/wiki/Lonely%20hearts%20killer | A lonely hearts killer (also called want-ad killer) is a criminal who commits murder by contacting a victim who has either posted advertisements to or answered advertisements via newspaper classified ads and personal or lonely hearts ads.
Varied motives
The actual motivations of these criminals are varied. By definition, a killing will have taken place in as much as the suspected, accused, or convicted perpetrator has been dubbed a want-ad or lonely hearts killer. However, the crime may have involved a simple robbery gone wrong, an elaborate insurance fraud scheme, sexual violence/rape, or any of several other ritualized pathological impulses (e.g. necrophilia, mutilation, cannibalism, etc.). Sometimes murder is not the (original) intent, but becomes a by-product of rape or other struggles; in some cases, murder is committed simply to cover up the original crime. Some, on the other hand, are serial killers who utilize this method of targeting victims, either exclusively, or when it suits them.
Notable lonely hearts and want-ad killers
The following accused and convicted murderers and serial killers are known to have used want ads, personal ads, and/or matrimonial bureaus to contact their victims:
Elfriede Blauensteiner (1931–2003) – known as "The Black Widow"
Viktor Bolkhovsky (b. 1959) – known as "The Necromancer Maniac"
Harvey Carignan (1927–2023) – known as "The Want-Ad Killer"
Nannie Doss (1905–1965) – known as "The Lonely Hearts Killer", among other names
Amelia Dyer (1836–1896) – known as "The Ogress of Reading"
Raymond Fernandez (1914–1951) and Martha Beck (1920–1951) – known as "The Honeymoon Killers" and "The Lonely Hearts Killers"
Albert Fish (1870–1936)
Harvey Glatman (1927–1959) – known as "The Lonely Hearts Killer"
Denis Gorbunov (1977–2006)
Belle Gunness (1859–1908?) – she became part of American criminal folklore, a female Bluebeard.
Robert Hansen (1939–2014)
Béla Kiss (1877–19?)
Sheila LaBarre (b. 1959) – serving two consecutive life sentences for two murders on a farm she inherited from her deceased husband. Her boyfriend later died, as did a man who replied to her personal ad.
Henri Désiré Landru (1869–1922)
Bobby Joe Long (1953–2019) – known as "The Classified Ad Rapist"
Philip Markoff (1986–2010) – known as "The Craigslist Killer"
Harry Powers (1892–1932) – known as "The Lonely Hearts Killer", "The West Virginia Bluebeard", and "The Butcher of Clarksburg"
See also
Internet homicide
Internet suicide
Murder of Margaret Martin
References
Crime
Interpersonal relationships
Killings by type | Lonely hearts killer | Biology | 599 |
3,744,312 | https://en.wikipedia.org/wiki/Electro-pneumatic%20action | The electro-pneumatic action is a control system by the mean of air pressure for pipe organs, whereby air pressure, controlled by an electric current and operated by the keys of an organ console, opens and closes valves within wind chests, allowing the pipes to speak. This system also allows the console to be physically detached from the organ itself. The only connection was via an electrical cable from the console to the relay, with some early organ consoles utilizing a separate wind supply to operate combination pistons.
Invention
Although early experiments with Barker lever, tubular-pneumatic and electro-pneumatic actions date as far back as the 1850s, credit for a feasible design is generally given to the English organist and inventor, Robert Hope-Jones. He overcame the difficulties inherent in earlier designs by including a rotating centrifugal air blower and replacing banks of batteries with a DC generator, which provided electrical power to the organ. This allowed the construction of new pipe organs without any physical linkages whatsoever. Previous organs used tracker action, which requires a mechanical linkage between the console and the organ windchests, or tubular-pneumatic action, which linked the console and windchests with a large bundle of lead tubing.
Operation
When an organ key is depressed, an electric circuit is completed by means of a switch connected to that key. This causes a low-voltage current to flow through a cable to the windchest, upon which a rank, or multiple ranks of pipes are set. Within the chest, a small electro-magnet associated with the key that is pressed becomes energized. This causes a very small valve to open. This, in turn, allows wind pressure to activate a bellows or "pneumatic" which operates a larger valve. This valve causes a change of air pressure within a channel that leads to all pipes of that note. A separate "stop action" system is used to control the admittance of air or "wind" into the pipes of the rank or ranks selected by the organist, while other ranks are "stopped" from playing. The stop action can also be an electro-pneumatic action, or may be another type of action.
This pneumatically assisted valve action is in contrast to a direct electric action in which each pipe's valve is opened directly by an electric solenoid which is attached to the valve.
Advantages and disadvantages
The console of an organ which uses either type of electric action is connected to the other mechanisms by an electrical cable. This makes it possible for the console to be placed in any desirable location. It also permits the console to be movable, or to be installed on a "lift", as was the practice with theater organs.
While many consider tracker action organs to be more sensitive to the player's control, others find some tracker organs heavy to play and tubular-pneumatic organs to be sluggish, and so prefer electro-pneumatic or direct electric actions.
An electro-pneumatic action requires less current to operate than a direct electric action. This causes less demand on switch contacts. An organ using electro-pneumatic action was more reliable in operation than early direct electric organs until improvements were made in direct electric components.
A disadvantage of an electro-pneumatic organ is its use of large quantities of thin perishable leather, usually lambskin. This requires an extensive "re-leathering" of the windchests every twenty-five to forty years depending upon the quality of the material used, the atmospheric conditions and the use of the organ.
Like tracker and tubular action, electro-pneumatic action—when employing the commonly used pitman-style windchests—is less flexible in operation than direct electric action. When electro-pneumatic action uses unit windchests (as does the electro-pneumatic action constructed by organ builder Schoenstein & Co.), then it works similarly to direct electric action, in which each rank operates independently, allowing "unification", where each individual rank on a windchest can be played at various octave ranges.
A drawback to older electric action organs was the large amount of wiring required for operation. With each stop tab and key being wired, the transmission cable could easily contain several hundred wires. The great number of wires required between the keyboards, the banks of relays and the organ itself, with each solenoid requiring its own signal wire, made the situation worse, especially if a wire was broken (this was particularly true with consoles located on lifts and/or turntables), which made tracing the break very difficult.
These problems increased with the size of the instrument, and it would not be unusual for a particular organ to contain over a hundred miles of wiring. The largest pipe organ in the world, the Boardwalk Hall Auditorium Organ, is said to contain more than of wire. Modern electronic switching has largely overcome these physical problems.
Modern methods
In the years after the advent of the transistor, and later, integrated circuits and microprocessors, miles of wiring and electro-pneumatic relays have given way to electronic and computerized control and relay systems, which have made the control of pipe organs much more efficient. But for its time, the electro-pneumatic action was considered a great success, and even today modernized versions of this action are used in many new pipe organs, especially in the United States and the United Kingdom.
References
Further reading
George Ashdown Audsley. The Art of Organ Building.
Pipe organ components | Electro-pneumatic action | Technology | 1,129 |
2,727,977 | https://en.wikipedia.org/wiki/22%20Tauri | 22 Tauri is a component of the Asterope double star in the Pleiades open cluster. 22 Tauri is the stars' Flamsteed designation. It is situated near the ecliptic and thus is subject to lunar occultation. The star has an apparent visual magnitude of 6.43, which is near the lower threshold of visibility to the naked eye. Anybody attempting to view the object is likely to instead see the Asterope pair as a single elongated form of magnitude 5.6. Based upon an annual parallax shift of , this star is located 444 light years away from the Sun. It is moving further from the Earth with a heliocentric radial velocity of +7 km/s.
This is an ordinary A-type main-sequence star with a stellar classification of A0 Vn. The 'n' suffix indicates the spectrum displays "nebulous" absorption lines due to rapid rotation. This is confirmed by a high projected rotational velocity of 232 km/s. The star is radiating sixty times the Sun's luminosity from its photosphere at an effective temperature of 11,817 K.
References
A-type main-sequence stars
Pleiades
Taurus (constellation)
Durchmusterung objects
Tauri, 022
023441
017588
1152
Sterope II | 22 Tauri | Astronomy | 275 |
28,960,689 | https://en.wikipedia.org/wiki/Commercial%20Product%20Assurance | Commercial Product Assurance (CPA) is a CESG approach to gaining confidence in the security of commercial products.
It is intended to supplant other approaches such as Common Criteria (CC) and CCT Mark for UK government use.
Organisation
CPA is being developed under the auspices of the UK Government's CESG as the UK National Technical Authority (NTA) for Information Security.
Architectural patterns
CESG also produce Architectural Patterns which cover good practices for common business problems, which looks to use CPA product.
Current Architectural Patterns include:
Walled Gardens for Remote Access
Mobile Remote End Point Devices
Data Import between Security Domains
Comparisons
In comparison to other schemes:
Unlike Common Criteria, there is no Mutual Recognition Agreement (MRA) for CPA, which means that products tested in the UK will not normally be accepted in other markets
Unlike the CCT Mark, the coverage of CPA is limited to Information Security products, and therefore excludes services. The target audience for CPA also appears to be focused on Central Government ("I'm protecting Government data") rather than including the Wider Public Sector (WPS) and Critical National Infrastructure (CNI) segments that were target customers for CCT Mark
References
Computer security procedures
Evaluation of computers | Commercial Product Assurance | Technology,Engineering | 250 |
11,512,309 | https://en.wikipedia.org/wiki/Microsphaera%20verruculosa | Microsphaera verruculosa is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Microsphaera
Fungi described in 1981
Fungus species | Microsphaera verruculosa | Biology | 44 |
1,887,912 | https://en.wikipedia.org/wiki/Letter-quality%20printer | A letter-quality printer was a form of computer impact printer that was able to print with the quality typically expected from a business typewriter such as an IBM Selectric.
A letter-quality printer operates in much the same fashion as a typewriter. A metal or plastic printwheel embossed with letters, numbers, or symbols strikes an inked ribbon, depositing the ink (or carbon, if an expensive single-strike ribbon was installed) on the page and thus printing a character.
Over time, several different technologies were developed including automating ordinary typebar typewriter mechanisms (such as the Friden Flexowriter), daisy wheel printers (dating from a 1939 patent, but brought to life in the 1970s by Diablo engineer David S. Lee) where the type is moulded around the edge of a wheel, and "golf ball" (the popular informal name for "typeball", as used in the IBM Selectric typewriter) printers where the type is distributed over the face of a globe-shaped printhead (including automating IBM Selectric mechanisms such as the IBM 2741 terminal). The daisy wheel and Selectric-based printers offered the advantage that the typeface was readily changeable by the user to accommodate varying needs.
These printers were referred to as "letter-quality printers" during their heyday, and could produce text which was as clear and crisp as a typewriter (though they were nowhere near the quality of printing presses). Most were available either as complete computer terminals with keyboards (or with a keyboard add-on option) that could double as a typewriter in stand-alone ("off-line") mode, or as print-only devices. Because of its low cost at the time, the daisy wheel printer became the most successful, the method used by Diablo, Qume, Brother and Apple.
Letter-quality impact printers, however, were slow, noisy, incapable of printing graphics or images (unless programmable microspacing and repeated overprinting of the dot character were employed), sometimes limited to monochrome, and limited to a fixed set (usually one) of typefaces without operator intervention, though certain font effects like underlining and boldface could be achieved by overstriking. Soon, dot-matrix printers (such as the Oki Microline 84) would offer "Near Letter Quality" (NLQ) modes which were much faster than daisy-wheel printers and could produce graphics well, but whose output was still noticeably below "letter quality". Nowadays, printers using non-impact printing (for example laser printers, inkjet printers, and other similar means) have replaced traditional letter-quality printers in most applications. The quality of inkjet printers can approach the old letter-quality impact printers (but can be limited by factors such as paper type).
Use in word processing
Dedicated word processors and WP software for general-purpose computers that rose in popularity in the late 1970s and 1980s would use features such as microspacing (usually by 1/120 of an inch horizontally and, possibly, 1/48 of an inch vertically) to implement subscripts, proportional spacing, underlining, and so on. The more rudimentary software packages would implement bold text by overtyping the character in exactly the same spot (for example, using the backspace control code), but better software would print the letter in 3 slightly different positions. Software did exist to (slowly) produce pie charts on such printers (and on some daisywheels the dot was reinforced with metal to cope with extra wear).
See also
Apple Daisy Wheel Printer
Diablo 630 — the archetypal daisy wheel printer
Dot matrix printers
Near-letter quality
Teleprinter
References
Office equipment
Computer printers
Computer peripherals | Letter-quality printer | Technology | 766 |
12,953,285 | https://en.wikipedia.org/wiki/IEEE%20Internet%20Award | IEEE Internet Award is a Technical Field Award established by the IEEE in June 1999. The award is sponsored by Nokia Corporation. It may be presented annually to an individual or up to three recipients, for exceptional contributions to the advancement of Internet technology for network architecture, mobility and/or end-use applications. Awardees receive a bronze medal, certificate, and honorarium.
Recipients
The following people have received the award:
2000 – Paul Baran, Donald W. Davies, Leonard Kleinrock and Larry Roberts (for packet switching)
2001 – Louis Pouzin (for datagrams)
2002 – Steve Crocker (for approach enabling evolution of Internet Protocols)
2003 – Paul Mockapetris (for the domain name system; the Mockapetris citation specifically cites Jon Postel who had died and therefore could not receive the award for their DNS work)
2004 – Raymond Tomlinson and David H. Crocker (for networked email)
2005 – Sally Floyd (for contributions in congestion control, traffic modeling, and active queue management)
2006 – Scott Shenker (for contributions to the study of resource sharing)
2007 – not awarded
2008 – Mike Brescia, Ginny Travers, and Bob Hinden (for early IP routers)
2009 – Lixia Zhang (for Internet architecture and modeling)
2010 – Stephen Deering (for IP multicasting and IPv6)
2011 – Jun Murai (for leadership in the development of the global Internet, especially in Asia)
2012 – Mark Handley (for exceptional contributions to the advancement of Internet technology for network architecture, mobility, and/or end-use applications)
2013 – David L. Mills (for significant leadership and sustained contributions in the research, development, standardization, and deployment of quality time synchronization capabilities for the Internet)
2014 – Jon Crowcroft (for contributions to research in and teaching of Internet protocols, including multicast, transport, quality of service, security, mobility, and opportunistic networking)
2015 – KC Claffy and Vern Paxson (for seminal contributions to the field of Internet measurement, including security and network data analysis, and for distinguished leadership in and service to the Internet community by providing open-access data and tools)
2016 – Henning Schulzrinne
2017 – Deborah Estrin
2018 – Ramesh Govindan
2019 – Jennifer Rexford
2020 – Stephen Casner and Eve Schooler (for contributions to Internet multimedia standards and protocols)
2023 – Ian Foster and Carl Kesselman (for contributions to the design, deployment, and application of practical Internet-scale global computing platforms)
2024 – Walter Willinger
Notes
See also
Internet Hall of Fame
Internet pioneers
IEEE Alexander Graham Bell Medal
List of computer science awards
SIGCOMM Award
References
External links
Internet Award
Computer science awards
Awards established in 1999 | IEEE Internet Award | Technology | 569 |
56,162,314 | https://en.wikipedia.org/wiki/Iran%20International%20Neuroscience%20Institute | The Iran International Neuroscience Institute (Persian: بنیاد علمی بینالمللی علوم مغز و اعصاب ایران) or Iran INI (Persian: آیانآی ایران) is an International research centre and hospital located in Tehran, Iran. It is the largest centre of Neuroscience researches in the world and third version of its own kind that was founded by professor Majid Samii. The first INI is in Hanover of Germany. This research centre of is being constructed in 11 floors.
See also
List of hospitals in Iran
References
Buildings and structures under construction
Hospitals in Iran
Medical research institutes in Iran
Neuroscience research centers in Iran
Hospitals established in 2019 | Iran International Neuroscience Institute | Engineering | 155 |
307,882 | https://en.wikipedia.org/wiki/Wireline%20%28cabling%29 | In the oil and gas industry, the term wireline usually refers to the use of multi-conductor, single conductor or slickline cable, or "wireline", as a conveyance for the acquisition of subsurface petrophysical and geophysical data and the delivery of well construction services such as pipe recovery, perforating, plug setting and well cleaning and fishing. The subsurface geophysical and petrophysical information results in the description and analysis of subsurface geology, reservoir properties and production characteristics.
Associated with this, "wireline logging" is the acquisition and analysis of geophysical and petrophysical data and the provision of related services provided as a function of along-hole depth.
There are four basic types of wireline: multi-conductor, single conductor, slickline and braided line. Other types of wireline include sheathed slickline and fibre-optic lines.
Multi-conductor lines consist of external armor wires wound around a core of typically 4- or 7-conductors. The conductors are bound together in a central core, protected by the outer armor wires. These conductors are used to transmit power to the downhole instrumentation and transmit data (and commands) to and from the surface. Multi-conductor cables are used primarily in open- (and cased-) hole applications. Typically they have diameters from to with suggested working loads from . (Note that wireline diameters and performance characteristics are typically expressed in imperial units.) Multi-conductor cables can be sheathed in smooth polymer coverings but are more commonly open wound cables.
Single-conductor cables are similar in construction to multi-conductor cables but have only one conductor. The diameters are usually much smaller, ranging from to and with suggested working loads of 800 to 7,735 lbf. Because of their size, these cables can be used in pressurized wells making them particularly suited for cased hole logging activities under pressure. They are typically used for well construction activities such as pipe recovery, perforating and plug setting as well as production logging and reservoir production characterization such as production logging, noise logging, pulsed neutron, production fluid sampling and production flow monitoring.
Slickline is a smooth single strand of wireline with diameters ranging from 0.082" to 0.160". Slickline has no conductor (although there are specialized polymer coated slicklines and tubing encapsulated (TEC) slicklines). They are used for light well construction and well maintenance activities as well as memory-reliant subsurface data gathering. Slickline work includes mechanical services such as gauge emplacement and recovery, subsurface valve manipulation, well bore cleaning and fishing.
Braided line has mechanical characteristics similar to mono-conductor wireline, and is used for well construction and maintenance tasks such as heavy duty fishing and well bore cleaning work.
Slicklines
Used to place and recover wellbore equipment, such as plugs, gauges and valves, slicklines are single-strand non-electric cables lowered into oil and gas wells from the surface. Slicklines can also be used to adjust valves and sleeves located downhole, as well as repair tubing within the wellbore.
Wrapped around a drum on the back of a truck, the slickline is raised and lowered in the well by reeling in and out the wire hydraulically.
Braided line can contain an inner core of insulated wires which provide power to equipment located at the end of the cable, normally referred to as electric line, and provides a pathway for electrical telemetry for communication between the surface and equipment at the end of the cable.
On the other hand, wirelines are electric cables that transmit data about the well. Consisting of single strands or multi-strands, the wireline is used for both well intervention and formation evaluation operations. In other words, wirelines are useful in gathering data about the well in logging activities, as well as in workover jobs that require data transmittal.
Wireline logs
First developed by Conrad and Marcel Schlumberger in 1927, wireline logs measure formation properties in a well through electrical lines of wire. Different from measurement while drilling (MWD) and mud logs, wireline logs are constant downhole measurements sent through the electrical wireline used to help geologists, drillers and engineers make real-time decisions about the reservoir and drilling operations. Wireline instruments can measure a host of petrophysical properties that form the basis of geological and petrophysical analysis of the subsurface. Measurements include self-potential, natural gamma ray, acoustic travel time, formation density, neutron porosity, resistivity and conductivity, nuclear magnetic resonance, borehole imaging, well bore geometry, formation dip and orientation, fluid characteristics such as density and viscosity and formation sampling.
The logging tool, also called a sonde, is located at the end of the wireline. The measurements are made by initially lowering the sonde to the prescribed depth using the wireline and then recording while raising it out of the well. The sonde responses are recorded continuously on the way up, creating a so-called "log" of the instrument responses. The tension on the line assures that the depth measurement can be corrected for elastic stretch of the wireline. This elastic stretch correction will change as a function of cable length, tension at surface (called surface tension, Surf.Ten) and at the tool end of the wireline (called cablehead tension, CHT), and the elastic stretch coefficient of the cable. None of these are constants, so the correction has to be adjusted continuously from the start of the logging operation to recovery of the tool at the reference point (usually surface, or zero depth point, ZDP).
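A deliberately simplified sketch of such a correction is shown below. The linear stretch coefficient and the tension and depth figures are illustrative assumptions only; real acquisition systems apply more detailed, cable-specific models.

```python
def stretch_correction(cable_length_ft, surface_tension_lbf, cablehead_tension_lbf,
                       stretch_coeff_ft_per_kft_klbf):
    """Very simplified elastic-stretch estimate for a logging cable.

    Assumes tension varies roughly linearly from the cablehead (CHT) to the
    surface (Surf.Ten), so the mean of the two is used as the effective tension.
    """
    mean_tension_klbf = (surface_tension_lbf + cablehead_tension_lbf) / 2.0 / 1000.0
    return stretch_coeff_ft_per_kft_klbf * (cable_length_ft / 1000.0) * mean_tension_klbf

# Illustrative numbers only -- the coefficient is not a quoted value for any real cable.
print(stretch_correction(15_000, 4_000, 1_500, 0.8))   # about 33 ft of stretch
```

The correction grows with depth and with line tension, which is why it has to be recomputed continuously as the toolstring is pulled up the hole.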
Workover operations
When producing wells require remedial work to sustain, restore or enhance production, this is called workover. Many times, workover operations require production shut-in, but not always.
Slickline firing head system
In workover operations, a well-servicing unit is used to winch items in and out of the wellbore. The line used to raise and lower equipment can be braided steel wireline or a single steel slickline. Workover operations conducted can include well clean-up, setting plugs, production logging and perforation through explosives.
Wireline tools
Wireline tools are specially designed instruments lowered into a well bore on the end of the wireline cable. They are individually designed to provide any number of particular services, such as evaluation of the rock properties, the location of casing collars, formation pressures, information regarding the pore size or fluid identification and sample recovery. Modern wireline tools can be extremely complicated, and are often engineered to withstand very harsh conditions such as those found in many modern oil, gas, and geothermal wells. Pressures in gas wells can exceed 30,000 psi, while temperatures can exceed 500 deg Fahrenheit in some geothermal wells. Corrosive or carcinogenic gases such as hydrogen sulfide can also occur downhole.
To reduce the amount of time running in the well, several wireline tools are often joined together and run simultaneously in a tool string that can be hundreds of feet long and weigh more than 5000 lbs.
Natural gamma ray tools
Natural gamma ray tools are designed to measure gamma radiation in the Earth caused by the disintegration of naturally occurring potassium, uranium, and thorium. Unlike nuclear tools, these natural gamma ray tools emit no radiation. The tools have a radiation sensor, which is usually a scintillation crystal that emits a light pulse proportional to the strength of the gamma ray striking it. This light pulse is then converted to a current pulse by means of a photomultiplier tube (PMT). From the photomultiplier tube, the current pulse goes to the tool's electronics for further processing and ultimately to the surface system for recording. The strength of the received gamma rays is dependent on the source emitting gamma rays, the density of the formation, and the distance between the source and the tool detector. The log recorded by this tool is used to identify lithology, estimate shale content, and depth correlation of future logs.
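One common use of the gamma ray measurement mentioned above, estimating shale content, is often done with a simple linear index between clean-formation and shale baselines. The sketch below is only a first-pass illustration; the baseline values are assumptions, and real interpretations often apply further corrections.

```python
def gamma_ray_index(gr_log, gr_clean, gr_shale):
    """Linear gamma ray index, a first-pass estimate of shale volume.

    gr_log   -- gamma ray reading at the depth of interest (API units)
    gr_clean -- reading in a clean (shale-free) interval
    gr_shale -- reading in a 100% shale interval
    """
    igr = (gr_log - gr_clean) / (gr_shale - gr_clean)
    return min(max(igr, 0.0), 1.0)  # clamp to the physical range 0..1


# Illustrative baselines: clean sand ~20 API, shale ~120 API.
for reading in (25, 70, 115):
    print(f"{reading} API -> shale index {gamma_ray_index(reading, 20, 120):.2f}")
```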
Nuclear tools
Nuclear tools measure formation properties through the interaction of reservoir molecules with radiation emitted from the logging tool. The two most common properties measured by nuclear tools are formation porosity and rock density:
Formation porosity is determined by installing a radiation source capable of emitting fast neutrons into the downhole environment. Any pore spaces in the rock are filled with fluid containing hydrogen atoms, which slow the neutrons down to an epithermal or thermal state. This atomic interaction creates gamma rays which are then measured in the tool through dedicated detectors, and interpreted through a calibration to a porosity. A higher number of gamma rays collected at the tool sensor would indicate a larger number of interactions with hydrogen atoms, and thus a larger porosity.
Most open hole nuclear tools utilize double-encapsulated chemical sources.
Density tools use gamma ray radiation to determine the lithology and density of the rock in the downhole environment. Modern density tools utilize a Cs-137 radioactive source to generate gamma rays which interact with the rock strata. Since higher density materials absorb gamma rays much better than lower density materials, a gamma ray detector in the wire line tool is able to accurately determine formation density by measuring the number and associated energy level of returning gamma rays that have interacted with the rock matrix. Density tools usually incorporate an extendable caliper arm, which is used both to press the radioactive source and detectors against the side of the bore and also to measure the exact width of the bore in order to remove the effect of varying bore diameter on the readings.
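As a rough sketch of how the bulk density measured by a density tool is turned into porosity, the snippet below applies the standard matrix-fluid mixing relation. The quartz-matrix and fresh-water fluid densities are assumptions chosen for illustration, not tool outputs.

```python
def density_porosity(bulk_density, matrix_density=2.65, fluid_density=1.0):
    """Porosity from a bulk-density reading (all densities in g/cm^3).

    matrix_density defaults to a quartz sandstone and fluid_density to fresh
    water; both must be chosen to suit the actual formation and mud filtrate.
    """
    return (matrix_density - bulk_density) / (matrix_density - fluid_density)


# A bulk density of 2.40 g/cm^3 read in a water-filled sandstone:
print(f"Density porosity: {density_porosity(2.40):.1%}")
```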
Some modern nuclear tools use an electronically powered source controlled from the surface to generate neutrons. By emitting neutrons of varying energies, the logging engineer is able to determine formation lithology in fractional percentages.
Resistivity tools
In any matrix which has some porosity, the pore spaces will be filled with a fluid of oil, gas (either hydrocarbon or otherwise) or formation water (sometimes referred to as connate water). This fluid saturates the rock and changes its electrical properties. A wireline resistivity tool either directly injects an electric current into the surrounding rock (laterolog-type tools for conductive water-based muds) or induces one (induction-type tools for resistive or oil-based muds), and determines the resistivity via Ohm's law. The resistivity of the formation is used primarily to identify pay zones containing highly resistive hydrocarbons as opposed to those containing water, which is generally more conductive. It is also useful for determining the location of the oil-water contact in a reservoir. Most wireline tools are able to measure the resistivity at several depths of investigation into the bore hole wall, allowing log analysts to accurately predict the level of fluid invasion from the drilling mud, and thus determine a qualitative measurement of permeability.
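A standard way of turning the measured formation resistivity into a water-saturation estimate, and hence of separating pay zones from water zones, is Archie's equation. The sketch below uses commonly quoted default exponents (a = 1, m = n = 2); these are assumptions that vary with rock type and are not part of the tool measurement itself.

```python
def archie_water_saturation(rt, rw, porosity, a=1.0, m=2.0, n=2.0):
    """Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).

    rt       -- true formation resistivity from a deep-reading measurement (ohm-m)
    rw       -- formation water resistivity (ohm-m)
    porosity -- fractional porosity (phi)
    """
    return ((a * rw) / (porosity ** m * rt)) ** (1.0 / n)


# Illustrative numbers: Rt = 20 ohm-m, Rw = 0.05 ohm-m, porosity = 20%.
sw = archie_water_saturation(rt=20.0, rw=0.05, porosity=0.20)
print(f"Water saturation ~ {sw:.0%}, hydrocarbon saturation ~ {1 - sw:.0%}")
```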
Some resistivity tools have many electrodes mounted on several articulated pads, allowing for multiple micro-resistivity measurements. These micro-resistivities have a very shallow depth of investigation, typically in the range of 0.1 to 0.8 inches, making them suitable for borehole imaging. Resistivity imagers are available which operate using induction methods for resistive mud systems (oil base), and direct current methods for conductive mud systems (water based).
Sonic and ultrasonic tools
Sonic tools, such as the Baker Hughes XMAC-F1, consist of multiple piezoelectric transducers and receivers mounted on the tool body at fixed distances. The transmitters generate a pattern of sound waves at varying operating frequencies into the down hole formation. The signal path leaves the transmitter, passes through the mud column, travels along the borehole wall and is collected at multiple receivers spaced out along the tool body. The time it takes for the sound wave to travel through the rock is dependent on a number of properties of the existing rock, including formation porosity, lithology, permeability and rock strength. Different types of pressure waves can be generated in specific axis, allowing geoscientists to determine anisotropic stress regimes. This is very important in determining hole stability and aids drilling engineers in planning for future well design.
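The travel times recorded by a sonic tool are often related to porosity with the Wyllie time-average relation. The matrix and fluid slowness values below are typical textbook assumptions for a water-filled sandstone, not measured constants.

```python
def wyllie_porosity(dt_log, dt_matrix=55.5, dt_fluid=189.0):
    """Wyllie time-average porosity from sonic slowness (microseconds per foot)."""
    return (dt_log - dt_matrix) / (dt_fluid - dt_matrix)


# A measured slowness of 80 us/ft in a water-filled sandstone:
print(f"Sonic porosity: {wyllie_porosity(80.0):.1%}")
```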
Sonic tools are also used extensively to evaluate the cement bond between casing and formation in a completed well, primarily by calculating the attenuation of the signal after it has passed through the casing wall (see Cement bond tools below).
Ultrasonic tools use a rotating acoustic transducer to map a 360-degree image of the borehole as the logging tool is pulled to surface. This is especially useful for determining small scale bedding and formation dip, as well as identifying drilling artifacts such as spiraling or induced fractures.
Nuclear magnetic resonance tools
A measurement of the nuclear magnetic resonance (NMR) properties of hydrogen in the formation. There are two phases to the measurement: polarization and acquisition. First, the hydrogen atoms are aligned in the direction of a static magnetic field (B0). This polarization takes a characteristic time T1. Second, the hydrogen atoms are tipped by a short burst from an oscillating magnetic field that is designed so that they precess in resonance in a plane perpendicular to B0. The frequency of oscillation is the Larmor frequency. The precession of the hydrogen atoms induces a signal in the antenna. The decay of this signal with time is caused by transverse relaxation and is measured by the CPMG pulse sequence. The decay is the sum of different decay times, called T2. The T2 distribution is the basic output of an NMR measurement.
The NMR measurement made by both a laboratory instrument and a logging tool follow the same principles very closely. An important feature of the NMR measurement is the time needed to acquire it. In the laboratory, time presents no difficulty. In a log, there is a trade-off between the time needed for polarization and acquisition, logging speed and frequency of sampling. The longer the polarization and acquisition, the more complete the measurement. However, the longer times require either lower logging speed or less frequent sampling.
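The CPMG decay described above is a sum of exponentials, one term per T2 component, which is why the recorded echo train encodes a pore-size distribution. The two-component T2 distribution below is an assumption used purely to illustrate the shape of such a decay.

```python
import math

def cpmg_decay(t_ms, components):
    """Echo amplitude at time t (ms) for a discrete T2 distribution.

    components -- list of (amplitude, T2_ms) pairs.
    """
    return sum(amp * math.exp(-t_ms / t2) for amp, t2 in components)


# Assumed distribution: small pores (T2 = 3 ms) and large pores (T2 = 200 ms).
t2_distribution = [(0.4, 3.0), (0.6, 200.0)]
for t in (0, 1, 5, 20, 100, 500):
    print(f"t = {t:>3} ms  echo amplitude = {cpmg_decay(t, t2_distribution):.3f}")
```

Inverting a measured decay back into a T2 distribution is an ill-posed fitting problem, which is one reason acquisition time and signal-to-noise matter so much.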
Borehole seismic tools
Cased hole electric line tools
Cement bond tools
A cement bond tool, or CBT, is an acoustic tool used to measure the quality of the cement behind the casing. Using a CBT, the bond between the casing and cement as well as the bond between cement and formation can be determined. Using CBT data, a company can troubleshoot problems with the cement sheath if necessary. This tool must be centralized in the well to function properly.
Two of the most common problems found in cement by CBTs are channeling and micro-annulus. A micro-annulus is a very small gap that forms between the casing and the cement sheath. Channeling is where large, contiguous voids form in the cement sheath, typically caused by poor centralization of the casing. Both of these situations can, if necessary, be fixed by remedial electric line work.
A CBT makes its measurements by rapidly pulsing out compressional waves across the well bore and into the pipe, cement, and formation. The compressional pulse originates in a transmitter at the top of the tool, which, when powered up at surface, produces a rapid clicking sound. The tool typically has two receivers, one three feet from the transmitter and another five feet from the transmitter. These receivers record the arrival time of the compressional waves. The information from these receivers is logged as travel times for the three- and five-foot receivers and as a micro-seismogram.
Recent advances in logging technologies have allowed the receivers to measure 360 degrees of cement integrity and can be represented on a log as a radial cement map and as 6-8 individual sector arrival times.
Casing collar locators
Casing collar locator tools, or CCLs, are among the simplest and most essential tools in cased hole electric line. CCLs are typically used for depth correlation and can be an indicator of line overspeed when logging in heavy fluids.
A CCL operates on Faraday's Law of Induction. Two magnets are separated by a coil of copper wire. As the CCL passes by a casing joint, or collar, the difference in metal thickness across the two magnets induces a current spike in the coil. This current spike is sent uphole and logged as what's called a collar kick on the cased hole log.
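Because a collar shows up as a sharp excursion on an otherwise quiet signal, depth-correlation software can flag collar kicks with a simple threshold test. The sketch below uses made-up sample data and a hypothetical threshold purely for illustration.

```python
def find_collar_kicks(ccl_signal, depths, threshold):
    """Return the depths at which the CCL signal magnitude exceeds a threshold.

    ccl_signal -- induced-voltage samples (arbitrary units)
    depths     -- matching measured depths (ft)
    threshold  -- magnitude above which a sample is treated as a collar kick
    """
    return [d for v, d in zip(ccl_signal, depths) if abs(v) > threshold]


# Made-up signal: mostly noise, with spikes where casing collars sit.
signal = [0.1, -0.2, 0.1, 4.8, 0.2, -0.1, 0.0, -5.1, 0.1, 0.2]
depths = [1000 + 5 * i for i in range(len(signal))]
print("Collar kicks near depths (ft):", find_collar_kicks(signal, depths, threshold=2.0))
```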
Gamma perforating tools
A cased hole gamma perforator is used to perform mechanical services, such as shooting perforations, setting downhole tubing/casing elements, dumping remedial cement, tracer surveys, etc. Typically, a gamma perforator will have some sort of explosively initiated device attached to it, such as a perforating gun, a setting tool, or a dump bailer. In certain instances, the gamma perforator is used merely to spot objects in the well, as in tubing conveyed perforating operations and tracer surveys.
Gamma perforators operate in much the same way as an open hole natural gamma ray tool. Gamma rays given off by naturally occurring radioactive elements bombard the scintillation detector mounted on the tool. The tool processes the gamma ray counts and sends the data uphole, where it is processed by a computerized acquisition system and plotted on a log versus depth. The information is then used to ensure that the depth shown on the log is correct. After that, power can be applied through the tool to set off explosive charges for things like perforating, setting plugs or packers, dumping cement, etc.
Wireline pressure setting assemblies (WLSPA)
Setting tools are used to set downhole completion elements such as production packers or bridge plugs. Setting tools typically use the expanding gas energy from a slow-burning explosive charge to drive a hydraulic piston assembly. The assembly is attached to the plug or packer by means of a setting mandrel and a sliding sleeve, which, when "stroked" by the piston assembly, squeezes the elastomer packing element, deforming it sufficiently to wedge it into place in the tubing or casing string. Most completion packers or plugs have a specially designed shear mechanism that releases the setting tool from the element, allowing it to be retrieved back to surface. The packer/plug, however, remains downhole as a barrier to isolate production zones or permanently plug off a well bore.
Casing expander tools
Expansion tools incorporate similar design features to WLSPA, using an internal piston assembly, except the main differences are that the piston is bi-directional, and does not detach to be left downhole. A hardened set of contoured pads expand when the piston is "stroked", indenting a small circle in the inner wall of the casing, and expanding the overall casing to make full contact with cement, packing material, or directly with the formation wall. The original design and concept of the tool was to stop surface casing pressure without impacting production by leaving hardware in the well bore. They can also be used in other applications like plugging and abandoning or drilling intervention operations like setting whipstocks.
Additional equipment
Cable head
The cable head is the uppermost portion of the toolstring on any given type of wireline. The cable head is where the conductor wire is made into an electrical connection that can be connected to the rest of the toolstring. Cable heads are typically custom built by the wireline operator for every job and depend greatly on depth, pressure and the type of wellbore fluid.
Electric line weakpoints are also located in the cable head. If the tool becomes stuck in the well, the weak point is where the toolstring will first separate from the wireline. If the wireline were severed anywhere else along the line, the tool would become much more difficult to fish.
Tractors
Tractors are electrical tools used to push the toolstring into the hole, overcoming wireline's disadvantage of being gravity dependent. They are used in highly deviated and horizontal wells where gravity is insufficient, even with roller stem. They push against the side of the wellbore either through the use of wheels or through a wormlike motion.
Measuring head
A measuring head is the first piece of equipment the wireline comes into contact with off the drum. The measuring head is composed of several wheels that support the wireline on its way to the winch and they also measure crucial wireline data.
A measuring head records tension, depth, and speed. Current models use optical encoders to count the revolutions of a wheel with a known circumference, which in turn is used to determine speed and depth. A wheel with a pressure sensor is used to determine tension.
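Depth and line speed follow from simple arithmetic on the encoder output: each wheel revolution corresponds to one circumference of cable. The sketch below assumes an idealized wheel with no slippage and ignores cable stretch; the wheel size and encoder resolution are made-up values.

```python
import math

def depth_from_encoder(counts, counts_per_rev, wheel_diameter_in):
    """Cable paid out (ft) implied by encoder counts on a measuring-head wheel."""
    circumference_ft = math.pi * wheel_diameter_in / 12.0
    return (counts / counts_per_rev) * circumference_ft


def line_speed(depth_now_ft, depth_prev_ft, dt_s):
    """Average line speed in ft/min between two depth samples."""
    return (depth_now_ft - depth_prev_ft) / dt_s * 60.0


# Assumed 4-inch wheel read by a 1024-count-per-revolution encoder.
d1 = depth_from_encoder(500_000, 1024, 4.0)
d2 = depth_from_encoder(501_500, 1024, 4.0)
print(f"depth = {d2:.1f} ft, speed = {line_speed(d2, d1, dt_s=1.0):.0f} ft/min")
```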
Wireline apparatus
For oilfield work, the wireline resides on the surface, wound around a large (3 to 10 feet in diameter) spool. Operators may use a portable spool (on the back of a special truck) or a permanent part of the drilling rig. A motor and drive train turn the spool and raise and lower the equipment into and out of the well – the winch.
Pressure control during wireline operations
The pressure control employed during wireline operations is intended to contain pressure originating from the well bore. During open hole electric line operations, the pressure might result from a well kicking. During cased hole electric line, it is most likely the result of a well producing at high pressures. Pressure equipment must be rated to well over the expected well pressures. Normal ratings for wireline pressure equipment are 5,000, 10,000, and 15,000 pounds per square inch. Some wells are contained with 20,000 psi equipment, and 30,000 psi equipment is in development.
Flange
A flange attaches to the top of the Christmas tree, usually with some sort of adapter for the rest of the pressure control. A metal gasket is placed between the top of the Christmas tree and the flange to keep in well pressures.
Wireline valve
A wireline control valve, also called a wireline blow out preventer (BOP), is an enclosed device with one or more rams capable of closing over the wireline in an emergency. A dual wireline valve has two sets of rams and some have the capability of pumping grease in the space between the rams to counterbalance the well pressure.
Lubricator
A lubricator is a series of pressure-tested pipe sections that seal in the wireline tools while the string is under pressure.
The lubricator holds the tool string so operators can make runs into and out of the well. It has valves to bleed off pressure so that it can be disconnected from the well to work on the tools.
Pump-in sub
Pump-in subs (also known as a flow T) allow for the injection of fluid into the pressure control string. Normally these are used for wellsite pressure testing, which is typically performed between every run into the well. They can also be used to bleed off pressure from the string after a run in the well, or to pump in kill fluids to control a wild well.
Grease injector head
The grease injector head is the main apparatus for controlling well pressure while running into the hole. The grease head uses a series of very small pipes, called flow tubes, to decrease the pressure head of the well. Grease is injected at high pressure into the bottom portion of the grease head to counteract the remaining well pressure.
Pack-off
Pack-off subs utilize hydraulic pressure on two brass fittings which compress a rubber sealing element to create a seal around the wireline. Pack-offs can be hand pumped or compressed through a motorized pumping unit.
Line wiper
A line wiper operates in much the same way as a pack-off sub, except that the rubber element is much softer. Hydraulic pumps exert force on the rubber element until a light pressure is exerted on the wireline, cleaning grease and well fluid off the line in the process.
Quick test sub
A quick test sub (QTS) is used when pressure testing the pressure control equipment (PCE) for repetitive operations. The PCE is pressure tested and then broken at the QTS afterward to avoid having to retest the entire string. The PCE is then reconnected at the QTS. The QTS has two O-rings where it was disconnected that can be tested with hydraulic pressure to confirm the PCE can still hold the pressure it was tested to.
Ball-check valve
If the wireline were to become severed from the tool, a ball check valve can seal the well off from the surface. During wireline operations, a steel ball sits to the side of a confined area within the grease head while the cable runs in and out of the hole. If the wireline exits that confined area under pressure, the pressure will force the steel ball up towards the hole where the wireline had been. The ball's diameter is larger than that of the hole, so the ball effectively seals off pressure to the surface.
Head catcher
A head catcher (also called tool catcher) is a device placed at the top of the lubricator section. Should the wireline tools be forced into the top of the lubricator section, the head catcher, which looks like a small 'claw,' will clamp down on the fishing neck of the tool. This action prevents the tools from falling downhole should the line pull out of the rope socket. Pressure is applied to the head catcher to release the tools.
Tool trap
A tool trap has the same purpose as a head catcher in that it prevents the tools from inadvertently dropping down the hole. This device is normally located just above the well control valves, protecting these important barriers from a dropped tool. The tool trap has to be opened to allow the tools to enter the well, and it is normally built so that tools can be recovered through it even when it is in the closed position.
Quick-connect sub
A subassembly device bolted to the top of the BOP stack, designed to eliminate traditional bolted flanges for connecting lubricator heads by using tapered-wedge and lock-ring designs. This provides the same security as traditional pressure control connections with significant time savings.
See also
Oil industry
Well intervention
Well logging
Perforating
Seaboard International
List of oilfield service companies
Coiled tubing
Sources and citations
Tools
Well logging
Petroleum production
Oil wells | Wireline (cabling) | Chemistry,Engineering | 5,450 |
1,441,461 | https://en.wikipedia.org/wiki/Span%20%28unit%29 | A span is the distance measured by a human hand, from the tip of the thumb to the tip of the little finger. In ancient times, a span was considered to be half a cubit. Sometimes the distinction is made between the great span or full span (thumb to little finger) and little span or short span (thumb to index finger, or index finger to little finger).
History
Ancient Greek texts show that the span was used as a fixed measure in ancient Greece since at least the archaic period. The word spithame (Greek: "σπιθαμή"), "span", is attested in the work of Herodotus in the 5th century BC; however, the span was used in Greece long before that, since the word trispithamos (Greek: "τρισπίθαμος"), "three spans long", occurs as early as the 8th century BC in Hesiod.
Size of the span
English usage
1 span = 9 inches
= 22.86 cm
Chinese usage
In China and Chinese cultured countries, a span (一拃) refers to the distance between the tip of the thumb and the tip of the outstretched index finger (sometimes middle finger), and typically measures 15-20 centimetres.
Arabic usage
In Arabic, the analogue of the great span is the šibr (شبر). It is used in Modern Standard Arabic and classical Arabic, as well as in modern-day dialects.
Slavic usage
In Slavic languages, the analogue of the span is various words derived from Proto-Slavic *pędь (Bulgarian педя, Polish piędź, Russian пядь, Slovenian ped, etc.). In various Slavic languages it is the distance from the tip of the thumb to the tip of the little finger or index finger. For example, Slovenian velika ped = great span (23 cm), mala ped = little span (9.5 cm); Russian piad = 4 vershoks = 17.8 cm. See Obsolete Russian weights and measures.
African usage
In Swahili, the equivalent of the great span (thumb to little finger) is the shubiri or shibiri while the little span (thumb to forefinger) is the morita or futuri.
Hungarian usage
In Hungarian, the span, or arasz, is occasionally used as an informal measure and occurs in two varieties: measured between the tips of the extended thumb and index finger, it is kis arasz (the "small arasz"); between the tips of the thumb and little finger, it is nagy arasz (the "large arasz"). The term "arasz," used by itself without a modifier, is usually understood as referring to the "large arasz," i.e., to the "span."
South Asian usage
In Hindi-Urdu and other languages of Northern India and Pakistan, the span is commonly used as an informal measure and called bālisht (Urdu: بالشت, Hindi: बालिश्त).
In Bengali, it is called bighāt (বিঘত or বিঘৎ).
In Marathi, it is called weet (वीत).
In Nepal, where this method of measurement is still used in informal context, a span is called bhitta.
In Tamil, it is called saaN.
Southeast Asian usage
In Southeast Asia, the span is used as an informal measure.
In Malay and Indonesian, it is called jengkal.
In Thai, it is called khuep.
In Filipino, it is called dangkal.
Mongolian usage
The span is commonly used as a traditional and informal measure in Mongolia, where it is called tuu (төө). Depending on the use of index or middle finger and the placement of the thumb, the span is named differently as tuu (төө) and mukhar tuu (мухар төө) etc.
Portuguese usage
The old Portuguese customary unit analogue to the span was the palmo de craveira or simply palmo.
1 palmo de craveira = 8 polegadas (Portuguese inches)
= 1/5 varas (Portuguese yards)
= 0.22 m
See also
Anthropic units
Hand (unit)
List of human-based units of measurement
List of unusual units of measurement
Units of measure
Notes
References
Lyle V. Jones. 1971. “The Nature of Measurement.” In: Robert L. Thorndike (ed.), Educational Measurement, 2nd ed. Washington, DC: American Council on Education, pp. 335–355.
Span
Imperial units
Units of length | Span (unit) | Mathematics | 953 |
1,424,864 | https://en.wikipedia.org/wiki/Plate%20electrode | A plate, usually called anode in Britain, is a type of electrode that forms part of a vacuum tube. It is usually made of sheet metal, connected to a wire which passes through the glass envelope of the tube to a terminal in the base of the tube, where it is connected to the external circuit. The plate is given a positive potential, and its function is to attract and capture the electrons emitted by the cathode. Although it is sometimes a flat plate, it is more often in the shape of a cylinder or flat open-ended box surrounding the other electrodes.
Construction
The plate must dissipate heat created when the electrons hit it with a high velocity after being accelerated by the voltage between the plate and cathode. Most of the waste power used in a vacuum tube is dissipated as heat by the plate. In low power tubes it is usually given a black coating, and often has "fins" to help it radiate heat. In power vacuum tubes used in radio transmitters, it is often made of a refractory metal like molybdenum, and is part of a large heat sink that projects through the glass or ceramic tube envelope and is cooled by radiation cooling, forced air or water.
Secondary emission
A problem in early vacuum tubes was secondary emission; electrons striking the plate could knock other electrons out of the metal surface. In some tubes such as tetrodes these secondary electrons could be absorbed by other electrodes such as grids in the tube, resulting in a current out of the plate. This current could cause the plate circuit to have negative resistance, which could cause unwanted parasitic oscillations. To prevent this most plates in modern tubes are given a chemical coating which reduces secondary emission.
See also
Anode
External links
https://web.archive.org/web/20101007201649/http://pentalabs.com/tubeworks.html – The history of vacuum tubes
The Thermionic Detector – HJ van der Bijl (October 1919)
How vacuum tubes really work – Thermionic emission and vacuum tube theory, using introductory college-level mathematics.
The Vacuum Tube FAQ – FAQ from rec.audio
The invention of the thermionic valve. Fleming discovers the thermionic (or oscillation) valve, or 'diode'.
References
Shiers, George, "The First Electron Tube", Scientific American, March 1969, p. 104.
Tyne, Gerald, Saga of the Vacuum Tube, Ziff Publishing, 1943, (reprint 1994 Prompt Publications), p. 30–83.
RCA Radiotron Designer's Handbook, 1953 (4th Edition). Contains chapters on the design and application of receiving tubes.
Vacuum tubes
Electrodes | Plate electrode | Physics,Chemistry | 559 |
10,833,408 | https://en.wikipedia.org/wiki/Helium%20hydride%20ion | The helium hydride ion, hydridohelium(1+) ion, or helonium is a cation (positively charged ion) with chemical formula HeH+. It consists of a helium atom bonded to a hydrogen atom, with one electron removed. It can also be viewed as protonated helium. It is the lightest heteronuclear ion, and is believed to be the first compound formed in the Universe after the Big Bang.
The ion was first produced in a laboratory in 1925. It is stable in isolation, but extremely reactive, and cannot be prepared in bulk, because it would react with any other molecule with which it came into contact. Noted as the strongest known acid—stronger than even fluoroantimonic acid—its occurrence in the interstellar medium had been conjectured since the 1970s, and it was finally detected in April 2019 using the airborne SOFIA telescope.
Physical properties
The helium hydride ion is isoelectronic with molecular hydrogen (H2).
Unlike the dihydrogen ion H2+, the helium hydride ion has a permanent dipole moment, which makes its spectroscopic characterization easier. The calculated dipole moment of HeH+ is 2.26 or 2.84 D. The electron density in the ion is higher around the helium nucleus than the hydrogen. 80% of the electron charge is closer to the helium nucleus than to the hydrogen nucleus.
Spectroscopic detection is hampered, because one of its most prominent spectral lines, at 149.14 μm, coincides with a doublet of spectral lines belonging to the methylidyne radical ⫶CH.
The length of the covalent bond in the ion is 0.772 Å or 77.2 pm.
Isotopologues
The helium hydride ion has six relatively stable isotopologues, that differ in the isotopes of the two elements, and hence in the total atomic mass number (A) and the total number of neutrons (N) in the two nuclei:
3HeH+ (A = 4, N = 1)
3HeD+ (A = 5, N = 2)
3HeT+ (A = 6, N = 3; radioactive)
4HeH+ (A = 5, N = 2)
4HeD+ (A = 6, N = 3)
4HeT+ (A = 7, N = 4; radioactive)
They all have three protons and two electrons. The first three are generated by radioactive decay of tritium in the molecules HT, DT, and T2, respectively. The last three can be generated by ionizing the appropriate isotopologue of H2 in the presence of helium-4.
The following isotopologues of the helium hydride ion, of the dihydrogen ion H2+, and of the trihydrogen ion H3+ have the same total atomic mass number A:
3HeH+, HT+, D2+, H2D+ (A = 4)
3HeD+, 4HeH+, DT+, HD2+, H2T+ (A = 5)
3HeT+, 4HeD+, T2+, HDT+, D3+ (A = 6)
4HeT+, D2T+, HT2+ (A = 7)
The masses in each row above are not equal, though, because the binding energies in the nuclei are different.
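The mass-number and neutron bookkeeping above can be checked mechanically: each nucleus contributes its own proton and neutron count, and the totals give A and N for the ion. The short script below enumerates the six helium hydride isotopologues this way; the plain-text labels are informal.

```python
from itertools import product

# (label, protons, neutrons) for the nuclei involved
HELIUM = [("3He", 2, 1), ("4He", 2, 2)]
HYDROGEN = [("H", 1, 0), ("D", 1, 1), ("T", 1, 2)]

for (he, p_he, n_he), (h, p_h, n_h) in product(HELIUM, HYDROGEN):
    a = p_he + n_he + p_h + n_h          # total mass number A
    n = n_he + n_h                       # total neutron count N
    note = " (radioactive)" if h == "T" else ""
    print(f"{he}{h}+  A = {a}, N = {n}{note}")
```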
Neutral molecule
Unlike the helium hydride ion, the neutral helium hydride molecule HeH is not stable in the ground state. However, it does exist in an excited state as an excimer (HeH*), and its spectrum was first observed in the mid-1980s.
The neutral molecule is the first entry in the Gmelin database.
Chemical properties and reactions
Preparation
Since HeH+ reacts with every substance, it cannot be stored in any container. As a result, its chemistry must be studied by creating it in situ.
Reactions with organic substances can be studied by substituting hydrogen in the desired organic compound with tritium. The decay of tritium to 3He+ followed by its extraction of a hydrogen atom from the compound yields 3HeH+, which is then surrounded by the organic material and will in turn react.
TR → 3He+ + R• (beta decay)
3He+ + HR → 3HeH+ + R• (hydrogen abstraction)
Acidity
HeH+ cannot be prepared in a condensed phase, as it would donate a proton to any anion, molecule or atom that it came in contact with. It has been shown to protonate O2, NH3, SO2, H2O, and CO2, giving O2H+, NH4+, HSO2+, H3O+, and HCO2+ respectively. Other molecules such as nitric oxide, nitrogen dioxide, nitrous oxide, hydrogen sulfide, methane, acetylene, ethylene, ethane, methanol and acetonitrile react but break up due to the large amount of energy produced.
In fact, HeH+ is the strongest known acid, with a proton affinity of 177.8 kJ/mol.
Other helium-hydrogen ions
Additional helium atoms can attach to HeH+ to form larger clusters such as He2H+, He3H+, He4H+, He5H+ and He6H+.
The dihelium hydride cation, He2H+, is formed by the reaction of dihelium cation with molecular hydrogen:
He2+ + H2 → He2H+ + H
It is a linear ion with hydrogen in the centre.
The hexahelium hydride ion, He6H+, is particularly stable.
Other helium hydride ions are known or have been studied theoretically. Helium dihydride ion, or dihydridohelium(1+), HeH2+, has been observed using microwave spectroscopy. It has a calculated binding energy of 25.1 kJ/mol, while trihydridohelium(1+), HeH3+, has a calculated binding energy of 0.42 kJ/mol.
History
Discovery in ionization experiments
Hydridohelium(1+), specifically 4HeH+, was first detected indirectly in 1925 by T. R. Hogness and E. G. Lunn. They were injecting protons of known energy into a rarefied mixture of hydrogen and helium, in order to study the formation of hydrogen ions like H+, H2+ and H3+. They observed that HeH+ appeared at the same beam energy (16 eV) as H3+, and its concentration increased with pressure much more than that of the other two ions. From these data, they concluded that the H2+ ions were transferring a proton to molecules that they collided with, including helium.
In 1933, K. Bainbridge used mass spectrometry to compare the masses of the ions 4HeH+ (helium hydride ion) and HD2+ (twice-deuterated trihydrogen ion) in order to obtain an accurate measurement of the atomic mass of deuterium relative to that of helium. Both ions have 3 protons, 2 neutrons, and 2 electrons. He also compared 4HeD+ (helium deuteride ion) with D3+ (trideuterium ion), both with 3 protons and 3 neutrons.
Early theoretical studies
The first attempt to compute the structure of the HeH+ ion (specifically, 4HeH+) by quantum mechanical theory was made by J. Beach in 1936. Improved computations were sporadically published over the next decades.
Tritium decay methods in chemistry
H. Schwartz observed in 1955 that the decay of the tritium molecule T2 should generate the helium hydride ion 3HeT+ with high probability.
In 1963, F. Cacace at the Sapienza University of Rome conceived the decay technique for preparing and studying organic radicals and carbenium ions. In a variant of that technique, exotic species like methanium are produced by reacting organic compounds with the 3HeH+ that is produced by the decay of tritium that has been mixed with the desired reagents. Much of what we know about the chemistry of HeH+ came through this technique.
Implications for neutrino mass experiments
In 1980, V. Lubimov (Lyubimov) at the ITEP laboratory in Moscow claimed to have detected a mildly significant rest mass (30 ± 16) eV for the neutrino, by analyzing the energy spectrum of the β decay of tritium. The claim was disputed, and several other groups set out to check it by studying the decay of molecular tritium T2. It was known that some of the energy released by that decay would be diverted to the excitation of the decay products, including 3HeT+; and this phenomenon could be a significant source of error in that experiment. This observation motivated numerous efforts to precisely compute the expected energy states of that ion in order to reduce the uncertainty of those measurements. Many have improved the computations since then, and now there is quite good agreement between computed and experimental properties for several of the isotopologues.
Spectral predictions and detection
In 1956, M. Cantwell predicted theoretically that the spectrum of vibrations of that ion should be observable in the infrared; and the spectra of the deuterium and common hydrogen isotopologues should lie closer to visible light and hence be easier to observe. The first detection of the spectrum of 4HeH+ was made by D. Tolliver and others in 1979, at wavenumbers between 1,700 and 1,900 cm−1. In 1982, P. Bernath and T. Amano detected nine infrared lines between 2,164 and 3,158 waves per cm.
Interstellar space
HeH+ has long been conjectured since the 1970s to exist in the interstellar medium. Its first detection, in the nebula NGC 7027, was reported in an article published in the journal Nature in April 2019.
Natural occurrence
From decay of tritium
The helium hydride ion is formed during the decay of tritium in the molecule HT or tritium molecule T2. Although excited by the recoil from the beta decay, the molecule remains bound together.
Interstellar medium
It is believed to be the first compound to have formed in the universe, and is of fundamental importance in understanding the chemistry of the early universe. This is because hydrogen and helium were almost the only types of atoms formed in Big Bang nucleosynthesis. Stars formed from the primordial material should contain HeH+, which could influence their formation and subsequent evolution. In particular, its strong dipole moment makes it relevant to the opacity of zero-metallicity stars. HeH+ is also thought to be an important constituent of the atmospheres of helium-rich white dwarfs, where it increases the opacity of the gas and causes the star to cool more slowly.
HeH+ could be formed in the cooling gas behind dissociative shocks in dense interstellar clouds, such as the shocks caused by stellar winds, supernovae and outflowing material from young stars. If the speed of the shock is greater than about , quantities large enough to detect might be formed. If detected, the emissions from HeH+ would then be useful tracers of the shock.
Several locations had been suggested as possible places HeH+ might be detected. These included cool helium stars, H II regions, and dense planetary nebulae, like NGC 7027, where, in April 2019, HeH+ was reported to have been detected.
See also
Dihydrogen cation
Trihydrogen cation
Argonium
References
Acids
Superacids
Cations
Helium compounds
Hydrogen compounds
Substances discovered in the 1920s | Helium hydride ion | Physics,Chemistry | 2,281 |
15,032,118 | https://en.wikipedia.org/wiki/Crystal%20Island%20%28building%20project%29 | Crystal Island () is a future building project in Moscow, Russia that is planned to have around 2,500,000 square meters (27,000,000 square ft) of floor space and a height of 450 meters (1,476 ft) designed by Norman Foster. At these dimensions upon completion it would be the largest structure (in floor space) in the world. The architectural firm behind the design is Foster and Partners.
History
The tent-like superstructure would rise to 450 m, and form a breathable "second skin" and thermal buffer for the main building, shielding the interior spaces from Moscow's weather. This second skin would be sealed in winter to minimize heat loss, and opened in the summer to naturally cool the interior. The building would be integrated into a new park, which would provide a range of activities throughout the year, with cross country skiing and ice skating in the winter.
It is stated to have a multitude of cultural, exhibition, performance, hotel, apartment, retail and office space, as well as an international school for 500 students. The building would be powered by built-in solar panels and wind turbines. The structure would also feature on-site renewable and low-carbon energy generation.
In 2009, due to the global economic crisis, financial backing for the project was lost, and construction of the project was postponed.
See also
Hyperboloid structure
List of hyperboloid structures
Arcology
References
External links
Crystal Island at Foster and Partners
Unbuilt buildings and structures in Russia
Buildings and structures in Moscow
Hyperboloid structures
Foster and Partners buildings
Proposed arcologies | Crystal Island (building project) | Technology | 322 |
65,195,585 | https://en.wikipedia.org/wiki/Uthapuram%20caste%20wall | The Uthapuram caste wall, called by various names as the wall of shame, the wall of untouchability is a 12 ft high and 600 meter long wall built by dominant caste villagers reportedly to segregate the Dalit population in the Village of Uthapuram in Tamil Nadu. The village witnessed violence between Dalits and the dominant castes during 1948, 1964 and 1989 and was also known for its caste based discrimination.
Protests started in 2008 campaigning to demolish the wall led mostly by the Communist Party of India (Marxist) and left-wing organizations. Later a small portion of the wall was demolished by the government to allow entry to the Dalits to access the main road. Many dominant caste villagers left the village and moved 3 km away with their belongings reportedly as a protest for demolishing the wall.
70 houses belonging to the Dalits were attacked in October 2008 reportedly in retaliation for the demolition of the wall and a Dalit man was shot dead by the police. Tensions continued until 2015, when during a clash between the communities several vehicles were set on fire and many were hospitalized.
Background
Caste divisions and clashes
The Village of Uthapuram in the Madurai district has two major castes, the dominant caste Pillai and the Dalit Pallar caste. The village was known for its caste tensions and there were violent conflicts between the castes during the years 1948, 1964 and 1989.
Caste discrimination
The dominant caste villagers reportedly blocked attempts by the Dalits to build a bus stop and increased the elevation of a parapet close to the bus stop to discourage the Dalits from sitting before them. The tea-shops managed by caste Hindus are not visited by the Dalits. The Dalits are not permitted to enter dominant caste-dominated streets, are refused space in the community halls and in the village squares, and were also denied entry to burial sites.
The wall
The wall, which was 600 meters long and 12 ft high, was variously described as a caste wall, a wall of shame, a wall of bias and a wall of untouchability. It was built by caste Hindus in 1989 after caste violence in the village. The wall passes through areas intended for common use by members of all the castes. It also barred Dalits from directly entering the main road; Dalits have to use a circuitous path and walk some extra miles to get to the main road.
Clashes and protests in 2008
The fourth conflict began in 2008 after a period of 20 years, and kept going in numerous ways for another 5 years. It began in April 2008 when the caste Hindus used iron rods to electrify the 600 meter wall to prevent the Dalits from entering into the dominant caste areas during night times. Initially, the Dalits were hesitant to contend but the Tamil Nadu Untouchability Eradication Front (TNUEF), Communist Party of India (Marxist) (CPM), Communist Party of India (CPI) and All India Democratic Women's Association (AIDWA) opposed this action by the dominant caste villagers vigorously. A member of the TNUEF alleged that two cows were electrocuted by the electrified wall. Following the state-wide protests of the progressive organisations, the electricity minister of Tamil Nadu called for the removal of the power line. The CPI(M) along with local Dalits started a campaign for the destruction of the caste wall. The Dalits orchestrated a demonstration at the front of the Taluk office calling for the wall to be pulled down. The CPI(M)'s general secretary, N. Varadarajan said that his party cadre will demolish the wall on their own if the government did not take any actions.
Demolition
On 6 May, the district administration got involved and destroyed a 15-foot portion of the wall to allow the Dalits to travel in the presence of a few hundred policemen and the supervision of the district officials. In an act of protest, some caste Hindus returned their ration cards to the Tehsildar. About 600 dominant caste members left the village during the demolition and moved to Thalaiyoothu, a place 3 km from the village with their livestock and declared that they would not return.
The problem became tense again when the dominant caste villagers who left the village didn't listen to a request from the District Collector to come back soon so that everyone in the village can live in peace. When district officials met with them, they made several demands including a patta for a temple where they had been worshiping for more than 400 years, a permanent police outpost in the village, and new housing for people whose residences which they claimed were destroyed by Dalit anti-socials during the riots of 1989.
At Thalaiyoothu on May 12, the leader of the village's dominant caste group told Frontline that his people left the village more out of panic than as a mark of rebellion. After the wall was taken down, he said, they felt insecure. He claimed the Dalits live better now, with most of them having government jobs or being land owners. He also claimed that the Dalits were on a buying spree and that the dominant caste members feared they might be forced to sell their property to Dalits. He also claimed that the wall was built to protect the dominant caste villagers. However, this version is not accepted by the village's Dalits. They assert they were on the receiving end of hostility, not the other way around.
Attacks
On 1 October 2008, more than 70 Dalits houses were attacked as a response to the destruction of the wall and a Dalit youth was shot dead by the police as a result of the tensions on November 4, 2008.
Continued tensions
On 10 November 2011, several Dalits entered a temple controlled by dominant caste with police protection. Although several dominant caste members welcomed them with folded arms, there were women crying in the streets opposing their entry. In 2012, the Dalits were not allowed to participate in the temple's consecration ceremony and in 2013 the Dalits did not attend the temple festivals. In April 2014, the dominant caste villagers locked the temple and left the village opposing the High court order for allowing the Dalits for Temple entry.
In October 2015, the Dalits and the dominant caste villagers clashed during a temple festival; the clash started over a dispute about placing a garland on a tree. Six motor-bikes were set ablaze and the tehsildar's vehicle was also damaged. The police filed cases against 70 people belonging to both castes and arrested 21. Several people injured during the clashes were hospitalized.
References
External links
Accord in Uthapuram - The Hindu Frontline
Madurai district
Separation barriers
Caste-related violence in India
Crime in Tamil Nadu
Social history of Tamil Nadu
History of Tamil Nadu (1947–present)
Dalit history
Violence against Dalits in Tamil Nadu | Uthapuram caste wall | Engineering | 1,366 |
1,754,604 | https://en.wikipedia.org/wiki/SAR201 | Saudi Aramco Rig 201 is offshore rig owned and managed by Saudi Aramco. This jackup type rig is located in the Persian Gulf.
This rig was built in 1982 in Singapore and has a maximum drill depth of 20,000 feet.
References
Oil platforms
Saudi Aramco | SAR201 | Chemistry,Engineering | 59 |
22,134,387 | https://en.wikipedia.org/wiki/Copper%20Salmon%20Wilderness | The Copper Salmon Wilderness is a protected wilderness area in the Southern Oregon Coast Range and is part of the Rogue River–Siskiyou National Forest. The wilderness area was created by the Omnibus Public Land Management Act of 2009, which was signed into law by President Barack Obama on March 30, 2009.
The Copper Salmon Wilderness is located along the North and South Forks of Elk River and the upper Middle Fork of Sixes River.
The area contains one of the nation's largest remaining stands of low-elevation old-growth forest and, along the North Fork of the Elk River, one of the healthiest salmon, steelhead, and cutthroat trout runs in the continental United States, as well as stands of vulnerable Port Orford cedar and endangered marbled murrelets and northern spotted owls.
See also
List of Oregon Wildernesses
List of U.S. Wilderness Areas
List of old growth forests
Wilderness Act
References
External links
Copper Salmon Wilderness - U.S. Forest Service
Protected areas of Coos County, Oregon
Protected areas of Curry County, Oregon
Rogue River-Siskiyou National Forest
Wilderness areas of Oregon
Old-growth forests
Protected areas established in 2009
2009 establishments in Oregon | Copper Salmon Wilderness | Biology | 235 |
77,396,603 | https://en.wikipedia.org/wiki/Mini-puberty | Mini-puberty is a transient hormonal activation of the hypothalamic-pituitary-gonadal (HPG) axis that occurs in infants shortly after birth. This period is characterized by a surge in the secretion of gonadotropins (LH and FSH) and sex steroids (testosterone in males and estradiol in females), similar to but less intense than the hormonal changes that occur in puberty during adolescence. Mini-puberty plays a crucial role in the early development of the reproductive system and the establishment of secondary sexual characteristics.
Physiology
Hypothalamic-pituitary-gonadal axis activation
Mini-puberty begins within the first few days or weeks of life and typically lasts until 6–12 months of age. The HPG axis is temporarily reactivated, resulting in increased secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus. GnRH stimulates the pituitary gland to release luteinizing hormone (LH) and follicle-stimulating hormone (FSH), which in turn stimulate the gonads (testes in males and ovaries in females) to produce sex steroids.
Hormonal changes
Males: There is a significant increase in testosterone levels, peaking around 1–3 months of age and leveling off around 6 months. This rise in testosterone is essential for the development of male genitalia, testicular descent, and the proliferation of Sertoli and Leydig cells.
Females: There is an increase in estradiol and FSH levels, although less pronounced compared to the hormonal changes in males. This rise in estradiol is involved in the maturation of ovarian follicles and the growth of the uterus and levels off around 2 years of life. FSH peaks at 1–3 months, similar to boys, but may remain elevated for 3–4 years of life.
Clinical significance
Developmental role
Mini-puberty is crucial for several developmental processes, including:
Sexual differentiation: In males, the surge in testosterone supports the continued development of male genitalia and other secondary sexual characteristics.
Growth and metabolism: The hormonal changes may have effects on growth patterns, bone maturation, and overall metabolism.
Neurodevelopment: Sex steroids play a role in brain development and may influence behaviors and cognitive functions, including language development.
Diagnostic marker
Mini-puberty can serve as a valuable diagnostic window for identifying congenital abnormalities of the HPG axis or gonads. Conditions such as congenital hypogonadotropic hypogonadism and certain forms of intersex can be diagnosed during this period by evaluating hormone levels and gonadal response.
Potential disorders
Disruptions in the mini-puberty process can lead to various clinical conditions, including:
Delayed or absent mini-puberty: This may indicate underlying issues with the HPG axis or gonads, requiring further investigation and potential intervention.
Environmental influences
Environmental factors, such as exposure to Endocrine Disrupting Chemicals (EDCs), have been shown to impact mini-puberty. EDCs are widespread in daily life and can be found in products such as pesticides and personal care items. Bisphenol A (BPA) and many phthalates are known to interfere with the earlier HPG axis activation during pregnancy for boys, affecting testosterone levels during mini-puberty, anogenital distance (AGD), and testicular descent.
More recently, BPA and phthalate exposure during mini-puberty have been shown to interfere with HPG axis activation and testosterone levels during that same time frame, suggesting that mini-puberty is a particularly vulnerable window for EDC exposure. Such disruptions may lead to long-term consequences, including delayed or precocious puberty, reproductive health issues, and increased risk of conditions like polycystic ovary syndrome (PCOS), breast cancer and prostate cancer.
In a small study, it was shown that "PCDD/Fs and PCBs measured in breast milk collected within the first 3 weeks following birth were more strongly associated with sexually dimorphic outcomes than exposures measured in maternal blood collected between weeks 28 and 43" of pregnancy, adding evidence that EDC exposure during mini-puberty may interfere with endocrine and neurological development.
Research and future directions
Although the phenomenon has been known for over 40 years, research into mini-puberty continues to uncover its broader implications for long-term health and development. The potential impact of environmental factors and endocrine disruptors on mini-puberty is an area of active investigation. At the same time, researchers also investigate if mini-puberty may be a window to treat certain disorders, e.g. treating micropenis using gonadotropin (testosterone) injections.
See also
Development of the endocrine system
References
Endocrine system
Human biology | Mini-puberty | Biology | 1,015 |
2,775,257 | https://en.wikipedia.org/wiki/AS-Interface | Actuator Sensor Interface (AS-Interface or ASi) is an industrial networking solution (Physical Layer, Data access Method and Protocol) used in PLC, DCS and PC-based automation systems. It is designed for connecting simple field I/O devices (e.g. binary ON/OFF devices such as actuators, sensors, rotary encoders, analog inputs and outputs, push buttons, and valve position sensors) in discrete manufacturing and process applications using a single two-conductor cable.
AS-Interface is an 'open' technology supported by a multitude of automation equipment vendors. The AS-Interface has been an international standard according to IEC 62026-2 since 1999.
AS-Interface is a networking alternative to the hard wiring of field devices. It can be used as a partner network for higher level fieldbus networks such as Profibus, DeviceNet, Interbus and Industrial Ethernet, for whom it offers a low-cost remote I/O solution. It is used in automation applications, including conveyor control, packaging machines, process control valves, bottling plants, electrical distribution systems, airport baggage carousels, elevators, bottling lines and food production lines. AS-Interface provides a basis for Functional Safety in machinery safety/emergency stop applications. Safety devices communicating over AS-Interface follow all the normal AS-Interface data rules. The AS-Interface specification is managed by AS-International, a member funded non-profit organization located in Gelnhausen/Germany. Several international subsidiaries exist around the world.
History
AS-Interface was developed during the late 1980s and early 1990s by a development partnership of 11 companies mostly known for their offering of industrial non-contact sensing devices like inductive sensors, photoelectric sensors, capacitive sensors and ultrasonic sensors. Once development was completed, the consortium was dissolved and a member organization, AS-International, was founded. The first operational system was shown at the 1994 Hanover fair.
In 2018, a new technology step was presented at SPS IPC Drives in Nuremberg. This technology is named ASi-5. ASi-5 was developed by well-known manufacturers of automation technology.
Use
The AS-Interface (from the Actuator Sensor Interface) has been developed as an alternative to conventional parallel cabling of sensors and actuators and offers the following advantages:
Flexibility
Cost saving
Simplicity
Reduction of installation errors
Widely distributed
Best networking opportunities
Compatibility
The main application areas of the system are factory automation, process technology and home automation.
Technology
The AS-Interface is a single-master system, this means a master device exchanges the input and output data with all configured devices.
The transmission medium is an unshielded two-wire yellow flat cable. The cable is used for signal transmission and at the same time for power supply (30 V). The communication electronics and participants with low power requirements, like light beams, can be powered directly. Consumers with a higher energy requirement, such as valve terminals, can use a separate, usually black flat cable for power supply (30 V).
The sensors or actuators are often connected via so-called piercing technology. The insulation of the polarity-profiled flat cable is pierced by two penetrating mandrels during assembly, which makes for an easy and simple connection with reduced assembly effort. Sensors and actuators that do not have an AS-Interface connection can be connected to the system via modules. For modules with penetration technology, the AS-Interface flat cable must be used.
The topology of AS-Interface networks is arbitrary: line, star, ring or tree structures can be built.
ASi-3 communication and transmission technology
A telegram (message frame) consists of 4-bit user data. The master communicates with the participants with a serial transmission protocol.
Each subscriber is assigned an address by an addressing device or via the master. A maximum of 31 (standard devices) or 62 (advanced devices) nodes can be connected. Each node can have up to four inputs and/or four outputs for actuators or sensors. This provides up to 124 (4 × 31) inputs and 124 outputs on a standard device network (nodes addressed 1–31) or 248 (4 × 62) inputs and 248 outputs on an advanced network (nodes addressed 1A–31A and 1B–31B).
The serial communication is modulated onto the power supply. Manchester coding is used for the communication.
For 31 participants, the cycle is 5 ms. To address 62 participants, two cycles are necessary.
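For illustration, the addressing and timing figures above can be combined into a small calculation. The function below is a hypothetical sketch, not part of any AS-Interface software; the 10 ms figure simply reflects the two cycles needed for 62 participants, and the parameter name extended_addressing is chosen only for this example.

```python
# Illustrative calculation of ASi-3 network capacity and cycle time, based only
# on the figures quoted above (4 inputs/outputs per node, 5 ms cycle for 31
# standard addresses, two cycles for 62 A/B addresses).

def asi3_capacity(extended_addressing: bool) -> dict:
    slaves = 62 if extended_addressing else 31
    io_per_slave = 4                                   # up to 4 inputs and 4 outputs per node
    cycle_ms = 10.0 if extended_addressing else 5.0    # 62 slaves need two 5 ms cycles
    return {
        "slaves": slaves,
        "max_inputs": slaves * io_per_slave,
        "max_outputs": slaves * io_per_slave,
        "cycle_time_ms": cycle_ms,
    }

if __name__ == "__main__":
    print(asi3_capacity(extended_addressing=False))   # 31 slaves, 124 inputs, 124 outputs, 5 ms
    print(asi3_capacity(extended_addressing=True))    # 62 slaves, 248 inputs, 248 outputs, 10 ms
```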
The topology of the AS-Interface is arbitrary, but without a repeater or extender, the cable length must not exceed 100 m. Due to a special terminating resistor (a combination of resistive and capacitive load), however, it is possible to increase the maximum cable length up to 300 m.
Diagnostic devices, or masters with built-in diagnostics, simplify network troubleshooting. Failed slaves can easily be replaced; the master automatically assigns the replacement the address of the failed device.
Safety at Work is the functional safety concept used in ASi-3 technology. It allows safety-related components such as emergency-stop switches, door interlocks or light grids to be used in the AS-Interface network. Safety communication uses a separate transmission protocol overlaid onto the regular ASi protocol on the same cable and requires a separate safety master (for example, Siemens ASiMon devices) on which the safety devices are configured.
AS-Interface can provide safety support up to SIL (Safety Integrity Level) 3 according to IEC 61508 as well as Performance Level e (PL e) according to EN ISO 13849-1.
ASi-5 communication and transmission technology
Using ASi-5, 16 bits are available for cyclic transmission, and cyclic transmission of up to 32 bytes per participant is possible. An acyclic parameter channel with up to 256 bytes is also available for each device. ASi-5 can provide up to 1536 binary inputs and 1536 binary outputs per Ethernet address.
A cycle time of 1.2 ms can be achieved for up to 24 participants. This fast cycle time opens up applications that were previously reserved for more expensive network systems.
96 participants can be addressed in an ASi-5 network. These 96 devices then have a cycle time of 5 ms.
Communication diagnostics covers information about message transmission between the connected nodes and the master. The system continuously monitors the transmission quality of the messages from each connected device; this single-channel diagnosis is available throughout the system, and disturbed transmission channels are redirected automatically.
Device diagnostics provides information about the connected device. The available diagnostics data are determined by the device manufacturer. Diagnostic data can be transmitted via the acyclic services with a data width of up to 256 bytes.
The ASi-5 system is open to parameter interfaces such as IO-Link. IO-Link devices can be efficiently connected over long distances and integrated cyclically with up to 32 bytes.
Sixteen safe bits are available for safety-related switching devices such as emergency stops, light curtains and safety switches. For functional safety, it is possible to achieve Safety Integrity Level (SIL) 3 according to IEC 61508 and Performance Level e according to EN ISO 13849-1.
ASi-5 uses a novel communication method, but allows ASi-3 components to be operated together with ASi-5 components on a common cable. To combine ASi-3 devices with ASi-5 components, a combined ASi-3/ASi-5 master is required in addition to the corresponding devices. Existing systems can easily be upgraded with new devices offering additional functions, and new systems can be installed on one cable using proven ASi-3 components together with new ASi-5 devices.
Standardization
The AS-Interface is described in the international standard IEC 62026-2: 2015.
Other local standards are:
Europe: EN 62026-2: 2015
Japan: JIS C 82026-2 (2013)
China: GB / T 18858.2 (2002)
Korea: KS C IEC 62026-2 (2007)
AS-Interface products are certified by the AS-International Association e.V. Tests and certifications ensure that devices from different manufacturers work together.
The functional safety concepts of AS-Interface have been positively assessed by TÜV (Technical Inspection Association) or by the Institute for Occupational Safety of the German Social Accident Insurance, confirming that the relevant safety standards are fulfilled.
References
W. Kriesel, O. Madelung (Hrsg.): ASI - Das Aktuator-Sensor-Interface für die Automation. Hanser Verlag, München; Wien 1994, , 2. Auflage 1999, .
W. R. Kriesel, O. W. Madelung (Eds.): ASI - The Actuator-Sensor-Interface for Automation. Hanser Verlag, München; Wien 1995, , 2. Auflage 1999, .
W. Kriesel, H. Rohr, A. Koch: Geschichte und Zukunft der Mess- und Automatisierungstechnik. VDI-Verlag, Düsseldorf 1995, .
W. Kriesel, T. Heimbold, D. Telschow: Bustechnologien für die Automation - Vernetzung, Auswahl und Anwendung von Kommunikationssystemen. Hüthig Verlag, Heidelberg 1998, 2. Aufl. 2000 .
R. Becker: Automatisieren ist einfach - mit AS-Interface. AS-International Association, Frankfurt a. M. 2008.
R. Becker: Automation is easy - with AS-Interface. AS-International Association, Frankfurt a. M. 2008.
W. Weller: Automatisierungstechnik im Überblick. Beuth Verlag, Berlin; Wien; Zürich 2008, sowie als E-Book.
W. Wahlster: (R)Evolution 4.0 – Interview. In: trends in automation. Das Kundenmagazin von Festo. Nr. 2, 2012, S. 9–11.
External links
AS-International Association
ASi-5 Technology
idw Informationsdienst Wissenschaft
Serial buses
Industrial computing
Industrial automation | AS-Interface | Technology,Engineering | 2,113 |
167,001 | https://en.wikipedia.org/wiki/Concrete%20category | In mathematics, a concrete category is a category that is equipped with a faithful functor to the category of sets (or sometimes to another category). This functor makes it possible to think of the objects of the category as sets with additional structure, and of its morphisms as structure-preserving functions. Many important categories have obvious interpretations as concrete categories, for example the category of topological spaces and the category of groups, and trivially also the category of sets itself. On the other hand, the homotopy category of topological spaces is not concretizable, i.e. it does not admit a faithful functor to the category of sets.
A concrete category, when defined without reference to the notion of a category, consists of a class of objects, each equipped with an underlying set; and for any two objects A and B a set of functions, called homomorphisms, from the underlying set of A to the underlying set of B. Furthermore, for every object A, the identity function on the underlying set of A must be a homomorphism from A to A, and the composition of a homomorphism from A to B followed by a homomorphism from B to C must be a homomorphism from A to C.
Definition
A concrete category is a pair (C,U) such that
C is a category, and
U : C → Set (the category of sets and functions) is a faithful functor.
The functor U is to be thought of as a forgetful functor, which assigns to every object of C its "underlying set", and to every morphism in C its "underlying function".
It is customary to call the morphisms in a concrete category homomorphisms (e.g., group homomorphisms, ring homomorphisms, etc.) Because of the faithfulness of the functor U, the homomorphisms of a concrete category may be formally identified with their underlying functions (i.e., their images under U); the homomorphisms then regain the usual interpretation as "structure-preserving" functions.
A category C is concretizable if there exists a concrete category (C,U);
i.e., if there exists a faithful functor U: C → Set. All small categories are concretizable: define U so that its object part maps each object b of C to the set of all morphisms of C whose codomain is b (i.e. all morphisms of the form f: a → b for any object a of C), and its morphism part maps each morphism g: b → c of C to the function U(g): U(b) → U(c) which maps each member f: a → b of U(b) to the composition gf: a → c, a member of U(c). (Item 6 under Further examples expresses the same U in less elementary language via presheaves.) The Counter-examples section exhibits two large categories that are not concretizable.
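Written out in symbols, the functor described in the previous paragraph can be restated as follows (standard notation; this is only a compact restatement of the construction above):

```latex
% Faithful functor exhibiting a small category C as concrete:
% U sends an object b to the set of all morphisms into b, and a morphism g
% to post-composition with g.
U(b) = \{\, f \in \mathrm{Mor}(C) \mid \mathrm{cod}(f) = b \,\}, \qquad
U(g)(f) = g \circ f \quad \text{for } g : b \to c,\ f \in U(b).
```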
Remarks
Contrary to intuition, concreteness is not a property that a category may or may not satisfy, but rather a structure with which a category may or may not be equipped. In particular, a category C may admit several faithful functors into Set. Hence there may be several concrete categories (C, U) all corresponding to the same category C.
In practice, however, the choice of faithful functor is often clear and in this case we simply speak of the "concrete category C". For example, "the concrete category Set" means the pair (Set, I) where I denotes the identity functor Set → Set.
The requirement that U be faithful means that it maps different morphisms between the same objects to different functions. However, U may map different objects to the same set and, if this occurs, it will also map different morphisms to the same function.
For example, if S and T are two different topologies on the same set X, then
(X, S) and (X, T) are distinct objects in the category Top of topological spaces and continuous maps, but mapped to the same set X by the forgetful functor Top → Set. Moreover, the identity morphism (X, S) → (X, S) and the identity morphism (X, T) → (X, T) are considered distinct morphisms in Top, but they have the same underlying function, namely the identity function on X.
Similarly, any set with four elements can be given two non-isomorphic group structures: one isomorphic to the cyclic group Z/4Z, and the other isomorphic to the Klein four-group Z/2Z × Z/2Z.
Further examples
Any group G may be regarded as an "abstract" category with one arbitrary object and one morphism for each element of the group. This would not be counted as concrete according to the intuitive notion described at the top of this article. But every faithful G-set (equivalently, every representation of G as a group of permutations) determines a faithful functor G → Set. Since every group acts faithfully on itself, G can be made into a concrete category in at least one way.
Similarly, any poset P may be regarded as an abstract category with a unique arrow x → y whenever x ≤ y. This can be made concrete by defining a functor D : P → Set which maps each object x to its down-set D(x) = {z ∈ P : z ≤ x} and each arrow x → y to the inclusion map D(x) ⊆ D(y).
The category Rel whose objects are sets and whose morphisms are relations can be made concrete by taking U to map each set X to its power set and each relation R from X to Y to the function from the power set of X to the power set of Y that sends a subset S of X to its image {y ∈ Y : x R y for some x ∈ S}. Noting that power sets are complete lattices under inclusion, those functions between them arising from some relation R in this way are exactly the supremum-preserving maps. Hence Rel is equivalent to a full subcategory of the category Sup of complete lattices and their sup-preserving maps. Conversely, starting from this equivalence we can recover U as the composite Rel → Sup → Set of the forgetful functor for Sup with this embedding of Rel in Sup.
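A small computational sketch of this concretization of Rel, with relations represented as sets of ordered pairs; the function name image_map and the sample relation are illustrative choices for this example, not standard terminology.

```python
# Sketch of the concretization of Rel described above: a relation R between X and Y
# is sent to the map on power sets taking a subset S of X to its image under R.

def image_map(relation: set, subset: frozenset) -> frozenset:
    """The induced function U(R): P(X) -> P(Y)."""
    return frozenset(y for (x, y) in relation if x in subset)

# Example: R relates each number to its parity.
R = {(1, "odd"), (2, "even"), (3, "odd"), (4, "even")}
print(image_map(R, frozenset({1, 2})))   # frozenset({'odd', 'even'})
print(image_map(R, frozenset({1, 3})))   # frozenset({'odd'})
print(image_map(R, frozenset()))         # frozenset() -- the map preserves suprema (unions)
```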
The category Set^op can be embedded into Rel by representing each set as itself and each function f: X → Y as the relation from Y to X formed as the set of pairs (f(x), x) for all x ∈ X; hence Set^op is concretizable. The forgetful functor which arises in this way is the contravariant powerset functor Set^op → Set.
It follows from the previous example that the opposite of any concretizable category C is again concretizable, since if U is a faithful functor C → Set then C^op may be equipped with the composite C^op → Set^op → Set.
If C is any small category, then there exists a faithful functor P : Set^(C^op) → Set which maps a presheaf X to the coproduct (disjoint union) of its values X(c) over all objects c of C. By composing this with the Yoneda embedding Y : C → Set^(C^op) one obtains a faithful functor C → Set.
For technical reasons, the category Ban1 of Banach spaces and linear contractions is often equipped not with the "obvious" forgetful functor but the functor U1 : Ban1 → Set which maps a Banach space to its (closed) unit ball.
The category Cat whose objects are small categories and whose morphisms are functors can be made concrete by sending each category C to the set containing its objects and morphisms. Functors can be simply viewed as functions acting on the objects and morphisms.
Counter-examples
The category hTop, where the objects are topological spaces and the morphisms are homotopy classes of continuous functions, is an example of a category that is not concretizable.
While the objects are sets (with additional structure), the morphisms are not actual functions between them, but rather classes of functions.
The fact that there does not exist any faithful functor from hTop to Set was first proven by Peter Freyd.
In the same article, Freyd cites an earlier result that the category of "small categories and natural equivalence-classes of functors" also fails to be concretizable.
Implicit structure of concrete categories
Given a concrete category (C, U) and a cardinal number N, let U^N be the functor C → Set determined by U^N(c) = (U(c))^N. Then a subfunctor of U^N is called an N-ary predicate and a natural transformation U^N → U an N-ary operation.
The class of all N-ary predicates and N-ary operations of a concrete category (C,U), with N ranging over the class of all cardinal numbers, forms a large signature. The category of models for this signature then contains a full subcategory which is equivalent to C.
Relative concreteness
In some parts of category theory, most notably topos theory, it is common to replace the category Set with a different category X, often called a base category.
For this reason, it makes sense to call a pair (C, U) where C is a category and U a faithful functor C → X a concrete category over X.
For example, it may be useful to think of the models of a theory with N sorts as forming a concrete category over Set^N.
In this context, a concrete category over Set is sometimes called a construct.
Notes
References
Adámek, Jiří, Herrlich, Horst, & Strecker, George E.; (1990). Abstract and Concrete Categories (4.2MB PDF). Originally publ. John Wiley & Sons. . (now free on-line edition).
Freyd, Peter; (1970). Homotopy is not concrete. Originally published in: The Steenrod Algebra and its Applications, Springer Lecture Notes in Mathematics Vol. 168. Republished in a free on-line journal: Reprints in Theory and Applications of Categories, No. 6 (2004), with the permission of Springer-Verlag.
Rosický, Jiří; (1981). Concrete categories and infinitary languages. Journal of Pure and Applied Algebra, Volume 22, Issue 3.
Category theory | Concrete category | Mathematics | 2,127 |
67,731,438 | https://en.wikipedia.org/wiki/Odevixibat | Odevixibat, sold under the brand name Bylvay among others, is a medication for the treatment of progressive familial intrahepatic cholestasis. It is taken by mouth. Odevixibat is a reversible, potent, selective inhibitor of the ileal bile acid transporter (IBAT). It was developed by Albireo Pharma.
The most common side effects include diarrhea, abdominal pain, hemorrhagic diarrhea, soft feces, and hepatomegaly (enlarged liver).
Odevixibat was approved for medical use in the United States and in the European Union in July 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication.
Medical uses
In the United States, odevixibat is indicated for the treatment of pruritus in people three months of age and older with progressive familial intrahepatic cholestasis. In the European Union it is indicated in people six months of age and older.
Mechanism of action
Odevixibat is a reversible inhibitor of the ileal sodium/bile acid co-transporter. This transporter is responsible for reabsorption of the majority of bile acids in the distal ileum. The reduced absorption of the bile acids in the distal ileum compounds and leads to a decrease in stimulation of FXR (farnesoid X receptor), decreasing the inhibition of bile acid synthesis.
Odevixibat works as a reversible, selective, small molecule inhibitor of the ileal bile acid transporter (IBAT).
Pharmacokinetics
Odevixibat is more than 99% protein-bound in vitro. A 7.2 mg dose of odevixibat reaches a maximum plasma concentration (Cmax) of 0.47 ng/mL, with an AUC(0–24 h) of 2.19 h·ng/mL. Adult and pediatric patients given therapeutic doses of odevixibat often did not have detectable plasma concentrations of the drug. Odevixibat is eliminated mostly unchanged and has an average half-life of 2.36 hours.
The peak plasma time ranges from 1 to 5 hours after a single 7.2 mg dose in healthy adults. In healthy adults receiving a single 7.2 mg dose, the peak plasma concentration is 0.47 ng/mL, and the area under the concentration-time curve (AUC) is 2.19 ng·hr/mL. The plasma concentration of odevixibat in patients aged 6 months to 17 years ranges from 0.06 to 0.72 ng/mL. With once-daily dosing, there is no accumulation of odevixibat.
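As a rough illustration of what the reported half-life implies, the sketch below assumes simple first-order (exponential) elimination from the peak concentration. This idealized one-compartment decay and the function name are assumptions of the example, not a published pharmacokinetic model of odevixibat.

```python
# Hypothetical illustration of first-order elimination using the values above
# (half-life ~2.36 h, peak concentration ~0.47 ng/mL after a 7.2 mg dose).

def concentration(t_hours: float, c_peak: float = 0.47, half_life: float = 2.36) -> float:
    """Remaining plasma concentration assuming simple exponential decay."""
    return c_peak * 0.5 ** (t_hours / half_life)

for t in (0, 2.36, 4.72, 12):
    print(f"{t:5.2f} h : {concentration(t):.3f} ng/mL")
# 0.00 h : 0.470 ng/mL, 2.36 h : 0.235 ng/mL, 4.72 h : 0.118 ng/mL, ...
```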
Odevixibat is metabolized through mono-hydroxylation. The drug is primarily eliminated in the feces (97% unchanged), with a minimal amount excreted in the urine (0.002%).
Consuming a high-fat meal (800–1,000 calories, with approximately 50% of the total caloric content from fat) decreases the peak plasma concentration by 72%, decreases the AUC by 62%, and delays the peak plasma time by 3 to 4.5 hours. However, the impact of food on systemic exposure to odevixibat is not clinically significant.
Contraindications
Odevixibat cannot be given to a child on a liquid diet.
Adverse effects
Common side effects of odevixibat include diarrhea, stomach pain, vomiting, abnormal liver function tests, and deficiencies of vitamins A, D, E and K.
Pregnancy and lactation
There are not enough human data on odevixibat use during pregnancy to establish a drug-associated risk of major birth defects, miscarriage, or adverse developmental outcomes.
There are no data on the presence of odevixibat in human milk, or on its effects on milk production or the breastfed infant.
History
Preclinical studies and early clinical trials were conducted to evaluate the safety and efficacy of odevixibat, to establish the appropriate dosage, assess its mechanism of action, and evaluate its effects on bile acid levels and symptoms in people with progressive familial intrahepatic cholestasis. A 24-week clinical trial played a central role in demonstrating the effectiveness and safety of odevixibat in treating pruritus in children with progressive familial intrahepatic cholestasis.
The US Food and Drug Administration (FDA) granted the application for odevixibat orphan drug designation. The FDA classified odevixibat as an orphan drug for the rare conditions of Alagille syndrome, biliary atresia, and primary biliary cholangitis.
Odevixibat was granted its initial approval in July 2021, in the European Union for the treatment of progressive familial intrahepatic cholestasis in people aged six months and older. In July 2021, it received approval in the United States for the treatment of pruritus (itching) in people aged three months and older with progressive familial intrahepatic cholestasis.
Society and culture
Legal status
In May 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) recommended granting a marketing authorization in the European Union for odevixibat for the treatment of progressive familial intrahepatic cholestasis in people aged six months or older. It was authorized for medical use in the European Union in July 2021.
In July 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization under exceptional circumstances for the medicinal product Kayfanda, intended for the treatment of cholestatic pruritus in people with Alagille syndrome aged six months or older. The applicant for this medicinal product is Ipsen Pharma. Kayfanda was authorized for medical use in the European Union in September 2024.
Research
A phase III randomized controlled trial showed that odevixibat reduced pruritus and serum bile acids in children with progressive familial intrahepatic cholestasis.
References
External links
Orphan drugs
Thiadiazepines
4-Hydroxyphenyl compounds
Thioethers
Amides
Carboxylic acids
Tertiary amines | Odevixibat | Chemistry | 1,317 |
11,718,624 | https://en.wikipedia.org/wiki/Trend%20surface%20analysis | Trend surface analysis is a mathematical technique used in environmental sciences (archeology, geology, soil science, etc.). Trend surface analysis (also called trend surface mapping) is a method based on low-order polynomials of spatial coordinates for estimating a regular grid of points from scattered observations. For example, from archeological finds or from soil survey.
References
Methods in archaeology
Multivariate interpolation | Trend surface analysis | Mathematics | 82 |
40,138,324 | https://en.wikipedia.org/wiki/Humanities%20Indicators | The Humanities Indicators is a project of the American Academy of Arts and Sciences that provides statistical tools for answering questions about humanities education in the United States. Researchers use the Indicators to analyze primary and secondary humanities education, undergraduate and graduate education in the humanities, the humanities workforce, levels and sources of program funding, public understanding and impact of the humanities, and other areas of concern.
Data from the Humanities Indicators has been used in discussions about the US decline in the number of humanities college majors. To address questions about the workforce outcomes of humanities graduates, the Indicators issued reports on the State of the Humanities 2021: Workforce & Beyond.
The Humanities Indicators report examined not only graduates' employment and earnings relative to other fields, but also their satisfaction with their work after graduation and their lives more generally. The data reveal that despite disparities in median earnings, humanities majors are quite similar to graduates from other fields with respect to their perceived well-being. The report was widely cited in the media as an important intervention in the discussion.
In 2019, the Humanities Indicators also administered the first national survey on public attitudes about the humanities, finding wide engagement with the field (though often under different names) and substantial support for the field.
References
External links
The Humanities Indicators, a project of the American Academy
Science and Engineering Indicators
State of the Humanities 2021: Workforce & Beyond,
Data | Humanities Indicators | Technology | 271 |
15,062,704 | https://en.wikipedia.org/wiki/MNT%20%28gene%29 | MNT (Max-binding protein MNT) is a Max-binding protein that is encoded by the MNT gene
Function
The Myc/Max/Mad network comprises a group of transcription factors that co-interact to regulate gene-specific transcriptional activation or repression. This gene encodes a protein member of the Myc/Max/Mad network. This protein has a basic-Helix-Loop-Helix-zipper domain (bHLHzip) with which it binds the canonical DNA sequence CANNTG, known as the E box, following heterodimerization with Max proteins. Its delta signature is 44. This protein is a transcriptional repressor and an antagonist of Myc-dependent transcriptional activation and cell growth. This protein represses transcription by binding to DNA and recruiting Sin3 corepressor proteins through its N-terminal Sin3-interaction domain
Interactions
MNT has been shown to interact with MLX, SIN3A and MAX.
References
Further reading
External links
Transcription factors | MNT (gene) | Chemistry,Biology | 207 |
5,481,296 | https://en.wikipedia.org/wiki/Iterated%20binary%20operation | In mathematics, an iterated binary operation is an extension of a binary operation on a set S to a function on finite sequences of elements of S through repeated application. Common examples include the extension of the addition operation to the summation operation, and the extension of the multiplication operation to the product operation. Other operations, e.g., the set-theoretic operations union and intersection, are also often iterated, but the iterations are not given separate names. In print, summation and product are represented by special symbols; but other iterated operators often are denoted by larger variants of the symbol for the ordinary binary operator. Thus, the iterations of the four operations mentioned above are denoted
∑, ∏, ⋃ and ⋂, respectively.
More generally, iteration of a binary function f is generally denoted by a slash: iteration of f over the sequence a is denoted by f / a, following the notation for reduce in Bird–Meertens formalism.
In general, there is more than one way to extend a binary operation to operate on finite sequences, depending on whether the operator is associative, and whether the operator has identity elements.
Definition
Denote by a_{j,k}, with j ≥ 0 and k ≥ j, the finite sequence of length k − j of elements of S, with members a_i for j ≤ i < k. Note that if k = j, the sequence is empty.
For , define a new function Fl on finite nonempty sequences of elements of S, where
Similarly, define
If f has a unique left identity e, the definition of Fl can be modified to operate on empty sequences by defining the value of Fl on an empty sequence to be e (the previous base case on sequences of length 1 becomes redundant). Similarly, Fr can be modified to operate on empty sequences if f has a unique right identity.
If f is associative, then Fl equals Fr, and we can simply write F. Moreover, if an identity element e exists, then it is unique (see Monoid).
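A minimal sketch of the left and right iterations described above, here called fold_left and fold_right (names chosen only for the example), illustrates that they agree for an associative operation but can differ otherwise.

```python
# Sketch of the left and right iterations (Fl and Fr) of a binary operation f.
from functools import reduce

def fold_left(f, seq):
    return reduce(f, seq)                                  # f(...f(f(a0, a1), a2)..., a_{n-1})

def fold_right(f, seq):
    return reduce(lambda x, y: f(y, x), reversed(seq))     # f(a0, f(a1, ...f(a_{n-2}, a_{n-1})...))

add = lambda x, y: x + y      # associative: the two folds agree
sub = lambda x, y: x - y      # not associative: the two folds differ

print(fold_left(add, [1, 2, 3, 4]), fold_right(add, [1, 2, 3, 4]))   # 10 10
print(fold_left(sub, [1, 2, 3, 4]), fold_right(sub, [1, 2, 3, 4]))   # -8 -2
```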
If f is commutative and associative, then F can operate on any non-empty finite multiset by applying it to an arbitrary enumeration of the multiset. If f moreover has an identity element e, then this is defined to be the value of F on an empty multiset. If f is idempotent, then the above definitions can be extended to finite sets.
If S also is equipped with a metric or more generally with topology that is Hausdorff, so that the concept of a limit of a sequence is defined in S, then an infinite iteration on a countable sequence in S is defined exactly when the corresponding sequence of finite iterations converges. Thus, e.g., if a0, a1, a2, a3, … is an infinite sequence of real numbers, then the infinite product is defined, and equal to if and only if that limit exists.
Non-associative binary operation
The general, non-associative binary operation is given by a magma. The act of iterating on a non-associative binary operation may be represented as a binary tree.
Notation
Iterated binary operations are used to represent an operation that will be repeated over a set subject to some constraints. Typically the lower bound of a restriction is written under the symbol, and the upper bound over the symbol, though they may also be written as superscripts and subscripts in compact notation. Interpolation is performed over positive integers from the lower to upper bound, to produce the set which will be substituted into the index (below denoted as i) for the repeated operations.
Common notations include the big Sigma (repeated sum) and big Pi (repeated product) notations.
It is possible to specify set membership or other logical constraints in place of explicit indices, in order to implicitly specify which elements of a set shall be used:
Multiple conditions may be written either joined with a logical and or separately:
Less commonly, any binary operator such as exclusive or or set union may also be used. For example, if S is a set of logical propositions:
which is true iff all of the elements of S are true.
See also
Unary operation
Unary function
Binary operation
Binary function
Ternary operation
References
External links
Bulk action
Parallel prefix operation
Nuprl iterated binary operations
Binary operations | Iterated binary operation | Mathematics | 867 |
2,794,117 | https://en.wikipedia.org/wiki/History%20of%20the%20Big%20Bang%20theory | The history of the Big Bang theory began with the Big Bang's development from observations and theoretical considerations. Much of the theoretical work in cosmology now involves extensions and refinements to the basic Big Bang model. The theory itself was originally formalised by Father Georges Lemaître in 1927. Hubble's law of the expansion of the universe provided foundational support for the theory.
Philosophy and medieval temporal finitism
In medieval philosophy, there was much debate over whether the universe had a finite or infinite past (see Temporal finitism). The philosophy of Aristotle held that the universe had an infinite past, which caused problems for past Jewish and Islamic philosophers who were unable to reconcile the Aristotelian conception of the eternal with the Abrahamic view of creation. As a result, a variety of logical arguments for the universe having a finite past were developed by John Philoponus, Al-Kindi, Saadia Gaon, Al-Ghazali and Immanuel Kant, among others.
English theologian Robert Grosseteste explored the nature of matter and the cosmos in his 1225 treatise De Luce (On Light). He described the birth of the universe in an explosion and the crystallization of matter to form stars and planets in a set of nested spheres around Earth. De Luce is the first attempt to describe the heavens and Earth using a single set of physical laws.
In 1610, Johannes Kepler used the dark night sky to argue for a finite universe. Seventy-seven years later, Isaac Newton described large-scale motion throughout the universe.
The description of a universe that expanded and contracted in a cyclic manner was first put forward in a poem published in 1791 by Erasmus Darwin. Edgar Allan Poe presented a similar cyclic system in his 1848 essay titled Eureka: A Prose Poem; it is obviously not a scientific work, but Poe, while starting from metaphysical principles, tried to explain the universe using contemporary physical and mental knowledge. Ignored by the scientific community and often misunderstood by literary critics, its scientific implications have been reevaluated in recent times.
According to Poe, the initial state of matter was a single "Primordial Particle". "Divine Volition", manifesting itself as a repulsive force, fragmented the Primordial Particle into atoms. Atoms spread evenly throughout space, until the repulsive force stops and attraction appears as a reaction: then matter begins to clump together, forming stars and star systems, while the material universe is drawn back together by gravity, finally collapsing and eventually returning to the Primordial Particle stage in order to begin the process of repulsion and attraction once again. This part of Eureka describes a Newtonian evolving universe which shares a number of properties with relativistic models, and for this reason Poe anticipates some themes of modern cosmology.
Early 20th century scientific developments
Observationally, in the 1910s, Vesto Slipher and later, Carl Wilhelm Wirtz, determined that most spiral nebulae (now called spiral galaxies) were receding from Earth. Slipher used spectroscopy to investigate the rotation periods of planets, the composition of planetary atmospheres, and was the first to observe the radial velocities of galaxies. Wirtz observed a systematic redshift of nebulae, which was difficult to interpret in terms of a cosmology in which the universe is filled more or less uniformly with stars and nebulae. They weren't aware of the cosmological implications, nor that the supposed nebulae were actually galaxies outside our own Milky Way.
Also in that decade, Albert Einstein's theory of general relativity was found to admit no static cosmological solutions, given the basic assumptions of cosmology described in the Big Bang's theoretical underpinnings. The universe (i.e., the space-time metric) was described by a metric tensor that was either expanding or shrinking (i.e., was not constant or invariant). This result, coming from an evaluation of the field equations of the general theory, at first led Einstein himself to consider that his formulation of the field equations of the general theory may be in error, and he tried to correct it by adding a cosmological constant. This constant would restore to the general theory's description of space-time an invariant metric tensor for the fabric of space/existence. The first person to seriously apply general relativity to cosmology without the stabilizing cosmological constant was Alexander Friedmann. Friedmann derived the expanding-universe solution to general relativity field equations in 1922. Friedmann's 1924 papers included "Über die Möglichkeit einer Welt mit konstanter negativer Krümmung des Raumes" (About the possibility of a world with constant negative curvature) which was published by the Berlin Academy of Sciences on 7 January 1924. Friedmann's equations describe the Friedmann–Lemaitre–Robertson–Walker universe.
In 1927, the Belgian physicist Georges Lemaître proposed an expanding model of the universe to explain the observed redshifts of spiral nebulae, and calculated what is now known as Hubble's law. He based his theory on the work of Einstein and de Sitter, and independently derived Friedmann's equations for an expanding universe. The redshifts themselves were not constant, but varied in such a manner as to suggest a definite relationship between the amount of redshift of a nebula and its distance from the observer.
In 1929, Edwin Hubble provided a comprehensive observational foundation for Lemaître's theory. Hubble's observations showed that, relative to the Earth and all other observed bodies, galaxies are receding in every direction at velocities (calculated from their observed redshifts) directly proportional to their distance from the Earth and from each other. In 1929, Hubble and Milton Humason formulated the empirical redshift–distance law of galaxies, nowadays known as Hubble's law, which, once the redshift is interpreted as a measure of recession speed, is consistent with the solutions of Einstein's general relativity equations for a homogeneous, isotropic expanding universe. The law states that the greater the distance between any two galaxies, the greater their relative speed of separation. If everything is receding from everything else, then everything must once have been much closer together, pointing back to an extremely hot, dense early state of the universe; this is the basic chain of reasoning behind what became known as the Big Bang theory.
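In modern notation the redshift–distance relation described here is usually written as follows (a standard formulation, not a quotation from Hubble's 1929 paper):

```latex
% Hubble's law: recession velocity v inferred from redshift, proper distance D,
% and the Hubble constant H_0 (the present-day expansion rate).
v = H_0 D
```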
In 1931, Lemaître proposed in his "hypothèse de l'atome primitif" (hypothesis of the primeval atom) that the universe began with the "explosion" of the "primeval atom" – what was later called the Big Bang. Lemaître first took cosmic rays to be the remnants of the event, although it is now known that they originate within the local galaxy. Lemaître had to wait until shortly before his death to learn of the discovery of cosmic microwave background radiation, the remnant radiation of a dense and hot phase in the early universe.
Big Bang theory vs. Steady State theory
Hubble's law had suggested that the universe was expanding, contradicting the cosmological principle whereby the universe, when viewed on sufficiently large distance scales, has no preferred directions or preferred places. Hubble's idea allowed for two opposing hypotheses to be suggested. One was Lemaître's Big Bang, advocated and developed by George Gamow. The other model was Fred Hoyle's Steady State theory, in which new matter would be created as the galaxies moved away from each other. In this model, the universe is roughly the same at any point in time. It was actually Hoyle who coined the name of Lemaître's theory, referring to it as "this 'big bang' idea" during a radio broadcast on 28 March 1949, on the BBC Third Programme. It is popularly reported that Hoyle, who favored an alternative "steady state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models.
Hoyle repeated the term in further broadcasts in early 1950, as part of a series of five lectures entitled The Nature of The Universe. The text of each lecture was published in The Listener a week after the broadcast, the first time that the term "big bang" appeared in print. As evidence in favour of the Big Bang model mounted, and the consensus became widespread, Hoyle himself, albeit somewhat reluctantly, admitted to it by formulating a new cosmological model that other scientists later referred to as the "Steady Bang".
1950 to 1990s
From around 1950 to 1965, the support for these theories was evenly divided, with a slight imbalance arising from the fact that the Big Bang theory could explain both the formation and the observed abundances of hydrogen and helium, whereas the Steady State could explain how they were formed, but not why they should have the observed abundances. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. Objects such as quasars and radio galaxies were observed to be much more common at large distances (therefore in the distant past) than in the nearby universe, whereas the Steady State predicted that the average properties of the universe should be unchanging with time. In addition, the discovery of the cosmic microwave background radiation in 1964 was considered the death knell of the Steady State, although this prediction was only qualitative, and failed to predict the exact temperature of the CMB. (The key big bang prediction is the black-body spectrum of the CMB, which was not measured with high accuracy until COBE in 1990). After some reformulation, the Big Bang has been regarded as the best theory of the origin and evolution of the cosmos. Before the late 1960s, many cosmologists thought the infinitely dense and physically paradoxical singularity at the starting time of Friedmann's cosmological model could be avoided by allowing for a universe which was contracting before entering the hot dense state, and starting to expand again. This was formalized as Richard Tolman's oscillating universe. In the sixties, Stephen Hawking and others demonstrated that this idea was unworkable, and the singularity is an essential feature of the physics described by Einstein's gravity. This led the majority of cosmologists to accept the notion that the universe as currently described by the physics of general relativity has a finite age. However, due to a lack of a theory of quantum gravity, there is no way to say whether the singularity is an actual origin point for the universe, or whether the physical processes that govern the regime cause the universe to be effectively eternal in character.
Through the 1970s and 1980s, most cosmologists accepted the Big Bang, but several puzzles remained, including the non-discovery of anisotropies in the CMB, and occasional observations hinting at deviations from a black-body spectrum; thus the theory was not very strongly confirmed.
1990 onwards
Huge advances in Big Bang cosmology were made in the 1990s and the early 21st century, as a result of major advances in telescope technology in combination with large amounts of satellite data, such as COBE, the Hubble Space Telescope and WMAP.
In 1990, measurements from the COBE satellite showed that the spectrum of the CMB matches a 2.725 K black-body to very high precision; deviations do not exceed 2 parts in . This showed that earlier claims of spectral deviations were incorrect, and essentially proved that the universe was hot and dense in the past, since no other known mechanism can produce a black-body to such high accuracy. Further observations from COBE in 1992 discovered the very small anisotropies of the CMB on large scales, approximately as predicted from Big Bang models with dark matter. From then on, models of non-standard cosmology without some form of Big Bang became very rare in the mainstream astronomy journals.
In 1998, measurements of distant supernovae indicated that the expansion of the universe is accelerating, and this was supported by other observations including ground-based CMB observations and large galaxy redshift surveys. In 1999–2000, the Boomerang and Maxima balloon-borne CMB observations showed that the geometry of the universe is close to flat, and in 2001 the 2dFGRS galaxy redshift survey estimated the mean matter density at around 25–30 percent of the critical density.
From 2001 to 2010, NASA's WMAP spacecraft took very detailed pictures of the universe by means of the cosmic microwave background radiation. The images can be interpreted to indicate that the universe is 13.7 billion years old (within one percent error) and that the Lambda-CDM model and the inflationary theory are correct. No other cosmological theory can yet explain such a wide range of observed parameters, from the ratio of the elemental abundances in the early universe to the structure of the cosmic microwave background, the observed higher abundance of active galactic nuclei in the early universe and the observed masses of clusters of galaxies.
In 2013 and 2015, ESA's Planck spacecraft released even more detailed images of the cosmic microwave background, showing consistency with the Lambda-CDM model to still higher precision.
Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding what happened in the earliest times after the Big Bang, and reconciling observations with the basic theory. Cosmologists continue to calculate many of the parameters of the Big Bang to a new level of precision, and carry out more detailed observations which are hoped to provide clues to the nature of dark energy and dark matter, and to test the theory of General Relativity on cosmic scales.
See also
Theory of everything
Timeline of cosmological theories
References
Further reading
Big Bang
Big Bang theory
Big Bang theory | History of the Big Bang theory | Physics,Astronomy | 2,935 |
61,532 | https://en.wikipedia.org/wiki/Absolute%20convergence | In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series is said to converge absolutely if for some real number Similarly, an improper integral of a function, is said to converge absolutely if the integral of the absolute value of the integrand is finite—that is, if A convergent series that is not absolutely convergent is called conditionally convergent.
Absolute convergence is important for the study of infinite series, because its definition guarantees that a series will have some "nice" behaviors of finite sums that not all convergent series possess. For instance, rearrangements do not change the value of the sum, which is not necessarily true for conditionally convergent series.
Background
When adding a finite number of terms, addition is both associative and commutative, meaning that grouping and rearrangement do not alter the final sum. For instance, (1 + 2) + 3 is equal to both 1 + (2 + 3) and (2 + 1) + 3. However, associativity and commutativity do not necessarily hold for infinite sums. One example is the alternating harmonic series
1 − 1/2 + 1/3 − 1/4 + 1/5 − ⋯, whose terms are fractions that alternate in sign. This series is convergent and can be evaluated using the Maclaurin series for the function ln(1 + x), which converges for all x satisfying −1 < x ≤ 1:
ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ⋯. Substituting x = 1 reveals that the original sum is equal to ln 2. The sum can also be rearranged as follows: (1 − 1/2) − 1/4 + (1/3 − 1/6) − 1/8 + (1/5 − 1/10) − 1/12 + ⋯.
In this rearrangement, the reciprocal of each odd number is grouped with the reciprocal of twice its value, while the reciprocals of every multiple of 4 are evaluated separately. However, evaluating the terms inside the parentheses yields 1/2 − 1/4 + 1/6 − 1/8 + 1/10 − 1/12 + ⋯,
or half the original series. The violation of the associativity and commutativity of addition reveals that the alternating harmonic series is conditionally convergent. Indeed, the sum of the absolute values of its terms is 1 + 1/2 + 1/3 + 1/4 + ⋯, the divergent harmonic series. According to the Riemann series theorem, any conditionally convergent series can be permuted so that its sum is any finite real number or so that it diverges. When an absolutely convergent series is rearranged, its sum is always preserved.
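A numerical sketch of the rearrangement described above; the truncation point is arbitrary, so the printed values only approximate ln 2 and ln 2 / 2.

```python
# Numerical illustration: the original alternating harmonic series tends to ln 2,
# while the rearranged grouping (1 - 1/2 - 1/4) + (1/3 - 1/6 - 1/8) + ... tends to ln(2)/2.
import math

N = 200_000
original = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
rearranged = sum(1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k) for k in range(1, N + 1))

print(round(original, 6), round(math.log(2), 6))        # ~0.6931..., 0.693147
print(round(rearranged, 6), round(math.log(2) / 2, 6))  # ~0.3465..., 0.346574
```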
Definition for real and complex numbers
A sum of real numbers or complex numbers is absolutely convergent if the sum of the absolute values of the terms converges.
Sums of more general elements
The same definition can be used for series whose terms are not numbers but rather elements of an arbitrary abelian topological group. In that case, instead of using the absolute value, the definition requires the group to have a norm ‖·‖, which is a positive real-valued function on an abelian group G (written additively, with identity element 0) such that:
The norm of the identity element of G is zero: ‖0‖ = 0;
For every x in G, ‖x‖ = 0 implies x = 0;
For every x in G, ‖−x‖ = ‖x‖;
For every x and y in G, ‖x + y‖ ≤ ‖x‖ + ‖y‖.
In this case, the function d(x, y) = ‖x − y‖ induces the structure of a metric space (a type of topology) on G.
Then, a G-valued series ∑ a_n is absolutely convergent if the real series ∑ ‖a_n‖ converges.
In particular, these statements apply using the norm (absolute value) in the space of real numbers or complex numbers.
In topological vector spaces
If is a topological vector space (TVS) and is a (possibly uncountable) family in then this family is absolutely summable if
is summable in (that is, if the limit of the net converges in where is the directed set of all finite subsets of directed by inclusion and ), and
for every continuous seminorm on the family is summable in
If is a normable space and if is an absolutely summable family in then necessarily all but a countable collection of 's are 0.
Absolutely summable families play an important role in the theory of nuclear spaces.
Relation to convergence
If G is complete with respect to the metric d, then every absolutely convergent series is convergent. The proof is the same as for complex-valued series: use the completeness to derive the Cauchy criterion for convergence (a series is convergent if and only if its tails can be made arbitrarily small in norm) and apply the triangle inequality.
In particular, for series with values in any Banach space, absolute convergence implies convergence. The converse is also true: if absolute convergence implies convergence in a normed space, then the space is a Banach space.
If a series is convergent but not absolutely convergent, it is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series. Many standard tests for divergence and convergence, most notably including the ratio test and the root test, demonstrate absolute convergence. This is because a power series is absolutely convergent on the interior of its disk of convergence.
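As a rough numerical illustration of the ratio test mentioned above, the sketch below compares a geometric series with the alternating harmonic series. The function names and the finite cutoff are assumptions of the example; the actual test concerns the limit of the ratios, not finitely many of them.

```python
# Heuristic sketch of the ratio test: if |a_{n+1} / a_n| tends to a limit L < 1,
# the series sum(a_n) converges absolutely; a limit of 1 is inconclusive.

def ratio_estimate(term, n_max=50):
    """Ratio of consecutive terms near index n_max (a finite stand-in for the limit)."""
    return abs(term(n_max) / term(n_max - 1))

geometric = lambda n: 0.5 ** n                 # ratios equal 0.5 < 1: absolutely convergent
alt_harmonic = lambda n: (-1) ** n / (n + 1)   # ratios tend to 1: test inconclusive

print(ratio_estimate(geometric))       # 0.5
print(ratio_estimate(alt_harmonic))    # ~0.98 (close to 1; the series is only conditionally convergent)
```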
Proof that any absolutely convergent series of complex numbers is convergent
Suppose that ∑ |z_k| is convergent, where z_k = a_k + i b_k with a_k and b_k real. Then, equivalently, ∑ √(a_k² + b_k²) is convergent, which implies that ∑ |a_k| and ∑ |b_k| converge by termwise comparison of non-negative terms. It suffices to show that the convergence of these series implies the convergence of ∑ a_k and ∑ b_k, for then the convergence of ∑ z_k = ∑ a_k + i ∑ b_k would follow, by the definition of the convergence of complex-valued series.
The preceding discussion shows that we need only prove that convergence of ∑ |a_k| implies the convergence of ∑ a_k.
Let ∑ |a_k| be convergent. Since 0 ≤ a_k + |a_k| ≤ 2 |a_k|, we have 0 ≤ (a_1 + |a_1|) + ⋯ + (a_n + |a_n|) ≤ 2 (|a_1| + ⋯ + |a_n|) for every n.
Since ∑ 2 |a_k| is convergent, ∑ (a_k + |a_k|) has a bounded monotonic sequence of partial sums and must also converge. Noting that ∑ a_k = ∑ (a_k + |a_k|) − ∑ |a_k| is the difference of convergent series, we conclude that it too is a convergent series, as desired.
Alternative proof using the Cauchy criterion and triangle inequality
By applying the Cauchy criterion for the convergence of a complex series, we can also prove this fact as a simple implication of the triangle inequality. By the Cauchy criterion, ∑ |a_i| converges if and only if for any ε > 0 there exists N such that |a_m| + ⋯ + |a_n| < ε for any n > m ≥ N. But the triangle inequality implies that |a_m + ⋯ + a_n| ≤ |a_m| + ⋯ + |a_n|, so that |a_m + ⋯ + a_n| < ε for any n > m ≥ N, which is exactly the Cauchy criterion for ∑ a_i.
Proof that any absolutely convergent series in a Banach space is convergent
The above result can be easily generalized to every Banach space X. Let ∑ x_n be an absolutely convergent series in X. Since the partial sums of ∑ ‖x_n‖ form a Cauchy sequence of real numbers, for any ε > 0 and large enough natural numbers m > n it holds that ‖x_{n+1}‖ + ⋯ + ‖x_m‖ < ε.
By the triangle inequality for the norm ‖·‖, one immediately gets ‖x_{n+1} + ⋯ + x_m‖ ≤ ‖x_{n+1}‖ + ⋯ + ‖x_m‖ < ε,
which means that the partial sums of ∑ x_n form a Cauchy sequence in X, hence the series is convergent in X.
Rearrangements and unconditional convergence
Real and complex numbers
When a series of real or complex numbers is absolutely convergent, any rearrangement or reordering of that series' terms will still converge to the same value. This fact is one reason absolutely convergent series are useful: showing a series is absolutely convergent allows terms to be paired or rearranged in convenient ways without changing the sum's value.
The Riemann rearrangement theorem shows that the converse is also true: every real or complex-valued series whose terms cannot be reordered to give a different value is absolutely convergent.
Series with coefficients in more general space
The term unconditional convergence is used to refer to a series where any rearrangement of its terms still converges to the same value. For any series with values in a normed abelian group , as long as is complete, every series which converges absolutely also converges unconditionally.
Stated more formally:
For series with more general coefficients, the converse is more complicated. As stated in the previous section, for real-valued and complex-valued series, unconditional convergence always implies absolute convergence. However, in the more general case of a series with values in any normed abelian group , the converse does not always hold: there can exist series which are not absolutely convergent, yet unconditionally convergent.
For example, in the Banach space ℓ∞, one series which is unconditionally convergent but not absolutely convergent is:
where is an orthonormal basis. A theorem of A. Dvoretzky and C. A. Rogers asserts that every infinite-dimensional Banach space has an unconditionally convergent series that is not absolutely convergent.
Proof of the theorem
For any we can choose some such that:
Let
where so that is the smallest natural number such that the list includes all of the terms (and possibly others).
Finally for any integer let
so that
and thus
This shows that
that is:
Q.E.D.
Products of series
The Cauchy product of two series converges to the product of the sums if at least one of the series converges absolutely. That is, suppose that the series ∑ a_n converges to A and ∑ b_n converges to B.
The Cauchy product is defined as the sum of terms c_n, where c_n = a_0 b_n + a_1 b_{n−1} + ⋯ + a_n b_0.
If either the ∑ a_n or the ∑ b_n sum converges absolutely, then ∑ c_n converges to AB.
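As a numerical sketch, the Cauchy product can be checked on two absolutely convergent series, the exponential series for e^x and e^y, whose Cauchy product should sum to e^(x+y). The helper names and the truncation at 30 terms are illustrative choices for this example.

```python
# Numerical check of the Cauchy product with two absolutely convergent series.
from math import factorial, exp

def exp_terms(x, n_terms=30):
    """Terms x**n / n! of the exponential series."""
    return [x ** n / factorial(n) for n in range(n_terms)]

a, b = exp_terms(1.0), exp_terms(2.0)
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(len(a))]  # Cauchy product terms

print(round(sum(c), 6), round(exp(3.0), 6))   # both ~20.085537
```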
Absolute convergence over sets
A generalization of the absolute convergence of a series, is the absolute convergence of a sum of a function over a set. We can first consider a countable set and a function We will give a definition below of the sum of over written as
First note that because no particular enumeration (or "indexing") of has yet been specified, the series cannot be understood by the more basic definition of a series. In fact, for certain examples of and the sum of over may not be defined at all, since some indexing may produce a conditionally convergent series.
Therefore we define only in the case where there exists some bijection such that is absolutely convergent. Note that here, "absolutely convergent" uses the more basic definition, applied to an indexed series. In this case, the value of the sum of over is defined by
Note that because the series is absolutely convergent, then every rearrangement is identical to a different choice of bijection Since all of these sums have the same value, then the sum of over is well-defined.
Even more generally we may define the sum of over when is uncountable. But first we define what it means for the sum to be convergent.
Let be any set, countable or uncountable, and a function. We say that the sum of over converges absolutely if
There is a theorem which states that, if the sum of over is absolutely convergent, then takes non-zero values on a set that is at most countable. Therefore, the following is a consistent definition of the sum of over when the sum is absolutely convergent.
Note that the final series uses the definition of a series over a countable set.
Some authors define an iterated sum to be absolutely convergent if the iterated series This is in fact equivalent to the absolute convergence of That is to say, if the sum of over converges absolutely, as defined above, then the iterated sum converges absolutely, and vice versa.
Absolute convergence of integrals
The integral of a real or complex-valued function is said to converge absolutely if One also says that is absolutely integrable. The issue of absolute integrability is intricate and depends on whether the Riemann, Lebesgue, or Kurzweil-Henstock (gauge) integral is considered; for the Riemann integral, it also depends on whether we only consider integrability in its proper sense ( and both bounded), or permit the more general case of improper integrals.
As a standard property of the Riemann integral, when is a bounded interval, every continuous function is bounded and (Riemann) integrable, and since continuous implies continuous, every continuous function is absolutely integrable. In fact, since is Riemann integrable on if is (properly) integrable and is continuous, it follows that is properly Riemann integrable if is. However, this implication does not hold in the case of improper integrals. For instance, the function is improperly Riemann integrable on its unbounded domain, but it is not absolutely integrable:
Indeed, more generally, given any series one can consider the associated step function defined by Then converges absolutely, converges conditionally or diverges according to the corresponding behavior of
The situation is different for the Lebesgue integral, which does not handle bounded and unbounded domains of integration separately (see below). The fact that the integral of is unbounded in the examples above implies that is also not integrable in the Lebesgue sense. In fact, in the Lebesgue theory of integration, given that is measurable, is (Lebesgue) integrable if and only if is (Lebesgue) integrable. However, the hypothesis that is measurable is crucial; it is not generally true that absolutely integrable functions on are integrable (simply because they may fail to be measurable): let be a nonmeasurable subset and consider where is the characteristic function of Then is not Lebesgue measurable and thus not integrable, but is a constant function and clearly integrable.
On the other hand, a function may be Kurzweil-Henstock integrable (gauge integrable) while is not. This includes the case of improperly Riemann integrable functions.
In a general sense, on any measure space the Lebesgue integral of a real-valued function is defined in terms of its positive and negative parts, so the facts:
integrable implies integrable
measurable, integrable implies integrable
are essentially built into the definition of the Lebesgue integral. In particular, applying the theory to the counting measure on a set one recovers the notion of unordered summation of series developed by Moore–Smith using (what are now called) nets. When is the set of natural numbers, Lebesgue integrability, unordered summability and absolute convergence all coincide.
Finally, all of the above holds for integrals with values in a Banach space. The definition of a Banach-valued Riemann integral is an evident modification of the usual one. For the Lebesgue integral one needs to circumvent the decomposition into positive and negative parts with Daniell's more functional analytic approach, obtaining the Bochner integral.
See also
Notes
References
Works cited
General references
Walter Rudin, Principles of Mathematical Analysis (McGraw-Hill: New York, 1964).
Mathematical series
Integral calculus
Summability theory
Convergence (mathematics) | Absolute convergence | Mathematics | 2,888 |
38,911,785 | https://en.wikipedia.org/wiki/Wilton%20International | Wilton International is a multi-occupancy industrial area between Eston and Redcar, North Yorkshire, England. Originally a chemical plant, it has businesses in a variety of fields including energy generation, plastic recycling and process research. It is named after a village to the south of the area.
History and occupants
The site was formerly wholly owned and operated by ICI, a large chemical company which opened the site in 1949.
Following the fragmentation of ICI, the owner of the site from 1995, Enron, owned the facility briefly before it was acquired by Sembcorp Industries, a Singapore company. A number of multinational chemical companies now operate on the site. Sembcorp have built the UK's first wood-fired power station, Wilton 10, and in 2013 announced a new waste-to-energy plant known as Wilton 11.
There have been both closures and new chemical plants built at the Wilton site as the process industry continues to change and rejuvenate in line with changing consumer demands.
In 2001, BP closed its polythene plant (Polythene 5), which it had bought from ICI in 1982. In the same year Basell closed its polypropylene plant. In January 2009, Invista announced it was to close all of its plants on the site. The Teesside Power Station, a partially mothballed gas-fired power station built in 1993 for Enron, was being dismantled in 2014 and 2015.
Companies currently operating on the site include SABIC, who built the world's largest low-density polyethylene plant (LDPE) in 2009 and still operate an ethylene cracker there.
Lotte Chemicals stopped PTA production but built a new PET production plant. Huntsman manufacture polyurethane intermediates, and Ensus built their bioethanol facility in 2009, which at the time was the largest of its type in the United Kingdom. Biffa Polymers opened a polymer recycling plant that handles up to 30% of the UK's plastic milk bottles in March 2011. UK Wood Recycling Limited have a facility on the site providing waste wood to fuel the Wilton 10 power station.
Wilton International is a multi-occupancy business and research centre and is one of the main offices used by the economic cluster body the Northeast of England Process Industry Cluster (NEPIC). Many occupants of the Wilton International facility are members of the Cluster. The complex is one of the largest R&D facilities in Europe and is home to the Centre for Process Innovation (CPI), part of the Technology Strategy Board's High Value Manufacturing Catapult network. The site has laboratories and scale-up facilities for the chemistry-using process industries, that are accessed by many companies on a commercial or contract basis.
References
External links
Chemical plants
Manufacturing plants in England
Redcar and Cleveland | Wilton International | Chemistry | 577 |
25,517,382 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20February%2016%2C%202083 | A partial solar eclipse will occur at the Moon's ascending node of orbit on Tuesday, February 16, 2083, with a magnitude of 0.9433. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
The partial solar eclipse will be visible for much of Hawaii and North America.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2083
A total lunar eclipse on February 2.
A partial solar eclipse on February 16.
A partial solar eclipse on July 15.
A total lunar eclipse on July 29.
A partial solar eclipse on August 13.
Metonic
Preceded by: Solar eclipse of May 1, 2079
Followed by: Solar eclipse of December 6, 2086
Tzolkinex
Preceded by: Solar eclipse of January 6, 2076
Followed by: Solar eclipse of March 31, 2090
Half-Saros
Preceded by: Lunar eclipse of February 11, 2074
Followed by: Lunar eclipse of February 23, 2092
Tritos
Preceded by: Solar eclipse of March 19, 2072
Followed by: Solar eclipse of January 16, 2094
Solar Saros 151
Preceded by: Solar eclipse of February 5, 2065
Followed by: Solar eclipse of February 28, 2101
Inex
Preceded by: Solar eclipse of March 9, 2054
Followed by: Solar eclipse of January 29, 2112
Triad
Preceded by: Solar eclipse of April 17, 1996
Followed by: Solar eclipse of December 18, 2169
Solar eclipses of 2080–2083
Saros 151
Metonic series
Tritos series
Inex series
References
External links
2083 in science
2083 2 16
2083 2 16 | Solar eclipse of February 16, 2083 | Astronomy | 507 |
40,646,055 | https://en.wikipedia.org/wiki/Alignment-free%20sequence%20analysis | In bioinformatics, alignment-free sequence analysis approaches to molecular sequence and structure data provide alternatives over alignment-based approaches.
The emergence and need for the analysis of different types of data generated through biological research has given rise to the field of bioinformatics. Molecular sequence and structure data of DNA, RNA, and proteins, gene expression profiles or microarray data, and metabolic pathway data are some of the major types of data being analysed in bioinformatics. Among these, sequence data are increasing at an exponential rate due to the advent of next-generation sequencing technologies. Since the origin of bioinformatics, sequence analysis has remained the major area of research, with a wide range of applications in database searching, genome annotation, comparative genomics, molecular phylogeny and gene prediction. The pioneering approaches to sequence analysis were based on sequence alignment, either global or local, pairwise or multiple. Alignment-based approaches generally give excellent results when the sequences under study are closely related and can be reliably aligned, but when the sequences are divergent, a reliable alignment cannot be obtained and hence the applications of sequence alignment are limited. Another limitation of alignment-based approaches is their computational complexity: they are time-consuming and thus of limited use when dealing with large-scale sequence data. The advent of next-generation sequencing technologies has resulted in the generation of voluminous sequencing data. The size of these data poses challenges for alignment-based algorithms in assembly, annotation and comparative studies.
Alignment-free methods
Alignment-free methods can broadly be classified into six categories: a) methods based on k-mer/word frequency, b) methods based on the length of common substrings, c) methods based on the number of (spaced) word matches, d) methods based on micro-alignments, e) methods based on information theory, and f) methods based on graphical representation. Alignment-free approaches have been used in sequence similarity searches, clustering and classification of sequences, and more recently in phylogenetics.
Such molecular phylogeny analyses employing alignment-free approaches are said to be part of next-generation phylogenomics. A number of review articles provide in-depth review of alignment-free methods in sequence analysis.
The AFproject is an international collaboration to benchmark and compare software tools for alignment-free sequence comparison.
Methods based on k-mer/word frequency
The popular methods based on k-mer/word frequencies include feature frequency profile (FFP), Composition vector (CV), Return time distribution (RTD), frequency chaos game representation (FCGR). and Spaced Words.
Feature frequency profile (FFP)
The FFP-based method starts by calculating the count of each possible k-mer (the number of possible k-mers for a nucleotide sequence is 4^k, while that for a protein sequence is 20^k) in the sequences. Each k-mer count in each sequence is then normalized by dividing it by the total count of all k-mers in that sequence, which converts each sequence into its feature frequency profile. The pairwise distance between two sequences is then calculated as the Jensen–Shannon (JS) divergence between their respective FFPs. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms like neighbor-joining, UPGMA, etc.
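A minimal Python sketch of the FFP procedure described above (an illustration only, not the authors' published implementation; the example sequences, the value of k, the base-2 logarithm and all function names are choices made for this sketch):

```python
from collections import Counter
from math import log2

def ffp(seq, k=3):
    """Normalised k-mer frequency profile of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def kl(p, m):
    """Kullback-Leibler divergence D(p || m); m is positive wherever p is."""
    return sum(px * log2(px / m[kmer]) for kmer, px in p.items() if px > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two frequency profiles."""
    m = {kmer: 0.5 * (p.get(kmer, 0.0) + q.get(kmer, 0.0)) for kmer in set(p) | set(q)}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Pairwise FFP distance between two toy sequences
distance = js_divergence(ffp("ACGTACGTGACGTT"), ffp("ACGTTTGTGACGAA"))
```

Computing this value for every pair of sequences fills the distance matrix that the clustering step (neighbor-joining, UPGMA, etc.) then consumes.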
Composition vector (CV)
In this method the frequency of appearance of each possible k-mer in a given sequence is calculated. The characteristic next step of this method is the subtraction of the random background of these frequencies using a Markov model, to reduce the influence of random neutral mutations and highlight the role of selective evolution. The normalized frequencies are put in a fixed order to form the composition vector (CV) of a given sequence. The cosine distance function is then used to compute the pairwise distance between the CVs of sequences. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms like neighbor-joining, UPGMA, etc. This method can be extended, by resorting to efficient pattern-matching algorithms, to include in the computation of the composition vectors: (i) all k-mers for any value of k, (ii) all substrings of any length up to an arbitrarily set maximum k value, (iii) all maximal substrings, where a substring is maximal if extending it by any character would cause a decrease in its occurrence count.
Return time distribution (RTD)
The RTD-based method does not calculate the count of k-mers in sequences; instead, it computes the time required for the reappearance of k-mers. The time refers to the number of residues between successive appearances of a particular k-mer. Thus the occurrence of each k-mer in a sequence is recorded in the form of an RTD, which is then summarised using two statistical parameters, the mean (μ) and the standard deviation (σ). Each sequence is therefore represented as a numeric vector of size 2·4^k containing the μ and σ of the 4^k RTDs. The pairwise distance between sequences is calculated using the Euclidean distance measure. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms like neighbor-joining, UPGMA, etc. A recent approach, Pattern Extraction through Entropy Retrieval (PEER), provides direct detection of the k-mer length and summarises the occurrence intervals using entropy.
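A minimal Python sketch of the RTD idea (illustrative only; the naive position scan and the toy sequences stand in for whatever indexing and data a real implementation would use):

```python
from itertools import product
from math import dist              # Euclidean distance (Python >= 3.8)
from statistics import mean, pstdev

def rtd_vector(seq, k=2, alphabet="ACGT"):
    """Vector of (mean, std) of return times for each of the 4**k possible k-mers."""
    vector = []
    for kmer in ("".join(p) for p in product(alphabet, repeat=k)):
        # positions where this k-mer occurs
        pos = [i for i in range(len(seq) - k + 1) if seq[i:i + k] == kmer]
        # return times = gaps (in residues) between successive occurrences
        gaps = [b - a for a, b in zip(pos, pos[1:])]
        if gaps:
            vector.extend((mean(gaps), pstdev(gaps)))
        else:
            vector.extend((0.0, 0.0))   # fewer than two occurrences: no return time
    return vector

# Euclidean distance between the 2*4**k-dimensional RTD vectors of two sequences
d = dist(rtd_vector("ACGTACGTGACGT"), rtd_vector("ACGTTTGTGACGA"))
```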
Frequency chaos game representation (FCGR)
The FCGR methods have evolved from the chaos game representation (CGR) technique, which provides a scale-independent representation for genomic sequences. The CGR can be divided by grid lines, where each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence. Such a representation of CGRs is termed Frequency Chaos Game Representation (FCGR), and it represents each sequence as an FCGR. The pairwise distance between the FCGRs of sequences can be calculated using the Pearson distance, the Hamming distance or the Euclidean distance.
Spaced-word frequencies
While most alignment-free algorithms compare the word composition of sequences, Spaced Words uses a pattern of match and don't-care positions. The occurrence of a spaced word in a sequence is then defined by the characters at the match positions only, while the characters at the don't-care positions are ignored. Instead of comparing the frequencies of contiguous words in the input sequences, this approach compares the frequencies of the spaced words according to the pre-defined pattern. The pre-defined pattern can be selected by analysing the variance of the number of matches, the probability of first occurrence under several models, or the Pearson correlation coefficient between the expected word frequency and the true alignment distance.
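A small Python sketch of spaced-word counting (the binary pattern "1101" and the sequences are arbitrary examples for illustration; real applications select the pattern by the criteria mentioned above):

```python
from collections import Counter

def spaced_word_profile(seq, pattern="1101"):
    """Relative frequencies of spaced words under a binary pattern.

    '1' marks a match position (character is kept), '0' a don't-care
    position (character is ignored).
    """
    match_positions = [i for i, c in enumerate(pattern) if c == "1"]
    words = Counter(
        "".join(seq[i + j] for j in match_positions)
        for i in range(len(seq) - len(pattern) + 1)
    )
    total = sum(words.values())
    return {w: c / total for w, c in words.items()}

profile_a = spaced_word_profile("ACGTACGTGACGT")
profile_b = spaced_word_profile("ACGTTTGTGACGA")
```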
Methods based on length of common substrings
The methods in this category employ the similarity and differences of substrings in a pair of sequences. These algorithms
were mostly used for string processing in computer science.
Average common substring (ACS)
In this approach, for a chosen pair of sequences (A and B, of lengths n and m respectively), the longest substring starting at some position of one sequence (A) that exactly matches a substring of the other sequence (B) at any position is identified. In this way, the lengths of the longest substrings starting at different positions in sequence A and having exact matches at some positions in sequence B are calculated. All these lengths are averaged to derive a measure of similarity; intuitively, the larger this average, the more similar the two sequences are. To account for the differences in the lengths of the sequences, the measure is normalized; this gives the similarity measure between the sequences.
In order to derive a distance measure, the inverse of the similarity measure is taken, and a correction term is subtracted from it to ensure that the distance from a sequence to itself is zero.
This measure is not symmetric, so the two directed values are averaged, which gives the final ACS measure between the two strings (A and B). The subsequence/substring search can be performed efficiently by using suffix trees.
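A naive Python sketch of the ACS similarity just described (for illustration only; it uses brute-force substring search instead of the suffix trees mentioned above, and omits the normalisation and the conversion to a distance):

```python
def longest_match_length(a, i, b):
    """Length of the longest prefix of a[i:] that occurs somewhere in b."""
    length = 0
    while i + length < len(a) and a[i:i + length + 1] in b:
        length += 1
    return length

def acs(a, b):
    """Average, over all start positions in a, of the longest substring
    starting there that also occurs in b (naive O(n^2 * m) version)."""
    return sum(longest_match_length(a, i, b) for i in range(len(a))) / len(a)

similarity_ab = acs("ACGTACGTGACGT", "ACGTTTGTGACGA")
similarity_ba = acs("ACGTTTGTGACGA", "ACGTACGTGACGT")   # note: not symmetric
```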
k-mismatch average common substring approach (kmacs)
This approach is a generalization of the ACS approach. To define the distance between two DNA or protein sequences, kmacs estimates for each position i of the first sequence the longest substring starting at i and matching a substring of the second sequence with up to k mismatches. It defines the average of these values as a measure of similarity between the sequences and turns this into a symmetric distance measure. kmacs does not compute exact k-mismatch substrings, since this would be computationally too costly, but approximates such substrings.
Mutation distances (Kr)
This approach is closely related to the ACS approach; it calculates the number of substitutions per site between two DNA sequences using the shortest absent substring.
Length distribution of k-mismatch common substrings
This approach uses the program kmacs to calculate longest common substrings with up to k mismatches for a pair of DNA sequences. The phylogenetic distance between the sequences can then be estimated from a local maximum in the length distribution of the k-mismatch common substrings.
Methods based on the number of (spaced) word matches
D2S and D2*
These approaches are variants of the D2 statistic, which counts the number of k-mer matches between two sequences. They improve on the simple D2 statistic by taking the background distribution of the compared sequences into account.
MASH
This is an extremely fast method that uses the MinHash bottom sketch strategy for estimating the Jaccard index of the multi-sets of k-mers of two input sequences. That is, it estimates the ratio of k-mer matches to the total number of k-mers of the sequences. This can be used, in turn, to estimate the evolutionary distances between the compared sequences, measured as the number of substitutions per sequence position since the sequences evolved from their last common ancestor.
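A toy Python sketch of the bottom-sketch MinHash idea behind this approach (illustrative only: the real tool uses different hash functions, sketch sizes and k-mer lengths, and additionally converts the Jaccard estimate into an evolutionary distance; the SHA-1-based hashing, k=4 and s=200 below are assumptions made for this example):

```python
from hashlib import sha1

def sketch(seq, k=4, s=200):
    """Bottom-s MinHash sketch: the s smallest hash values of the k-mer set."""
    hashes = {int.from_bytes(sha1(seq[i:i + k].encode()).digest()[:8], "big")
              for i in range(len(seq) - k + 1)}
    return sorted(hashes)[:s]

def jaccard_estimate(sk_a, sk_b, s=200):
    """Estimate the Jaccard index from the merged bottom-s sketch."""
    merged = sorted(set(sk_a) | set(sk_b))[:s]
    shared = len(set(merged) & set(sk_a) & set(sk_b))
    return shared / len(merged)

j = jaccard_estimate(sketch("ACGTACGTGACGT" * 10), sketch("ACGTTTGTGACGA" * 10))
```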
Slope-Tree
This approach calculates a distance value between two protein sequences based on the decay of the number of k-mer matches as k increases.
Slope-SpaM
This method calculates the number of k-mer or spaced-word matches (SpaM) for different values of the word length or of the number of match positions in the underlying pattern, respectively. The slope of an affine-linear function that depends on the word length is calculated to estimate the Jukes–Cantor distance between the input sequences.
Skmer
Skmer calculates distances between species from unassembled sequencing reads. Similar to MASH, it uses the Jaccard index on the sets of k-mers from the input sequences. In contrast to MASH, the program is still accurate for low sequencing coverage, so it can be used for genome skimming.
Methods based on micro-alignments
Strictly speaking, these methods are not alignment-free. They use simple gap-free micro-alignments in which sequences are required to match at certain pre-defined positions. The remaining positions of the micro-alignments, where mismatches are allowed, are then used for phylogeny inference.
Co-phylog
This method searches for so-called structures that are defined as pairs of k-mer matches between two DNA sequences that are one position apart in both sequences. The two k-mer matches are called the context, and the position between them is called the object. Co-phylog then defines the distance between two sequences as the fraction of such structures for which the two nucleotides in the object are different. The approach can be applied to unassembled sequencing reads.
andi
andi estimates phylogenetic distances between genomic sequences based on ungapped local alignments that are flanked by maximal exact word matches. Such word matches can be efficiently found using suffix arrays. The gapfree alignments between the exact word matches are then used to estimate phylogenetic distances between genome sequences. The resulting distance estimates are accurate for up to around 0.6 substitutions per position.
Filtered Spaced-Word Matches (FSWM)
FSWM uses a pre-defined binary pattern P representing so-called match positions and don't-care positions. For a pair of input DNA sequences, it then searches for spaced-word matches w.r.t. P, i.e. for local gap-free alignments with matching nucleotides at the match positions of P and possible mismatches at the don't-care positions. Spurious low-scoring spaced-word matches are discarded, evolutionary distances between the input sequences are estimated based on the nucleotides aligned to each other at the don't-care positions of the remaining, homologous spaced-word matches. FSWM has been adapted to estimate distances based on unassembled NGS reads, this version of the program is called Read-SpaM.
Prot-SpaM
Prot-SpaM (Proteome-based Spaced-word Matches) is an implementation of the FSWM algorithm for partial or whole proteome sequences.
Multi-SpaM
Multi-SpaM (Multiple Spaced-word Matches) is an approach to genome-based phylogeny reconstruction that extends the FSWM idea to multiple sequence comparison. Given a binary pattern P of match positions and don't-care positions, the program searches for P-blocks, i.e. local gap-free four-way alignments with matching nucleotides at the match positions of P and possible mismatches at the don't-care positions. Such four-way alignments are randomly sampled from a set of input genome sequences. For each P-block, an unrooted tree topology is calculated using RAxML. The program Quartet MaxCut is then used to calculate a supertree from these trees.
Methods based on information theory
Information Theory has provided successful methods for alignment-free sequence analysis and comparison. The existing applications of information theory include global and local characterization of DNA, RNA and proteins, estimating genome entropy to motif and region classification. It also holds promise in gene mapping, next-generation sequencing analysis and metagenomics.
Base–base correlation (BBC)
Base–base correlation (BBC) converts the genome sequence into a unique 16-dimensional numeric vector, with one component for each ordered pair of bases, computed from the following quantities.
Here $P_i$ and $P_j$ denote the probabilities of bases i and j in the genome, and $P_{ij}(\ell)$ indicates the joint probability of bases i and j occurring at distance ℓ in the genome. The parameter K indicates the maximum distance considered between the bases i and j. The variation in the values of the 16 parameters reflects variation in the genome content and length.
Information correlation and partial information correlation (IC-PIC)
The IC-PIC (information correlation and partial information correlation) based method employs the base correlation property of the DNA sequence. IC and PIC are calculated over a chosen range of distances between bases, and the resulting values are assembled into a final vector for each sequence.
The pairwise distance between sequences is calculated using the Euclidean distance measure. The distance matrix thus obtained can be used to construct a phylogenetic tree using clustering algorithms like neighbor-joining, UPGMA, etc.
Compression
Examples are effective approximations to Kolmogorov complexity, for example Lempel–Ziv complexity. In general, compression-based methods use the mutual information between the sequences. This is expressed in conditional Kolmogorov complexity, that is, the length of the shortest self-delimiting program required to generate a string given prior knowledge of the other string. This measure has a relation to measuring k-words in a sequence, as they can easily be used to generate the sequence. Compression can be a computationally intensive method. The theoretical basis for the Kolmogorov complexity approach was laid by Bennett, Gacs, Li, Vitanyi, and Zurek (1998), who proposed the information distance. Since the Kolmogorov complexity is incomputable, it is approximated by compression algorithms; the better they compress, the better they are. Li, Badger, Chen, Kwong, Kearney, and Zhang (2001) used a non-optimal but normalized form of this approach, and the optimal normalized form appeared in Li, Chen, Li, Ma, and Vitanyi (2003) and was treated more extensively and proven by Cilibrasi and Vitanyi (2005).
Otu and Sayood (2003) used the Lempel-Ziv complexity method to construct five different distance measures for phylogenetic tree construction.
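As a generic illustration of a compression-based distance (not the specific measures of the works cited above), the following Python sketch computes a normalized compression distance using zlib as the compressor; the choice of compressor and compression level is an assumption made for this example:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes, a rough stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

d = ncd(b"ACGTACGTGACGT" * 20, b"ACGTTTGTGACGA" * 20)
```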
Context modeling compression
In the context modeling complexity the next-symbol predictions, of one or more statistical models, are combined or competing to yield a prediction that is based on events recorded in the past. The algorithmic information content derived from each symbol prediction can be used to compute algorithmic information profiles with a time proportional to the length of the sequence. The process has been applied to DNA sequence analysis.
Methods based on graphical representation
Iterated maps
The use of iterated maps for sequence analysis was first introduced by HJ Jeffrey in 1990 when he proposed applying the Chaos Game to map genomic sequences into a unit square. That report coined the procedure Chaos Game Representation (CGR). However, only 3 years later this approach was first dismissed as a projection of a Markov transition table by N Goldman. This objection was overruled by the end of that decade when the opposite was found to be the case – that CGR bijectively maps Markov transitions into a fractal, order-free (degree-free) representation. The realization that iterated maps provide a bijective map between the symbolic space and numeric space led to the identification of a variety of alignment-free approaches to sequence comparison and characterization. These developments were reviewed in late 2013 by JS Almeida. A number of web apps, such as https://github.com/usm/usm.github.com/wiki, are available to demonstrate how to encode and compare arbitrary symbolic sequences in a manner that takes full advantage of modern MapReduce distribution developed for cloud computing.
Comparison of alignment based and alignment-free methods
Applications of alignment-free methods
Genomic rearrangements
Molecular phylogenetics
Metagenomics
Next generation sequence data analysis
Epigenomics
Barcoding of species
Population genetics
Horizontal gene transfer
Sero/genotyping of viruses
Allergenicity prediction
SNP discovery
Recombination detection
Viral Classification
Archaea Taxonomic Identification
Taxonomic Classification
Temporal Analysis
Low-complexity Regions Identification
List of web servers/software for alignment-free methods
See also
Sequence analysis
Multiple sequence alignment
Phylogenomics
Bioinformatics
Metagenomics
Next-generation sequencing
Population genetics
SNPs
Recombination detection program
Genome skimming
References
Bioinformatics
Computational biology | Alignment-free sequence analysis | Engineering,Biology | 3,828 |
75,333,139 | https://en.wikipedia.org/wiki/C17H18N6 | The molecular formula C17H18N6 may refer to:
Deuruxolitinib
Ruxolitinib | C17H18N6 | Chemistry | 40 |
14,346,669 | https://en.wikipedia.org/wiki/Relativistic%20speed | Relativistic speed refers to speed at which relativistic effects become significant to the desired accuracy of measurement of the phenomenon being observed. Relativistic effects are those discrepancies between values calculated by models considering and not considering relativity. Related words are velocity, rapidity, and celerity which is proper velocity. Speed is a scalar, being the magnitude of the velocity vector which in relativity is the four-velocity and in three-dimension Euclidean space a three-velocity. Speed is empirically measured as average speed, although current devices in common use can estimate speed over very small intervals and closely approximate instantaneous speed. Non-relativistic discrepancies include cosine error which occurs in speed detection devices when only one scalar component of the three-velocity is measured and the Doppler effect which may affect observations of wavelength and frequency. Relativistic effects are highly non-linear and for everyday purposes are insignificant because the Newtonian model closely approximates the relativity model. In special relativity the Lorentz factor is a measure of time dilation, length contraction and the relativistic mass increase of a moving object.
See also
Lorentz factor
Relative velocity
Relativistic beaming
Relativistic jet
Relativistic mass
Relativistic particle
Relativistic plasma
Relativistic wave equations
Special relativity
Ultrarelativistic limit
References
Speed
Velocity | Relativistic speed | Physics | 288 |
5,311,525 | https://en.wikipedia.org/wiki/Phonovision | Phonovision was a patented concept to create pre-recorded mechanically scanned television recordings on gramophone records. Attempts at developing Phonovision were undertaken in the late 1920s in London by its inventor, Scottish television pioneer John Logie Baird. The objective was not simply to record video, but to record it synchronously, as Baird intended playback from an inexpensive playback device, which he called a 'Phonovisor'. Baird stated that he had several records made of the sound of the vision signal but that the quality was poor. Unlike Baird's other experiments (including stereoscopy, colour and infra-red night-vision), there is no evidence of him having demonstrated playback of pictures, though he did play back the sound of the vision signal to audiences. Baird moved on leaving behind several discs in the hands of museums and favoured company members. Until 1982, this was the extent of knowledge regarding Phonovision.
Discoveries and restoration
From 1982, Donald F. McLean undertook a forensic-level investigation that identified a total of five different disc recordings dated 1927-28 that closely aligned with the principles of Baird's Phonovision patents. In addition, study of the distortions in the recordings led to a new understanding of the mechanical problems Baird had encountered explaining why these discs were never good enough for picture playback. The problems were largely corrected by software, and the resultant images project a far better quality image than what would have been seen in Baird's laboratories at the time.
Despite its technical problems, Phonovision remains the very earliest means of recording a television signal. In a sense, it can be seen as the progenitor of other disc-based systems, such as the German TelDec system of the early 1970s and the USA's Capacitance Electronic Disc, also known as SelectaVision.
The Experimental Phonovision Discs (1927-28)
The earliest surviving Phonovision disc depicts one of the dummy heads that Baird employed for tests. It was recorded on 20 September 1927 and most likely was used during tests prior to Baird's trans-Atlantic television demonstration in February 1928. This disc and the surviving documentation from that demonstration are held at the University of Glasgow Archives and Special Collections.
The Phonovision recordings of 10 January 1928 reveal the earliest recording of a human face identified as one of Baird's staff, Wally Fowlkes. On 28 March 1928, a recording labelled 'Miss Pounsford' was made which is, in many ways, the best of the experimental discs. In the 1990s, her relatives identified her as Mabel Pounsford, who had always claimed that she had been a secretary working for J L Baird.
Recordings from the BBC 30-line Television Service (1932-35)
Baird's experimental discs are not the only such video recordings that have survived. In April 1933, an early television enthusiast used a Silvatone home sound recording outfit that indented a signal-modulated groove into a bare aluminium disc thereby capturing the video signal from a BBC 30-line 12.5 frames per second live broadcast. This disc preserves about four minutes of video without the corresponding audio from what McLean identified as the BBC programme 'Looking In'. This was advertised by the BBC as 'the world's first television revue'. McLean digitally recovered the televised image, revealing a high level of production values that belies the conventional wisdom that all mechanical television programming was very simple and static and offered little entertainment value.
Several other undated domestic recordings of 30-line television were discovered in 1998 and show high quality 30-line television (most likely from the BBC 30-line Television Service) of singers that include one recognised by Betty Bolton as being herself.
A comprehensive history of the recovery of Phonovision as well as the domestic recordings of the vision signal from the BBC 30-line television service was published in 2000.
References
External links
World's Earliest Television Recordings - Restored!
Television technology | Phonovision | Technology | 796 |
34,149,964 | https://en.wikipedia.org/wiki/Genic%20capture | Genic capture is a hypothesis explaining the maintenance of genetic variance in traits under sexual selection. A classic problem in sexual selection is the fixation of alleles that are beneficial under strong selection, thereby eliminating the benefits of mate choice. Genic capture resolves this paradox by suggesting that additive genetic variance of sexually selected traits reflects the genetic variance in total condition. A deleterious mutation anywhere in the genome will adversely affect condition, and thereby adversely affect a condition-dependent sexually selected trait. Genic capture therefore resolves the lek paradox by proposing that recurrent deleterious mutation maintains additive genetic variance in fitness by incorporating the entire mutation load of an individual. Thus any condition-dependent trait "captures" the overall genetic variance in condition. Rowe and Houle argued that genic capture ensures that good genes will become a central feature of the evolution of any sexually selected trait.
Condition
The key quantity for genic capture is vaguely defined as "condition." The hypothesis only defines condition as a quantity that correlates tightly with overall fitness, such that directional selection will always increase average condition over time. Condition should, in general, reflect overall energy acquisition, such that life-history variation reflects differential allocation to survival and sexual signalling. Genetic variation in condition should be very broadly affected by any changes in the genome. Close to equilibrium any mutation should be deleterious, thereby leading to non-zero overall mutation rate, maintaining variance in fitness.
Rowe and Houle's Quantitative Genetic Model
Rowe and Houle's simple model defines a trait as the result of three heritable components, a condition-independent component , epistatic modification and condition, suggesting the following function for a trait:
where is the condition of an individual. Loci contributing to are loosely linked and independent of loci contributing to and . Rowe and Houle then find the expected variance of and ignored higher-order terms (i.e. products of variances):
where represents the genetic variance in the signal and analogously for other traits. Under directional selection on , the loci underlying and may lose all genetic variance. However, there is no qualitative difference in directional selection on between stabilizing selection (i.e. no sexual selection) and directional selection on . Therefore, the second term will remain positive (due to biased mutation) and dominate under sexual selection.
Other Applications
Genic capture can also play a role in accelerating adaptation to new environments.
Comparisons
Genic capture was proposed as a simpler alternative to another theory explaining the lek paradox that proposed that sexual selection creates disruptive selection, i.e. positive selection for genetic variance. Genic capture does not require any particular fitness function.
References
Evolutionary biology
Sexual selection | Genic capture | Biology | 548 |
4,664,852 | https://en.wikipedia.org/wiki/Salt%20and%20light | Salt and light are images used by Jesus in the Sermon on the Mount, one of the main teachings of Jesus on morality and discipleship. These images are in Matthew 5:13, 14, 15 and 16
The general theme of Matthew 5:13–16 is promises and expectations, and these expectations follow the promises of the first part.
The first verse of this passage introduces the phrase "salt of the earth" (Matthew 5:13).
The second verse introduces "City upon a Hill" (Matthew 5:14).
The later verses refer to not hiding a lamp under a bushel, which also occurs in Luke 8:16, and the phrase "Light of the World", which also appears in John 8:12.
See also
Five Discourses of Matthew
Matthew 5:13
Salt in the Bible
Salt of the earth
References
Gospel of Luke
Gospel of Matthew
Light and religion
Metaphors
New Testament words and phrases
Sayings of Jesus
Sermon on the Mount
Matthew 5
Edible salt | Salt and light | Chemistry | 182 |
26,712,298 | https://en.wikipedia.org/wiki/Undefined%20value | In computing (particularly, in programming), undefined value is a condition where an expression does not have a correct value, although it is syntactically correct. An undefined value must not be confused with empty string, Boolean "false" or other "empty" (but defined) values. Depending on circumstances, evaluation to an undefined value may lead to exception or undefined behaviour, but in some programming languages undefined values can occur during a normal, predictable course of program execution.
Dynamically typed languages usually treat undefined values explicitly when possible. For instance, Perl has the undef operator, which can "assign" such a value to a variable. In other type systems an undefined value can mean an unknown, unpredictable value, or merely a program failure on an attempt at its evaluation. Nullable types offer an intermediate approach; see below.
Handling
The value of a partial function is undefined when its argument is out of its domain of definition. This includes numerous arithmetical cases, such as division by zero, or the square root or logarithm of a negative number. Another common example is accessing an array with an index which is out of bounds, as is the value in an associative array for a key which it does not contain. There are various ways that these situations are handled in practice:
Reserved value
In applications where undefined values must be handled gracefully, it is common to reserve a special null value which is distinguishable from normal values. This resolves the difficulty by creating a defined value to represent the formerly undefined case. There are many examples of this:
The C standard I/O library reserves the special value EOF to indicate that no more input is available. The getchar() function returns the next available input character, or EOF if there is no more available. (The ASCII character code defines a null character for this purpose, but the standard I/O library wishes to be able to send and receive null characters, so it defines a separate EOF value.)
The IEEE 754 floating-point arithmetic standard defines a special "not a number" value which is returned when an arithmetic operation has no defined value. Examples are division by zero, or the square root or logarithm of a negative number.
Structured Query Language has a special NULL value to indicate missing data.
The Perl language lets the definedness of an expression be checked via the defined() predicate.
Many programming languages support the concept of a null pointer distinct from any valid pointer, and often used as an error return.
Some languages allow most types to be nullable, for example C#.
Most Unix system calls return the special value −1 to indicate failure.
While dynamically typed languages often ensure that uninitialized variables default to a null value, statically typed languages often do not, and distinguish null values (which are well-defined) from uninitialized values (which are not).
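As an informal illustration of the "reserved value" idea in a dynamically typed language (Python is used here purely for convenience; the article's own examples come from C, Perl and SQL):

```python
import math

prices = {"apple": 1.20}

# A reserved "null" value: dict.get returns the sentinel None for a missing
# key instead of failing, so the absence of a value is itself well defined.
missing = prices.get("pear")
print(missing is None)                 # True

# IEEE 754 "not a number" as a reserved value for meaningless arithmetic results.
nan = float("nan")
print(math.isnan(nan), nan == nan)     # True False  (NaN compares unequal to itself)
```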
Exception handling
Some programming languages have a concept of exception handling for dealing with failure to return a value. The function returns in a defined way, but it does not return a value, so there is no need to invent a special value to return.
A variation on this is signal handling, which is done at the operating system level and not integrated into a programming language. Signal handlers can attempt some forms of recovery, such as terminating part of a computation, but without as much flexibility as fully integrated exception handling.
Non-returning functions
A function which never returns has an undefined value because the value can never be observed. Such functions are formally assigned the bottom type, which has no values. Examples fall into two categories:
Functions which loop forever. This may arise deliberately, or as a result of a search for something which will never be found. (For example, in the case of failed μ operator in a partial recursive function.)
Functions which terminate the computation, such as the exit system call. From within the program, this is indistinguishable from the preceding case, but it makes a difference to the invoker of the program.
Undefined behaviour
All of the preceding methods of handling undefined values require that the undefinedness be detected. That is, the called function determines that it cannot return a normal result and takes some action to notify the caller. At the other end of the spectrum, undefined behaviour places the onus on the caller to avoid calling a function with arguments outside of its domain. There is no limit on what might happen. At best, an easily detectable crash; at worst, a subtle error in a seemingly unrelated computation.
(The formal definition of "undefined behaviour" includes even more extreme possibilities, including things like "halt and catch fire" and "make demons fly out of your nose".)
The classic example is a dangling pointer reference. It is very fast to dereference a valid pointer, but can be very complex to determine if a pointer is valid. Therefore, computer hardware and low-level languages such as C do not attempt to validate pointers before dereferencing them, instead passing responsibility to the programmer. This offers speed at the expense of safety.
Undefined value sensu stricto
The strict definition of an undefined value is a superficially valid (non-null) output which is meaningless but does not trigger undefined behaviour. For example, passing a negative number to the fast inverse square root function will produce a number. Not a very useful number, but the computation will complete and return something.
Undefined values occur particularly often in hardware. If a wire is not carrying useful information, it still exists and has some voltage level. The voltage should not be abnormal (e.g. not a damaging overvoltage), but the particular logic level is unimportant.
The same situation occurs in software when a data buffer is provided but not completely filled. For example, the C library strftime function converts a timestamp to human-readable form in a supplied output buffer. If the output buffer is not large enough to hold the result, an error is returned and the buffer's contents are undefined.
In the other direction, the open system call in POSIX takes three arguments: a file name, some flags, and a file mode. The file mode is only used if the flags include O_CREAT. It is common to use a two-argument form of open, which provides an undefined value for the file mode, when O_CREAT is omitted.
Sometimes it is useful to work with such undefined values in a limited way. The overall computation can still be well-defined if the undefined value is later ignored.
As an example of this, the C language permits converting a pointer to an integer, although the numerical value of that integer is undefined. It may still be useful for debugging, for comparing two pointers for equality, or for creating an XOR linked list.
Safely handling undefined values is important in optimistic concurrency control systems, which detect race conditions after the fact. For example, reading a shared variable protected by seqlock will produce an undefined value before determining that a race condition happened. It will then discard the undefined data and retry the operation. This produces a defined result as long as the operations performed on the undefined values do not produce full-fledged undefined behaviour.
Other examples of undefined values being useful are random number generators and hash functions. The specific values returned are undefined, but they have well-defined properties and may be used without error.
Notation
In computability theory, undefinedness of an expression is denoted as expr↑, and definedness as expr↓.
See also
Defined and undefined (mathematics)
Null (SQL)
References
Software anomalies
Theory of computation | Undefined value | Technology | 1,597 |
1,164,286 | https://en.wikipedia.org/wiki/Sea%20lettuce | The sea lettuces comprise the genus Ulva, a group of edible green algae that is widely distributed along the coasts of the world's oceans. The type species within the genus Ulva is Ulva lactuca, lactuca being Latin for "lettuce". The genus also includes the species previously classified under the genus Enteromorpha, the former members of which are known under the common name green nori.
Description
Individual blades of Ulva can grow to be more than 400 mm (16 in) in size, but this occurs only when the plants are growing in sheltered areas. A macroscopic alga which is light to dark green in colour, it is attached by disc holdfast. Their structure is a leaflike flattened thallus.
Nutrition and contamination
Sea lettuce is eaten by a number of different sea animals, including manatees and the sea slugs known as sea hares. Many species of sea lettuce are a food source for humans in Scandinavia, Great Britain, Ireland, China, and Japan (where this food is known as aosa). Sea lettuce as a food for humans is eaten raw in salads and cooked in soups. It is high in protein, soluble dietary fiber, and a variety of vitamins and minerals, especially iron. However, contamination with toxic heavy metals at certain sites where it can be collected makes it dangerous for human consumption.
Aquarium trade
Sea lettuce species are commonly found in the saltwater aquarium trade, where the plants are valued for their high nutrient uptake and edibility. Many reef aquarium keepers use sea lettuce species in refugia or grow it as a food source for herbivorous fish. Sea lettuce is very easy to keep, tolerating a wide range of lighting and temperature conditions. In the refugium, sea lettuce can be attached to live rock or another surface, or simply left to drift in the water.
Health concerns
In August 2009, unprecedented amounts of these algae washed up on the beaches of Brittany, France, causing a major public health scare as it decomposed. The rotting leaves produced large quantities of hydrogen sulfide, a toxic gas. In one incident near Saint-Michel-en-Grève, a horse rider lost consciousness and his horse died after breathing the seaweed fumes; in another, a lorry driver driving a load of decomposing sea lettuce passed out, crashed, and died, with toxic fumes claimed to be the cause. Environmentalists blamed the phenomenon on excessive nitrogenous compounds washed out to sea from improper disposal of pig and poultry animal waste from industrial farms.
Species
Species in the genus Ulva include:
Accepted species
Ulva acanthophora (Kützing) Hayden, Blomster, Maggs, P.C. Silva, Stanhope & J.R. Waaland, 2003
Ulva anandii Amjad & Shameel, 1993
Ulva arasakii Chihara, 1969
Ulva atroviridis Levring, 1938
Ulva australis Areschoug, 1854
Ulva beytensis Thivy & Sharma, 1966
Ulva bifrons Ardré, 1967
Ulva brevistipita V.J. Chapman, 1956
Ulva burmanica (Zeller) De Toni, 1889
Ulva californica Wille, 1899
Ulva chaetomorphoides (Børgesen) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003
Ulva clathrata (Roth) C. Agardh, 1811
Ulva compressa Linnaeus, 1753
Ulva conglobata Kjellman, 1897
Ulva cornuta Lightfoot, 1777
Ulva covelongensis V. Krishnamurthy & H. Joshi, 1969
Ulva crassa V.J. Chapman, 1956
Ulva crassimembrana (V.J. Chapman) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003
Ulva curvata (Kützing) De Toni, 1889
Ulva denticulata P.J.L. Dangeard, 1959
Ulva diaphana Hudson, 1778
Ulva elegans Gayral, 1960
Ulva enteromorpha Le Jolis, 1863
Ulva erecta (Lyngbye) Fries
Ulva expansa (Setchell) Setchell & N.L. Gardner, 1920
Ulva fasciata Delile, 1813
Ulva flexuosa Wulfen, 1803
Ulva geminoidea V.J. Chapman, 1956
Ulva gigantea (Kützing) Bliding, 1969
Ulva grandis Saifullah & Nizamuddin, 1977
Ulva hookeriana (Kützing) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland
Ulva hopkirkii (M'Calla ex Harvey) P. Crouan & H. Crouan
Ulva howensis (A.H.S. Lucas) Kraft, 2007
Ulva indica Roth, 1806
Ulva intestinalis Linnaeus, 1753
Ulva intestinaloides (R.P.T. Koeman & Hoek) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003
Ulva javanica N.L. Burman, 1768
Ulva kylinii (Bliding) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003
Ulva lactuca Linnaeus, 1753
Ulva laetevirens J.E. Areschoug, 1854
Ulva laingii V.J. Chapman, 1956
Ulva linearis P.J.L. Dangeard, 1957
Ulva linza Linnaeus, 1753
Ulva lippii Lamouroux
Ulva litoralis Suhr ex Kützing
Ulva littorea Suhr
Ulva lobata (Kützing) Harvey, 1855
Ulva maeotica (Proshkina-Lavrenko) P.M.Tsarenko, 2011
Ulva marginata (J. Agardh) Le Jolis
Ulva micrococca (Kützing) Gobi
Ulva mutabilis Föyn, 1958
Ulva neapolitana Bliding, 1960
Ulva nematoidea Bory de Saint-Vincent, 1828
Ulva ohnoi Hiraoka & Shimada, 2004
Ulva olivascens P.J.L. Dangeard
Ulva pacifica Endlicher
Ulva papenfussii Pham-Hoang Hô, 1969
Ulva parva V.J. Chapman, 1956
Ulva paschima Bast
Ulva patengensis Salam & Khan, 1981
Ulva percursa (C. Agardh) C. Agardh
Ulva pertusa Kjellman, 1897
Ulva phyllosa (V.J. Chapman) Papenfuss
Ulva polyclada Kraft, 2007
Ulva popenguinensis P.J.L. Dangeard, 1958
Ulva porrifolia (S.G. Gmelin) J.F. Gmelin
Ulva profunda W.R. Taylor, 1928
Ulva prolifera O.F.Müller, 1778
Ulva pseudocurvata Koeman & Hoek, 1981
Ulva pseudolinza (R.P.T. Koeman & Hoek) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003
Ulva pulchra Jaasund, 1976
Ulva quilonensis Sindhu & Panikkar, 1995
Ulva radiata (J. Agardh) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003
Ulva ralfsii (Harvey) Le Jolis, 1863
Ulva ranunculata Kraft & A.J.K. Millar, 2000
Ulva reticulata Forsskål, 1775
Ulva rhacodes (Holmes) Papenfuss, 1960
Ulva rigida C. Agardh, 1823
Ulva rotundata Bliding, 1968
Ulva saifullahii Amjad & Shameel, 1993
Ulva serrata A.P.de Candolle
Ulva simplex (K.L. Vinogradova) Hayden, Blomster, Maggs, P.C. Silva, M.J. Stanhope & J.R. Waaland, 2003
Ulva sorensenii V.J. Chapman, 1956
Ulva spinulosa Okamura & Segawa, 1936
Ulva stenophylla Setchell & N.L. Gardner, 1920
Ulva sublittoralis Segawa, 1938
Ulva subulata (Wulfen) Naccari
Ulva taeniata (Setchell) Setchell & N.L. Gardner, 1920
Ulva tanneri H.S. Hayden & J.R. Waaland, 2003
Ulva tenera Kornmann & Sahling
Ulva torta (Mertens) Trevisan, 1841
Ulva tuberosa Palisot de Beauvois
Ulva uncialis (Kützing) Montagne, 1850
Ulva uncinata Mohr
Ulva uncinata Mertens
Ulva usneoides Bonnemaison
Ulva utricularis (Roth) C. Agardh
Ulva utriculosa C. Agardh
Ulva uvoides Bory de Saint-Vincent
Ulva ventricosa A.P.de Candolle
Nomina dubia
Ulva costata Wollny, 1881
Ulva repens Clemente, 1807
Ulva tetragona A.P.de Candolle, 1807
A newly discovered Indian endemic species of Ulva with a tubular thallus, indistinguishable from Ulva intestinalis, was formally established in 2014 as Ulva paschima Bast.
Ten new species have been discovered in New Caledonia: Ulva arbuscula, Ulva planiramosa, Ulva batuffolosa, Ulva tentaculosa, Ulva finissima, Ulva pluriramosa, Ulva scolopendra and Ulva spumosa.
See also
Green laver
References
External links
Marine botany: Ulva
Toxic seaweed clogs French coast (BBC)
Other References
Beer, Sven (2023). Photosynthetic traits of ubiquitous and prolific macroalga Ulva (Chlorophyta): a review. European Journal of Phycology 58: 390–398.
Ulvaceae
Edible seaweeds
Edible algae
Taxa named by Carl Linnaeus | Sea lettuce | Biology | 2,300 |
496,315 | https://en.wikipedia.org/wiki/Operator%20topologies | In the mathematical field of functional analysis there are several standard topologies which are given to the algebra of bounded linear operators on a Banach space .
Introduction
Let $(T_n)$ be a sequence of linear operators on the Banach space $X$. Consider the statement that $T_n$ converges to some operator $T$ on $X$.
This could have several different meanings:
If $\|T_n - T\| \to 0$, that is, the operator norm of $T_n - T$ (the supremum of $\|T_n x - T x\|$, where $x$ ranges over the unit ball of $X$) converges to 0, we say that $T_n \to T$ in the uniform operator topology.
If $T_n x \to T x$ for all $x \in X$, then we say $T_n \to T$ in the strong operator topology.
Finally, suppose that for all $x \in X$ we have $T_n x \to T x$ in the weak topology of $X$. This means that $F(T_n x) \to F(T x)$ for all continuous linear functionals $F$ on $X$. In this case we say that $T_n \to T$ in the weak operator topology.
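A standard example separating these modes of convergence (added here as a supplementary illustration, not part of the original text) is the unilateral shift $S$ on the Hilbert space $\ell^2(\mathbb{N})$, defined by $S(x_1, x_2, \dots) = (0, x_1, x_2, \dots)$. Then

$$\|(S^*)^n x\|^2 = \sum_{k > n} |x_k|^2 \to 0 \quad \text{for every } x, \qquad \text{while } \|S^n\| = 1 \text{ for all } n,$$

so $(S^*)^n \to 0$ in the strong operator topology but not in the uniform (norm) topology; similarly $S^n \to 0$ in the weak operator topology, since $\langle S^n x, y \rangle \to 0$ for all $x, y$, but not in the strong operator topology, because $\|S^n x\| = \|x\|$ for every $x$.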
List of topologies on B(H)
There are many topologies that can be defined on $B(X)$ besides the ones used above; most are at first only defined when $X$ is a Hilbert space, even though in many cases there are appropriate generalisations.
The topologies listed below are all locally convex, which implies that they are defined by a family of seminorms.
In analysis, a topology is called strong if it has many open sets and weak if it has few open sets, so that the corresponding modes of convergence are, respectively, strong and weak.
(In topology proper, these terms can suggest the opposite meaning, so strong and weak are replaced with, respectively, fine and coarse.)
The diagram on the right is a summary of the relations, with the arrows pointing from strong to weak.
If $H$ is a Hilbert space, the linear space of Hilbert space operators $B(H)$ has a (unique) predual $B(H)_*$, consisting of the trace class operators, whose dual is $B(H)$. The seminorm $p_w(x)$ for $w$ positive in the predual is defined to be $w(x^* x)^{1/2}$.
If $F$ is a vector space of linear maps on the vector space $X$, then $\sigma(X, F)$ is defined to be the weakest topology on $X$ such that all elements of $F$ are continuous.
The norm topology or uniform topology or uniform operator topology is defined by the usual operator norm $\|x\|$ on $B(H)$. It is stronger than all the other topologies below.
The weak (Banach space) topology is $\sigma(B(H), B(H)^*)$, in other words the weakest topology such that all elements of the dual $B(H)^*$ are continuous. It is the weak topology on the Banach space $B(H)$. It is stronger than the ultraweak and weak operator topologies. (Warning: the weak Banach space topology and the weak operator topology and the ultraweak topology are all sometimes called the weak topology, but they are different.)
The Mackey topology or Arens–Mackey topology is the strongest locally convex topology on $B(H)$ such that the dual is $B(H)_*$, and is also the topology of uniform convergence on $\sigma(B(H)_*, B(H))$-compact convex subsets of $B(H)_*$. It is stronger than all topologies below.
The σ-strong-* topology or ultrastrong-* topology is the weakest topology stronger than the ultrastrong topology such that the adjoint map is continuous. It is defined by the family of seminorms $p_w(x)$ and $p_w(x^*)$ for positive elements $w$ of $B(H)_*$. It is stronger than all topologies below.
The σ-strong topology or ultrastrong topology or strongest topology or strongest operator topology is defined by the family of seminorms $p_w(x)$ for positive elements $w$ of $B(H)_*$. It is stronger than all the topologies below other than the strong* topology. (Warning: in spite of the name "strongest topology", it is weaker than the norm topology.)
The σ-weak topology or ultraweak topology or weak-* operator topology or weak-* topology or weak topology or $\sigma(B(H), B(H)_*)$ topology is defined by the family of seminorms $|(w, x)|$ for elements $w$ of $B(H)_*$. It is stronger than the weak operator topology. (Warning: the weak Banach space topology and the weak operator topology and the ultraweak topology are all sometimes called the weak topology, but they are different.)
The strong-* operator topology or strong-* topology is defined by the seminorms $\|x(h)\|$ and $\|x^*(h)\|$ for $h \in H$. It is stronger than the strong and weak operator topologies.
The strong operator topology (SOT) or strong topology is defined by the seminorms $\|x(h)\|$ for $h \in H$. It is stronger than the weak operator topology.
The weak operator topology (WOT) or weak topology is defined by the seminorms $|(x(h_1), h_2)|$ for $h_1, h_2 \in H$. (Warning: the weak Banach space topology, the weak operator topology, and the ultraweak topology are all sometimes called the weak topology, but they are different.)
Relations between the topologies
The continuous linear functionals on $B(H)$ for the weak, strong, and strong* (operator) topologies are the same, and are the finite linear combinations of the linear functionals $x \mapsto (x h_1, h_2)$ for $h_1, h_2 \in H$.
The continuous linear functionals on $B(H)$ for the ultraweak, ultrastrong, ultrastrong* and Arens–Mackey topologies are the same, and are the elements of the predual $B(H)_*$.
By definition, the continuous linear functionals in the norm topology are the same as those in the weak Banach space topology.
This dual is a rather large space with many pathological elements.
On norm bounded sets of $B(H)$, the weak (operator) and ultraweak topologies coincide. This can be seen via, for instance, the Banach–Alaoglu theorem.
For essentially the same reason, the ultrastrong topology is the same as the strong topology on any (norm) bounded subset of $B(H)$. The same is true for the Arens–Mackey topology, the ultrastrong*, and the strong* topology.
In locally convex spaces, closure of convex sets can be characterized by the continuous linear functionals. Therefore, for a convex subset $K$ of $B(H)$, the conditions that $K$ be closed in the ultrastrong*, ultrastrong, and ultraweak topologies are all equivalent and are also equivalent to the condition that for all $r > 0$, $K$ has closed intersection with the closed ball of radius $r$ in the strong*, strong, or weak (operator) topologies.
The norm topology is metrizable and the others are not; in fact they fail to be first-countable.
However, when $H$ is separable, all the topologies above are metrizable when restricted to the unit ball (or to any norm-bounded subset).
Topology to use
The most commonly used topologies are the norm, strong, and weak operator topologies.
The weak operator topology is useful for compactness arguments, because the unit ball is compact by the Banach–Alaoglu theorem.
The norm topology is fundamental because it makes $B(H)$ into a Banach space, but it is too strong for many purposes; for example, $B(H)$ is not separable in this topology.
The strong operator topology could be the most commonly used.
The ultraweak and ultrastrong topologies are better-behaved than the weak and strong operator topologies, but their definitions are more complicated, so they are usually not used unless their better properties are really needed.
For example, the dual space of $B(H)$ in the weak or strong operator topology is too small to have much analytic content.
The adjoint map is not continuous in the strong operator and ultrastrong topologies, while the strong* and ultrastrong* topologies are modifications so that the adjoint becomes continuous. They are not used very often.
The Arens–Mackey topology and the weak Banach space topology are relatively rarely used.
To summarize, the three essential topologies on $B(H)$ are the norm, ultrastrong, and ultraweak topologies.
The weak and strong operator topologies are widely used as convenient approximations to the ultraweak and ultrastrong topologies. The other topologies are relatively obscure.
See also
References
Functional analysis, by Reed and Simon,
Theory of Operator Algebras I, by M. Takesaki (especially chapter II.2)
Functional analysis
Topological vector spaces | Operator topologies | Mathematics | 1,618 |
5,075,716 | https://en.wikipedia.org/wiki/Realcompact%20space | In mathematics, in the field of topology, a topological space is said to be realcompact if it is completely regular Hausdorff and it contains every point of its Stone–Čech compactification which is real (meaning that the quotient field at that point of the ring of real functions is the reals). Realcompact spaces have also been called Q-spaces, saturated spaces, functionally complete spaces, real-complete spaces, replete spaces and Hewitt–Nachbin spaces (named after Edwin Hewitt and Leopoldo Nachbin). Realcompact spaces were introduced by .
Properties
A space is realcompact if and only if it can be embedded homeomorphically as a closed subset in some (not necessarily finite) Cartesian power of the reals, with the product topology. Moreover, a (Hausdorff) space is realcompact if and only if it has the uniform topology and is complete for the uniform structure generated by the continuous real-valued functions (Gillman, Jerison, p. 226).
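Concretely (a sketch of the standard construction), the embedding can be taken to be the evaluation map
\[
e : X \to \mathbb{R}^{C(X)}, \qquad e(x) = \big( f(x) \big)_{f \in C(X)},
\]
where C(X) denotes the continuous real-valued functions on X; for any completely regular Hausdorff space this map is a homeomorphism onto its image, and realcompactness amounts to the image e(X) being closed in the product.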
For example, Lindelöf spaces are realcompact; in particular, all subsets of ℝⁿ are realcompact.
The (Hewitt) realcompactification υX of a topological space X consists of the real points of its Stone–Čech compactification βX. A topological space X is realcompact if and only if it coincides with its Hewitt realcompactification.
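In symbols, one common description of the real points is
\[
\upsilon X \;=\; \{\, p \in \beta X : \text{every } f \in C(X) \text{ extends continuously to a real-valued function on } X \cup \{p\} \,\},
\]
so X is realcompact exactly when υX = X.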
Write C(X) for the ring of continuous real-valued functions on a topological space X. If Y is a realcompact space, then ring homomorphisms from C(Y) to C(X) correspond to continuous maps from X to Y. In particular, the category of realcompact spaces is dual to the category of rings of the form C(X).
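The correspondence is the usual contravariant one, given by precomposition:
\[
\varphi : X \to Y
\quad\longmapsto\quad
\varphi^{*} : C(Y) \to C(X), \qquad \varphi^{*}(f) = f \circ \varphi,
\]
and when Y is realcompact every (unital) ring homomorphism from C(Y) to C(X) arises in this way from a unique continuous map φ.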
A Hausdorff space X is compact if and only if it is realcompact and pseudocompact (see Engelking, p. 153).
See also
Compact space
Paracompact space
Normal space
Pseudocompact space
Tychonoff space
References
Gillman, Leonard; Jerison, Meyer, "Rings of continuous functions". Reprint of the 1960 edition. Graduate Texts in Mathematics, No. 43. Springer-Verlag, New York-Heidelberg, 1976. xiii+300 pp.
Compactness (mathematics)
Properties of topological spaces | Realcompact space | Mathematics | 500 |
17,509,366 | https://en.wikipedia.org/wiki/Alexander%20van%20Oudenaarden | Alexander van Oudenaarden (born 19 March 1970) is a Dutch biophysicist and systems biologist. He is a researcher in stem cell biology, specialising in single-cell techniques. In 2012 he became director of the Hubrecht Institute, and he has been awarded an ERC Advanced Grant three times, in 2012, 2017, and 2022. He was awarded the Spinoza Prize in 2017.
Biography
Van Oudenaarden was born 19 March 1970, in Zuidland, a small town in the Dutch province of South Holland. He studied at the Delft University of Technology, where he obtained an MSc degree in Materials Science and Engineering (cum laude) and an MSc degree in Physics, both in 1993, and subsequently a PhD degree in Physics (cum laude) in 1998 in experimental condensed matter physics, under the supervision of professor J.E. Mooij. He received the Andries Miedema Award (best doctoral research in the field of condensed matter physics in the Netherlands) for his thesis on "Quantum vortices and quantum interference effects in circuits of small tunnel junctions". In 1998, he moved to Stanford University, where he was a postdoctoral researcher in the departments of Biochemistry and of Microbiology & Immunology, working on force generation of polymerising actin filaments in the Theriot lab and
a postdoctoral researcher in the department of Chemistry, working on micropatterning of supported phospholipid bilayers in the Boxer lab. In 2000 he joined the department of Physics at MIT as an assistant professor, was tenured in 2004 and became a full professor. In 2001 he received the NSF CAREER award, and he was both an Alfred Sloan Research Fellow and the Keck Career Development Professor in Biomedical Engineering. In 2012 van Oudenaarden became the director of the Hubrecht Institute as the successor of Hans Clevers. In 2017 he received his second ERC Advanced Grant, for his study titled "a single-cell genomics approach integrating gene expression, lineage, and physical interactions". In 2022 he received his third ERC Advanced Grant, titled "scTranslatomics".
In 2014 van Oudenaarden became a member of the Royal Netherlands Academy of Arts and Sciences. In 2017 he was one of four winners of the Spinoza Prize. In 2022 he was elected to the American Academy of Arts and Sciences (International Honorary Member).
He is married and has three children.
Work
During his time at MIT his lab started with parallel lines of research in actin dynamics
and noise in gene networks, and then focused on stochasticity in gene networks, biological networks as control systems, and the evolution of small networks.
Today, Van Oudenaarden works at the Hubrecht Institute and focuses on stochastic gene expression, developing new tools for quantifying gene expression in single cells and microRNAs.
External links
Alexander van Oudenaarden's Lab at the Hubrecht Institute
1970 births
Delft University of Technology alumni
Dutch academics
Dutch biophysicists
Living people
Massachusetts Institute of Technology School of Science faculty
Members of the Royal Netherlands Academy of Arts and Sciences
People from Bernisse
Probability theorists
Synthetic biologists
Systems biologists
Spinoza Prize winners
European Research Council grantees | Alexander van Oudenaarden | Biology | 652 |
38,669,991 | https://en.wikipedia.org/wiki/The%20Berlin%20Key | "The Berlin key or how to do words with things" is an essay by sociologist Bruno Latour that originally appeared in 1993. It was later published as the first chapter in P.M. Graves-Brown's Matter, Materiality and Modern Culture.
In the 15-page chapter, written informally in third-person narrative, Latour describes a common object used in Berlin, the "Berlin key", which is constructed so that, after unlocking one's apartment door from the inside in order to leave, one can only retrieve the key from the outside in a way that locks the door behind oneself; upon entering, after unlocking the door with the key, one must retrieve it from the inside, again locking the door behind oneself. The design thus prevents leaving the door unlocked. The essay shows how many layers of significance a key can connote. In the P.M. Graves-Brown version, Lydia Davis translated the piece into English; additional editing was completed and the illustrations redrawn by Graves-Brown. The title may have been chosen as a witty play on J.L. Austin's How to Do Things with Words.
Latour argues that while an object's purposefully designed material nature may recommend or permit a highly controlled set of functional purposes, it may also offer a broad range of valuable possibilities.
Latour uses the Berlin key to show that social constraints operate through the object itself, forcing people to do what it makes them do; the key thus acts as a kind of sign, telling the inhabitants to "lock their doors at night, but never during the day."
Latour discusses the relationship between the social realm and the technological realm. He asserts that the sociologist and the technologist are "enemy brothers", each thinking that analysis comes to an end in their own domain: the sociologist with the social and the technologist with objects.
1993 documents
Works by Bruno Latour
Science and technology studies works | The Berlin Key | Technology | 395 |