id int64 (580–79M) | url string (lengths 31–175) | text string (lengths 9–245k) | source string (lengths 1–109) | categories string (160 classes) | token_count int64 (3–51.8k) |
|---|---|---|---|---|---|
17,591,563 | https://en.wikipedia.org/wiki/E%20chart | An E chart, also known as a tumbling E chart, is an eye chart used to measure a patient's visual acuity.
Uses
This chart does not depend on a patient's familiarity with a particular writing system (such as the Latin alphabet). This is often desirable, for instance with very young children. It also allows use with patients who are not readily fluent in that alphabet, for example in China.
The chart contains rows of the letter "E" in various rotations. The patient is asked to state (usually by pointing) which way the limbs of the E are pointing: up, down, left or right. Depending on how far down the chart the patient can "read", his or her visual acuity is quantified. It works on the same principle as Snellen's distant vision chart.
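As a rough illustration of the geometry behind this quantification (background, not taken from the article): under the usual Snellen convention, an optotype on the line rated for a given distance subtends about 5 arcminutes of visual angle at that distance, so the required letter height follows from basic trigonometry. The snippet below is a minimal sketch under that assumption.

```python
import math

def optotype_height(rated_distance_m: float, arcminutes: float = 5.0) -> float:
    """Height (in metres) an optotype needs to subtend `arcminutes` of
    visual angle at `rated_distance_m`. Assumes the standard convention
    that a 6/6 (20/20) optotype subtends 5 arcminutes at its rated distance.
    """
    theta = math.radians(arcminutes / 60.0)        # visual angle in radians
    return 2 * rated_distance_m * math.tan(theta / 2)

# An E on the 6/6 line of a chart read at 6 m is about 8.7 mm tall.
print(round(optotype_height(6.0) * 1000, 1), "mm")
```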
See also
Visual acuity
Landolt C
References
Charts
Diagnostic ophthalmology
Medical tests
Optotypes | E chart | Mathematics | 189 |
2,075,950 | https://en.wikipedia.org/wiki/Process%20%28anatomy%29 | In anatomy, a process is a projection or outgrowth of tissue from a larger body. For instance, in a vertebra, a process may serve for muscle attachment and leverage (as in the case of the transverse and spinous processes), or to fit (forming a synovial joint) with another vertebra (as in the case of the articular processes). The word is also used at the microanatomic level, where cells can have processes such as cilia or pedicels. Depending on the tissue, processes may also be called by other terms, such as apophysis, tubercle, or protuberance.
Examples
Examples of processes include:
The many processes of the human skull:
The mastoid and styloid processes of the temporal bone
The zygomatic process of the temporal bone
The zygomatic process of the frontal bone
The orbital, temporal, lateral, frontal, and maxillary processes of the zygomatic bone
The anterior, middle, and posterior clinoid processes and the petrosal process of the sphenoid bone
The uncinate process of the ethmoid bone
The jugular process of the occipital bone
The alveolar, frontal, zygomatic, and palatine processes of the maxilla
The ethmoidal and maxillary processes of the inferior nasal concha
The pyramidal, orbital, and sphenoidal processes of the palatine bone
The coronoid and condyloid processes of the mandible
The xiphoid process at the end of the sternum
The acromion and coracoid processes of the scapula
The coronoid process of the ulna
The radial and ulnar styloid processes
The uncinate processes of ribs found in birds and reptiles
The uncinate process of the pancreas
The spinous, articular, transverse, accessory, uncinate, and mammillary processes of the vertebrae
The trochlear process of the heel
The appendix, which is sometimes called the "vermiform process", notably in Gray's Anatomy
The olecranon process of the ulna
See also
Eminence
Tubercle
Appendage
Pedicle of vertebral arch
Notes
References
Dorland's Medical Dictionary
Anatomy | Process (anatomy) | Biology | 473 |
18,650,576 | https://en.wikipedia.org/wiki/PlanMaker | PlanMaker is a spreadsheet program that is part of the SoftMaker Office suite. It is available for Microsoft Windows, macOS, Linux, Android, and iOS.
PlanMaker is largely similar to Microsoft Excel in function and workflow and uses the same .xlsx file format. Its formula syntax is identical, and pivot tables are supported. It can also import SQLite databases.
Macros and VBA scripts contained in .xlsm files cannot be executed, but they are retained when saving. Under Windows, BasicMaker provides a VBA-like scripting language for SoftMaker Office.
References
External links
SoftMaker's PlanMaker for Windows, Linux and macOS
SoftMaker's PlanMaker for Android and iOS
Android (operating system) software
Spreadsheet software
Presentation software for Windows
Linux software
Windows Mobile software | PlanMaker | Mathematics | 166 |
68,825,404 | https://en.wikipedia.org/wiki/Rhizostomins | Rhizostomins are proteins that are part of a pigment family found only in jellyfish of the order Rhizostomeae. These proteins are composed of a Kringle domain inserted within a cysteine-rich Frizzled domain, first identified in 2004 as the blue pigment in the barrel jellyfish Rhizostoma pulmo. The pigment also appears in rhizostome jellyfish that do not look blue, such as Nemopilema nomurai, which typically presents red-brown coloration. It has been hypothesized that pigments in this family act as a sunscreen, protecting against harmful ultraviolet radiation. Natural blue pigments, such as some of the rhizostomins, are rare, and there is growing demand for them for industrial purposes.
References
Protein domains
Biological pigments
Rhizostomeae | Rhizostomins | Biology | 179 |
39,441,867 | https://en.wikipedia.org/wiki/Leelamine | Leelamine (dehydroabietylamine) is a diterpene amine that has weak affinity for the cannabinoid receptors CB1 and CB2, as well as being an inhibitor of pyruvate dehydrogenase kinase. Optically active leelamine is also used as a chiral resolving agent for carboxylic acids. Leelamine has been shown to be effective against certain cancer cells, independent from its activity on CB receptors or PDK1 - it accumulates inside the acidic lysosomes leading to disruption of intracellular cholesterol transport, autophagy and endocytosis followed by cell death.
See also
Abietic acid
References
External links
Amines
Cannabinoids
Diterpene alkaloids
Isopropyl compounds
Phenanthrenes | Leelamine | Chemistry | 174 |
40,546,894 | https://en.wikipedia.org/wiki/Signal-to-interference-plus-noise%20ratio | In information theory and telecommunication engineering, the signal-to-interference-plus-noise ratio (SINR) (also known as the signal-to-noise-plus-interference ratio (SNIR)) is a quantity used to give theoretical upper bounds on channel capacity (or the rate of information transfer) in wireless communication systems. Analogous to the signal-to-noise ratio (SNR) often used in wired communications systems, the SINR is defined as the power of a certain signal of interest divided by the sum of the interference power (from all the other interfering signals) and the power of some background noise. If the noise power is zero, then the SINR reduces to the signal-to-interference ratio (SIR). Conversely, zero interference reduces the SINR to the SNR, which is used less often when developing mathematical models of wireless networks such as cellular networks.
The complexity and randomness of certain types of wireless networks and signal propagation has motivated the use of stochastic geometry models in order to model the SINR, particularly for cellular or mobile phone networks.
Description
SINR is commonly used in wireless communication as a way to measure the quality of wireless connections. Typically, the energy of a signal fades with distance, which is referred to as path loss in wireless networks. Conversely, in wired networks the existence of a wired path between the sender (or transmitter) and the receiver determines the correct reception of data. In a wireless network one has to take other factors into account (e.g., the background noise and the interfering strength of other simultaneous transmissions). The concept of SINR attempts to create a representation of this aspect.
Mathematical definition
SINR is usually defined for a particular receiver (or user). In particular, for a receiver located at some point x in space (usually, on the plane), its corresponding SINR is given by

$\mathrm{SINR}(x) = \dfrac{P}{I + N}$,

where P is the power of the incoming signal of interest, I is the combined interference power of the other (interfering) signals in the network, and N is a noise term, which may be a constant or random. Like other ratios in electronic engineering and related fields, the SINR is often expressed in decibels (dB).
Propagation model
To develop a mathematical model for estimating the SINR, a suitable mathematical model is needed to represent the propagation of the incoming signal and the interfering signals. A common approach is to assume that the propagation model consists of a random component and a non-random (or deterministic) component.

The deterministic component seeks to capture how a signal decays or attenuates as it travels through a medium such as air, which is done by introducing a path-loss or attenuation function. A common choice for the path-loss function is a simple power law. For example, if a signal travels from point x to point y, then it decays by a factor given by the path-loss function

$\ell(x, y) = |x - y|^{-\alpha}$,

where the path-loss exponent $\alpha > 2$ and $|x - y|$ denotes the distance between the user at point y and the signal source at point x. Although this model suffers from a singularity (when $x = y$), its simple nature means it is often used because of the relatively tractable models it yields. Exponential functions are sometimes used to model fast-decaying signals.

The random component of the model represents multipath fading of the signal, which is caused by signals colliding with and reflecting off various obstacles such as buildings. This is incorporated into the model by introducing a random variable with some probability distribution. The distribution is chosen according to the type of fading model; common choices include Rayleigh, Rician, log-normal shadowing, and Nakagami fading.
SINR model
The propagation model leads to a model for the SINR. Consider a collection of base stations located at points $x_1$ to $x_N$ in the plane or 3D space. Then for a user located at, say, the origin, the SINR of the signal coming from base station $x_i$ is given by

$\mathrm{SINR}(x_i) = \dfrac{F_i \, \ell(|x_i|)}{\sum_{j \neq i} F_j \, \ell(|x_j|) + N}$,

where $F_i$ are fading random variables of some distribution. Under the simple power-law path-loss model this becomes

$\mathrm{SINR}(x_i) = \dfrac{F_i \, |x_i|^{-\alpha}}{\sum_{j \neq i} F_j \, |x_j|^{-\alpha} + N}$.
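To make the model concrete, the following is a minimal numerical sketch (an illustration, not from the article) of the power-law SINR formula above, with Rayleigh fading represented by exponentially distributed power gains $F_i$; the station distances, path-loss exponent and noise level are arbitrary placeholder values.

```python
import math
import random

def sinr(distances, serving_idx, alpha=4.0, noise=1e-9):
    """SINR at the origin under the power-law path-loss model above.

    Rayleigh amplitude fading corresponds to exponentially distributed
    power gains F_i (unit mean here). All parameter values are illustrative.
    """
    gains = [random.expovariate(1.0) * d ** (-alpha) for d in distances]
    signal = gains[serving_idx]
    interference = sum(g for i, g in enumerate(gains) if i != serving_idx)
    return signal / (interference + noise)

# Base stations at 100 m, 300 m and 500 m; the user is served by the nearest.
value = sinr([100.0, 300.0, 500.0], serving_idx=0)
print(f"SINR = {value:.3g} ({10 * math.log10(value):.1f} dB)")
```

Averaging this quantity over many fading draws (and, in stochastic geometry models, over random station positions) is how distributions of the SINR are typically estimated.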
Stochastic geometry models
In wireless networks, the factors that contribute to the SINR are often random (or appear random) including the signal propagation and the positioning of network transmitters and receivers. Consequently, in recent years this has motivated research in developing tractable stochastic geometry models in order to estimate the SINR in wireless networks. The related field of continuum percolation theory has also been used to derive bounds on the SINR in wireless networks.
See also
Signal-to-noise ratio
Stochastic geometry models of wireless networks
Continuum percolation theory
References
Noise (electronics)
Telecommunications
Digital audio
Engineering ratios | Signal-to-interference-plus-noise ratio | Mathematics,Technology,Engineering | 967 |
47,477,895 | https://en.wikipedia.org/wiki/Cape%20Provinces | The Cape Provinces of South Africa is a biogeographical area used in the World Geographical Scheme for Recording Plant Distributions (WGSRPD). It is part of the WGSRPD region 27 Southern Africa. The area has the code "CPP". It includes the South African provinces of the Eastern Cape, the Northern Cape and the Western Cape, together making up most of the former Cape Province.
The area includes the Cape Floristic Region, the smallest of the six recognised floral kingdoms of the world, an area of extraordinarily high diversity and endemism, home to more than 9,000 vascular plant species, of which 69 percent are endemic.
See also
Northern Provinces
References
Bibliography
Biogeography | Cape Provinces | Biology | 148 |
696,066 | https://en.wikipedia.org/wiki/Beatrice%20M.%20Tinsley%20Prize | The Beatrice M. Tinsley Prize is awarded every other year by the American Astronomical Society in recognition of an outstanding research contribution to astronomy or astrophysics of an exceptionally creative or innovative character. The prize is named in honor of the cosmologist and astronomer Beatrice Tinsley.
The prize is normally awarded every second year, but was awarded in 2021 out of the established sequence.
Recipients
Recipients of the Beatrice M. Tinsley Prize include:
1986 Jocelyn Bell Burnell — discovery of first pulsar
1988 Harold I. Ewen, Edward M. Purcell — discovery of 21 cm radiation from hydrogen
1990 Antoine Labeyrie — speckle interferometry
1992 Robert H. Dicke — lock-in amplifier
1994 Raymond Davis, Jr. — neutrino detectors; first measurement of solar neutrinos
1996 Aleksander Wolszczan — first pulsar planet
1998 Robert E. Williams — astronomical spectroscopy, particularly in gas clouds
2000 Charles R. Alcock — search for massive compact halo objects
2002 Geoffrey Marcy, R. Paul Butler, Steven S. Vogt — ultra-high-resolution Doppler spectroscopy; discovery of extrasolar planets by radial velocity measurements
2004 Ronald J. Reynolds — studies of the interstellar medium
2006 John E. Carlstrom — cosmic microwave background using the Sunyaev–Zeldovich effect
2008 Mark Reid — astrometry experiments with VLBI and the VLBA; pioneering use of cosmic masers as astronomical tools
2010 Drake Deming — thermal infrared emission from transiting extrasolar planets
2012 Ronald L. Gilliland — ultra-high signal-to-noise observations related to time-domain photometry
2014 Chris Lintott — engaging non-scientists in cutting edge research
2016 Andrew Gould — gravitational microlensing
2018 Julianne Dalcanton — low-surface-brightness galaxies; Hubble Space Telescope surveys
2020 Krzysztof Stanek, Christopher Kochanek — time-domain astronomy; leadership in the All Sky Automated Survey for SuperNovae (ASAS-SN)
2021 Bill Paxton — MESA software for computational stellar astrophysics
2024 Dennis Zaritsky — innovative observations probing the structure and evolution of galaxies
See also
List of astronomy awards
References
External links
Official website at the American Astronomical Society website
Astronomy prizes
Awards established in 1986
American science and technology awards
American Astronomical Society | Beatrice M. Tinsley Prize | Astronomy,Technology | 470 |
11,633,574 | https://en.wikipedia.org/wiki/G%20117-B15A | G117-B15A is a small, well-observed variable white dwarf star of the DAV, or ZZ Ceti, type in the constellation of Leo Minor.
G117-B15A was found to be variable in 1974 by Richer and Ulrych, and this was confirmed in 1976 by McGraw and Robinson. In 1984 it was demonstrated that the star's variability is due to nonradial gravity wave pulsations. As a consequence, its timescale for period change is directly proportional to its cooling timescale, allowing its cooling rate to be measured using asteroseismological techniques. Its age is estimated at 400 million years. Its light curve has a dominant period of 215.2 seconds, which is estimated to increase by approximately one second every 14 million years. G117-B15A has been claimed to be the most stable optical clock ever found, much more stable than the ticks of an atomic clock. It is also the first pulsating white dwarf to have its main pulsation mode index identified.
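As a back-of-the-envelope illustration (not from the article), the quoted drift of roughly one second per 14 million years corresponds to a dimensionless period-change rate dP/dt of order 10^-15, which is the sense in which the star is described as an exceptionally stable clock.

```python
# Illustrative arithmetic only: the period-change rate implied by the
# figures quoted above (one second of change per 14 million years).
SECONDS_PER_YEAR = 3.156e7            # approximate
delta_p = 1.0                         # seconds of period change
interval = 14e6 * SECONDS_PER_YEAR    # 14 million years, in seconds

dp_dt = delta_p / interval
print(f"dP/dt ~ {dp_dt:.1e} s/s")     # ~ 2e-15, a dimensionless drift rate
```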
The white dwarf G117-B15A is also an X-ray source in the constellation Leo Minor.
Notes
See also
Ross 548
Pulsating white dwarfs
Leo Minor
Astronomical X-ray sources
X-ray astronomy
Leonis Minoris, RY | G 117-B15A | Astronomy | 268 |
8,500,320 | https://en.wikipedia.org/wiki/Chervone-Pustohorod | Chervone-Pustohorod () was an airfield in Ukraine located 22 km northeast of Hlukhiv. It was built as a forward operating base for wartime use.
It is named after the town of Esman, which was known as Chervone from 1957 to 2016.
References
RussianAirFields.com
Airports in Ukraine
Transport in Sumy Oblast | Chervone-Pustohorod | Physics | 74 |
531,587 | https://en.wikipedia.org/wiki/Depolarization | In biology, depolarization or hypopolarization is a change within a cell, during which the cell undergoes a shift in electric charge distribution, resulting in less negative charge inside the cell compared to the outside. Depolarization is essential to the function of many cells, communication between cells, and the overall physiology of an organism.
Most cells in higher organisms maintain an internal environment that is negatively charged relative to the cell's exterior. This difference in charge is called the cell's membrane potential. In the process of depolarization, the negative internal charge of the cell temporarily becomes more positive (less negative). This shift from a negative to a more positive membrane potential occurs during several processes, including an action potential. During an action potential, the depolarization is so large that the potential difference across the cell membrane briefly reverses polarity, with the inside of the cell becoming positively charged.
The change in charge typically occurs due to an influx of sodium ions into a cell, although it can be mediated by an influx of any kind of cation or efflux of any kind of anion. The opposite of a depolarization is called a hyperpolarization.
Usage of the term "depolarization" in biology differs from its use in physics, where it refers to situations in which any form of polarity (i.e. the presence of any electrical charge, whether positive or negative) changes to a value of zero.
Depolarization is sometimes referred to as "hypopolarization" (as opposed to hyperpolarization).
Physiology
The process of depolarization is entirely dependent upon the intrinsic electrical nature of most cells. When a cell is at rest, the cell maintains what is known as a resting potential. The resting potential generated by nearly all cells results in the interior of the cell having a negative charge compared to the exterior of the cell. To maintain this electrical imbalance, ions are transported across the cell's plasma membrane. The transport of the ions across the plasma membrane is accomplished through several different types of transmembrane proteins embedded in the cell's plasma membrane that function as pathways for ions both into and out of the cell, such as ion channels, sodium potassium pumps, and voltage-gated ion channels.
Resting potential
The resting potential must be established within a cell before the cell can be depolarized. There are many mechanisms by which a cell can establish a resting potential; however, there is a typical pattern that many cells follow. The generation of a negative resting potential within the cell involves the use of ion channels, ion pumps, and voltage-gated ion channels. However, the process of generating the resting potential also creates an environment outside the cell that favors depolarization. The sodium–potassium pump is largely responsible for optimizing conditions on both the interior and the exterior of the cell for depolarization.
By pumping three positively charged sodium ions (Na+) out of the cell for every two positively charged potassium ions (K+) pumped into the cell, the pump not only establishes the resting potential of the cell but also creates steep concentration gradients: sodium is concentrated outside the cell and potassium is concentrated within it. While there is an excess of potassium inside the cell and of sodium outside it, the resting potential keeps the voltage-gated ion channels in the plasma membrane closed. This prevents the pumped ions from diffusing back across the membrane, while potassium leak channels allow a controlled passive efflux of potassium ions, which contributes to the establishment of the negative resting potential. Additionally, despite the high concentration of positively charged potassium ions, most cells contain internal components (of negative charge), which accumulate to establish a negative inner charge.
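For a quantitative feel (background, not from the article), the equilibrium potential that each ion gradient would produce on its own is given by the Nernst equation, $E = \frac{RT}{zF} \ln \frac{[\mathrm{ion}]_{out}}{[\mathrm{ion}]_{in}}$. The sketch below uses typical textbook concentrations for a mammalian neuron; these values are assumptions for illustration only.

```python
import math

def nernst_potential(conc_out_mM, conc_in_mM, z=1, temp_K=310.0):
    """Equilibrium (Nernst) potential in millivolts for one ion species."""
    R, F = 8.314, 96485.0             # gas constant, Faraday constant (SI)
    return 1000 * (R * temp_K) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical textbook concentrations (mM) for a mammalian neuron -- illustrative.
print(f"E_K  ~ {nernst_potential(5.0, 140.0):+.0f} mV")    # about -89 mV
print(f"E_Na ~ {nernst_potential(145.0, 12.0):+.0f} mV")   # about +67 mV
```

The strongly negative potassium potential and strongly positive sodium potential are consistent with the text: potassium efflux pulls the membrane toward the negative resting potential, while sodium influx drives depolarization.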
Depolarization
After a cell has established a resting potential, that cell has the capacity to undergo depolarization. Depolarization is the process by which the membrane potential becomes less negative, facilitating the generation of an action potential. For this rapid change to take place within the interior of the cell, several events must occur along the plasma membrane of the cell.
While the sodium–potassium pump continues to work, the voltage-gated sodium and calcium channels that had been closed while the cell was at resting potential are opened in response to an initial change in voltage. As a change in the neuronal charge leads to the opening of voltage-gated sodium channels, this results in an influx of sodium ions down their electrochemical gradient. Sodium ions enter the cell, and they contribute a positive charge to the cell interior, causing a change in the membrane potential from negative to positive. The initial sodium ion influx triggers the opening of additional sodium channels (positive-feedback loop), leading to further sodium ion transfer into the cell and sustaining the depolarization process until the positive equilibrium potential is reached.
Sodium channels possess an inherent inactivation mechanism that prompts rapid reclosure, even as the membrane remains depolarized. During this equilibrium, the sodium channels enter an inactivated state, temporarily halting the influx of sodium ions until the membrane potential becomes negatively charged again. Once the cell's interior is sufficiently positively charged, depolarization concludes, and the channels close once more.
Repolarization
After a cell has been depolarized, it undergoes one final change in internal charge. Following depolarization, the voltage-gated sodium ion channels that had been open while the cell was undergoing depolarization close again. The increased positive charge within the cell now causes the potassium channels to open. Potassium ions (K+) begin to move down the electrochemical gradient (in favor of the concentration gradient and the newly established electrical gradient). As potassium moves out of the cell the potential within the cell decreases and approaches its resting potential once more. The sodium potassium pump works continuously throughout this process.
Hyperpolarization
The process of repolarization causes an overshoot in the potential of the cell. Potassium ions continue to move out of the axon so much that the resting potential is exceeded and the new cell potential becomes more negative than the resting potential. The resting potential is ultimately re-established by the closing of all voltage-gated ion channels and the activity of the sodium potassium ion pump.
Neurons
Depolarization is essential to the functions of many cells in the human body, which is exemplified by the transmission of stimuli both within a neuron and between two neurons. The reception of stimuli, neural integration of those stimuli, and the neuron's response to stimuli all rely upon the ability of neurons to utilize depolarization to transmit stimuli either within a neuron or between neurons.
Response to stimulus
Stimuli to neurons can be physical, electrical, or chemical, and can either inhibit or excite the neuron being stimulated. An inhibitory stimulus is transmitted to the dendrite of a neuron, causing hyperpolarization of the neuron. The hyperpolarization following an inhibitory stimulus causes a further decrease in voltage within the neuron below the resting potential. By hyperpolarizing a neuron, an inhibitory stimulus results in a greater negative charge that must be overcome for depolarization to occur.
Excitation stimuli, on the other hand, increase the voltage in the neuron, which leads to a neuron that is easier to depolarize than the same neuron in the resting state. Regardless of it being excitatory or inhibitory, the stimulus travels down the dendrites of a neuron to the cell body for integration.
Integration of stimuli
Once the stimuli have reached the cell body, the nerve must integrate the various stimuli before the nerve can respond. The stimuli that have traveled down the dendrites converge at the axon hillock, where they are summed to determine the neuronal response. If the sum of the stimuli reaches a certain voltage, known as the threshold potential, depolarization continues from the axon hillock down the axon.
Response
The surge of depolarization traveling from the axon hillock to the axon terminal is known as an action potential. Action potentials reach the axon terminal, where the action potential triggers the release of neurotransmitters from the neuron. The neurotransmitters that are released from the axon continue on to stimulate other cells such as other neurons or muscle cells. After an action potential travels down the axon of a neuron, the resting membrane potential of the axon must be restored before another action potential can travel the axon. This is known as the recovery period of the neuron, during which the neuron cannot transmit another action potential.
Rod cells of the eye
The importance and versatility of depolarization within cells can be seen in the relationship between rod cells in the eye and their associated neurons. When rod cells are in the dark, they are depolarized. In the rod cells, this depolarization is maintained by ion channels that remain open due to the higher voltage of the rod cell in the depolarized state. The ion channels allow calcium and sodium to pass freely into the cell, maintaining the depolarized state. Rod cells in the depolarized state constantly release neurotransmitters which in turn stimulate the nerves associated with rod cells. This cycle is broken when rod cells are exposed to light; the absorption of light by the rod cell causes the channels that had facilitated the entry of sodium and calcium into the rod cell to close. When these channels close, the rod cells produce fewer neurotransmitters, which is perceived by the brain as an increase in light. Therefore, in the case of rod cells and their associated neurons, depolarization actually prevents a signal from reaching the brain as opposed to stimulating the transmission of the signal.
Vascular endothelium
Endothelium is a thin layer of simple squamous epithelial cells that line the interior of both blood and lymph vessels. The endothelium that lines blood vessels is known as vascular endothelium, which is subject to and must withstand the forces of blood flow and blood pressure from the cardiovascular system. To withstand these cardiovascular forces, endothelial cells must simultaneously have a structure capable of withstanding the forces of circulation while also maintaining a certain level of plasticity in the strength of their structure. This plasticity in the structural strength of the vascular endothelium is essential to overall function of the cardiovascular system. Endothelial cells within blood vessels can alter the strength of their structure to maintain the vascular tone of the blood vessel they line, prevent vascular rigidity, and even help to regulate blood pressure within the cardiovascular system. Endothelial cells accomplish these feats by using depolarization to alter their structural strength. When an endothelial cell undergoes depolarization, the result is a marked decrease in the rigidity and structural strength of the cell by altering the network of fibers that provide these cells with their structural support. Depolarization in vascular endothelium is essential not only to the structural integrity of endothelial cells, but also to the ability of the vascular endothelium to aid in the regulation of vascular tone, prevention of vascular rigidity, and the regulation of blood pressure.
Heart
Depolarization occurs in the four chambers of the heart: both atria first, and then both ventricles.
The sinoatrial (SA) node on the wall of the right atrium initiates depolarization in the right and left atria, causing contraction, which corresponds to the P wave on an electrocardiogram.
The SA node sends the depolarization wave to the atrioventricular (AV) node which—with about a 100 ms delay to let the atria finish contracting—then causes contraction in both ventricles, seen in the QRS wave. At the same time, the atria re-polarize and relax.
The ventricles are re-polarized and relaxed at the T wave.
This process continues regularly, unless there is a problem in the heart.
Depolarization blockers
There are drugs, called depolarization blocking agents, that cause prolonged depolarization by opening channels responsible for depolarization and not allowing them to close, preventing repolarization. Examples include the nicotinic agonists, suxamethonium and decamethonium.
References
Further reading
External links
Membrane biology
Electrophysiology
Electrochemistry
Cellular neuroscience | Depolarization | Chemistry | 2,586 |
36,751,542 | https://en.wikipedia.org/wiki/Flowers%20of%20the%20Four%20Seasons | The Flowers of the Four Seasons () are a traditional grouping of flowers found in Chinese culture that spread to and influenced other East Asian arts.
In Chinese art and culture, the flowers that represent the four seasons consist of:
(春兰) Chūnlán – Spring – orchid
(夏荷) Xiàhé – Summer – lotus
(秋菊) Qiūjú – Autumn – chrysanthemum
and (冬梅) Dōngméi – Winter – plum blossom
They contain three of the elements of the Four Gentlemen.
Gallery
See also
Flower emblems in China
Flower emblems in Vietnam
Three Friends of Winter
List of Chinese symbols, designs, and art motifs
Notes
References
Further reading
Flowers Of The Four Seasons: The Fundamentals Of Chinese Floral Painting, Su-Sing Chow (in English and Mandarin Chinese). Art Book Publishing Co. (1983)
External links
Winter flowers (Article)
Chinese culture
Chinese iconography
Chinese painting
Culture of Japan
Japanese painting
Plants in art
Symbols | Flowers of the Four Seasons | Mathematics | 193 |
2,903,699 | https://en.wikipedia.org/wiki/45%20Bo%C3%B6tis | 45 Boötis is a single star located 63 light years away from the Sun in the northern constellation of Boötes. It has the Bayer designation c Boötis; 45 Boötis is the Flamsteed designation. This body is visible to the naked eye as a faint, yellow-white hued star with an apparent visual magnitude of 4.93. It has a relatively high proper motion, traversing the celestial sphere at the rate of per year. The star is moving closer to the Earth with a heliocentric radial velocity of −11 km/s, and is a stream member of the Ursa Major Moving Group.
This is an F-type main-sequence star with a stellar classification of F5 V. It is around 1.6 billion years old and is spinning with a projected rotational velocity of 40 km/s. The star has 1.2 times the mass of the Sun and 1.46 times the Sun's radius. It is radiating 3.3 times the luminosity of the Sun from its photosphere at an effective temperature of 6,435 K. 45 Boötis is a source of X-ray emission.
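As a consistency check (an illustration, not from the article), the quoted luminosity follows from the quoted radius and effective temperature via the Stefan–Boltzmann law, $L/L_\odot = (R/R_\odot)^2 (T/T_\odot)^4$.

```python
# Stefan-Boltzmann consistency check using the figures quoted above.
T_SUN = 5772.0                        # K, nominal solar effective temperature
radius_ratio = 1.46                   # R / R_sun, from the article
temperature = 6435.0                  # K, from the article

luminosity_ratio = radius_ratio**2 * (temperature / T_SUN)**4
print(f"L/L_sun ~ {luminosity_ratio:.1f}")   # ~ 3.3, matching the quoted value
```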
There is a magnitude 11.53 visual companion at an angular separation of along a position angle (PA) of 40°, as of 2012. A magnitude 10.23 star can be found at a separation of with a PA of 358°, as of 2015.
References
External links
HR 5634
CCDM J15074+2453
Image 45 Boötis
F-type main-sequence stars
Ursa Major moving group
Double stars
Boötes
Bootis, c
Durchmusterung objects
Bootis, 45
0578
134083
073996
5634 | 45 Boötis | Astronomy | 351 |
325,028 | https://en.wikipedia.org/wiki/Technological%20escalation | Technological escalation describes the situation in which two parties in competition tend to employ continual technological improvements in their attempts to defeat each other. Technology is defined here as a creative invention, either in the form of an object or a methodology. An example is the mutual escalation between e-mail spammers and the programmers of spam filters and other anti-spam techniques. Although escalation usually carries a negative connotation, if two companies are in an escalating competition to produce the best widget, consumers benefit because they get a choice between better and better widgets.
See also
Arms race
Competition
Conflict (disambiguation)
Industrial Revolution
Second Industrial Revolution
Technological escalation during World War II
War
References
Notes
Conflict (process)
Military technology
Technological races | Technological escalation | Biology | 157 |
3,879,454 | https://en.wikipedia.org/wiki/Pitch%20angle%20%28particle%20motion%29 | The pitch angle of a charged particle is the angle between the particle's velocity vector and the local magnetic field. It is a common measurement and topic when studying the magnetosphere, magnetic mirrors, biconic cusps, and polywells. See also aurora and ring current.
Usage: particle motion
It is customary to discuss the direction a particle is heading in terms of its pitch angle. A pitch angle of 0 degrees corresponds to a particle whose motion is perfectly parallel to the local magnetic field. In the northern hemisphere such a particle would be heading down toward the Earth (and the opposite in the southern hemisphere). A pitch angle of 90 degrees corresponds to a particle that is locally mirroring (see: Magnetosphere particle motion).
Special case: equatorial pitch angle
The equatorial pitch angle of a particle is the pitch angle of the particle at the Earth's geomagnetic equator. This angle defines the loss cone of a particle. The loss cone is the set of angles where the particle will strike the atmosphere and no longer be trapped in the magnetosphere while particles with pitch angles outside the loss cone will mirror and continue to be trapped.
The loss cone boundary satisfies

$\sin^2 \alpha_{eq} = \dfrac{B_{eq}}{B_m}$,

where $\alpha_{eq}$ is the equatorial pitch angle of the particle, $B_{eq}$ is the equatorial magnetic field strength at the surface of the Earth, and $B_m$ is the field strength at the mirror point. Notice that this is independent of charge, mass, or kinetic energy.

This is due to the invariance of the magnetic moment $\mu = \dfrac{m v_\perp^2}{2B}$. At the point of reflection, the particle has zero parallel velocity, i.e. a pitch angle of 90 degrees. As a result,

$\dfrac{\sin^2 \alpha_{eq}}{B_{eq}} = \dfrac{\sin^2 90^\circ}{B_m} = \dfrac{1}{B_m}$.
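A minimal numerical sketch of the loss-cone relation above (an illustration; the field strengths are arbitrary placeholders, not measured values):

```python
import math

def equatorial_loss_cone_deg(b_eq, b_mirror):
    """Equatorial pitch angle (degrees) of the loss-cone boundary, from
    sin^2(alpha_eq) = B_eq / B_m; independent of charge, mass and energy."""
    return math.degrees(math.asin(math.sqrt(b_eq / b_mirror)))

# Placeholder field strengths in tesla: an equatorial value and a stronger
# mirror-point value closer to the Earth.
print(f"loss-cone boundary ~ {equatorial_loss_cone_deg(3e-5, 5e-4):.1f} deg")
```

Particles with equatorial pitch angles inside this boundary reach the atmosphere before mirroring and are lost.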
See also
Adiabatic invariant
Magnetic mirror
External links
Oulu Space Physics Textbook
IMAGE mission glossary
Electromagnetism | Pitch angle (particle motion) | Physics | 332 |
435,760 | https://en.wikipedia.org/wiki/Telophase | Telophase is the final stage in both meiosis and mitosis in a eukaryotic cell. During telophase, the effects of prophase and prometaphase (the nucleolus and nuclear membrane disintegrating) are reversed. As chromosomes reach the cell poles, a nuclear envelope is re-assembled around each set of chromatids, the nucleoli reappear, and chromosomes begin to decondense back into the expanded chromatin that is present during interphase. The mitotic spindle is disassembled and remaining spindle microtubules are depolymerized. Telophase accounts for approximately 2% of the cell cycle's duration.
Cytokinesis typically begins before late telophase and, when complete, segregates the two daughter nuclei between a pair of separate daughter cells.
Telophase is primarily driven by the dephosphorylation of mitotic cyclin-dependent kinase (Cdk) substrates.
Dephosphorylation of Cdk substrates
The phosphorylation of the protein targets of M-Cdks (Mitotic Cyclin-dependent Kinases) drives spindle assembly, chromosome condensation and nuclear envelope breakdown in early mitosis. The dephosphorylation of these same substrates drives spindle disassembly, chromosome decondensation and the reformation of daughter nuclei in telophase. Establishing a degree of dephosphorylation permissive to telophase events requires both the inactivation of Cdks and the activation of phosphatases.
Cdk inactivation is primarily the result of the destruction of its associated cyclin. Cyclins are targeted for proteolytic degradation by the anaphase promoting complex (APC), also known as the cyclosome, a ubiquitin ligase. The active, CDC20-bound APC (APC/C^CDC20) targets mitotic cyclins for degradation starting in anaphase. Experimental addition of non-degradable M-cyclin to cells induces cell cycle arrest in a post-anaphase/pre-telophase-like state with condensed chromosomes segregated to cell poles, an intact mitotic spindle, and no reformation of the nuclear envelope. This has been shown in frog (Xenopus) eggs, fruit flies (Drosophila melanogaster), budding (Saccharomyces cerevisiae) and fission (Schizosaccharomyces pombe) yeast, and in multiple human cell lines.
The requirement for phosphatase activation can be seen in budding yeast, which do not have redundant phosphatases for mitotic exit and rely on the phosphatase cdc14. Blocking cdc14 activation in these cells results in the same phenotypic arrest as does blocking M-cyclin degradation.
Historically, it has been thought that anaphase and telophase occur passively after satisfaction of the spindle-assembly checkpoint (SAC) that defines the metaphase–anaphase transition. However, the existence of distinct phases of cdc14 activity between anaphase and telophase is suggestive of additional, unexplored late-mitotic checkpoints. Cdc14 is activated by its release from sequestration in the nucleolus into the nucleus, and subsequent export into the cytoplasm. The Cdc14 Early Anaphase Release (FEAR) pathway, which stabilizes the spindle, also releases cdc14 from the nucleolus but restricts it to the nucleus. Complete release and sustained activation of cdc14, to a degree sufficient to trigger spindle disassembly and nuclear envelope assembly, is achieved by the separate Mitotic Exit Network (MEN) pathway only after late anaphase.
Cdc14-mediated dephosphorylation activates downstream regulatory processes unique to telophase. For example, the dephosphorylation of CDH1 allows the APC/C to bind CDH1. APC/C^CDH1 targets CDC20 for proteolysis, resulting in a cellular switch from APC/C^CDC20 to APC/C^CDH1 activity. The ubiquitination of mitotic cyclins continues along with that of APC/C^CDH1-specific targets such as the yeast mitotic spindle component Ase1 and cdc5, the degradation of which is required for the return of cells to the G1 phase.
Additional mechanisms driving telophase
A shift in the whole-cell phosphoprotein profile is only the broadest of many regulatory mechanisms contributing to the onset of individual telophase events.
The anaphase-mediated distancing of chromosomes from the metaphase plate may trigger spatial cues for the onset of telophase.
An important regulator and effector of telophase is cdc48 (homologous to yeast cdc48 is human p97, both structurally and functionally), a protein that mechanically employs its ATPase activity to alter target protein conformation. Cdc48 is necessary for spindle disassembly, nuclear envelope assembly, and chromosome decondensation. Cdc48 modifies proteins structurally involved in these processes and also some ubiquitinated proteins which are thus targeted to the proteasome.
Mitotic spindle disassembly
The breaking of the mitotic spindle, common to the completion of mitosis in all eukaryotes, is the event most often used to define the anaphase-B to telophase transition, although the initiation of nuclear reassembly tends to precede that of spindle disassembly.
Spindle disassembly is an irreversible process which must effect not the ultimate degradation, but the reorganization of constituent microtubules; microtubules are detached from kinetochores and spindle pole bodies and return to their interphase states.
Spindle depolymerization during telophase occurs from the plus end and is, in this way, a reversal of spindle assembly. Subsequent microtubule array assembly is, unlike that of the polarized spindle, interpolar. This is especially apparent in animal cells which must immediately, following mitotic spindle disassembly, establish the antiparallel bundle of microtubules known as the central spindle in order to regulate cytokinesis. The ATPase p97 is required for the establishment of the relatively stable and long interphase microtubule arrays following disassembly of the highly dynamic and relatively short mitotic ones.
While spindle assembly has been well studied and characterized as a process where tentative structures are edified by the SAC, the molecular basis of spindle disassembly is not understood in comparable detail. The late-mitotic dephosphorylation cascade of M-Cdk substrates by the MEN is broadly held to be responsible for spindle disassembly. The phosphorylation states of microtubule stabilizing and destabilizing factors, as well as microtubule nucleators are key regulators of their activities. For example, NuMA is a minus-end crosslinking protein and Cdk substrate whose dissociation from the microtubule is effected by its dephosphorylation during telophase.
A general model for spindle disassembly in yeast is that three functionally overlapping subprocesses, spindle disengagement, destabilization, and depolymerization, are primarily effected by APC/C^CDH1, microtubule-stabilizer-specific kinases, and plus-end-directed microtubule depolymerases, respectively. These effectors are known to be highly conserved between yeast and higher eukaryotes. The APC/C^CDH1 targets crosslinking microtubule-associated proteins (NuMA, Ase1, Cin1 and more). AuroraB (yeast Ipl1) phosphorylates the spindle-associated stabilizing protein EB1 (yeast Bim1), which then dissociates from microtubules, and the destabilizer She1, which then associates with microtubules. Kinesin8 (yeast Kip3), an ATP-dependent depolymerase, accelerates microtubule depolymerization at the plus end. It has been shown that concurrent disruption of these mechanisms, but not of any one alone, results in dramatic spindle hyperstability during telophase, suggesting functional overlap despite the diversity of the mechanisms.
Nuclear envelope reassembly
The main components of the nuclear envelope are a double membrane, nuclear pore complexes, and a nuclear lamina internal to the inner nuclear membrane. These components are dismantled during prophase and prometaphase and reconstructed during telophase, when the nuclear envelope reforms on the surface of separated sister chromatids. The nuclear membrane is fragmented and partly absorbed by the endoplasmic reticulum during prometaphase and the targeting of inner nuclear membrane protein-containing ER vesicles to the chromatin occurs during telophase in a reversal of this process. Membrane-forming vesicles aggregate directly to the surface of chromatin, where they fuse laterally into a continuous membrane.
Ran-GTP is required for early nuclear envelope assembly at the surface of the chromosomes: it releases envelope components sequestered by importin β during early mitosis. Ran-GTP localizes near chromosomes throughout mitosis, but does not trigger the dissociation of nuclear envelope proteins from importin β until M-Cdk targets are dephosphorylated in telophase. These envelope components include several nuclear pore components, the most studied of which is the nuclear pore scaffold protein ELYS, which can recognize DNA regions rich in A:T base pairs (in vitro), and may therefore bind directly to the DNA. However, experiments in Xenopus egg extracts have concluded that ELYS fails to associate with bare DNA and will only directly bind histone dimers and nucleosomes. After binding to chromatin, ELYS recruits other components of the nuclear pore scaffold and nuclear pore trans-membrane proteins. The nuclear pore complex is assembled and integrated in the nuclear envelope in an organized manner, consecutively adding Nup107-160, POM121, and FG Nups.
It is debated whether the mechanism of nuclear membrane reassembly involves initial nuclear pore assembly and subsequent recruitment of membrane vesicles around the pores or if the nuclear envelope forms primarily from extended ER cisternae, preceding nuclear pore assembly:
In cells where the nuclear membrane fragments into non-ER vesicles during mitosis, a Ran-GTP–dependent pathway can direct these discrete vesicle populations to chromatin where they fuse to reform the nuclear envelope.
In cells where the nuclear membrane is absorbed into the endoplasmic reticulum during mitosis, reassembly involves the lateral expansion around the chromatin with stabilization of the expanding membrane over the surface of the chromatin. Studies claiming this mechanism is a prerequisite to nuclear pore formation have found that bare-chromatin-associated Nup107–160 complexes are present in single units instead of as assembled pre-pores.
The envelope smoothens and expands following its enclosure of the whole chromatid set. This probably occurs due to the nuclear pores' import of lamin, which can be retained within a continuous membrane. The nuclear envelopes of Xenopus egg extracts failed to smoothen when nuclear import of lamin was inhibited, remaining wrinkled and closely bound to condensed chromosomes. However, in the case of ER lateral expansion, nuclear import is initiated before completion of the nuclear envelope reassembly, leading to a temporary intra-nuclear protein gradient between the distal and medial aspects of the forming nucleus.
Lamin subunits disassembled in prophase are inactivated and sequestered during mitosis. Lamina reassembly is triggered by lamin dephosphorylation (and additionally by methyl-esterification of COOH residues on lamin-B). Lamin-B can target chromatin as early as mid-anaphase. During telophase, when nuclear import is reestablished, lamin-A enters the reforming nucleus but continues to assemble slowly into the peripheral lamina over several hours throughout the G1 phase.
Xenopus egg extracts and human cancer cell lines have been the primary models used for studying nuclear envelope reassembly.
Yeast lack lamins; their nuclear envelope remains intact throughout mitosis and nuclear division happens during cytokinesis.
Chromosome decondensation
Chromosome decondensation (also known as relaxation or decompaction) into expanded chromatin is necessary for the cell's resumption of interphase processes, and occurs in parallel to nuclear envelope assembly during telophase in many eukaryotes. MEN-mediated Cdk dephosphorylation is necessary for chromosome decondensation.
In vertebrates, chromosome decondensation is initiated only after nuclear import is reestablished. If lamin transport through nuclear pores is prevented, chromosomes remain condensed following cytokinesis, and cells fail to reenter the next S phase. In mammals, DNA licensing for S phase (the association of chromatin to the multiple protein factors necessary for its replication) also occurs coincidentally with the maturation of the nuclear envelope during late telophase. This can be attributed to and provides evidence for the nuclear import machinery's reestablishment of interphase nuclear and cytoplasmic protein localizations during telophase.
See also
References
External links
Cell cycle
Mitosis | Telophase | Biology | 2,929 |
19,181,338 | https://en.wikipedia.org/wiki/Ackermann%E2%80%93Teubner%20Memorial%20Award | The Alfred Ackermann–Teubner Memorial Award for the Promotion of Mathematical Sciences recognized work in the mathematical sciences. It was established in 1912 by the publisher Alfred Ackermann-Teubner and was an endowment of the University of Leipzig.
It was awarded 14 times between 1914 and 1941. Subsequent awards were to be made every other year until a surplus of 60,000 marks had accumulated within the endowment, at which time the prize was to be awarded annually. The subjects included:
History, philosophy, teaching
Mathematics, especially arithmetic and algebra
Mechanics
Mathematical physics
Mathematics, especially analysis
Astronomy and theory of errors
Mathematics, especially geometry
Applied mathematics, especially geodesy and geophysics.
Honorees
The fifteen honorees between 1914 and 1941 (the 1932 award was shared) are:
1914: Felix Klein
1916: Ernst Zermelo, prize of 1,000 marks
1918: Ludwig Prandtl
1920: Gustav Mie
1922: Paul Koebe
1924: Arnold Kohlschütter
1926: Wilhelm Blaschke
1928: Albert Defant
1930: Johannes Tropfke
1932: Emmy Noether and Emil Artin, co-honorees
1934: Erich Trefftz
1937: Pascual Jordan
1938: Erich Hecke
1941: Paul ten Bruggencate
Jurors
In 1937, Constantin Carathéodory and Erhard Schmidt were invited to serve as jurors for the award. Along with Wilhelm Blaschke, Carathéodory was invited again in 1944 by the German Mathematical Society (Deutsche Mathematiker-Vereinigung).
See also
List of mathematics awards
References
Mathematics awards
Awards established in 1912
Leipzig University
Awards disestablished in the 1940s
1912 establishments in Germany | Ackermann–Teubner Memorial Award | Technology | 323 |
5,594,272 | https://en.wikipedia.org/wiki/Slush | Slush, also called slush ice, is a slurry mixture of small ice crystals (e.g. snow) and liquid water.
In the natural environment, slush forms when ice or snow melts or during mixed precipitation. This often mixes with dirt and other pollutants on the surface, resulting in a gray or muddy brown color. Often, solid ice or snow can block the drainage of fluid water from slushy areas, so slush often goes through multiple freeze/thaw cycles before being able to completely drain and disappear.
In areas where road salt is used to clear roadways, slush forms at lower temperatures in salted areas than it would ordinarily. This can produce a number of different consistencies over the same geographical area with scattered salted areas covered with slush and others covered with frozen precipitation.
Hazards
Slush behaves like a non-Newtonian fluid: it acts as a mostly solid mass until its internal shear forces rise beyond a specific threshold, beyond which it can very suddenly become fluid. This makes its behavior very difficult to predict. It is also the underlying mechanism of slush avalanches, giving them their unpredictability and thus a hidden potential to become a natural hazard when caution is not taken.
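One simple way to formalize this threshold behavior (an illustration, not from the article) is a Bingham-plastic model, in which the material does not deform at all below a yield stress and flows linearly above it; real slush rheology is more complex, and the parameter values here are placeholders.

```python
def bingham_shear_rate(stress_Pa, yield_stress_Pa, plastic_viscosity_Pa_s):
    """Shear rate (1/s) of a Bingham plastic: rigid below the yield
    stress, flowing linearly above it. Values for slush are illustrative."""
    if stress_Pa <= yield_stress_Pa:
        return 0.0                    # behaves as a mostly solid mass
    return (stress_Pa - yield_stress_Pa) / plastic_viscosity_Pa_s

for stress in (50.0, 100.0, 150.0, 200.0):   # applied shear stress, Pa
    rate = bingham_shear_rate(stress, yield_stress_Pa=100.0,
                              plastic_viscosity_Pa_s=0.5)
    print(f"{stress:5.0f} Pa -> shear rate {rate:6.1f} 1/s")
```

The sudden switch from no flow to rapid flow at the yield stress is the qualitative feature that makes slush masses hard to predict.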
Slush can also be a problem on aircraft runways, since excess slush acting on an aircraft's wheels has a retarding effect during takeoff that can destabilize the takeoff run and cause an accident such as the Munich air disaster. Slush on roads can also make surfaces slippery and increase braking distances for cars and trucks, raising the likelihood of rear-end crashes and other road accidents.
Slush can refreeze and become hazardous to vehicles and pedestrians.
In some cases though, slush can be beneficial. When snow hits the slush, it partially melts and also becomes slush on contact. This prevents roads from becoming too congested with snow or sleet.
References
Snow or ice weather phenomena
Forms of water
Water ice | Slush | Physics,Chemistry | 402 |
63,468,037 | https://en.wikipedia.org/wiki/Estrone%20phosphate | Estrone phosphate (E1P), or estrone 3-phosphate, is an estrogen and steroid sulfatase inhibitor which was never marketed. It has similar affinity for steroid sulfatase as estrone sulfate and acts as a competitive inhibitor of the enzyme. In contrast to estrone sulfate however, it is not hydrolyzed by steroid sulfatase and is instead metabolized by phosphatases.
See also
List of estrogen esters § Estrone esters
Estradiol phosphate
Estriol phosphate
References
Abandoned drugs
Estrone esters
Phosphate esters
Sex hormone esters and conjugates
Steroid sulfatase inhibitors
Synthetic estrogens | Estrone phosphate | Chemistry | 141 |
575,603 | https://en.wikipedia.org/wiki/Five%20Suns | In creation myths, the term "Five Suns" refers to the belief of certain Nahua cultures and Aztec peoples that the world has gone through five distinct cycles of creation and destruction, with the current era being the fifth. It is primarily derived from a combination of myths, cosmologies, and eschatological beliefs that were originally held by pre-Columbian peoples in the Mesoamerican region, including central Mexico, and it is part of a larger mythology of Fifth World or Fifth Sun beliefs.
The late Postclassic Aztecs created and developed their own version of the "Five Suns" myth, which incorporated and transformed elements from previous Mesoamerican creation myths, while also introducing new ideas that were specific to their culture.
In the Aztec and other Nahua creation myths, it was believed that the universe had gone through four iterations before the current one, and each of these prior worlds had been destroyed by Gods due to the behavior of its inhabitants.
The current world is a product of the Aztecs' self-imposed mission to provide Tlazcaltiliztli to the sun, giving it the nourishment it needs to stay in existence and ensuring that the entire universe remains in balance. Thus, the Aztecs’ sacrificial rituals were essential to the functioning of the world, and ultimately to its continued survival.
Legend
According to the legend, from the void that was the rest of the universe, the first god, Ometeotl, created itself. The nature of Ometeotl, the "God of duality" was both male and female, shared by Ometecuhtli, "Lord of duality," and Omecihuatl, "Lady of duality". Ometeotl gave birth to four children, the four Tezcatlipocas, who each preside over one of the four cardinal directions. Over the West presides the White Tezcatlipoca, Quetzalcoatl, the god of light, mercy and wind. Over the South presides the Blue Tezcatlipoca, Huitzilopochtli, the god of war. Over the East presides the Red Tezcatlipoca, Xipe Totec, the god of gold, farming and spring time. And over the North presides the Black Tezcatlipoca, also called simply Tezcatlipoca, the god of judgment, night, deceit, sorcery and the Earth.
The Aztecs believed that the gods created the universe at Teotihuacan. The name was given by the Nahuatl-speaking Aztecs centuries after the fall of the city around 550 CE. The term has been glossed as "birthplace of the gods", or "place where gods were born", reflecting Nahua creation myths that were said to occur in Teotihuacan.
First sun
It was four gods who eventually created all the other gods and the world we know today, but before they could create they had to destroy, for every time they attempted to create something, it would fall into the water beneath them and be eaten by Cipactli, the giant earth crocodile, who swam through the water with mouths at every one of her joints. From the four Tezcatlipocas descended the first people who were giants. They created the other gods, the most important of whom were the water gods: Tlaloc, the god of rain and fertility and Chalchiuhtlicue, the goddess of lakes, rivers and oceans and also the goddess of beauty. To give light, they needed a god to become the sun and the Black Tezcatlipoca was chosen, but either because he had lost a leg or because he was god of the night, he only managed to become half a sun. The world continued on in this way for some time, but a sibling rivalry grew between Quetzalcoatl and his brother the mighty sun, who Quetzalcoatl eventually decided to knock from the sky with a stone club. With no sun, the world was totally black and in his anger, Tezcatlipoca commanded his jaguars to eat all the people.
Second sun
The gods created humans who were of normal stature, with Quetzalcoatl serving as the sun for the new civilization, as an attempt to bring balance to the world, but their attempts ultimately failed as humans began to drift away from the beliefs and teachings of the gods and instead embraced greed and corruption.
As a consequence, Tezcatlipoca showcased his dominance and strength as a god of magic and justice by transforming the human-like people into monkeys. Quetzalcoatl, who had held the flawed people in great regard, was greatly distressed and sent away the monkeys with a powerful hurricane. After they were banished, Quetzalcoatl stepped down from his role as the sun and crafted a new, more perfect race of humans.
Third sun
Tlaloc was crowned the new sun, but Tezcatlipoca, the mischievous god, tricked and deceived him, snatching away the love of his life, Xochiquetzal, the deity of beauty, flowers, and corn.
Tlaloc had become so consumed by his own grief and sorrow that he was no longer able to fulfil his duties as the sun; therefore, a great drought befell the people of the world. People desperately prayed for rain and begged for mercy, but their pleas fell on deaf ears.
In a fit of rage, Tlaloc unleashed a rain of fire upon the earth, completely destroying it and leaving nothing but ashes in its wake. Following this cataclysmic event, the gods then worked together to create a new earth, allowing life to be reborn from the seemingly lifeless and barren land.
Fourth sun
The next sun and also Tlaloc's new wife, was Chalchiuhtlicue. She was very loving towards the people, but Tezcatlipoca was not. Both the people and Chalchiuhtlicue felt his judgement when he told the water goddess that she was not truly loving and only faked kindness out of selfishness to gain the people's praise. Chalchiuhtlicue was so crushed by these words that she cried blood for the next fifty-two years, causing a horrific flood that drowned everyone on Earth. Humans became fish in order to survive.
Fifth sun
Quetzalcoatl would not accept the destruction of his people and went to the underworld where he stole their bones from the god Mictlantecuhtli. He dipped these bones in his own blood to resurrect his people, who reopened their eyes to a sky illuminated by the current sun, Huitzilopochtli.
The Centzonhuītznāhua, or the stars of the south, became jealous of their brighter, more important brother Huitzilopochtli. Their leader, Coyolxauhqui, goddess of the moon, led them in an assault on the sun; every night they come close to victory when they shine throughout the sky, but they are beaten back by the mighty Huitzilopochtli, who rules the daytime sky. To aid this all-important god in his continuing war, the Aztecs offer him the nourishment of human sacrifices. They also offer human sacrifices to Tezcatlipoca in fear of his judgment, offer their own blood to Quetzalcoatl, who opposes fatal sacrifices, in thanks for his blood sacrifice on their behalf, and give offerings to many other gods for many purposes. Should these sacrifices cease, or should mankind fail to please the gods for any other reason, this fifth sun will go black, the world will be shattered by a catastrophic earthquake, and the Tzitzimitl will slay Huitzilopochtli and all of humanity.
Variations and alternative myths
Most of what is known about the ancient Aztecs comes from the few codices to survive the Spanish conquest. Their myths can be confusing because of the lack of documentation and also because many popular myths seem to contradict one another. This is largely because the myths were originally passed down by word of mouth, and because the Aztecs adopted many of their gods from other tribes, both assigning their own new aspects to these gods and endowing them with those of similar gods from various other cultures. Older myths can be very similar to newer myths while contradicting one another by claiming that a different god performed the same action, probably because myths changed with the popularity of each god at a given time.
Other variations on this myth state that Coatlicue, the earth goddess, was the mother of the four Tezcatlipocas and the Tzitzimitl. Some versions say that Quetzalcoatl was born to her first, while she was still a virgin, often mentioning his twin brother Xolotl, the guide of the dead and god of fire. Tezcatlipoca was then born to her by an obsidian knife, followed by the Tzitzimitl and then Huitzilopochtli. The most popular variation including Coatlicue depicts her giving birth first to the Tzitzimitl. Much later she gave birth to Huitzilopochtli when a mysterious ball of feathers appeared to her. The Tzitzimitl then decapitated the pregnant Coatlicue, believing it to be insulting that she had given birth to another child. Huitzilopochtli then sprang forth from her womb wielding a serpent of fire and began his epic war with the Tzitzimitl, who were also referred to as the Centzon Huitznahuas. Sometimes he is said to have decapitated Coyolxauhqui and either used her head to make the moon or thrown it into a canyon. Further variations depict the ball of feathers as being the father of Huitzilopochtli or the father of Quetzalcoatl and sometimes Xolotl.
Other variations of this myth claim that only Quetzalcoatl and Tezcatlipoca were born to Ometeotl, who was replaced by Coatlicue in this myth probably because it had absolutely no worshipers or temples by the time the Spanish arrived. It is sometimes said that the male characteristic of Ometeotl is named Ometecuhtli and that the female characteristic is named Omecihualt. Further variations on this myth state that it was only Quetzalcoatl and Tezcatlipoca who pulled apart Cipactli, also known as Tlaltecuhtli, and that Xipe Totec and Huitzilopochtli then constructed the world from her body. Some versions claim that Tezcatlipoca actually used his leg as bait for Cipactli, before dismembering her.
The order of the first four suns varies as well, though the above version is the most common. Each world's end correlates consistently to the god that was the sun at the time throughout all variations of the myth, though the loss of Xochiquetzal is not always identified as Tlaloc's reason for the rain of fire, which is not otherwise given and it is sometimes said that Chalchiuhtlicue flooded the world on purpose, without the involvement of Tezcatlipoca. It is also said that Tezcatlipoca created half a sun, which his jaguars then ate before eating the giants.
The fifth sun however is sometimes said to be a god named Nanauatzin. In this version of the myth, the gods convened in darkness to choose a new sun, who was to sacrifice himself by jumping into a gigantic bonfire. The two volunteers were the young son of Tlaloc and Chalchiuhtlicue, Tecuciztecatl, and the old Nanauatzin. It was believed that Nanauatzin was too old to make a good sun, but both were given the opportunity to jump into the bonfire. Tecuciztecatl tried first but was not brave enough to walk through the heat near the flames and turned around. Nanauatzin then walked slowly towards and then into the flames and was consumed. Tecuciztecatl then followed. The braver Nanauatzin became what is now the sun and Tecuciztecatl became the much less spectacular moon. A god that bridges the gap between Nanauatzin and Huitzilopochtli is Tonatiuh, who was sick, but rejuvenated himself by burning himself alive and then became the warrior sun and wandered through the heavens with the souls of those who died in battle, refusing to move if not offered enough sacrifices.
Brief summation
Nāhui-Ocēlōtl (Jaguar Sun) – Inhabitants were giants who were devoured by jaguars. The world was destroyed.
Nāhui-Ehēcatl (Wind Sun) – Inhabitants were transformed into monkeys. This world was destroyed by hurricanes.
Nāhui-Quiyahuitl (Rain Sun) – Inhabitants were destroyed by rain of fire. Only birds survived (or inhabitants survived by becoming birds).
Nāhui-Ātl (Water Sun) – This world was flooded turning the inhabitants into fish. A couple escaped but were transformed into dogs.
Nāhui-Olīn (Earthquake Sun) – Current humans are the inhabitants of this world. Should the gods be displeased, this world will be destroyed by earthquakes (or one large earthquake) and the Tzitzimimeh will annihilate all its inhabitants.
In popular culture
The version of the myth with Nanahuatzin serves as a framing device for the 1991 Mexican film In Necuepaliztli in Aztlan (Retorno a Aztlán) by Juan Mora Catlett.
The version of the myth with Nanahuatzin is in the 1996 film, The Five Suns: A Sacred History of Mexico, by Patricia Amlin.
Rage Against the Machine refers to intercultural violence as "the fifth sunset" in their song "People of the Sun", on the album Evil Empire.
Thomas Harlan's science fiction series "In the Time of the Sixth Sun" uses this myth as a central plot point, where an ancient star-faring civilization ("people of the First Sun") had disappeared and left the galaxy with many dangerous artifacts.
The Shadowrun role-playing game takes place in the "Sixth World."
The concept of the five suns is alluded to in Onyx Equinox, where Quetzalcoatl claims that the gods made humanity four times before. Tezcatlipoca seeks to end the current human era, since he believes humans are too greedy and waste their blood in battle rather than as sacrifices.
The final episode of Victor and Valentino is called "The Fall of the Fifth Sun", and also features Tezcatlipoca in a central role.
See also
Aztec mythology
Aztec religion
Aztec philosophy
Fifth World (mythology)
Mesoamerican creation accounts
Sun stone
Thirteen Heavens
References
Further reading
Eschatology
Creation myths
Aztec philosophy | Five Suns | Astronomy | 3,096 |
13,552,978 | https://en.wikipedia.org/wiki/Behavioral%20confirmation | Behavioral confirmation is a type of self-fulfilling prophecy whereby people's social expectations lead them to behave in ways that cause others to confirm their expectations. The phenomenon of belief creating reality is known by several names in the literature: self-fulfilling prophecy, expectancy confirmation, and behavioral confirmation, the last of which was coined by social psychologist Mark Snyder in 1984. Snyder preferred this term because it emphasizes that it is the target's actual behavior that confirms the perceiver's beliefs.
Self-fulfilling prophecy
Human beings use preconceived beliefs and expectations as guides to action when they interact with others. Their actions may then guide the interacting partner to behave in a way that confirms the individual's initial beliefs. The self-fulfilling prophecy is essentially the idea that beliefs and expectations can and do create their own reality. Sociologist Robert K. Merton defined a self-fulfilling prophecy as a false definition of the situation that evokes a new behavior, which in turn makes the originally false conception come true.
Self-fulfilling prophecy focuses on the behavior of the perceiver in eliciting expected behavior from the target, whereas behavioral confirmation focuses on the role of the target's behavior in confirming the perceiver's beliefs.
Research
Research has shown that a person (referred to as a perceiver) who possesses beliefs about another person (referred to as a target) will often act on these beliefs in ways that lead the target to actually behave in ways that confirm the perceiver's original beliefs.
In one demonstration of behavioral confirmation in social interaction, Snyder and colleagues had previously unacquainted male and female partners get acquainted through a telephone-like intercom system. The male participants were referred to as the perceivers, and the female participants were referred to as the targets. Prior to their conversations, the experimenter gave the male participants a Polaroid picture and led them to believe that it depicted their female partners. The male participants were unaware that, in fact, the pictures were not of their partners. The experimenter gave the perceivers pictures which portrayed either physically attractive or physically unattractive women in order to activate the perceiver's stereotypes that they may possess concerning attractive and unattractive people. The perceiver-target dyads engaged in a 10-minute, unstructured conversation, which was initiated by the perceivers. Individuals, identified as the raters, listened in on only the targets' contributions to the conversations and rated their impressions of the targets. Results showed that targets whose partners believed them to be physically attractive came to behave in a more sociable, warm, and outgoing manner than targets whose partners believed them to be physically unattractive. Consequently, targets behaviorally confirmed the perceivers' beliefs, thus turning the perceivers' beliefs into self-fulfilling prophecies. The study also supported and displayed the physical attractiveness stereotype.
These findings suggest that human beings, who are the targets of many perceivers in everyday life, may routinely act in ways that are consistent not with their own attitudes, beliefs, or feelings, but rather with the perceptions and stereotypes that others hold of them and their attributes. This suggests that the power of others' beliefs over one's behaviour can be extremely strong.
Mechanisms
Snyder proposed a four-step sequence in which behavioral confirmation occurs:
The perceiver adopts beliefs about the target
The perceiver acts as if these beliefs were true and treats the target accordingly
The target assimilates his or her behavior to the perceiver's overtures
The perceiver interprets the target's behavior as confirmation of his or her original beliefs.
Motivational foundations
The perceiver and the target have a common goal of getting acquainted with one another, but they do so in the service of different functions. Behavioral confirmation arises from the combination of a perceiver who is acting in the service of the knowledge function and a target whose behaviors serve an adjustive function.
The perceiver uses knowledge motivations in order to get a stable and predictable view of those with whom one interacts, eliciting behavioral confirmation. Perceivers use knowledge-oriented strategies, which occur when perceivers view their interactions with targets as opportunities to find out about their targets' personality and to check their impressions of targets, leading perceivers to ask belief-confirming questions. The perceiver asks the target questions in order to form stable and predictable impressions of their partner, and perceivers tend to confidently assume that possession of even the limited information gathered about the other person gives them the ability to predict that that person's future behavior will be consistent with the impressions gathered.
When motivated by the adjustive function, targets try to get along with their partners and to have a smooth and pleasant conversation with the perceiver. The adjustive function motivates targets to reciprocate perceivers' overtures and thereby to behaviorally confirm perceivers' erroneous beliefs; without it, the interaction may instead lead to behavioral disconfirmation.
Examples
Physical attractiveness – When one interacts with another person of high or low physical attractiveness, they influence that person's social prowess. When a target (unbeknownst to themselves) is tagged physically attractive, that target, through interaction with the perceiver, in turn comes to behave in a friendlier manner than do those tagged unattractive.
Race – In a 1997 study by Chen and Bargh, it was shown that participants who were subliminally primed with an African-American stereotype observed more hostility from the target they interacted with than those who were in the control condition. This study suggests that behavioral confirmation caused targets to become more hostile when their perceiver had been negatively primed.
Gender – When participants were made aware of their targets' gender in a division of labour task, targets fell into their gender-specific roles through behavioral confirmation.
Loneliness – Adults who were presented with a hypothetically lonely peer and a non-lonely hypothetical peer were found to report greater rejection of the lonely peer, with evidence that this was due to individuals stigmatizing loneliness as a discredited attribute.
Critique
The principal objection to the idea of behavioral confirmation is that the laboratory situations used in the research often do not map easily onto real-world social interaction. In addition, it is argued that behavioral disconfirmation is just as likely to develop out of expectancies as are self-fulfilling expectations. A strong criticism by Lee Jussim is the allegation that, in all previous behavioral confirmation studies, the participants were falsely misled about the targets' characteristics, whereas in real life people's expectations are generally correct. In response to such critiques, behavioral confirmation research has adapted to introduce a non-conscious element. Even though there are clear pitfalls to the phenomenon, it has continuously been studied over the past few decades, highlighting its importance in psychology.
References
Human behavior | Behavioral confirmation | Biology | 1,390 |
74,819 | https://en.wikipedia.org/wiki/Pupil | The pupil is a hole located in the center of the iris of the eye that allows light to strike the retina. It appears black because light rays entering the pupil are either absorbed by the tissues inside the eye directly, or absorbed after diffuse reflections within the eye that mostly miss exiting the narrow pupil. The size of the pupil is controlled by the iris, and varies depending on many factors, the most significant being the amount of light in the environment. The term "pupil" was coined by Gerard of Cremona.
In humans, the pupil is circular, but its shape varies between species; some cats, reptiles, and foxes have vertical slit pupils, goats and sheep have horizontally oriented pupils, and some catfish have annular types. In optical terms, the anatomical pupil is the eye's aperture and the iris is the aperture stop. The image of the pupil as seen from outside the eye is the entrance pupil, which does not exactly correspond to the location and size of the physical pupil because it is magnified by the cornea. On the inner edge lies a prominent structure, the collarette, marking the junction of the embryonic pupillary membrane covering the embryonic pupil.
Function
The iris is a contractile structure, consisting mainly of smooth muscle, surrounding the pupil. Light enters the eye through the pupil, and the iris regulates the amount of light by controlling the size of the pupil. This is known as the pupillary light reflex.
The iris contains two groups of smooth muscles: a circular group called the sphincter pupillae, and a radial group called the dilator pupillae. When the sphincter pupillae contracts, the pupil constricts. The dilator pupillae, innervated by sympathetic nerves from the superior cervical ganglion, causes the pupil to dilate when it contracts. These muscles are sometimes referred to as intrinsic eye muscles.
The sensory pathway (rod or cone, bipolar, ganglion) is linked with its counterpart in the other eye by a partial crossover of each eye's fibers. This causes the effect in one eye to carry over to the other.
Effect of light
The pupil gets wider in the dark and narrower in light. When constricted, the diameter is 2 to 4 millimeters. In the dark the diameter is initially the same, but widens toward the maximum for a dilated pupil, 3 to 8 mm. However, in any human age group there is considerable variation in maximal pupil size. For example, at the peak age of 15, the dark-adapted pupil can vary from 4 mm to 9 mm in different individuals. After 25 years of age, the average pupil size decreases, though not at a steady rate. The pupils do not remain completely still; they may oscillate, and if the oscillation intensifies the phenomenon is known as hippus. The constriction of the pupil and near vision are closely tied. In bright light, the pupils constrict to prevent aberrations of light rays and thus attain their expected acuity; in the dark this is not necessary, so the pupil is chiefly concerned with admitting sufficient light into the eye.
When bright light is shone on the eye, light-sensitive cells in the retina, including rod and cone photoreceptors and melanopsin ganglion cells, will send signals to the oculomotor nerve, specifically the parasympathetic part coming from the Edinger-Westphal nucleus, which terminates on the circular iris sphincter muscle. When this muscle contracts, it reduces the size of the pupil. This is the pupillary light reflex, which is an important test of brainstem function. Furthermore, the pupil will dilate if a person sees an object of interest.
Clinical significance
Effect of drugs
If the drug pilocarpine is administered, the pupils will constrict and accommodation is increased due to the parasympathetic action on the circular muscle fibers, conversely, atropine will cause paralysis of accommodation (cycloplegia) and dilation of the pupil.
Certain drugs cause constriction of the pupils, such as opioids. Other drugs, such as atropine, LSD, MDMA, mescaline, psilocybin mushrooms, cocaine and amphetamines may cause pupil dilation.
The sphincter muscle has a parasympathetic innervation, and the dilator has a sympathetic innervation. In pupillary constriction induced by pilocarpine, not only is the sphincter's nerve supply activated, but that of the dilator is inhibited. The reverse occurs during dilation, so pupil size is governed by the difference in contraction intensity of the two muscles.
Another term for the constriction of the pupil is miosis. Substances that cause miosis are described as miotic. Dilation of the pupil is mydriasis. Dilation can be caused by mydriatic substances such as an eye drop solution containing tropicamide.
Diseases
A condition called bene dilitatism occurs when the optic nerves are partially damaged. This condition is typified by chronically widened pupils due to the decreased ability of the optic nerves to respond to light. In normal lighting, affected people have dilated pupils, and bright lighting can cause pain. At the other end of the spectrum, people with this condition have trouble seeing in darkness. It is necessary for them to be especially careful when driving at night due to their inability to see objects in their full perspective. This condition is not otherwise dangerous.
Size
The size of the pupil (often measured as diameter) can be a symptom of an underlying disease. Dilation of the pupil is known as mydriasis and contraction as miosis.
Not all variations in size are indicative of disease, however. In addition to dilation and contraction caused by light and darkness, it has been shown that solving simple multiplication problems affects the size of the pupil. The simple act of recollection can dilate the pupil; however, when the brain is required to process information at a rate above its maximum capacity, the pupil contracts. There is also evidence that pupil size is related to the extent of positive or negative emotional arousal experienced by a person.
Myopic individuals have larger resting and dark dilated pupils than hyperopic and emmetropic individuals, likely due to requiring less accommodation (which results in pupil constriction).
Some humans are able to exert direct control over their iris muscles, giving them the ability to manipulate the size of their pupils (i.e. dilating and constricting them) on command, without any changes in lighting condition or eye accommodation state. However, this ability is likely very rare and its purpose or advantages over those without it are unclear.
Animals
Not all animals have circular pupils. Some have slits or ovals which may be oriented vertically, as in crocodiles, vipers, cats and foxes, or horizontally as in some rays, flying frogs, mongooses and artiodactyls such as elk, red deer, reindeer and hippopotamus, as well as the domestic horse. Goats, sheep, toads and octopus pupils tend to be horizontal and rectangular with rounded corners. Some skates and rays have crescent shaped pupils, gecko pupils range from circular, to a slit, to a series of pinholes, and the cuttlefish pupil is a smoothly curving W shape. Although human pupils are normally circular, abnormalities like colobomas can result in unusual pupil shapes, such as teardrop, keyhole or oval pupil shapes.
There may be differences in pupil shape even between closely related animals. In felids, there are differences between small- and large-eyed species. The domestic cat (Felis sylvestris domesticus) has vertical slit pupils, its large relative the Siberian tiger (Panthera tigris altaica) has circular pupils, and the pupils of the Eurasian lynx (Lynx lynx) are intermediate between the two. A similar difference between small and large species may be present in canines. The small red fox (Vulpes vulpes) has vertical slit pupils, whereas its large relatives, the gray wolf (Canis lupus lupus) and the domestic dog (Canis lupus familiaris), have round pupils.
Evolution and adaptation
One explanation for the evolution of slit pupils is that they can exclude light more effectively than a circular pupil. This would explain why slit pupils tend to be found in the eyes of animals with a crepuscular or nocturnal lifestyle that need to protect their eyes during daylight. Constriction of a circular pupil (by a ring-shaped muscle) is less complete than closure of a slit pupil, which uses two additional muscles that laterally compress the pupil. For example, the cat's slit pupil can change the light intensity on the retina 135-fold compared to 10-fold in humans. However, this explanation does not account for circular pupils that can be closed to a very small size (e.g., 0.5 mm in the tarsier) and the rectangular pupils of many ungulates which do not close to a narrow slit in bright light. An alternative explanation is that a partially constricted circular pupil shades the peripheral zones of the lens which would lead to poorly focused images at relevant wavelengths. The vertical slit pupil allows for use of all wavelengths across the full diameter of the lens, even in bright light. It has also been suggested that in ambush predators such as some snakes, vertical slit pupils may aid in camouflage, breaking up the circular outline of the eye.
Activity pattern and behavior
In a study of Australian snakes, pupil shapes correlated both with diel activity times and with foraging behavior. Most snake species with vertical pupils were nocturnal and also ambush foragers, and most snakes with circular pupils were diurnal and active foragers. Overall, foraging behaviour predicted pupil shape accurately in more cases than did diel time of activity, because many active-foraging snakes with circular pupils were not diurnal. It has been suggested that there may be a similar link between foraging behaviour and pupil shape amongst the felidae and canidae discussed above.
A 2015 study confirmed the hypothesis that elongated pupils have increased dynamic range, and furthered the correlations with diel activity. However it noted that other hypotheses could not explain the orientation of the pupils. They showed that vertical pupils enable ambush predators to optimise their depth perception, and horizontal pupils to optimise the field of view and image quality of horizontal contours. They further explained why elongated pupils are correlated with the animal's height.
Society and culture
The pupil plays a role in eye contact and nonverbal communication. The voluntary or involuntary enlargement or dilation of the pupils indicates cognitive arousal, interest in the subject of attention, and/or sexual arousal. On the other hand, when the pupil is voluntarily or involuntarily contracted, it could indicate the opposite - disinterest or disgust. Exceptionally large or dilated pupils are also perceived to be an attractive feature in body language.
In a surprising number of unrelated languages, the etymological meaning of the term for pupil is "little person". This is true, for example, of the word pupil itself: this comes into English from Latin pūpilla, which means "doll, girl", and is a diminutive form of pupa, "girl". (The double meaning in Latin is preserved in English, where pupil means both "schoolchild" and "dark central portion of the eye within the iris".) This may be because the reflection of one's image in the pupil is a minuscule version of one's self. In the Old Babylonian period (c. 1800-1600 BC) in ancient Mesopotamia, the expression "protective spirit of the eye" is attested, perhaps arising from the same phenomenon.
The English phrase apple of my eye arises from an Old English usage, in which the word apple meant not only the fruit but also the pupil or eyeball.
See also
Pupillary response
Pupil function
Dilated fundus examination
Eye contact
Horner's syndrome
Mydriasis
Synechia (eye)
Anisocoria
Adie's pupil
Argyll Robertson pupil
Light-near dissociation
Marcus Gunn Pupil
References
External links
— "Sagittal Section Through the Eyeball"
— "Sagittal Section Through the Eyeball"
A pupil examination simulator, demonstrating the changes in pupil reactions for various nerve lesions.
Ethology
Articles containing video clips | Pupil | Biology | 2,574 |
76,683,450 | https://en.wikipedia.org/wiki/IC%205337 | IC 5337, also known as JW100, is a spiral galaxy located 800 million light-years from the Solar System in the constellation Pegasus.
It was discovered by the French astronomer Stéphane Javelle on November 25, 1897, and is probably gravitationally bound to IC 5338, the brightest cluster galaxy in Abell 2626. According to SIMBAD, IC 5337 is considered an emission-line galaxy.
IC 5337 is a jellyfish galaxy, shaped mainly by ram pressure: as the galaxy plunges through the cluster's thin layer of gas, its star-forming gas is stripped and appears to drip from the galaxy's disc, giving it the appearance of a cosmic jellyfish. It has a stellar mass of 3.2 × 10¹¹ M⊙ and contains an active galactic nucleus, likely triggered by accretion of matter onto its supermassive black hole.
In addition, IC 5337 shows an X-ray source.
See also
IC 4141
PGC 2456
Jellyfish galaxy
References
Pegasus (constellation)
Spiral galaxies
5337
+03-60-012
071875
Astronomical objects discovered in 1897 | IC 5337 | Astronomy | 229 |
3,838,745 | https://en.wikipedia.org/wiki/Solaris%20IP%20network%20multipathing | IP network multipathing (IPMP) is a facility provided by Solaris to provide fault tolerance and load spreading for network interface cards (NICs). With IPMP, two or more NICs are dedicated to each network to which the host connects. Each interface can be assigned a static "test" IP address, which is used to assess the operational state of the interface. Each virtual IP address is assigned to an interface, though there may be more interfaces than virtual IP addresses, some of the interfaces being kept purely for standby purposes. When the failure of an interface is detected, its virtual IP addresses are swapped to an operational interface in the group.
The IPMP load spreading feature increases the machine's bandwidth by spreading the outbound load between all the cards in the same IPMP group.
in.mpathd is the daemon in the Solaris OS responsible for IPMP functionality.
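On Solaris 11 and later, an IPMP group is typically configured with the ipadm utility, while earlier releases use /etc/hostname.* configuration files instead. The following is a minimal sketch in which the interface names net0 and net1, the group name ipmp0, and the address 192.0.2.10/24 are illustrative assumptions rather than defaults:

# Create an IPMP group interface and place two NICs under it
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 ipmp0
# Assign the data (virtual) address to the group rather than to a single NIC
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4

If net0 subsequently fails, in.mpathd detects the failure and migrates the data address to net1 without disturbing applications.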
See also
Multihoming
Multipath routing
Multipath TCP
Common Address Redundancy Protocol
External links
Enterprise Networking Article, February 2, 2006
Introducing IPMP - Oracle Solaris 11
IPMP section from Sun Solaris 10 System Administration Guide
Networking standards
Sun Microsystems software | Solaris IP network multipathing | Technology,Engineering | 235 |
472,972 | https://en.wikipedia.org/wiki/Vera%20Rubin | Vera Florence Cooper Rubin (; July 23, 1928 – December 25, 2016) was an American astronomer who pioneered work on galaxy rotation rates. She uncovered the discrepancy between the predicted and observed angular motion of galaxies by studying galactic rotation curves. These results were later confirmed over subsequent decades. Her work on the galaxy rotation problem was cited by others as evidence for the existence of dark matter. The Vera C. Rubin Observatory in Chile is named in her honor.
Beginning her academic career as the sole undergraduate in astronomy at Vassar College, Rubin went on to graduate studies at Cornell University and Georgetown University, where she observed deviations from Hubble flow in galaxies and provided evidence for the existence of galactic superclusters. She was honored throughout her career for her work, receiving the Bruce Medal, the Gold Medal of the Royal Astronomical Society, and the National Medal of Science, among others.
Rubin spent her life advocating for women in science, and she was known for her mentorship of aspiring female astronomers. Her legacy was described by The New York Times as "ushering in a Copernican-scale change" in cosmological theory.
Early life
Vera Cooper was born on July 23, 1928, in Philadelphia, Pennsylvania. She was the younger of two sisters. Her parents were Philip Cooper, an electrical engineer at Bell Telephone and Rose Cooper who worked at Bell Telephone until they married.
The Coopers moved to Washington, D.C., in 1938, where ten-year-old Vera developed an interest in astronomy while watching the stars from her window. "Even then I was more interested in the question than in the answer," she remembered. "I decided at an early age that we inhabit a very curious world." She built a crude telescope out of cardboard with her father, and began to observe and track meteors. She attended Coolidge Senior High School, graduating in 1944.
Rubin's older sister, Ruth Cooper Burg, was an attorney who later worked as an administrative law judge in the United States Department of Defense. Her father, a mathematically talented electrical engineer, supported her passion by helping her build a telescope.
Education
Rubin was inspired to pursue an undergraduate education at Vassar College (then an all-women's school), and she was also inspired by Maria Mitchell, who had been a professor in that same college in 1865. She ignored advice she had received from a high school science teacher to avoid a scientific career and become an artist. She graduated Phi Beta Kappa and earned her bachelor's degree in astronomy in 1948, the only graduate in astronomy that year. She attempted to enroll in a graduate program at Princeton, but was barred due to her gender. Princeton would not accept women as astronomy graduate students for 27 more years. Rubin also turned down an offer from Harvard University.
She married in 1948, and her husband, Robert Joshua Rubin, was a graduate student at Cornell University.
Rubin then enrolled at Cornell University, and earned a master's degree in 1951. During her graduate studies, she studied the motions of 109 galaxies and made one of the first observations of deviations from Hubble flow (how the galaxies move apart from one another). She worked with astronomer Martha Carpenter on galactic dynamics, and studied under Philip Morrison, Hans Bethe, and Richard Feynman. Though the conclusion she came to – that there was an orbital motion of galaxies around a particular pole – was disproven, the idea that galaxies were moving held true and sparked further research. Her research also provided early evidence of the supergalactic plane. These findings were immensely controversial at the time. She struggled to be allowed to present her work at the American Astronomical Society, as she was visibly pregnant and not a member of the society; the talk received, to her recollection, universally negative feedback, and the paper was not published.
Rubin studied for her Ph.D. at Georgetown University, the only university in Washington, D.C., that offered a graduate degree in astronomy.
She was 23 years old and pregnant when she began her doctoral studies, and the Rubins had one young child at home. She began to take classes with Francis Heyden, who recommended her to George Gamow of the neighboring George Washington University, her eventual doctoral advisor. Her dissertation, completed in 1954, concluded that galaxies clumped together, rather than being randomly distributed through the universe, a controversial idea not pursued by others for two decades. Throughout her graduate studies, she encountered discouraging sexism; in one incident she was not allowed to meet with her advisor in his office, because women were not allowed in that area of the Catholic university.
Career
For the next eleven years, Rubin held various academic positions. She served for a year as an instructor of Mathematics and Physics at Montgomery College. From 1955 to 1965 she worked at Georgetown University as a research associate astronomer, lecturer (1959–1962), and finally, assistant professor of astronomy (1962–1965). She joined the Carnegie Institution of Washington (later called Carnegie Institution of Science) in 1965 as a staff member in the Department of Terrestrial Magnetism. There she met her long-time collaborator, instrument-maker Kent Ford. Because she had young children, she did much of her work from home.
In 1963, Rubin began a year-long collaboration with Geoffrey and Margaret Burbidge, during which she made her first observations of the rotation of galaxies while using the McDonald Observatory's 82-inch telescope. During her work at the Carnegie Institution, Rubin applied to observe at the Palomar Observatory in 1965, despite the fact that the building did not have facilities for women. She created her own women's restroom, sidestepping the lack of facilities available for her. She became the first female astronomer to observe there.
At the Carnegie Institution, Rubin began work related to her controversial thesis regarding galaxy clusters with Ford, making hundreds of observations using Ford's image-tube spectrograph. This image intensifier allowed resolving the spectra of astronomical objects that were previously too dim for spectral analysis. The Rubin–Ford effect, an apparent anisotropy in the expansion of the Universe on the scale of 100 million light years, was discovered through studies of spiral galaxies, particularly the Andromeda Galaxy, chosen for its brightness and proximity to Earth. The idea of peculiar motion on this scale in the universe was a highly controversial proposition, which was first published in journals in 1976. It was dismissed by leading astronomers but ultimately shown to be valid. The effect is now known as large scale streaming. The pair also briefly studied quasars, which had been discovered in 1963 and were a popular topic of research.
Wishing to avoid controversial areas of astronomy, including quasars and galactic motion, Rubin began to study the rotation and outer reaches of galaxies, an interest sparked by her collaboration with the Burbidges. She investigated the rotation curves of spiral galaxies, again beginning with Andromeda, by looking at their outermost material. She observed flat rotation curves: the outermost components of the galaxy were moving as quickly as those close to the center. She further uncovered the discrepancy between the predicted angular motion of galaxies based on the visible light and the observed motion. Her research showed that spiral galaxies rotate quickly enough that they should fly apart, if the gravity of their constituent stars was all that was holding them together; because they stay intact, a large amount of unseen mass must be holding them together, a conundrum that became known as the galaxy rotation problem.
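The underlying dynamics can be sketched with standard Newtonian reasoning (a textbook argument, not Rubin's own notation): a star in a circular orbit of radius r about an enclosed mass M(r) satisfies GM(r)/r² = v²/r, so v(r) = √(GM(r)/r). If the luminous matter contained most of the mass, v would fall off roughly as 1/√r beyond the visible disc, as planetary speeds do in the Solar System; a flat rotation curve, v(r) ≈ constant, instead requires M(r) ∝ r, meaning mass continues to accumulate with radius far beyond the starlight.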
Rubin's results came to be cited as evidence that spiral galaxies were surrounded by dark matter haloes.
Rubin's calculations showed that galaxies must contain at least five to ten times more mass than can be observed directly based on the light emitted by ordinary matter. Rubin's results were confirmed over subsequent decades, and became the first persuasive results supporting the theory of dark matter, initially proposed by Fritz Zwicky in the 1930s. This data was confirmed by radio astronomers, the discovery of the cosmic microwave background, and images of gravitational lensing. However, Rubin did not rule out alternative models to dark matter also inspired by her measurements. She and her research were discussed in the 1991 PBS series, The Astronomers.
Another area of interest for Rubin was the phenomenon of counter-rotation in galaxies. Her discovery that some gas and stars moved in the opposite direction to the rotation of the rest of the galaxy challenged the prevailing theory that all of the material in a galaxy moved in the same direction, and provided the first evidence for galaxy mergers and the process by which galaxies initially formed.
Rubin's perspective on the history of the work on galaxy movements was presented in a review, "One Hundred Years of Rotating Galaxies," for the Publications of the Astronomical Society of the Pacific in 2000. This was an adaptation of the lecture she gave in 1996 upon receiving the Gold Medal of the Royal Astronomical Society, the second woman to be so honored, 168 years after Caroline Herschel received the Medal in 1828.
In 2002, Discover magazine recognized Rubin as one of the 50 most important women in science. She continued her research and mentorship until her death in 2016.
Vera C. Rubin Observatory
On December 20, 2019, the Large Synoptic Survey Telescope was renamed the Vera C. Rubin Observatory in recognition of Rubin's contributions to the study of dark matter and her outspoken advocacy for the equal treatment and representation of women in science. The observatory is located on a mountain at Cerro Pachón, Chile, and its observations will focus on the study of dark matter and dark energy. As of 2024, the extremely agile telescope is in place and full operation is expected within the next year.
Legacy
When Rubin was elected to the National Academy of Sciences, she became the second woman astronomer in its ranks, after her colleague Margaret Burbidge. Rubin never won the Nobel Prize, though physicists such as Lisa Randall and Emily Levesque have argued that this was an oversight. She was described by Sandra Faber and Neta Bahcall as one of the astronomers who paved the way for other women in the field, as a "guiding light" for those who wished to have families and careers in astronomy. Rebecca Oppenheimer also recalled Rubin's mentorship as important to her early career.
Rubin died on the night of December 25, 2016, of complications associated with dementia. The president of the Carnegie Institution, where she performed the bulk of her work and research, called her a "national treasure."
The Carnegie Institution has created a postdoctoral research fund in Rubin's honor, and the Division on Dynamical Astronomy of the American Astronomical Society has named the Vera Rubin Early Career Prize in her honor.
Rubin was featured in an animated segment of the 13th and final episode of Cosmos: A Spacetime Odyssey. An area on Mars, Vera Rubin Ridge, is named after her and Asteroid 5726 Rubin was named in her honor.
On 6 November 2020, a satellite named after her (ÑuSat 18 or "Vera", COSPAR 2020-079K) was launched into space.
Rubin will be honored on a U.S. quarter in 2025 as part of the final year of the American Women quarters program.
On June 2, 2024, Nvidia announced that its next generation of datacenter accelerators would be named after her: Vera (the CPU) and Rubin (the GPU).
In media
The Verubin Nebula which appears in Season Three of Star Trek: Discovery is named after Rubin.
The Stuff Between the Stars: How Vera Rubin Discovered Most of the Universe is a children's book by Sandra Nickel and Aimee Sicuro.
Awards and honors
Member, National Academy of Sciences (elected 1981)
Member, Pontifical Academy of Sciences
Member, American Philosophical Society
Gold Medal of the Royal Astronomical Society (1996)
Weizmann Women & Science Award (1996)
Gruber International Cosmology Prize (2002)
Catherine Wolfe Bruce Gold Medal of the Astronomical Society of the Pacific (2003)
James Craig Watson Medal of the National Academy of Sciences (2004)
Richtmyer Memorial Award
Dickson Prize for Science
National Medal of Science (1993)
Adler Planetarium Lifetime Achievement Award
Jansky Lectureship before the National Radio Astronomy Observatory
Henry Norris Russell Lectureship, American Astronomical Society (1994)
Honorary doctorates from Harvard University, Yale University, Smith College, Grinnell College, and Princeton University (2005)
Personal life
Vera Cooper Rubin was married to Robert Joshua Rubin from 1948 until his death in 2008. She had children while undertaking her graduate studies at Cornell, and she continued to work on her research while raising their young children. All four of their children earned PhDs in the natural sciences or mathematics: David (born 1950) is a geologist with the U.S. Geological Survey; Judith Young (1952–2014) was an astronomer at the University of Massachusetts; Karl (born 1956) is a mathematician at the University of California at Irvine; and Allan (born 1960) is a geologist at Princeton University. Rubin's children recalled later in life that their mother made a life of science appear desirable and fun, which inspired them to become scientists themselves.
Motivated by her own battle to gain credibility as a woman in a field that was dominated by male astronomers, Rubin encouraged girls interested in investigating the universe to pursue their dreams. Throughout her life, she faced discouraging comments on her choice of study but persevered, as she was supported by family and colleagues. In addition to encouraging women in astronomy, Rubin was a force for greater recognition of women in the sciences and for scientific literacy.
She, alongside Burbidge, advocated for more women to be elected to the National Academy of Sciences (NAS), selected for review panels, and represented in academic searches. She said that despite her struggles with the NAS, she continued to be dissatisfied with the low number of women who were elected each year, and she further said it was "the saddest part of [her] life".
Rubin was Jewish, and she shared that she saw no conflict between science and religion. In an interview, she said: "In my own life, my science and my religion are separate. I'm Jewish, and so religion to me is a kind of moral code and a kind of history. I try to do my science in a moral way, and, I believe that, ideally, science should be looked upon as something that helps us understand our role in the universe."
Publications
Books
Articles
The following is a small selection of articles chosen by the scientists and historians of the CWP project (Contributions of 20th-Century Women to Physics) as being representative of her most important writings; Rubin published over 150 scientific papers.
The abstract of this is also generally available.
References
Further reading
External links
1928 births
2016 deaths
Jewish women scientists
Jewish astronomers
Dark matter
Cornell University alumni
Georgetown University alumni
Jewish American scientists
Members of the Pontifical Academy of Sciences
Members of the United States National Academy of Sciences
National Medal of Science laureates
Recipients of the Gold Medal of the Royal Astronomical Society
Vassar College alumni
American people of Lithuanian-Jewish descent
American people of Moldovan-Jewish descent
Scientists from Philadelphia
Members of the American Philosophical Society
20th-century American astronomers
21st-century American astronomers
20th-century American women scientists
21st-century American women scientists
Cosmologists
American astrophysicists
American women astrophysicists
American women planetary scientists | Vera Rubin | Physics,Astronomy | 3,101 |
2,861,044 | https://en.wikipedia.org/wiki/Land%20description | In surveying and property law, a land description or legal description is a written statement that delineates the boundaries of a piece of real property. In the written transfer of real property, it is universally required that the instrument of conveyance (deed) include a written description of the property.
Legal land description
Canada
In many parts of Canada the original subdivision of crown land was done by township surveys. Different sizes of townships have been used (e.g. Québec's irregularly shaped cantons and Ontario's concession townships), but all were designed to provide rectangular farm lots within a defined rural community. The survey of a township was essentially a subdivision survey, because the plan of the township was registered and the lots (sometimes called sections) were numbered. A legal description of a whole lot is therefore complete once the township and the lot within it are identified.
A legal land description in Manitoba, Saskatchewan, and Alberta would be defined by the Dominion Land Survey. For example, the village of Yarbo, Saskatchewan is located at the legal land description of SE-12-20-33-W1, which would be the South East quarter of Section 12, Township 20, Range 33, West of the first meridian.
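Because the notation packs the quarter section, section, township, range, and meridian into fixed positions, it can be unpacked mechanically. The following Python sketch is purely illustrative; the function name and the returned field labels are assumptions rather than any official schema:

# Unpack a Dominion Land Survey description such as "SE-12-20-33-W1"
def parse_dls(description):
    quarter, section, township, range_, meridian = description.split("-")
    return {
        "quarter": quarter,         # e.g. "SE" = south-east quarter section
        "section": int(section),    # section 1-36 within the township
        "township": int(township),  # township number, counted northward
        "range": int(range_),       # range number, counted from the meridian
        "meridian": meridian,       # e.g. "W1" = west of the first meridian
    }

print(parse_dls("SE-12-20-33-W1"))
# {'quarter': 'SE', 'section': 12, 'township': 20, 'range': 33, 'meridian': 'W1'}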
A legal land description in the British Columbia Fraser Valley and Lower Mainland (Metro Vancouver) is defined by land surveys based out of New Westminster. Land in the New Westminster Townsite, corresponding to present-day New Westminster, is labelled as such, while land outside the townsite is labelled as being in New Westminster District. The main subdivisions are District Lots, which represent parcel sales to settlers mostly from 1860 to 1890. District Lots are numbered from DL1 to over DL3,000 and are still represented on the cadastral maps of British Columbia. Later these lots were subdivided to form blocks and residential lots. A typical address would thus indicate a lot number, a block range, and the original District Lot from which it was subdivided.
References
External links
Cadastral Map of British Columbia showing District Lots
Mouland D.J. (1987) Land Descriptions. In: Brinker R.C., Minnick R. (eds) The Surveying Handbook. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-1188-2_30
Surveying
Real estate in Canada
Real property law | Land description | Engineering | 483 |
28,366,048 | https://en.wikipedia.org/wiki/Couchbase%20Server | Couchbase Server, originally known as Membase, is a source-available, distributed (shared-nothing architecture) multi-model NoSQL document-oriented database software package optimized for interactive applications. These applications may serve many concurrent users by creating, storing, retrieving, aggregating, manipulating and presenting data. In support of these kinds of application needs, Couchbase Server is designed to provide easy-to-scale key-value or JSON document access, with low latency and high sustained throughput. It is designed to be clustered from a single machine to very large-scale deployments spanning many machines.
Couchbase Server provided client protocol compatibility with memcached, but added disk persistence, data replication, live cluster reconfiguration, rebalancing and multitenancy with data partitioning.
Product history
Membase was developed by several leaders of the memcached project, who had founded a company, NorthScale, to develop a key-value store with the simplicity, speed, and scalability of memcached, but also the storage, persistence and querying capabilities of a database. The original membase source code was contributed by NorthScale, and project co-sponsors Zynga and Naver Corporation (then known as NHN) to a new project on membase.org in June 2010.
On February 8, 2011, the Membase project founders and Membase, Inc. announced a merger with CouchOne (a company with many of the principal players behind CouchDB) with an associated project merger. The merged company was called Couchbase, Inc. In January 2012, Couchbase released Couchbase Server 1.8.
In September 2012, Orbitz said it had changed some of its systems to use Couchbase.
In December 2012, Couchbase Server 2.0 (announced in July 2011) was released; it included a new JSON document store, indexing and querying, incremental MapReduce, and replication across data centers.
Architecture
Every Couchbase node consists of a data service, index service, query service, and cluster manager component. Starting with the 4.0 release, the three services can be distributed to run on separate nodes of the cluster if needed.
In the parlance of Eric Brewer's CAP theorem, Couchbase is normally a CP-type system, meaning it provides consistency and partition tolerance, though it can be set up as an AP system with multiple clusters.
Cluster manager
The cluster manager supervises the configuration and behavior of all the servers in a Couchbase cluster. It configures and supervises inter-node behavior like managing replication streams and re-balancing operations. It also provides metric aggregation and consensus functions for the cluster, and a RESTful cluster management interface. The cluster manager uses the Erlang programming language and the Open Telecom Platform.
Replication and fail-over
Data replication within the nodes of a cluster can be controlled with several parameters.
In December 2012, support was added for replication between different data centers.
Data manager
The data manager stores and retrieves documents in response to data operations from applications.
It asynchronously writes data to disk after acknowledging the write to the client. In version 1.7 and later, applications can optionally ensure data is written to more than one server or to disk before a write is acknowledged to the client.
Parameters define item ages that affect when data is persisted, and how max memory and migration from main-memory to disk is handled.
It supports working sets greater than a memory quota per "node" or "bucket".
External systems can subscribe to filtered data streams, supporting, for example, full text search indexing, data analytics or archiving.
Data format
A document is the most basic unit of data manipulation in Couchbase Server. Documents are stored in JSON document format with no predefined schemas. Non-JSON documents can also be stored in Couchbase Server (binary, serialized values, XML, etc.)
Object-managed cache
Couchbase Server includes a built-in multi-threaded object-managed cache that implements memcached compatible APIs such as get, set, delete, append, prepend etc.
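From an application's point of view, these operations are exposed through the client SDKs. The following sketch uses the Couchbase Python SDK with its 3.x-style API; the connection string, credentials, bucket name, and document are illustrative assumptions:

# Minimal key-value usage with the Couchbase Python SDK (3.x-style API)
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from couchbase.auth import PasswordAuthenticator

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
collection = cluster.bucket("example-bucket").default_collection()

# set/get/delete-style operations map onto upsert/get/remove
collection.upsert("user::100", {"email": "testme@example.org"})
doc = collection.get("user::100").content_as[dict]
collection.remove("user::100")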
Storage engine
Couchbase Server has a tail-append storage design that is resilient to data corruption, OOM killers, and sudden loss of power. Data is written to the data file in an append-only manner, which enables Couchbase to perform mostly sequential writes for updates and provides optimized access patterns for disk I/O.
Performance
A performance benchmark done by Altoros in 2012, compared Couchbase Server with other technologies.
Cisco Systems published a benchmark that measured the latency and throughput of Couchbase Server with a mixed workload in 2012.
Licensing and support
Couchbase Server is a packaged version of Couchbase's open-source software technology and is available in a community edition (which lacks recent bug fixes) under an Apache 2.0 license, and in an edition for commercial use.
Couchbase Server builds are available for Ubuntu, Debian, Red Hat, SUSE, Oracle Linux, Microsoft Windows and macOS operating systems.
Couchbase has supported software developers' kits for the programming languages .NET, PHP, Ruby, Python, C, Node.js, Java, Go, and Scala.
SQL++
A query language called SQL++ (formerly called N1QL) is used for manipulating the JSON data in Couchbase, just as SQL manipulates data in an RDBMS. It has SELECT, INSERT, UPDATE, DELETE, and MERGE statements to operate on JSON data.
It was initially announced in March 2015 as "SQL for documents".
The SQL++ data model is non-first normal form (N1NF) with support for nested attributes and domain-oriented normalization. The SQL++ data model is also a proper superset and generalization of the relational model.
Example
{
"email": "testme@example.org",
"friends": [
{ "name": "Pavan" },
{ "name": "Ravi" }
]
}
Like query
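For example, a hypothetical query that matches the sample document above by its email domain, assuming it is stored in a bucket named bucket:

SELECT email FROM `bucket` WHERE email LIKE '%@example.org';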
Array query
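For example, a hypothetical query that matches documents whose friends array contains an entry with a given name, using SQL++'s ANY ... SATISFIES construct, under the same bucket-name assumption:

SELECT email FROM `bucket` WHERE ANY friend IN friends SATISFIES friend.name = 'Pavan' END;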
Couchbase Mobile
Couchbase Mobile / Couchbase Lite is a mobile database providing data replication.
Couchbase Lite (originally TouchDB) provides native libraries for offline-first NoSQL databases with built-in peer-to-peer or client-server replication mechanisms.
Sync Gateway manages secure access and synchronization of data between Couchbase Lite and Couchbase Server.
Couchbase Lite added support for Vector Search in version 3.2, allowing cloud to edge support for vector search in mobile applications.
Uses
Couchbase began as an evolution of Memcached, a high-speed data cache, and can be used as a drop-in replacement for Memcached, providing high availability for memcached applications without code changes.
Couchbase is used to support applications where a flexible data model, easy scalability, and consistent high performance are required, such as tracking real-time user activity or providing a store of user preferences or online applications.
Couchbase Mobile, which stores data locally on devices (usually mobile devices) is used to create “offline-first” applications that can operate when a device is not connected to a network and synchronize with Couchbase Server once a network connection is re-established.
The Catalyst Lab at Northwestern University uses Couchbase Mobile to support the Evo application, a healthy lifestyle research program where data is used to help participants improve dietary quality, physical activity, stress, or sleep.
Amadeus uses Couchbase with Apache Kafka to support their “open, simple, and agile” strategy to consume and integrate data on loyalty programs for airline and other travel partners. High scalability is needed when disruptive travel events create a need to recognize and compensate high value customers.
Starting in 2012, it played a role in LinkedIn's caching systems, including backend caching for recruiter and jobs products, counters for security defense mechanisms, and internal applications.
Alternatives
For caching, Couchbase competes with Memcached and Redis.
For document databases, Couchbase competes with other document-oriented database systems. It is commonly compared with MongoDB, Amazon DynamoDB, Oracle RDBMS, DataStax, Google Bigtable, MariaDB, IBM Cloudant, Redis Enterprise, SingleStore, and MarkLogic.
Bibliography
Vemulapalli, Sitaram; et al. (May 10, 2018), A Guide to N1QL features in Couchbase 5.5: Special Edition, Self-published, p. 112
Chamberlin, Don (October 19, 2018), SQL++ For SQL Users: A Tutorial, Couchbase
References
External links
Distributed computing architecture
NoSQL
Cross-platform software
Structured storage
Client-server database management systems
Database-related software for Linux
Applications of distributed computing
Databases
Data management
Distributed data stores
Document-oriented databases
Software using the Business Source License | Couchbase Server | Technology | 1,845 |
62,149 | https://en.wikipedia.org/wiki/Custard | Custard is a variety of culinary preparations based on sweetened milk, cheese, or cream cooked with egg or egg yolk to thicken it, and sometimes also flour, corn starch, or gelatin. Depending on the recipe, custard may vary in consistency from a thin pouring sauce () to the thick pastry cream () used to fill éclairs. The most common custards are used in custard desserts or dessert sauces and typically include sugar and vanilla; however, savory custards are also found, e.g., in quiche.
Preparation
Custard is usually cooked in a double boiler (bain-marie), or heated very gently in a saucepan on a stove, though custard can also be steamed, baked in the oven with or without a water bath, or even cooked in a pressure cooker. Custard preparation is a delicate operation because a temperature increase of only a few degrees leads to overcooking and curdling. Generally, a fully cooked custard should not exceed about 80 °C (176 °F); it begins setting at around 70 °C (158 °F). A bain-marie water bath slows heat transfer and makes it easier to remove the custard from the oven before it curdles. Adding a small amount of cornflour (U.S. corn starch) to the egg-sugar mixture stabilises the resulting custard, allowing it to be cooked in a single pan as well as in a double boiler. A sous-vide water bath may be used to precisely control temperature.
Variations
While custard may refer to a wide variety of thickened dishes, technically (and in French cookery) the word custard (crème, or more precisely crème moulée) refers only to an egg-thickened custard.
When starch is added, the result is called 'pastry cream' (crème pâtissière) or confectioners' custard, made with a combination of milk or cream, egg yolks, fine sugar, flour or some other starch, and usually a flavoring such as vanilla, chocolate, or lemon. Crème pâtissière is a key ingredient in many French desserts, including mille-feuille (or Napoleons) and filled tarts. It is also used in Italian pastry and sometimes in Boston cream pie. The thickening of the custard is caused by the combination of egg and starch. Corn flour or flour thickens at around 100 °C (212 °F), and as such many recipes instruct that the pastry cream be boiled. In a traditional custard such as a crème anglaise, where eggs are used alone as a thickener, boiling results in the over-cooking and subsequent curdling of the custard; however, in a pastry cream, starch prevents this. Once cooled, the amount of starch in pastry cream sets the cream and requires it to be beaten or whipped before use.
When gelatin is added, it is known as crème anglaise collée. When gelatin is added and whipped cream is folded in, and it sets in a mold, it is bavarois. When starch is used alone as a thickener (without eggs), the result is a blancmange.
In the United Kingdom, custard has various traditional recipes some thickened principally with cornflour (cornstarch) rather than the egg component, others involving regular flour; see custard powder.
After the custard has thickened, it may be mixed with other ingredients: mixed with stiffly beaten egg whites and gelatin, it is chiboust cream; mixed with whipped cream, it is crème légère. Beating in softened butter produces German buttercream or crème mousseline.
A quiche is a savoury custard tart. Some kinds of timbale or vegetable loaf are made of a custard base mixed with chopped savoury ingredients. Custard royale is a thick custard cut into decorative shapes and used to garnish soup, stew or broth. In German, it is known as Eierstich and is used as a garnish in German Wedding Soup (Hochzeitssuppe). Chawanmushi is a Japanese savoury custard, steamed and served in a small bowl or on a saucer. Chinese steamed egg is a similar but larger savoury egg dish. Bougatsa is a Greek breakfast pastry whose sweet version consists of semolina custard filling between layers of phyllo.
Custard may also be used as a top layer in gratins, such as the South African bobotie and many Balkan versions of moussaka.
In Peru, leche asada ("baked milk") is custard baked in individual molds. It is considered a restaurant dish.
In French cuisine
French cuisine has several named variations on custard:
Crème anglaise is a light custard made with eggs, sugar, milk, and vanilla (with the possible addition of starch), with other flavoring agents as desired
With cream instead of milk, and more sugar, it is the basis of crème brûlée
With egg yolks and heavy cream, it is the basis of ice cream
With egg yolks and whipped cream, and stabilised with gelatin, it is the basis of Bavarian cream
Thickened with butter, chocolate, or gelatin, it is a popular basis for a crémeux
Crème pâtissière (pastry cream) is similar to crème anglaise, but with a thickening agent such as cornstarch or flour
With added flavoring or fresh fruit, it is the basis of crème plombières
Crème Saint-Honoré is crème pâtissière enriched with whipped egg whites
Crème chiboust is similar to crème Saint-Honoré, but stabilised with gelatin
Crème diplomate and crème légère are variations of crème pâtissière enriched with whipped cream
Crème mousseline is a variation of crème pâtissière enriched with butter
Frangipane is crème pâtissière mixed with powdered macarons or almond powder
Uses
Recipes involving sweet custard are listed in the custard dessert category, and include:
Banana custard
Bavarian cream
Boston cream pie
Bougatsa
Chiboust cream
Cream pie
Crème brûlée
Crème caramel
Cremeschnitte
Custard tart
Danish pastry
Egg tart
Eggnog
English trifle
Flan
Floating island
Frangipane, with almonds
Frozen custard
Fruit Salad
Galaktoboureko
Manchester tart
Muhallebi
Natillas
Pastel de nata
Pudding
Taiyaki
Vanilla slice
Vla
Zabaione
History
Custards baked in pastry (custard tarts) were very popular in the Middle Ages, and are the origin of the English word 'custard': the French term croustade originally referred to the crust of a tart, and is derived from the Italian word crostata, and ultimately from the Latin crusta, meaning 'crust'.
Examples include Crustardes of flessh and Crustade, in the 14th century English collection The Forme of Cury. These recipes include solid ingredients such as meat, fish, and fruit bound by the custard. Stirred custards cooked in pots are also found under the names Creme Boylede and Creme boiled. Some custards especially in the Elizabethan era used marigold (calendula) to give the custard color.
In modern times, the name 'custard' is sometimes applied to starch-thickened preparations like blancmange and Bird's Custard powder.
Chemistry
Stirred custard is thickened by coagulation of egg protein, while the same process gives baked custard its gel structure. The type of milk used also impacts the result. Most important to a successful stirred custard is avoiding excessive heat, which causes over-coagulation and syneresis, resulting in a curdled custard.
Eggs contain the proteins necessary for the gel structure to form, and emulsifiers to maintain the structure. Egg yolk also contains enzymes like amylase, which can break down added starch. This enzyme activity contributes to the overall thinning of custard in the mouth. Egg yolk lecithin also helps to maintain the milk-egg interface. The proteins in egg whites begin to set at around 60 °C (140 °F).
Starch is sometimes added to custard to prevent premature curdling. The starch acts as a heat buffer in the mixture: as they hydrate, they absorb heat and help maintain a constant rate of heat transfer. Starches also make for a smoother texture and thicker mouth feel.
If the mixture pH is 9 or higher, the gel is too hard; if it is below 5, the gel structure has difficulty forming because protonation prevents the formation of covalent bonds.
Physical-chemical properties
Cooked (set) custard is a weak gel, viscous, and thixotropic; while it does become easier to stir the more it is manipulated, it does not, unlike many other thixotropic liquids, recover its lost viscosity over time. On the other hand, a suspension of uncooked imitation custard powder (starch) in water, with the proper proportions, has the opposite rheological property: it is negative thixotropic, or dilatant, allowing the demonstration of "walking on custard".
See also
List of desserts
List of custard desserts
Custard cream
Bird's Custard – brand of imitation custard
Eggnog – sweetened dairy-based beverage
Pudding – dessert or savory dish
References
External links
British desserts
Dairy products
English cuisine
Food ingredients
Steamed foods
American desserts
Types of food
Creamy dishes | Custard | Technology | 2,043 |
338,129 | https://en.wikipedia.org/wiki/Plateau%27s%20problem | In mathematics, Plateau's problem is to show the existence of a minimal surface with a given boundary, a problem raised by Joseph-Louis Lagrange in 1760. However, it is named after Joseph Plateau who experimented with soap films. The problem is considered part of the calculus of variations. The existence and regularity problems are part of geometric measure theory.
History
Various specialized forms of the problem were solved, but it was only in 1930 that general solutions were found in the context of mappings (immersions) independently by Jesse Douglas and Tibor Radó. Their methods were quite different; Radó's work built on the previous work of René Garnier and held only for rectifiable simple closed curves, whereas Douglas used completely new ideas with his result holding for an arbitrary simple closed curve. Both relied on setting up minimization problems; Douglas minimized the now-named Douglas integral while Radó minimized the "energy". Douglas went on to be awarded the Fields Medal in 1936 for his efforts.
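For reference, the boundary functional that Douglas minimized can be written in closed form. With g a parametrization of the given boundary curve by the unit circle, the Douglas integral is (up to the choice of normalizing constant, which varies between sources):

\[ A(g) \;=\; \frac{1}{16\pi} \int_0^{2\pi}\!\!\int_0^{2\pi} \frac{\lvert g(\theta)-g(\varphi)\rvert^2}{\sin^2\!\bigl(\tfrac{\theta-\varphi}{2}\bigr)} \, d\theta \, d\varphi. \]

Its value equals the Dirichlet energy of the harmonic extension of g to the unit disc, which is why its minimizers parametrize minimal surfaces.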
In higher dimensions
The extension of the problem to higher dimensions (that is, for k-dimensional surfaces in n-dimensional space) turns out to be much more difficult to study. Moreover, while the solutions to the original problem are always regular, it turns out that the solutions to the extended problem may have singularities if k ≤ n − 2. In the hypersurface case, where k = n − 1, singularities occur only for n ≥ 8. An example of such a singular solution of the Plateau problem is the Simons cone, a cone over S³ × S³ in R⁸ that was first described by Jim Simons and was shown to be an area minimizer by Bombieri, De Giorgi and Giusti. To solve the extended problem in certain special cases, the theory of perimeters (De Giorgi) for codimension 1 and the theory of rectifiable currents (Federer and Fleming) for higher codimension have been developed. The theory guarantees existence of codimension 1 solutions that are smooth away from a closed set of Hausdorff dimension n − 8. In the case of higher codimension, Almgren proved existence of solutions with singular set of dimension at most k − 2 in his regularity theorem. S. X. Chang, a student of Almgren, built upon Almgren's work to show that the singularities of 2-dimensional area-minimizing integral currents (in arbitrary codimension) form a finite discrete set.
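Concretely, the Simons cone admits a simple description; in the notation above it is

\[ C \;=\; \bigl\{ (x, y) \in \mathbb{R}^4 \times \mathbb{R}^4 : \lvert x \rvert = \lvert y \rvert \bigr\} \subset \mathbb{R}^8, \]

a seven-dimensional area-minimizing hypersurface whose only singularity is the isolated one at the origin.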
The axiomatic approach of Jenny Harrison and Harrison Pugh treats a wide variety of special cases. In particular, they solve the anisotropic Plateau problem in arbitrary dimension and codimension for any collection of rectifiable sets satisfying a combination of general homological, cohomological or homotopical spanning conditions. A different proof of Harrison-Pugh's results were obtained by Camillo De Lellis, Francesco Ghiraldin and Francesco Maggi.
Physical applications
Physical soap films are more accurately modeled by the -minimal sets of Frederick Almgren, but the lack of a compactness theorem makes it difficult to prove the existence of an area minimizer. In this context, a persistent open question has been the existence of a least-area soap film. Ernst Robert Reifenberg solved such a "universal Plateau's problem" for boundaries which are homeomorphic to single embedded spheres.
See also
Double Bubble conjecture
Dirichlet principle
Plateau's laws
Stretched grid method
Bernstein's problem
References
Calculus of variations
Minimal surfaces
Mathematical problems | Plateau's problem | Chemistry,Mathematics | 703 |
216,967 | https://en.wikipedia.org/wiki/Distributed%20Management%20Task%20Force | Distributed Management Task Force (DMTF) is a 501(c)(6) nonprofit industry standards organization that creates open manageability standards spanning diverse emerging and traditional IT infrastructures including cloud, virtualization, network, servers and storage. Member companies and alliance partners collaborate on standards to improve interoperable management of information technologies.
Based in Portland, Oregon, the DMTF is led by a board of directors representing technology companies including: Broadcom Inc., Cisco, Dell Technologies, Hewlett Packard Enterprise, Intel Corporation, Lenovo, Positivo Tecnologia S.A., and Verizon.
History
The organization was founded in 1992 as the Desktop Management Task Force; its first standard was the now-legacy Desktop Management Interface (DMI). As the organization evolved to address distributed management through additional standards, such as the Common Information Model (CIM), it changed its name to the Distributed Management Task Force in 1999, and it is now known simply as DMTF.
The DMTF continues to address converged, hybrid IT and the Software Defined Data Center (SDDC) with its latest specifications, such as the Redfish standard, SMBIOS, SPDM, and PMCI standards.
Standards
DMTF standards include:
CADF - Cloud Auditing Data Federation
CIMI - Cloud Infrastructure Management Interface
CIM - Common Information Model
CMDBf - Configuration Management Database Federation
DASH - Desktop and Mobile Architecture for System Hardware
MCTP - Management Component Transport Protocol Including NVMe-MI, I2C/SMBus and PCIe Bindings
NC-SI - Network Controller Sideband Interface
OVF - Open Virtualization Format
PLDM - Platform Level Data Model Including Firmware Update, Redfish Device Enablement (RDE)
Redfish – Including Protocols, Schema, Host Interface, Profiles
SMASH - Systems Management Architecture for Server Hardware
System Management BIOS (SMBIOS) – Standardized Host Management Information
SPDM - Security Protocol and Data Model
See also
Cloud Infrastructure Management Interface
Common Information Model (computing)
Desktop and mobile Architecture for System Hardware
Management Component Transport Protocol
NC-SI
Open Virtualization Format
Redfish (specification)
Systems Management Architecture for Server Hardware
SMBIOS
References
External links
Technology trade associations
DMTF standards
Information technology organizations
Network management
Standards organizations in the United States
Working groups
Task forces | Distributed Management Task Force | Technology,Engineering | 574 |
29,474,849 | https://en.wikipedia.org/wiki/Bredolab%20botnet | The Bredolab botnet, also known by its alias Oficla, was a Russian botnet mostly involved in viral e-mail spam. Before the botnet was eventually dismantled in November 2010 through the seizure of its command and control servers, it was estimated to consist of millions of zombie computers.
The countries and regions most affected by the botnet were Russia itself, Uzbekistan, the US, Europe, India, Vietnam, and the Philippines.
Operations
Though the earliest reports surrounding the Bredolab botnet originate from May 2009 (when the first malware samples of the Bredolab trojan horse were found), the botnet itself did not rise to prominence until August 2009, when there was a major surge in its size. Bredolab's main form of propagation was sending malicious e-mails that included malware attachments which would infect a computer when opened, effectively turning the computer into another zombie controlled by the botnet. At its peak, the botnet was capable of sending 3.6 billion infected e-mails every day. The other main form of propagation was the use of drive-by downloads, a method which exploits security vulnerabilities in software. This method allowed the botnet to bypass software protection in order to facilitate downloads without the user being aware of them.
The main income of the botnet was generated through leasing parts of the botnet to third parties who could subsequently use these infected systems for their own purposes, and security researchers estimate that the owner of the botnet made up to $139,000 a month from botnet related activities. Due to the rental business strategy, the payload of Bredolab has been very diverse, and ranged from scareware to malware and e-mail spam.
Dismantling and aftermath
On 25 October 2010, a team of Dutch law enforcement agents seized control of 143 servers of the Bredolab botnet (including three command-and-control servers, one database server, and several management servers) hosted in a LeaseWeb datacenter, effectively removing the botnet herder's ability to control the botnet centrally. In an attempt to regain control of his botnet, the botnet herder utilized 220,000 computers which were still under his control to unleash a DDoS attack on LeaseWeb servers, though these attempts were ultimately in vain. After taking control of the botnet, the law enforcement team utilized the botnet itself to send a message to owners of infected computers, stating that their computer was part of the botnet.
Subsequently, Armenian law enforcement officers arrested an Armenian citizen, Georgy Avanesov, on the basis of being the suspected mastermind behind the botnet. The suspect denied any such involvement in the botnet. He was sentenced to four years in prison in May 2012.
While the seizure of the command and control servers severely disrupted the botnet's ability to operate, the botnet itself is still partially intact, with command and control servers persisting in Russia and Kazakhstan. Security firm FireEye believes that a secondary group of botnet herders has taken over the remaining part of the botnet for their own purposes, possibly a previous client who reverse engineered parts of the original botnet creator's code. Even so, the group noted that the botnet's size and capacity has been severely reduced by the law enforcement intervention.
References
Web security exploits
Distributed computing projects
Spamming
Botnets
Cybercrime in India | Bredolab botnet | Technology,Engineering | 696 |
1,800,046 | https://en.wikipedia.org/wiki/Rho%20Cassiopeiae | Rho Cassiopeiae (; ρ Cas, ρ Cassiopeiae) is a yellow hypergiant star in the constellation Cassiopeia. It is about from Earth, yet can still be seen by the naked eye as it is over 300,000 times brighter than the Sun. On average it has an absolute magnitude of −9.5, making it visually one of the most luminous stars known. Recently imaged and measured by the CHARA array in 2024, its diameter measures between 564 and 700 times that of the Sun, approximately , or 2.6 to 3.3 times the size of Earth's orbit.
Louisa Wells discovered that the star's brightness varies, and that discovery was published in 1901. Rho Cassiopeiae is a single star, and is categorized as a semiregular variable. As a yellow hypergiant, it is one of the rarest types of stars. Only a few dozen are known in the Milky Way, but it is not the only one in its constellation which also contains V509 Cassiopeiae.
Observation
Rho Cassiopeiae is the second brightest yellow hypergiant in the sky, the brightest being V382 Carinae, although Rho Cassiopeiae is mostly visible only in the northern hemisphere and V382 Carinae mostly only in the southern hemisphere.
The Bayer designation for this star was established in 1603 as part of the Uranometria, a star catalog produced by Johann Bayer, who placed this star in the sixth magnitude class. The star catalog by John Flamsteed published in 1712, which orders the stars in each constellation by their right ascension, gave this star the Flamsteed designation 7 Cassiopeiae.
Rho Cas was first described as variable in 1901. It was classified only as "pec." with a small but definite range of variation. Its nature continued to be unclear during the deep visual minimum in 1946, although it was presumed to be related to the detection of an expanding shell around the star. The spectrum developed lower excitation features described as typical of an M star rather than the previous F8 class. The nature of Rho Cas was eventually clarified as a massive luminous unstable star, pulsating and losing mass, and occasionally becoming obscured by strong bouts of mass loss.
Rho Cas usually has an apparent magnitude near 4.5, but in 1946 it unexpectedly dimmed to 6th magnitude and cooled by over 3,000 Kelvin, before returning to its previous brightness. A similar eruption was recorded in 1893, suggesting that it undergoes these eruptions approximately once every 50 years. This happened again in 2000–2001, when it was observed by the William Herschel Telescope.
In 2013, a shell ejection produced dramatic spectral changes and a drop of about half a magnitude at visual wavelengths. Weak emission lines of metals and doubled H-α absorption lines were detected in late 2014, and unusual tripled absorption lines in 2017. The brightness peaked at magnitude 4.3 before fading to 5th magnitude. In 2018 it brightened again to magnitude 4.2.
The original Hipparcos parallax publication estimated Rho Cas's parallax at around 0.28 mas, which would have corresponded to a distance of around 10,000 light-years and would have made Rho Cas among the farthest stars visible to the naked eye. However, more recent publications place Rho Cas at a shorter distance.
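The parallax-to-distance conversion behind these figures is elementary, d [parsecs] = 1 / p [arcseconds]; a quick check in Python, using only values quoted in this article:

# Distance from parallax: d [parsecs] = 1 / p [arcseconds].
p_mas = 0.28              # original Hipparcos parallax, in milliarcseconds
d_pc = 1000.0 / p_mas     # about 3,570 parsecs
d_ly = d_pc * 3.2616      # about 11,600 light-years, loosely the "around
                          # 10,000 light-years" quoted above
print(round(d_pc), round(d_ly))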
Properties
Rho Cassiopeiae is one of the most luminous yellow stars known. It is close to the Eddington luminosity limit and normally loses mass at hundreds of millions of times the rate of the solar wind. Much of the time it has a temperature over 7,000 K, a radius of several hundred solar radii, and pulsates irregularly, producing small changes in brightness. Approximately every 50 years it undergoes a larger outburst and blows off a substantial fraction of its atmosphere, causing the temperature to drop by around 1,500 K and the brightness to drop by up to 1.5 magnitudes. In 2000–2001 the mass-loss rate jumped by several orders of magnitude, ejecting in total approximately 3% of a solar mass or 10,000 Earth masses. The luminosity remains roughly constant during the outbursts, at a few hundred thousand times that of the Sun, but the radiation output shifts towards the infrared.
In 2024, Rho Cassiopeiae was imaged through the CHARA array's H and K-band filters, using interferometry. The results gave an angular diameter of 2.08 ± 0.01 milliarcseconds. At the two adopted distances of 2.5 and 3.1 kiloparsecs (kpc), this gives a physical photospheric radius of 564 ± 67 or 700 ± 112 solar radii, comparable to Betelgeuse.
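The radius figures follow directly from the measured angular diameter and the adopted distances; a small Python check (the constants are standard values, and the inputs are the ones quoted above):

import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # milliarcseconds to radians
KM_PER_PC = 3.0857e13                        # kilometres per parsec
SUN_RADIUS_KM = 6.957e5                      # solar radius in kilometres

theta = 2.08 * MAS_TO_RAD                    # measured angular diameter
for d_kpc in (2.5, 3.1):                     # the two adopted distances
    d_km = d_kpc * 1000 * KM_PER_PC
    r_sun = theta * d_km / 2 / SUN_RADIUS_KM # linear radius in solar radii
    print(f"{d_kpc} kpc -> {r_sun:.0f} solar radii")
# prints roughly 559 and 693, in good agreement with the 564 and 700 above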
Surface abundances of most heavy elements on Rho Cas are enhanced relative to the Sun, but carbon and oxygen are depleted. This is expected for a massive star where hydrogen fusion takes place predominantly via the CNO cycle. In addition to the expected helium and nitrogen convected to the surface, sodium is strongly enhanced, indicating that the star had experienced a dredge-up while in a red supergiant stage. Therefore, it is expected that Rho Cas is now evolving towards hotter temperatures. It is currently core helium burning through the triple alpha process.
The relatively low mass and high luminosity of a post-red supergiant star is a source of instability, pushing it close to the Eddington Limit. However, yellow hypergiants lie in a temperature range where opacity variations in zones of partial ionisation of hydrogen and helium cause pulsations, similar to the cause of Cepheid variable pulsations. In hypergiants, these pulsations are generally irregular and small, but combined with the overall instability of the outer layers of the star they can result in larger outbursts. This may all be part of an evolutionary trend towards hotter temperatures through the loss of the star's atmosphere.
Naming
ρ Cassiopeiae is a member of the Chinese constellation Flying Serpent (), in the Encampment mansion. In order, the 22 member stars are α and 4 Lacertae, π2 and π1 Cygni, stars 5 and 6, HD 206267, 13 and ε Cephei, β Lacertae, σ, ρ, τ, and AR Cassiopeiae, 9 Lacertae, 3, 7, 8, λ, ψ, κ, and ι Andromedae. Consequently, the Chinese name for ρ Cassiopeiae derives from its position in this asterism.
See also
Betelgeuse and VY Canis Majoris, similar cool massive stars that have undergone one or more dimming events
RW Cephei, a similar star
WOH G64
References
External links
Rho Cassiopeiae fact sheet
David Darling site
Big and Giant Stars: Rho Cassiopeiae
Cassiopeia (constellation)
Cassiopeiae, Rho
Cassiopeiae, 07
G-type hypergiants
Semiregular variable stars
117863
224014
9045
BD+56 3111 | Rho Cassiopeiae | Astronomy | 1,458 |
1,758,910 | https://en.wikipedia.org/wiki/James%20Walker%20%28engineer%29 | James Walker (14 September 1781 – 8 October 1862) was an influential British civil engineer and contractor.
Life
Born in Law Wynd in Falkirk, the eldest of five children of John Walker and his wife Margaret, James was educated at the local school and was sent to Glasgow University in October 1794, aged 13. He studied Latin and Greek for two years, and logic during his third year. During his final two years he studied natural philosophy and mathematics, taking the first prize.
He returned to Falkirk in May 1799, aged 18, and his family discussed a career in business or law. But, by chance, in the summer of 1800, he was asked to accompany his ill brother-in-law on a sea journey to London. Once there, he visited his uncle Ralph Walker in Blackwall, intending to return to Scotland after a week. However, Ralph discussed his work at the West India Docks, and was so impressed by his young nephew's grasp of engineering that he immediately took him on as his apprentice.
Around 1800 they worked on the design and construction of London's West India and East India Docks. At the age of 21 he took on his first engineering work in his own right: the construction of Commercial Road in London, connecting the West India Docks to the warehouses of the City. Later, he worked on the Surrey Commercial Docks from about 1810 onwards, remaining as engineer to the Surrey Commercial Dock Company until his death in 1862.
In 1821 Walker built his first lighthouse, the West Usk Lighthouse, near Newport, South Wales. He went on to build another 21 lighthouses.
Walker was the senior partner of the consulting engineering firm of Messrs. Walker and Burges (of Limehouse), Alfred Burges having first become his pupil in 1811 and risen to partner in 1829. In 1832 their offices moved to 44 Parliament Street, Westminster (which lies at southern end of Whitehall) and then to 23 George Street. In 1853 he promoted James Cooper, one of his assistants, to the partnership with the firm then being known as Messrs. Walker, Burges & Cooper.
Walker succeeded his associate Thomas Telford as President of the Institution of Civil Engineers, serving from 1834 to 1845. One of his first major roles as President was to oversee the choice of three new harbours to serve Edinburgh: a major extension to Leith Docks; a new harbour at Trinity; or a new harbour at Granton. The choice resulted in the building of Granton Harbour.
He was also chief engineer within Trinity House, hence his considerable involvement with coastal engineering and lighthouses. He was conferred with Honorary Membership of the Institution of Engineers and Shipbuilders in Scotland in 1857.
He died at 23 Great George Street in Westminster, London on 8 October 1862.
He is buried beneath a humble gravestone in St Johns churchyard in Edinburgh against a retaining wall on one of the southern terraces.
Projects and other work
Walker worked on various engineering projects, including:
Greenland Dock, London (c. 1808 – c. 1862)
Vauxhall Bridge, London (1816, since demolished)
Poplar Workhouse, London (c. 1815 - c. 1817), designer
West Usk Lighthouse, near Newport, South Wales
Survey for the Leeds and Selby Railway (1829)
Brunswick Wharf Warehouse, Blackwall, London (1832–34), designer for the East India Dock Company; built by the contractor Messrs. Horne & Gates of Poplar
Survey for the Leipzig–Dresden Railway (1835)
Hull and Selby Railway (survey and consulting engineer, 1834, 1836–40).
Start Point Lighthouse, Devon (1836)
Maplin Sands Lighthouse (1838)
Advice on alignment of Herefordshire and Gloucestershire Canal (1838)
Victoria Viaduct (or Bridge) on the Durham Junction Railway (1838)
Improvements to Aberdeen Harbour (1838)
South Bishop Lighthouse (1839)
Wolf Rock beacon and lighthouse (1840–1862)
Coquet Lighthouse (1841)
Plans for River Thames embankments, later known as 'Walker's lines,' upon which the present Thames and Victoria Embankments are largely based (c. 1842)
South Foreland Lighthouse rebuilt with a taller tower (1841-1842)
Trevose Head Lighthouse (1844–1847)
River Severn and South Wales Railway (1845), a report that blocked Brunel's plans for railway bridges across the River Severn.
Gunfleet Lighthouse, off Frinton-on-Sea, Essex (1850)
Victoria Bridge, Glasgow (1851–1854)
Design of the East Bute Dock, Cardiff (1855)
Whitby Lighthouse - the twin lights of Whitby North and Whitby South lighthouses, near Ling Hill, High Whitby (1857–58)
Bishop Rock Lighthouse (1858)
Needles Lighthouse, Isle of Wight (1859)
Completion of the Caledonian Canal (1838–1848)
Alderney breakwater, Channel Islands (1847) and harbour (1862)
St Catherine's Harbour, Jersey, Channel Islands (1847–1856)
Improvements to navigation in the River Tyne (1853–1861)
Houses of Parliament, consulting engineer for the Clock Tower (also known as Big Ben) as well as the Victoria Tower and Central Tower (1836–1859), cofferdam for the riverside foundations and terrace (1837–49)
Walker was also involved in designing a dock harbour in Hamburg (1845, with William Lindley and Heinrich Hübbe). He was also involved in the Liverpool and Manchester Railway, preparing a report on the merits of stationary and locomotive engines along with other notable engineers of the day. He was also for a long period consulting engineer to the Board of Admiralty.
Memorial
A memorial to Walker was commissioned by the Institution of Civil Engineers to stand at Greenland Dock and was unveiled in 1990.
References
Obituaries
Proceedings of the Royal Society of London, Volume 12, Royal Society (Great Britain), 1863, "Obituary Notices of Fellows Deceased", p. lxiv–lxvi, google books link
1781 births
1862 deaths
People from Falkirk
Scottish company founders
Scottish civil engineers
19th-century British engineers
Civil engineering contractors
Lighthouse builders
Presidents of the Institution of Civil Engineers
Presidents of the Smeatonian Society of Civil Engineers
Fellows of the Royal Society of Edinburgh
Alumni of the University of Glasgow
Burials at St John's, Edinburgh
Fellows of the Royal Society
Committee members of the Society for the Diffusion of Useful Knowledge
19th-century Scottish businesspeople
Fellows of the Royal Society of Arts | James Walker (engineer) | Engineering | 1,297 |
31,643,910 | https://en.wikipedia.org/wiki/Split%20interval | In topology, the split interval, or double arrow space, is a topological space that results from splitting each point in a closed interval into two adjacent points and giving the resulting ordered set the order topology. It satisfies various interesting properties and serves as a useful counterexample in general topology.
Definition
The split interval can be defined as the lexicographic product [0,1] × {0,1} equipped with the order topology. Equivalently, the space can be constructed by taking the closed interval [0,1] with its usual order, splitting each point a into two adjacent points a⁻ < a⁺, and giving the resulting linearly ordered set the order topology. The space is also known as the double arrow space, Alexandrov double arrow space or two arrows space.
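In symbols, the definition above takes the underlying set [0,1] × {0,1} with the lexicographic order

\[ (x, i) < (y, j) \iff x < y \ \text{ or } \ \bigl( x = y \ \text{ and } \ i < j \bigr), \]

and equips it with the order topology, generated by the open rays determined by this order.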
The space above is a linearly ordered topological space with two isolated points, (0,0) and (1,1), in the lexicographic product. Some authors take as definition the same space without the two isolated points. (In the point-splitting description this corresponds to not splitting the endpoints 0 and 1 of the interval.) The resulting space has essentially the same properties.
The double arrow space is a subspace of the lexicographically ordered unit square. If we ignore the isolated points, a base for the double arrow space topology consists of all sets of the form [(a,1),(b,0)] with a < b. (In the point-splitting description these are the clopen intervals of the form [a⁺,b⁻], which are simultaneously closed intervals and open intervals.) The lower subspace [0,1] × {0} is homeomorphic to the Sorgenfrey line with half-open intervals to the left as a base for the topology, and the upper subspace [0,1] × {1} is homeomorphic to the Sorgenfrey line with half-open intervals to the right as a base, like two parallel arrows going in opposite directions, hence the name.
Properties
The split interval is a zero-dimensional compact Hausdorff space. It is a linearly ordered topological space that is separable but not second countable, hence not metrizable; its metrizable subspaces are all countable.
It is hereditarily Lindelöf, hereditarily separable, and perfectly normal (T6). But the product of the space with itself is not even hereditarily normal (T5), as it contains a copy of the Sorgenfrey plane, which is not normal.
All compact, separable ordered spaces are order-isomorphic to a subset of the split interval.
See also
Notes
References
Arhangel'skii, A.V. and Sklyarenko, E.G.., General Topology II, Springer-Verlag, New York (1996)
Engelking, Ryszard, General Topology, Heldermann Verlag Berlin, 1989.
Topological spaces | Split interval | Mathematics | 537 |
38,574,264 | https://en.wikipedia.org/wiki/Link-Systems%20International | Link-Systems International, Inc. (LSI) is a privately held American distance-learning software corporation based in Tampa, Florida. The company is best known for NetTutor, its online tutoring service; WorldWideTestbank, its platform for authoring online content; and its WorldWideWhiteboard education-focused online collaboration platform.
History
Link-Systems International was founded in 1995, then incorporated in the state of Florida on February 27, 1996. It started as an internet content authoring and repurposing service. For several years, LSI performed the conversion of scholarly content such as journals from earlier print-oriented formats to a format for Web display. Link-Systems International, Inc. ranked 3396 on the Inc. 5000 list in 2014.
WorldWideWhiteboard (formerly NetTutor): 1997
In 1997, LSI released "NetTutor", a Java-based application. The online educational interface allowed participants and a "leader" to collaborate in real time. When the leader drew or placed text, figures, or symbols, such as square-root and integral signs, on a virtual whiteboard, content would be simultaneously displayed on all users' screens. Other participants can "raise their hands" and be recognized by the leader, who can grant them access to draw or type on the whiteboard area. A secondary, instant messaging-style text area allows text-only communication. LSI leased this interface, mainly to schools and textbook publishers, renaming the platform WorldWideWhiteboard in 2001, in order to distinguish it from its online tutoring service, which retained the NetTutor name.
NetTutor: 1998
LSI launched the NetTutor online tutoring service in 1998. NetTutor uses the WorldWideWhiteboard and a proprietary queuing system. Students who log in gain access to professional tutors who provide assistance in specific subject areas. NetTutor employs its tutors on a full-time basis at LSI headquarters in Tampa. (Other online tutoring services employ part-time tutors and allow them to tutor from any physical location.)
Another distinction between NetTutor and other services is that a set of written guidelines is drawn up in consultation with client institutions and amended as needed. Such guidelines reinforce tutoring best practices, such as that tutors should prefer the Socratic method and scaffolding over simply answering questions. However, they also can be used to instruct tutors to adopt special practices preferred by the client. Some examples of tutoring guidelines are available on the Web.
Content authoring and the WorldWideTestbank: 2002
LSI developed the WorldWideTestbank platform for creating a wide variety of problem-types. True-false, multiple choice, and free input are the traditional ones, but the WorldWideTestbank also supports graphical and manipulative problem-types. The interface development environment of the WorldWideTestbank supports authors' programming in the platform's own scripting language and creates a "template" for each question. Depending on how it is programmed, a single template can yield an indefinite number of problems for a user to practice by inserting randomly selected numbers and phrases in each instance of the template. The WorldWideTestbank also enables an instructor to specify how close the student has to be to an exact correct answer to be "correct." For instance, in numerical answers, students may be required to express answers with a specific number of significant digits.
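The WorldWideTestbank scripting language itself is proprietary, but the template idea can be sketched in ordinary Python (all names and the tolerance rule below are illustrative, not LSI's actual implementation):

import random

def area_template():
    # One "template": random numbers make each generated instance different.
    w, h = random.randint(2, 12), random.randint(2, 12)
    question = f"A rectangle is {w} m wide and {h} m tall. What is its area?"
    return question, w * h

def is_correct(submitted, exact, rel_tol=0.005):
    # Grade with a tolerance, analogous to accepting any answer "close
    # enough" to exact, e.g. when a fixed number of significant digits
    # is required.
    return abs(submitted - exact) <= rel_tol * abs(exact)

question, exact = area_template()
print(question)
print(is_correct(exact, exact))         # True
print(is_correct(exact * 1.02, exact))  # False: off by 2%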
IVS and MyAcademicWorkshop: 2010
In 2010, LSI released two platforms: an educational-institution analytics platform called Information Visibility Solutions™ (IVS), and a homework and assessment platform, MyAcademicWorkshop™, that combined features of the WorldWideTestbank and WorldWideGradebook with an optional connection to tutoring. IVS applies the alerts-and-notices features common in gradebook applications at an enterprise level, with user-definable dashboards. For IVS, which bridges the different databases in which an institution may hold its data so as to display interrelated campus events as they occur, LSI collaborated with Mindshare, Inc. of nearby Clearwater, Florida, a company that had already developed analytics for the state of Florida. MyAcademicWorkshop enables schools to place students into one of a number of developmental math courses and provides content for these courses aligned to state or national standards.
HTML5 and Mobile Readiness: 2012
In 2012, LSI began replacing the original, Java-based code of its platforms with HTML5 so that its various platforms are more accessible and can be used on mobile devices. As of 2015, the WorldWideWhiteboard, NetTutor, and the WorldWideTestbank have been re-released in HTML5.
Comfit and RTR: 2014
By May 2014, LSI had completed its acquisition of the ComFit Online Learning Center. ComFit uses an initial set of questions to try to pinpoint students' difficulties in reading and English, after which students work problems and win "medals" as they progress through learning goals. In contrast to the adaptive placement of MyAcademicWorkshop, which assigns a student to a course and emphasizes science, technology, engineering, and mathematics (STEM), ComFit focuses on specific sets of learning goals and emphasizes English and writing objectives.
Also in 2014, LSI developed an automated tutoring referral system called Refer-Tutor-Report™ (RTR™). RTR accommodates four activities: teachers refer chosen students to tutoring for specified learning objectives, students receive notification of referrals and learn how to reach tutors, tutors find out what learning objectives students need to cover, and administrators garner reports about the tutoring referral process.
LSI Clients
Educational Institutions
Use of LSI platforms is part of a general turn by higher-education institutions to online instruction and student support. The WorldWideWhiteboard went online at Pima Community College and other institutions even before 2000. NetTutor, initially offered with new editions of textbooks in deals with publishing companies, is now available in community college systems and universities like the California Community College System (through the Online Education Initiative), Mississippi's Virtual Community College System, and the Oregon State University eCampus.
Publishers
LSI, through its content conversion services and conferencing and tutoring tools, became an early partner to textbook publishers. The NetTutor service was adopted as a value-added feature on new textbooks, whereby purchasers of the textbooks gained access to a certain number of hours of online tutoring.
NetTutor has been packaged over the years with textbooks published by John Wiley and Sons, Pearson, Cengage Learning, and Bedford-St. Martin's, an imprint of MacMillan. LSI content authors have created material that appears in the homework systems associated with McGraw-Hill, Cengage Learning, MacMillan, and Houghton Mifflin Harcourt and in such tools as MyMathLab and WebAssign.
Education Research and LSI Platforms
Recent research compares the performance of college students who use NetTutor to those who do not, and finds evidence that both performance and persistence improve among users of the service. In other cases, schools compared NetTutor favorably to its competitors on ground of price and flexibility of tutoring approach.
A study at Hampton University concluded in 1999 that the whiteboard could effectively support such activities as online office hours.
In 2004, researchers at Stony Brook University concluded that "...according to our research NetTutor remains the only workable math-friendly e-learning communication system."
A study at Utah Valley University in 2006 pointed out the role of the WorldWideWhiteboard as "[o]ne of the earliest synchronous models for math tutoring".
An example of live interaction on the WorldWhiteWhiteboard can be found in the online journal on writing instruction Kairos.
By 2007, LSI's NetTutor service had conducted one million online tutorial sessions.
LSI and controversies surrounding distance learning software
Online technology has been shown to help develop a community of inquiry, which improves student participation and results. In particular, schools require students enrolling in online courses to complete inventories of personal study practices, since more commitment is required to complete online courses successfully.
The relative novelty of online tutoring also raises questions of tutor preparedness to support students on the internet, whether that support is delivered in real time (synchronously) or via exchange of documents (asynchronously).
User guidelines
LSI platforms are used in conjunction with online instruction, with the delivery of student support in online and mobile formats, and with aggregating and notifying users about events that generate online data. As such, schools need to perform due diligence to ascertain that the technology they intend to offer provides academic support of essentially the same standard as their own educators.
For instance, while technology is documented to yield academic results at least comparable to those of face-to-face services, schools also need to confirm that use of a service or software complies with existing practices of their faculty. To answer this concern, LSI creates, jointly with its schools or publishers, a written set of guidelines called "Rules of Engagement" (ROE) that can be modified as needed.
Notes
Citations
References
External links
Science education software
American educational websites
Educational math software
Privately held companies based in Florida | Link-Systems International | Mathematics | 2,086 |
25,344,564 | https://en.wikipedia.org/wiki/Sulphobes | A sulphobe is a film composed of formaldehyde and thiocyanates alleged to have lifelike properties. The name is a portmanteau of sulphur microbe. Sulphobes were a subject in the researches of Alfonso L. Herrera, a biologist who studied the origin of life.
References
Further reading
Evolutionarily significant biological phenomena
Evolutionary biology
Origin of life | Sulphobes | Biology | 81 |
43,602,880 | https://en.wikipedia.org/wiki/Purchase%20and%20sale%20agreement | A purchase and sale agreement (PSA), also called a sales and purchase agreement (SPA) or an agreement for purchase and sale (APS), is an agreement between a buyer and a seller of real estate property, company stock, or other assets.
The person, company, or other legal entity acquiring, receiving, and purchasing the property, stock, or other assets is typically referred to as the buyer. The entity disposing, conveying, and selling the assets is referred to as the seller or vendor. A PSA sets out the various rights and obligations of both the buyer and seller, and might also require other documents be executed and recorded in the public records, such as an assignment, deed of trust, or farmout agreement.
In the oil and natural gas industries, a PSA is the primary legal contract by which companies exchange oil and gas assets (including stock in an oil and gas business entity) for cash, debt, stock, or other assets.
References
Petroleum production
Sales
Land use | Purchase and sale agreement | Chemistry | 206 |
55,903,753 | https://en.wikipedia.org/wiki/Lakhori%20bricks | Lakhori bricks (also Badshahi bricks, Kakaiya bricks, Lakhauri bricks) are flat, thin, red burnt-clay bricks, originating from Lahore, Pakistan that became increasingly popular element of Mughal architecture during Shah Jahan, and remained so till early 20th century when lakhori bricks and similar Nanak Shahi bricks were replaced by the larger standard 9"x4"x3" bricks called ghumma bricks that were introduced by the colonial British India.
Several still surviving famous 17th to 19th century structures of Mughal India, characterized by jharokhas, jalis, fluted sandstone columns, ornamental gateways and grand cusped-arch entrances, are made of lakhori bricks, including fort palaces (such as Red Fort), protective bastions and pavilions (as seen in Bawana Zail Fortress), havelis (such as Bagore-ki-Haveli, Chunnamal Haveli, Ghalib ki Haveli, Dharampura Haveli and Hemu's Haveli), temples and gurudwaras (such as in Maharaja Patiala's Bahadurgarh Fort), mosques and tombs (such as Mehram Serai, Teele Wali Masjid), water wells and baoli stepwells (such as Choro Ki Baoli), bridges (such as Mughal bridge at Karnal), Kos minar road-side milestones (such as at Palwal along Grand Trunk Road) and other notable structures.
Origin
The exact origin of lakhori bricks is not confirmed, in particular whether they existed before becoming prevalent in Mughal India. Before the rise in their use, Indian architecture primarily relied on the trabeated post-and-lintel (point and slot) gravity-based technique of shaping large stones to fit into each other, which required no mortar. Lakhori bricks became popular during the Mughal period, starting from Shah Jahan's reign, mainly because their small, slim size made it easy to create the intricate arches, jalis, jharokhas, mouldings, cornices, cladding and other typical elements of Mughal architecture.
Regional, socio-strata and dimensional variations
The slim and compact lakhori bricks became popular across the pan-Indian subcontinental Mughal Empire, especially in North India, resulting in several variations in their dimensions, as well as variation arising from the use of lower-strength local soil by poor people and higher-strength clay by affluent people. Restoration architect and author Anil Laul reasons that poor people baked slimmer bricks from local soil using cheaper, locally available dung cakes as fuel, while richer people used bigger, thicker bricks of higher-strength clay baked in kilns fired by the less easily available and more expensive coal; both methods yielded bricks of similar strength but different proportions at different economic levels of strata.
Lakhori bricks versus Nanakshahi bricks
Due to a lack of understanding, contemporary writers sometimes confuse lakhori bricks with other similar but distinct regional variants. For example, some writers use "Lakhori bricks and Nanak Shahi bricks", implying two different things, while others use "Lakhori bricks or Nanak Shahi bricks", inadvertently implying either the same thing or two different things; this leads to confusion, as if they were the same, especially when the terms are mentioned casually and interchangeably.
Lakhori bricks were used by the Mughal Empire, which spanned the Indian subcontinent, whereas Nanak Shahi bricks were used mainly across the Sikh Empire, spread across the Punjab region in the north-west of the subcontinent, at a time when Sikhs were in conflict with the Mughal Empire due to the religious persecution of Sikhs by Mughal Muslims. Coins struck by Sikh rulers between 1764 and 1777 CE were called "Gobind Shahi" coins (bearing inscriptions in the name of Guru Gobind Singh), and coins struck from 1777 onward were called "Nanak Shahi" coins (bearing inscriptions in the name of Guru Nanak).
A similar concept applies to the Nanak Shahi bricks of the Sikh Empire: Lakhori and Nanak Shahi bricks are two similar but distinct types of brick, separated by regional variation as well as political circumstance. Closely related things may be considered separate, while considerably different things may be considered the same, in both cases for social, political, and religious contextual reasons. For example, the closely related, mutually intelligible Sanskritised-Hindustani language Hindi and Arabised-Hindustani language Urdu came to be favored as separate languages by Hindus and Muslims respectively, as seen in the context of the Hindu-Muslim conflict that resulted in the Partition of India, whereas mutually unintelligible speech varieties that differ considerably in structure, such as Moroccan Arabic, Yemeni Arabic and Lebanese Arabic, are considered the same language due to the pan-Islamism religious movement.
Mughal-era lakhori bricks predate the Nanak Shahi bricks, as seen in the Bahadurgarh Fort of Patiala, which was built by the Mughal Nawab Saif Khan in 1658 CE using earlier-era lakhori bricks; nearly 180 years later, in 1837 CE, it was renovated by Maharaja Karam Singh of Patiala using later-era Nanak Shahi bricks and renamed in honor of Guru Teg Bahadur (who had stayed at this fort for three months and nine days before leaving for Delhi, where he was executed by Aurangzeb in 1675 CE). Since the timelines of the Mughal Empire and the Sikh Empire overlapped, both lakhori bricks and Nanak Shahi bricks were used around the same time in their respective dominions. Restoration architect and author Anil Laul clarifies: "We, therefore, had slim bricks known as the Lakhori and Nanakshahi bricks in India and the slim Roman bricks or their equivalents for many other parts of the world."
Mortar recipe
They were used to construct structures with crushed bricks and lime mortar, and walls were usually plastered with lime mortar. The concrete mixture of that era was a preparation of lime, surki (trass), jaggery and bael fruit (wood apple) pulp, with some recipes using as many as 23 ingredients, including urad ki daal (paste of vigna mungo pulse).
References
Works cited
External links
Lakhori brick rampart of Bavana Fortress of Zail (administrative unit) of Jat chiefs
Haveli Dharampura built with lakhori bricks has a restaurant named "lakhori"
Rajput architecture
Indian architectural history
Mughal architecture elements
Building materials | Lakhori bricks | Physics,Engineering | 1,361 |
241,154 | https://en.wikipedia.org/wiki/Cross-site%20scripting | Cross-site scripting (XSS) is a type of security vulnerability that can be found in some web applications. XSS attacks enable attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy. During the second half of 2007, XSSed documented 11,253 site-specific cross-site vulnerabilities, compared to 2,134 "traditional" vulnerabilities documented by Symantec. XSS effects vary in
range from petty nuisance to significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner network.
OWASP considers the term cross-site scripting to be a misnomer. It initially was an attack that was used for breaching data across sites, but gradually started to include other forms of data injection attacks.
Background
Security on the web depends on a variety of mechanisms, including an underlying concept of trust known as the same-origin policy. This states that if content from one site (such as https://mybank.example1.com) is granted permission to access resources (such as cookies) in a web browser, then content from any URL with the same (1) URI scheme (e.g. ftp, http, or https), (2) host name, and (3) port number will share these permissions. Content from URLs where any of these three attributes differs must be granted permissions separately.
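The three-part origin check can be made concrete with a short Python sketch (the URLs are the hypothetical ones from the example above):

from urllib.parse import urlsplit

def origin(url):
    # An origin is the (scheme, host, port) triple; fall back to the
    # scheme's default port when none is given explicitly.
    parts = urlsplit(url)
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

print(origin("https://mybank.example1.com/account") ==
      origin("https://mybank.example1.com:443/transfer"))  # True: same origin
print(origin("https://mybank.example1.com") ==
      origin("http://mybank.example1.com"))                # False: scheme differs
print(origin("https://mybank.example1.com") ==
      origin("https://attacker.example2.com"))             # False: host differs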
Cross-site scripting attacks use known vulnerabilities in web-based applications, their servers, or the plug-in systems on which they rely. Exploiting one of these, attackers fold malicious content into the content being delivered from the compromised site. When the resulting combined content arrives at the client-side web browser, it has all been delivered from the trusted source, and thus operates under the permissions granted to that system. By finding ways of injecting malicious scripts into web pages, an attacker can gain elevated access-privileges to sensitive page content, to session cookies, and to a variety of other information maintained by the browser on behalf of the user. Cross-site scripting attacks are a case of code injection.
Microsoft security-engineers introduced the term "cross-site scripting" in January 2000. The expression "cross-site scripting" originally referred to the act of loading the attacked, third-party web application from an unrelated attack-site, in a manner that executes a fragment of JavaScript prepared by the attacker in the security context of the targeted domain (taking advantage of a reflected or non-persistent XSS vulnerability). The definition gradually expanded to encompass other modes of code injection, including persistent and non-JavaScript vectors (including ActiveX, Java, VBScript, Flash, or even HTML scripts), causing some confusion to newcomers to the field of information security.
XSS vulnerabilities have been reported and exploited since the 1990s. Prominent sites affected in the past include the social-networking sites Twitter and Facebook. Cross-site scripting flaws have since surpassed buffer overflows to become the most common publicly reported security vulnerability, with some researchers in 2007 estimating that as many as 68% of websites were likely open to XSS attacks.
Types
There is no single, standardized classification of cross-site scripting flaws, but most experts distinguish between at least two primary flavors of XSS flaws: non-persistent and persistent. Some sources further divide these two groups into traditional (caused by server-side code flaws) and DOM-based (in client-side code).
Non-persistent (reflected)
The non-persistent (or reflected) cross-site scripting vulnerability is by far the most basic type of web vulnerability. These holes show up when the data provided by a web client, most commonly in HTTP query parameters (e.g. HTML form submission), is used immediately by server-side scripts to parse and display a page of results for and to that user, without properly sanitizing the content.
Because HTML documents have a flat, serial structure that mixes control statements, formatting, and the actual content, any non-validated user-supplied data included in the resulting page without proper HTML encoding, may lead to markup injection. A classic example of a potential vector is a site search engine: if one searches for a string, the search string will typically be redisplayed verbatim on the result page to indicate what was searched for. If this response does not properly escape or reject HTML control characters, a cross-site scripting flaw will ensue.
A reflected attack is typically delivered via email or a neutral web site. The bait is an innocent-looking URL, pointing to a trusted site but containing the XSS vector. If the trusted site is vulnerable to the vector, clicking the link can cause the victim's browser to execute the injected script.
Persistent (or stored)
The persistent (or stored) XSS vulnerability is a more devastating variant of a cross-site scripting flaw: it occurs when the data provided by the attacker is saved by the server, and then permanently displayed on "normal" pages returned to other users in the course of regular browsing, without proper HTML escaping. A classic example of this is with online message boards where users are allowed to post HTML formatted messages for other users to read.
For example, suppose there is a dating website where members scan the profiles of other members to see if they look interesting. For privacy reasons, this site hides everybody's real name and email. These are kept secret on the server. The only time a member's real name and email are in the browser is when the member is signed in, and they can't see anyone else's.
Suppose that Mallory, an attacker, joins the site and wants to figure out the real names of the people she sees on the site. To do so, she writes a script designed to run from other users' browsers when they visit her profile. The script then sends a quick message to her own server, which collects this information.
To do this, for the question "Describe your Ideal First Date", Mallory gives a short answer (to appear normal), but the text at the end of her answer is her script to steal names and emails. If the script is enclosed inside a <script> element, it won't be shown on the screen. Then suppose that Bob, a member of the dating site, reaches Mallory's profile, which has her answer to the First Date question. Her script is run automatically by the browser and steals a copy of Bob's real name and email directly from his own machine.
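The payload hidden at the end of Mallory's answer might resemble the following sketch (entirely hypothetical: the element selectors and collection URL depend on the target page, and a real site's markup would differ):

```typescript
// Runs inside a <script> element embedded in the stored profile answer,
// so nothing is visible on screen. It executes in Bob's browser, where
// his own name and e-mail are present in the signed-in page.
const name = document.querySelector("#real-name")?.textContent ?? "";
const email = document.querySelector("#email")?.textContent ?? "";

// "Quick message to her own server": an image request smuggles the data out.
new Image().src =
  "https://mallory.example/collect?d=" + encodeURIComponent(name + ":" + email);
```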
Persistent XSS vulnerabilities can be more significant than other types because an attacker's malicious script is rendered automatically, without the need to individually target victims or lure them to a third-party website. Particularly in the case of social networking sites, the code would be further designed to self-propagate across accounts, creating a type of client-side worm.
The methods of injection can vary a great deal; in some cases, the attacker may not even need to directly interact with the web functionality itself to exploit such a hole. Any data received by the web application (via email, system logs, IM etc.) that can be controlled by an attacker could become an injection vector.
Server-side versus DOM-based vulnerabilities
XSS vulnerabilities were originally found in applications that performed all data processing on the server side. User input (including an XSS vector) would be sent to the server, and then sent back to the user as a web page. The need for an improved user experience resulted in the popularity of applications that had a majority of the presentation logic (typically written in JavaScript) working on the client side, pulling data on demand from the server using AJAX.
As the JavaScript code was also processing user input and rendering it in the web page content, a new sub-class of reflected XSS attacks started to appear that was called DOM-based cross-site scripting. In a DOM-based XSS attack, the malicious data does not touch the web server. Rather, it is being reflected by the JavaScript code, fully on the client side.
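A minimal illustration of the pattern (hypothetical page code; the element ID is illustrative) takes attacker-controlled data from the URL fragment, which is never transmitted to the server, and writes it into the page as markup:

```typescript
// DOM-based XSS sketch: everything after '#' stays on the client, yet a link
// such as  https://site.example/page#<img src=x onerror=alert(1)>
// executes script because the fragment is assigned to an HTML sink.
const greeting = document.getElementById("greeting");
if (greeting) {
  greeting.innerHTML = decodeURIComponent(location.hash.slice(1)); // unsafe sink
  // Safe alternative: treat the value as plain text, not markup:
  // greeting.textContent = decodeURIComponent(location.hash.slice(1));
}
```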
An example of a DOM-based XSS vulnerability is the bug found in 2011 in a number of jQuery plugins. Prevention strategies for DOM-based XSS attacks include very similar measures to traditional XSS prevention strategies but implemented in JavaScript code and contained in web pages (i.e. input validation and escaping). Some JavaScript frameworks have built-in countermeasures against this and other types of attack — for example AngularJS.
Self-XSS
Self-XSS is a form of XSS vulnerability that relies on social engineering to trick the victim into executing malicious JavaScript code in their browser. Although it is technically not a true XSS vulnerability, because it relies on socially engineering a user into executing code rather than on a flaw in the affected website, it still poses the same risks as a regular XSS vulnerability if properly executed.
Mutated XSS (mXSS)
Mutated XSS happens when the attacker injects something that is seemingly safe but is rewritten and modified by the browser while parsing the markup. This makes it extremely hard to detect or sanitize within the website's application logic. An example is the browser rebalancing unclosed quotation marks, or even adding quotation marks to unquoted parameters, such as the parameters of CSS font-family.
Preventive measures
Contextual output encoding/escaping of string input
There are several escaping schemes that can be used depending on where the untrusted string needs to be placed within an HTML document including HTML entity encoding, JavaScript escaping, CSS escaping, and URL (or percent) encoding. Most web applications that do not need to accept rich data can use escaping to largely eliminate the risk of XSS attacks in a fairly straightforward manner.
Performing HTML entity encoding only on the five XML-significant characters is not always sufficient to prevent many forms of XSS attacks; security encoding libraries are usually easier to use.
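As a minimal sketch of entity encoding for an HTML element context, covering only those five characters (as noted above, a maintained encoding library is preferable in practice):

```typescript
// Escapes the five XML-significant characters: & < > " '
// Only suitable for text between tags; attribute values, JavaScript,
// CSS and URL contexts each require their own encoder.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;") // must be replaced first
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

escapeHtml('<script>alert("xss")</script>');
// -> '&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;'
```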
Some web template systems understand the structure of the HTML they produce and automatically pick an appropriate encoder.
Safely validating untrusted HTML input
Many operators of particular web applications (e.g. forums and webmail) allow users to utilize a limited subset of HTML markup. When accepting HTML input from users (say, <b>very</b> large), output encoding (such as &lt;b&gt;very&lt;/b&gt; large) will not suffice since the user input needs to be rendered as HTML by the browser (so it shows as "very large", instead of "<b>very</b> large"). Stopping an XSS attack when accepting HTML input from users is much more complex in this situation. Untrusted HTML input must be run through an HTML sanitization engine to ensure that it does not contain XSS code.
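One such engine is the DOMPurify library (listed under "See also" below); a minimal usage sketch:

```typescript
import DOMPurify from "dompurify";

// Untrusted rich-text input from a user.
const dirty = '<b>very</b> large <img src="x" onerror="alert(1)">';

// The sanitizer parses the markup and strips executable content while
// keeping the harmless formatting subset (output shown for default settings).
const clean = DOMPurify.sanitize(dirty);
// -> '<b>very</b> large <img src="x">'
```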
Many validations rely on parsing out (blacklisting) specific "at risk" HTML tags such as the iframe, link and script tags.
There are several issues with this approach; for example, seemingly harmless tags can be left out which, when utilized correctly, can still result in an XSS attack.
Another popular method is to strip user input of the quotation characters " and '; however, this can also be bypassed, as the payload can be concealed with obfuscation.
Cookie security
Besides content filtering, other imperfect methods for cross-site scripting mitigation are also commonly used. One example is the use of additional security controls when handling cookie-based user authentication. Many web applications rely on session cookies for authentication between individual HTTP requests, and because client-side scripts generally have access to these cookies, simple XSS exploits can steal these cookies. To mitigate this particular threat (though not the XSS problem in general), many web applications tie session cookies to the IP address of the user who originally logged in, then only permit that IP to use that cookie. This is effective in most situations (if an attacker is only after the cookie), but obviously breaks down in situations where an attacker is behind the same NATed IP address or web proxy as the victim, or the victim is changing his or her mobile IP.
Http-only cookie
Another mitigation present in Internet Explorer (since version 6), Firefox (since version 2.0.0.5), Safari (since version 4), Opera (since version 9.5) and Google Chrome is an HttpOnly flag, which allows a web server to set a cookie that is unavailable to client-side scripts. While beneficial, the feature can neither fully prevent cookie theft nor prevent attacks within the browser.
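Setting the flag is a one-line change on the server. A hypothetical sketch using Express (the cookie name and the issueSessionToken helper are illustrative, not a real API):

```typescript
import express from "express";

// Stand-in for real session machinery (hypothetical helper).
const issueSessionToken = (): string => "opaque-session-token";

const app = express();

app.post("/login", (_req, res) => {
  // HttpOnly keeps the cookie out of document.cookie and other
  // script-visible APIs; the browser still attaches it automatically
  // to subsequent requests to this server.
  res.cookie("sessionid", issueSessionToken(), {
    httpOnly: true,
    secure: true, // only sent over HTTPS
  });
  // Equivalent raw header: Set-Cookie: sessionid=...; HttpOnly; Secure
  res.send("logged in");
});
```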
Disabling scripts
While Web 2.0 and Ajax developers require the use of JavaScript, some web applications are written to allow operation without the need for any client-side scripts. This allows users, if they choose, to disable scripting in their browsers before using the application. In this way, even potentially malicious client-side scripts could be inserted unescaped on a page, and users would not be susceptible to XSS attacks.
Some browsers or browser plugins can be configured to disable client-side scripts on a per-domain basis. This approach is of limited value if scripting is allowed by default, since it blocks bad sites only after the user knows that they are bad, which is too late. Functionality that blocks all scripting and external inclusions by default and then allows the user to enable it on a per-domain basis is more effective. This has been possible for a long time in Internet Explorer (since version 4) by setting up its so-called "Security Zones", and in Opera (since version 9) using its "Site Specific Preferences". A solution for Firefox and other Gecko-based browsers is the open source NoScript add-on which, in addition to the ability to enable scripts on a per-domain basis, provides some XSS protection even when scripts are enabled.
The most significant problem with blocking all scripts on all websites by default is substantial reduction in functionality and responsiveness (client-side scripting can be much faster than server-side scripting because it does not need to connect to a remote server and the page or frame does not need to be reloaded). Another problem with script blocking is that many users do not understand it, and do not know how to properly secure their browsers. Yet another drawback is that many sites do not work without client-side scripting, forcing users to disable protection for that site and opening their systems to vulnerabilities. The Firefox NoScript extension enables users to allow scripts selectively from a given page while disallowing others on the same page. For example, scripts from example.com could be allowed, while scripts from advertisingagency.com that are attempting to run on the same page could be disallowed.
Selectively disabling scripts
Content Security Policy
Content Security Policy (CSP) allows HTML documents to opt in to disabling some scripts while leaving others enabled. The browser checks each script against a policy before deciding whether to run it. As long as the policy only allows trustworthy scripts and disallows dynamic code loading, the browser will not run programs from untrusted authors regardless of the HTML document's structure.
Modern CSP policies allow using nonces to mark scripts in the HTML document as safe to run instead of keeping the policy entirely separate from the page content. As long as trusted nonces only appear on trustworthy scripts, the browser will not run programs from untrusted authors. Some large application providers report having successfully deployed nonce-based policies.
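A nonce-based policy might be delivered as in the following sketch (hypothetical Express code; the essential points are that the nonce is freshly random per response and appears both in the header and on each trusted script):

```typescript
import express from "express";
import { randomBytes } from "crypto";

const app = express();

app.get("/", (_req, res) => {
  const nonce = randomBytes(16).toString("base64"); // fresh per response

  // Only scripts carrying this nonce may run; injected inline scripts
  // without it are refused by the browser.
  res.setHeader(
    "Content-Security-Policy",
    `script-src 'nonce-${nonce}'; object-src 'none'; base-uri 'none'`
  );

  res.send(`<!doctype html>
<script nonce="${nonce}" src="/static/app.js"></script>
<p>User content goes here; an injected &lt;script&gt; has no nonce and will not run.</p>`);
});
```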
Emerging defensive technologies
Trusted types changes Web APIs to check that values have been marked as trusted. As long as programs only mark trustworthy values, an attacker who controls a JavaScript string value cannot cause XSS. Trusted types are designed to be auditable by blue teams.
Another defensive approach is to use automated tools that remove malicious XSS code from web pages; these tools use static analysis and/or pattern-matching methods to identify potentially malicious code and secure it using methods such as escaping.
SameSite cookie parameter
When a cookie is set with the SameSite=Strict parameter, it is stripped from all cross-origin requests. When set with SameSite=Lax, it is stripped from all non-"safe" cross-origin requests (that is, requests other than GET, OPTIONS, and TRACE which have read-only semantics). The feature is implemented in Google Chrome since version 63 and Firefox since version 60.
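In framework terms the parameter is one more cookie attribute; a hypothetical Express sketch:

```typescript
import express from "express";

const app = express();

app.post("/login", (_req, res) => {
  // SameSite=Strict: the browser strips this cookie from every
  // cross-origin request.
  res.cookie("sessionid", "opaque-token-value", {
    sameSite: "strict",
    httpOnly: true,
    secure: true,
  });
  // Equivalent raw header:
  //   Set-Cookie: sessionid=...; SameSite=Strict; HttpOnly; Secure
  res.send("ok");
});
```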
Notable incidents
British Airways data breach (2018): the criminal hacking group Magecart exploited an XSS vulnerability in a third-party JavaScript library used on the British Airways website to skim the payment details of around 380,000 transactions.
See also
Web application security
Internet security
XML external entity
Browser security
Metasploit Project, an open-source penetration testing tool that includes tests for XSS
w3af, an open-source web application security scanner
DOMPurify, a free and open source code library by Cure53 to reduce susceptibility to XSS vulnerabilities in websites.
Cross-document messaging
Samy (computer worm)
Parameter validation
Footnotes
References
Further reading
External links
OWASP: XSS, Testing for XSS, Reviewing Code for XSS
XSSed: Database of Websites Vulnerable to Cross-Site Scripting Attacks
Web security exploits
Injection exploits
Hacking (computer security)
Client-side web security exploits
Matthew Nagle (October 16, 1979 – July 24, 2007) was the first person to use a brain–computer interface to restore functionality lost due to paralysis. He was a C3 tetraplegic, paralyzed from the neck down after being stabbed.
Biography
Nagle attended Weymouth High School (Class of 1998), where he was an exceptional athlete and a star football player. On July 3, 2001, he was stabbed while leaving the town's annual fireworks show near Wessagussett Beach; his spinal cord was severed when he stepped in to help a friend.
Nagle died on July 24, 2007, in Stoughton, Massachusetts, from sepsis.
BrainGate clinical trial
Nagle agreed to participate in a clinical trial involving the BrainGate Neural Interface System (developed by Cyberkinetics) out of a desire to again be healthy and lead a normal life, and in hopes that modern medical discoveries could help him. He also hoped that his participation in this clinical trial would help improve the lives of people who, like him, suffered injuries or diseases that cause severe motor disabilities.
The device was implanted on June 22, 2004, by neurosurgeon Gerhard Friehs. A 96-electrode "Utah Array" was placed on the surface of his brain over the region of motor cortex that controlled his dominant left hand and arm. A link connected it to the outside of his skull, where it could be connected to a computer. The computer was then trained to recognize Nagle's thought patterns and associate them with movements he was trying to achieve.
While the device was implanted, Nagle could control a computer mouse cursor, using it to press on-screen buttons to control a television, check e-mail, and do essentially anything else that can be done by pressing buttons. Although the cursor control was not precise, he could draw on the screen. He could also send commands to open and close an external prosthetic hand. The results of the study were published in the journal Nature. Per Food and Drug Administration (FDA) regulations and the study protocol, the BrainGate device was removed from him after approximately one year.
Charges against assailant
On June 5, 2008, a grand jury in Norfolk County, Massachusetts, indicted on a second-degree murder charge Nagle's attacker Nicholas Cirignano. Cirignano had in 2005 been convicted of Nagle's stabbing and sentenced to nine years' imprisonment. District Attorney William Keating used the state medical examiner's ruling that the stabbing had caused Nagle's eventual death as grounds to seek the murder charge.
On April 10, 2009, a Superior Court Judge ruled that Cirignano could not be tried for murder, as the jury's verdict from the original assault case had already determined that one of the key components to a murder charge, malice, was negated by excessive force in self-defence. However, the lesser charge of manslaughter could still, in theory, be applied.
References
Nagle, Matthew
1979 births
2007 deaths
American murder victims
People murdered in Massachusetts
People from Stoughton, Massachusetts
Deaths by stabbing in Massachusetts
Deaths from sepsis in the United States
Infectious disease deaths in Massachusetts
"The Cham-Cham" is the 25th episode of Thunderbirds, a British Supermarionation television series created by Gerry and Sylvia Anderson and filmed by their production company AP Films (APF). The penultimate episode of Thunderbirds Series One, it was written and directed by Alan Pattillo and first broadcast on 24 March 1966 on ATV Midlands.
Set in the 2060s, Thunderbirds follows the exploits of International Rescue, an organisation that uses technologically advanced rescue vehicles to save human life. The main characters are ex-astronaut Jeff Tracy, founder of International Rescue, and his five adult sons, who pilot the organisation's main vehicles: the Thunderbird machines. "The Cham-Cham" opens with a United States Air Force plane being shot down during a radio broadcast of the instrumental "Dangerous Game" by popular musical group the Cass Carnaby Five. International Rescue suspect sabotage, and Lady Penelope, Tin-Tin and Parker travel to the Swiss Alps to investigate the band's current tour venue, the mountain resort Paradise Peaks. There, they discover that the attacks are being co-ordinated with the aid of an advanced computer called a "Cham-Cham".
Filmed in late 1965, "The Cham-Cham" has a show business theme and was written in the style of classic Hollywood musicals. It features several innovations in APF's use of marionette puppets. One scene features the Penelope character performing a slow dance, which was a challenge to film due to the difficulty in moving Supermarionation puppets convincingly. "The Cham-Cham" is also the first episode of any Supermarionation series to show characters skiing. "Dangerous Game", the focus of the episode's soundtrack, was devised as a Latin rhythm by series composer Barry Gray. Singer Ken Barrie recorded a lyrical version but this is not heard in the finished episode.
"The Cham-Cham" has been well received by commentators, drawing particular praise for its production design and soundtrack. Sylvia Anderson considered the plot "far-fetched" but valued the episode for its "charm" and Swiss Alps setting. An audio adaptation of the episode, narrated by David Graham as Parker, was released in March 1967 as the Century 21 mini-LP Lady Penelope.
Plot
USAF planes flying missile shipments out of Matthews Field air base have been shot down by enemy fighters shortly after take-off. On Tracy Island, Alan notes that each attack has occurred while popular band the Cass Carnaby Five have been performing their hit instrumental "Dangerous Game" on live radio. He and Brains examine a recording of the latest broadcast to determine whether the music contains a hidden code that is being used to co-ordinate the attacks.
Meanwhile, Jeff assigns Lady Penelope, Tin-Tin and Parker to investigate Paradise Peaks, a mountain-top hotel in the Swiss Alps that is currently playing host to Cass Carnaby and his group. The agents go undercover, with Penelope posing as a singer called "Wanda Lamour" and Parker securing a job as a waiter. They learn that Carnaby's manager, the mysterious Mr Olsen, often alters the arrangement of "Dangerous Game" before each new broadcast and that he is expecting to receive a message the following day.
In the morning, Penelope and Tin-Tin ski down the mountain to Olsen's chalet and film him operating a strange machine that is decoding musical sounds into text stating the time of the next missile shipment. They deduce that he is issuing orders for the next attack and start back to Paradise Peaks to alert Jeff. Realising that he has been observed, Olsen telephones his associate Banino, a waiter at the hotel, with orders to kill Penelope and Tin-Tin. Banino goes outside with a sniper rifle and prepares to shoot the women before they reach the hotel. However, he is thwarted by Parker, who overheard the phone conversation and grabs the rifle, upsetting Banino's aim. In their struggle, the men lose their balance and tumble down the mountain together, forming a giant snowball in the process. Banino is knocked out but Parker emerges unscathed.
On Tracy Island, Brains identifies Olsen's machine as a Cham-Cham, an ultrasonically-sensitive computer that Olsen is using to send coded radio transmissions. Jeff relays this information to Washington, D.C., but the Matthews Field commander is sceptical and refuses to postpone the next shipment. That night, the Cass Carnaby Five begin performing Olsen's latest arrangement of "Dangerous Game". The shipment seems doomed until Penelope, in the guise of Wanda Lamour, appears on stage and sings a lyrical version, devised by Brains, containing a new set of coded instructions. Decoding the broadcast, the personnel at the enemy air base unwittingly direct their fighters to overfly Matthews Field. Arriving in Thunderbird 1, Scott alerts the commander and USAF fighters are launched to shoot down the hostiles.
Fearing Olsen's retribution, Jeff dispatches Virgil and Alan to Paradise Peaks in Thunderbird 2 to bring Penelope, Tin-Tin and Parker home. As the trio leave the hotel in a cable car, Olsen cuts the lines behind them, causing the car to speed out of control down the mountain. Thunderbird 2's magnetic grabs cannot get a purchase on the car, so Virgil and Alan release a set of guide cables. Climbing onto the roof, Parker hooks the cables with the handle of Penelope's umbrella and attaches them to the car. Virgil and Alan fire Thunderbird 2's retro-rockets, bringing the car to a halt but also throwing Parker off the roof. He uses the umbrella to parachute safely to the ground. Penelope, Tin-Tin, Parker and the Tracys return to Paradise Peaks, where Cass treats them to a private piano recital of "Dangerous Game".
Regular voice cast
Sylvia Anderson as Lady Penelope
Peter Dyneley as Jeff Tracy
Christine Finn as Tin-Tin Kyrano
David Graham as Brains and Parker
David Holliday as Virgil Tracy
Shane Rimmer as Scott Tracy
Matt Zimmerman as Alan Tracy
Ray Barrett as John Tracy
Production
Filmed in November and December 1965, "The Cham-Cham" was the second-to-last episode of Thunderbirds Series One to be produced. Scriptwriter Alan Pattillo created its show business plot and the exotic setting of Paradise Peaks in an attempt to emulate classic Hollywood musicals. Penelope's alias, Wanda Lamour, was named after the actress and singer Dorothy Lamour and Wanda Webb, one of APF's puppet operators.
APF had always found it difficult to make its puppets walk convincingly, so rarely showed this action openly on-screen. Instead, the puppet operators created an illusion of walking by holding the puppets' legs (which were kept out of shot) and moving the puppets up and down using a "bobbing" action. For the scene in "The Cham-Cham" where Penelope glides across the Paradise Peaks ballroom while singing "Dangerous Game", Webb worked the puppet from the stage while fellow operator Christine Glanville controlled its wired top portion from an overhead gantry.
Gerry Anderson believed that Penelope and Tin-Tin's trip to Olsen's lodge looked suitably realistic, despite APF never having shown puppets skiing prior to this episode. Anderson himself conceived the "ski thrusters" used by the characters to ascend the mountain during their journey back to Paradise Peaks, in part to remove the need for the puppets to walk. Praising Bob Bell's production design, Anderson commented that the episode "gave [APF's] art and design departments a chance to show what they could really do, and they didn't let us down."
Composer Barry Gray devised "Dangerous Game" as a Latin rhythm. Originally all performances by the Cass Carnaby Five were to have been a lyrical version sung by Ken Barrie, but for the finished episode this was replaced with a variety of instrumental versions. Sylvia Anderson based her singing voice on that of Marlene Dietrich. The shots of Penelope and Tin-Tin skiing to Olsen's chalet are accompanied by an incidental track called "Happy Flying" that was originally composed for the Supercar episode "Amazonian Adventure".
As with "Attack of the Alligators!", which had been filmed immediately prior, the technical complexity of "The Cham-Cham" caused production to finish behind schedule and considerably over-budget. To make up for the lost time and extra costs, the scriptwriters turned the final episode of Series One into a clip show. That episode, "Security Hazard", made extensive use of flashbacks to earlier instalments to reduce the amount of new footage that needed to be filmed.
Reception
Sylvia Anderson considered "The Cham-Cham" one of the series' best episodes and a rival to "Attack of the Alligators!" in terms of quality. On her website, she commented: "Even though the plot is far-fetched, it has charm and, because of the lovely Swiss mountain setting, has credibility."
Gerry Anderson biographers Simon Archer and Marcus Hearn describe "The Cham-Cham" as "perhaps the most lavish-looking episode of the series", calling the scenes of Penelope and Tin-Tin skiing and Penelope singing "unforgettable images". Hearn, in his book Thunderbirds: The Vault, calls the episode one of Thunderbirds' "most entertaining" due to its focus on Penelope and Parker as well as its use of "one of the most exotic locations in the series". Tom Fox of Starburst magazine rates the episode 4 out of 5, describing the plot as "tenuous" but believing this to be redeemed by the production design and the scenes of the cable car rescue. Like Archer and Hearn, he is entertained by Parker's umbrella descent.
Ian Fryer considers the premise to be inspired by the first episode of The Sentimental Agent, "All That Jazz" (1963), in which a band are found to be sending information to spies. He praises the "confidence" of "The Cham-Cham", calling it a "triumph" for art director Bob Bell and writing that although the story has "occasional moments of silliness", "everything about the production works perfectly." He believes that the episode is proof of Supermarionation's ability to "present glamour convincingly on-screen" and represents the "absolute pinnacle of what [the Andersons] achieved with puppetry". According to commentator Alistair McGown, the story was influenced by Road to... comedy films and the spy series The Avengers. He writes that while the plot "may be flimsy in places", the overall episode is a "gorgeous confection" with the skiing and dancing sequences paying "impressive attention to detail". Both McGown and Hearn call the skiing scenes "charming".
Stephen La Rivière, author of Filmed in Supermarionation: A History of the Future, praises the episode's technical standards, remarking that the skiing and dancing sequences "[fly] in the face of what puppets can and can't do." He sums up "The Cham-Cham" as a "glorious example of Thunderbirds at its best, combining all the elements that made the show so popular: the characters, the adventure, the rescues and, of course, the humour." He further argues that the humour has intergenerational appeal, stating that Parker's double entendres are counterbalanced by overt slapstick moments such as the character's "Mary Poppins"-style descent using Penelope's umbrella.
In a review of the CD release of the Thunderbirds soundtrack, Morag Reavley of BBC Online describes Sylvia Anderson's singing as "slinky, sexy and slightly off-key, like a hung-over Zsa Zsa Gabor". Heather Phares of AllMusic considers "Dangerous Game" to be a highlight of the release, commenting that while the instrumental version "[reflects] the Sixties' ongoing fascination with exotica and Latin pop", its lyrical counterpart "could be a kissing cousin to seductive spy themes like 'Goldfinger'." McGown calls Anderson's conscious imitation of Marlene Dietrich her "campest moment" voicing Penelope.
Media historian Nicholas J. Cull interprets "The Cham-Cham" as a piece of Cold War-inspired fiction, noting the "Central/Eastern European accents" of the enemy airbase personnel.
References
Works cited
External links
1966 British television episodes
Fiction about cryptography
Fictional computers
Television episodes set in hotels
Television episodes set in Switzerland
Thunderbirds (TV series) episodes
United States Air Force in fiction
Oxalate sulfates are mixed anion compounds containing oxalate and sulfate. They are mostly transparent, and any colour comes from the cations.
Related compounds include the sulfite oxalates and oxalate selenates.
Production
Oxalate sulfates may be deposited by evaporating an aqueous solution of the metal ions and sulfate together with oxalic acid. Crystals formed this way may be hydrates.
Properties
Many crystal forms are non-centrosymmetric and have non-linear optical properties.
When heated, oxalate sulfates will first dehydrate, and then give off carbon dioxide.
List
References
Oxalates
Mixed anion compounds
Sulfates
The history of pharmacy as a modern and independent science dates back to the first third of the 19th century. Before then, pharmacy evolved from antiquity as part of medicine. Before the advent of pharmacists, there existed apothecaries who worked alongside priests and physicians in regard to patient care.
Prehistoric pharmacy
Paleopharmacological studies attest to the use of medicinal plants in pre-history. For example, herbs were discovered in the Shanidar Cave, and remains of the areca nut (Areca catechu) in the Spirit Cave. Prehistoric man learned pharmaceutical techniques through instinct, by watching birds and beasts, and using cool water, leaves, dirt, or mud as a soothing agent.
Ancient era
Mesopotamia and Egypt
Sumerian cuneiform tablets record prescriptions for medicine.
Ancient Egyptian pharmacological knowledge was recorded in various papyri, such as the Ebers Papyrus of 1550 BC and the Edwin Smith Papyrus of the 16th century BC.
The very beginnings of pharmaceutical texts were written on clay tablets by Mesopotamians. Some texts included formulas and instructions for pulverization, infusion, boiling, filtering and spreading; herbs were mentioned as well. Babylon, a state within Mesopotamia, provides the earliest known record of the practice of running an apothecary, i.e. a pharmacy. The care of an ill person involved a priest, a physician, and a pharmacist attending to their needs.
Greece
In Ancient Greece, there existed a separation between physician and herbalist. The duties of the herbalist was to supply physicians with raw materials, including plants, to make medicines. According to Edward Kremers and Glenn Sonnedecker, "before, during and after the time of Hippocrates there was a group of experts in medicinal plants. Probably the most important representative of these rhizotomoi was Diocles of Carystus (4th century BC). He is considered to be the source for all Greek pharmacotherapeutic treatises between the time of Theophrastus and Dioscorides."
Between 60 and 78 AD, the Greek physician Pedanius Dioscorides wrote a five-volume book, De materia medica, covering over 600 plants and coining the term materia medica. It formed the basis for many medieval texts, and was built upon by many Middle Eastern scientists during the Islamic Golden Age.
Asia
The earliest known Chinese manual on materia medica is the Shennong Ben Cao Jing (The Divine Farmer's Herb-Root Classic), dating back to the first century AD. It was compiled during the Han dynasty and was attributed to the mythical Shennong. Earlier literature included lists of prescriptions for specific ailments, exemplified by a manuscript "Recipes for 52 Ailments", found in the Mawangdui, sealed in 168 BC. Present-day Chinese pharmacy is a result of pharmaceutical exchanges between China and the rest of the world in the past centuries.
The earliest known compilation of medicinal substances in Indian traditional medicine dates to the third or fourth century AD (attributed to Sushruta, who is recorded as a physician of the sixth century BC).
There is a stone sign for a pharmacy with a tripod, a mortar, and a pestle opposite one for a doctor in the Arcadian Way in Ephesus, Turkey.
Japan
In Japan, at the end of the Asuka period (538–710) and the early Nara period (710–794), the men who fulfilled roles similar to those of modern pharmacists were highly respected. The place of pharmacists in society was expressly defined in the Taihō Code (701) and re-stated in the Yōrō Code (718). Ranked positions in the pre-Heian Imperial court were established; and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists—and even pharmacist assistants—were assigned status superior to all others in health-related fields such as physicians and acupuncturists. In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor.
Middle Ages
Middle East
In Baghdad the first pharmacies, or drug stores, were established in 754, under the Abbasid Caliphate during the Islamic Golden Age. By the ninth century, these pharmacies were state-regulated.
The advances made in the Middle East in botany and chemistry led medicine in medieval Islam substantially to develop pharmacology. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915), for instance, acted to promote the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation. His Liber servitoris is of particular interest, as it provides the reader with recipes and explains how to prepare the "simples" from which were compounded the complex drugs then generally used. Shapur ibn Sahl (d. 869), was, however, the first physician to initiate a pharmacopoeia, describing a large variety of drugs and remedies for ailments. Al-Biruni (973–1050) wrote one of the most valuable Islamic works on pharmacology entitled Kitab al-Saydalah (The Book of Drugs), where he gave detailed knowledge of the properties of drugs and outlined the role of pharmacy and the functions and duties of the pharmacist.
Ibn Sina (Avicenna), too, described no fewer than 700 preparations, their properties, mode of action and their indications. He devoted in fact a whole volume to simple drugs in The Canon of Medicine. Of great impact were also the works by al-Maridini of Baghdad and Cairo, and Ibn al-Wafid (1008–1074), both of which were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by 'Mesue' the younger, and the Medicamentis Simplicibus by 'Abenguefit'. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Maridini under the title De Veneris. Al-Muwaffaq's contributions in the field are also pioneering. Living in the tenth century, he wrote The Foundations of the True Properties of Remedies, amongst others describing arsenious oxide, and being acquainted with silicic acid. He made a clear distinction between sodium carbonate and potassium carbonate, and drew attention to the poisonous nature of copper compounds, especially copper vitriol, and also lead compounds. He also described the distillation of sea-water for drinking.
Middle Eastern pharmaceutical practitioners experienced an upheaval of their craft in the 13th and 14th centuries, as pharmacists realigned their interests from developing medicinal theories to establishing practical and therapeutic applications of pharmaceuticals. For example, in 1260 CE a Cairene pharmacist named Abu 'l-Munā al-Kuhín al-'Attār published a 25-chapter manual, the Minhāj al-dukkān (How to run a pharmacy), in which he documented the titles of drugs, their ingredients and quantities, preparation methods, and dosages. The manual noticeably lacks any chapters highlighting the desirable characteristics and qualities aspiring physicians should display, and al-Kuhín al-'Attār covers very little of the ethical dilemmas or basic concepts that most pharmacists of the time would normally discuss. This demonstrates that after 1260 CE interest lessened in discussing the culture and values surrounding pharmacy, with a growing interest in developing archives of pharmacy knowledge for the public. It is worth mentioning that written manuals were not commonly produced by pharmacists in the Middle East. It is also during the 13th and 14th centuries that Middle Eastern pharmacopoeias begin to resemble cookbooks more than medical encyclopedias, which Thomas Allsen attributes to the extensive cultural exchange between China, Iran, and the greater Mongol Empire.
Europe
After the fifth century fall of the Western Roman Empire, medicinal knowledge in Europe suffered due to the loss of Greek medicinal texts and a strict adherence to tradition, although an area of Southern Italy near Salerno remained under Byzantine control and developed a hospital and medical school, which became famous by the 11th century.
In the early 11th century, Salerno scholar Constantinos Africanus translated many Arabic books into Latin, driving a shift from Hippocratic medicine towards a pharmaceutical-driven approach advocated by Galen. In medieval Europe, monks typically did not speak Greek, leaving only Latin texts such as the works of Pliny available until these translations by Constantinos. In addition, Arabic medicine became more widely known due to Muslim Spain.
In the 15th century, the printing press spread medicinal textbooks and formularies; the Antidotarium was the first printed drug formulary.
In Europe, pharmacy-like shops began to appear during the 12th century. In 1240, Emperor Frederick II issued a decree by which the physician's and the apothecary's professions were separated.
Old pharmacies continue to operate in Dubrovnik, Croatia located inside the Franciscan monastery, opened in 1317. The Town Hall Pharmacy in Tallinn, Estonia, which dates back to at least 1422, is the oldest continuously run pharmacy in the world still operating in the original premises.
The trend towards pharmacy specialization started to take effect in Bruges, Belgium, where a new law was passed that forbade physicians to prepare medications for patients.
The oldest pharmacy is claimed to be one set up in 1221 in the Church of Santa Maria Novella in Florence, Italy, which now houses a perfume museum. Florence is also the birthplace of the first official pharmacopeia, the Nuovo Receptario, which all pharmacies were to use as guidance in caring for the sick.
The Royal College of Apothecaries of the City and Kingdom of Valencia, founded in 1441, is considered the oldest in the world, with full administrative and legislative powers. The apothecaries of Valencia were the first in the world to prepare their medicines according to the same criteria that are currently required in the official pharmacopoeias.
The Republic of Venice was the first state with modern health policies, requiring that the nature of a drug be made public. Thirteen secret remedies that were offered for sale to the Venetian Republic survive.
Industrialization
The 1800s brought increased technical sophistication. By the late 1880s, German dye manufacturers had perfected the purification of individual organic compounds from tar and other mineral sources and had also established rudimentary methods in organic chemical synthesis.
Chloroform was first used as an anesthetic in 1847, and chloral hydrate was introduced as a sleeping aid and sedative in 1869.
Derivatives of phenothiazines had an important impact on various aspects of medicine, beginning with methylene blue which was originally used as a dye after its synthesis from aniline in 1876. Phenothiazines were used as antimalarials, antiseptics, and antihelminthics up to 1940. The "psychopharmacological revolution" began in 1950 when Chlorpromazine was discovered.
The United States formed the American Pharmaceutical Association in 1852 with its main purpose to advance pharmacists' roles in patient care, assist in furthering career development, spread information about tools and resources, and raising awareness about the roles of pharmacists and their contribution to patient care.
In 1921, Frederick Banting and Charles Best found that the hormone insulin lowered the blood sugar of dogs. This inspired further work by James Collip, who developed pure insulin used for human testing and dramatically changed the prospects for all diabetics.
Alexander Fleming discovered the first antibiotic, penicillin, after observing that a fungus was able to kill off bacteria.
Late modern period
Singapore
The first pharmaceutical infrastructure was the Medical Stores and Dispensary, organized by Sub-Assistant Surgeon Thomas Prendergast during Raffles' expedition to Singapore. British settlement in Singapore led to establishing three General Hospitals. Two were for military and sailors respectively, while the last one was for civilian use.
Medical staffing was ranked as Senior Surgeon, Assistant Surgeon, and Apothecaries. Apothecaries were medical subordinates: doctors who had graduated from Indian medical colleges. To address staffing shortages at the General Hospital, a proposal to select local male students from Penang Free School to become assistant apothecaries was approved. The proposal outlined five years of training and rigorous requirements to qualify for an apprenticeship. Those who completed the apprenticeship experienced not only strict expectations and responsibilities but also little pay. Apothecaries resigned and left for private practice. Private practices advertised heavily in newspapers. This was considered the first system of pharmacy operations.
In the 1820s, James Isaiah (J.I.) Woodford trained to be an apothecary in Penang. He later founded the Kampong Glam Dispensary. Another company was Martin & Line of the Singapore Dispensary. Both establishments were considered chemists and druggists. In addition to traditional Chinese medicine, Western medicine and practices were also established. Dr. Christopher Trebing arrived in Singapore and opened a dispensary called the German Medicine Deity Medical Office. Following Dr. Trebing's passing, the Medical Office continued to be operated by German owners until its demolition in 1970.
Singaporean consumers suffered from misinformation in drug advertisements, and there were no standards or qualifications for dispensing drugs. Opium abuse was widespread, and poisons were readily accessible for criminal use. This led to the creation of the Medical Ordinance in 1904 and the Poisons Ordinance in 1905 to set standards for qualified chemists and druggists to handle these substances. The Poisons Ordinance established regulation of the retail of poisons; one criterion was possession of a certificate from the Principal Civil Medical Officer. In the same year, King Edward VII College of Medicine was established. The medical school hosted classes and examinations for the pharmacy certificate.
Throughout the twentieth century, the government amended its ordinances for education and licensing. These early legal efforts led to Ordinance No. 30 of 1933, which formally required pharmacists to train and then register. King Edward VII College of Medicine admitted its first pharmacy students in 1935.
See also
International Society for the History of Pharmacy
British Society for the History of Pharmacy
History of medicine
History of pharmacy automation
History of pharmacy in the United States
List of drugs by year of discovery
Museum of the History of Lithuanian Medicine and Pharmacy
References
External links
Drug discovery
SPR741 is an experimental antibiotic related to Polymyxin B. It shows activity against a number of bacterial pathogens, especially Acinetobacter baumannii and Klebsiella pneumoniae, acting as an antibiotic adjuvant which disrupts the outer membrane of Gram-negative bacteria and allows other antibiotics to more effectively penetrate into the cell.
References
Antibiotics
Cyclic peptides
Polymyxin antibiotics
Polypeptide antibiotics
The methylglyoxal pathway is an offshoot of glycolysis found in some prokaryotes, which converts glucose into methylglyoxal and then into pyruvate. However, unlike glycolysis, the methylglyoxal pathway does not produce adenosine triphosphate (ATP). The pathway is named after the substrate methylglyoxal, which has three carbons and two carbonyl groups, one on the first carbon and one on the second. Methylglyoxal is, however, a reactive aldehyde that is very toxic to cells; it can inhibit growth in E. coli at millimolar concentrations. The excessive intake of glucose by a cell is the most important trigger for activation of the methylglyoxal pathway.
The methylglyoxal pathway
The methylglyoxal pathway is activated by increased cellular uptake of carbon-containing molecules such as glucose, glucose-6-phosphate, lactate, or glycerol. Methylglyoxal is formed from dihydroxyacetone phosphate (DHAP) by the enzyme methylglyoxal synthase, giving off a phosphate group.
Methylglyoxal is then converted into one of two products: D-lactate or L-lactate. Methylglyoxal reductase and aldehyde dehydrogenase convert methylglyoxal into lactaldehyde and, eventually, L-lactate. If methylglyoxal enters the glyoxalase pathway, it is converted into lactoylglutathione and eventually D-lactate. Both D-lactate and L-lactate are then converted into pyruvate. The pyruvate that is created most often goes on to enter the Krebs cycle (Weber 711–13).
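The route just described can be summarised as a reaction scheme (a schematic of the steps named above; cofactors are omitted):

```latex
\begin{align*}
\text{DHAP} &\xrightarrow{\text{methylglyoxal synthase}} \text{methylglyoxal} + \mathrm{P_i}\\
\text{methylglyoxal} &\xrightarrow{\text{glyoxalase pathway}} \text{lactoylglutathione} \longrightarrow \text{D-lactate}\\
\text{methylglyoxal} &\xrightarrow{\text{reductase, dehydrogenase}} \text{lactaldehyde} \longrightarrow \text{L-lactate}\\
\text{D-/L-lactate} &\longrightarrow \text{pyruvate} \longrightarrow \text{Krebs cycle}
\end{align*}
```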
Enzymes and regulation
The potentially hazardous effects of methylglyoxal require regulation of the reactions with this substrate. Synthesis of methylglyoxal is regulated by levels of DHAP and phosphate concentrations. High concentrations of DHAP encourage methylglyoxal synthase to produce methylglyoxal, while high phosphate concentrations inhibit the enzyme, and therefore the production of more methylglyoxal. The enzyme triose phosphate isomerase affects the levels of DHAP by converting glyceraldehyde 3-phosphate (GAP) into DHAP. The usual pathway converting GAP to pyruvate starts with the enzyme glyceraldehyde 3-phosphate dehydrogenase (Weber 711–13). Low phosphate levels inhibit GAP dehydrogenase; GAP is instead converted into DHAP by triosephosphate isomerase. Again, increased levels of DHAP activate methylglyoxal synthase and methylglyoxal production (Weber 711–13).
The oscillation of methylglyoxal concentration under feast conditions
Jan Weber, Anke Kayser, and Ursula Rinas performed an experiment to test what happened to the methylglyoxal pathway when E. coli was in the presence of a constantly high concentration of glucose. The concentration of methylglyoxal increased until it reached 20 μmol; once it reached this level, it began to decrease. The decrease in the concentration of methylglyoxal was connected to a drop in respiratory activity. When respiratory activity increased, the concentration of methylglyoxal rose again until it reached the 20 μmol level (Weber 714–15).
Why does the methylglyoxal pathway exist?
This pathway produces no ATP, and it does not replace glycolysis; it runs simultaneously with glycolysis and is only initiated by an increased concentration of sugar phosphates. One proposed purpose of the methylglyoxal pathway is to help relieve the stress of elevated sugar phosphate concentrations. In addition, when methylglyoxal is formed from DHAP, an inorganic phosphate is given off, which can replenish a low concentration of needed inorganic phosphate. The methylglyoxal pathway is a rather dangerous tactic, both because less energy is produced and because a toxic compound, methylglyoxal, is formed (Weber 715).
References
Weber, Jan, Anke Kayser, and Ursula Rinas. "Metabolic flux analysis of Escherichia coli in glucose-limited continuous culture. II. Dynamic response to famine and feast, activation of the methylglyoxal pathway and oscillatory behaviour." Microbiology 151 (2005): 707–716. <http://mic.sgmjournals.org/cgi/reprint/151/3/707>.
Saadat, D., Harrison, D.H.T. "Methylglyoxal Synthase From Escherichia Coli." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 25 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1B93>.
"Methylglyoxal Synthase From Escherichia Coli." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 25 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1B93>.
Yun, M., C.-G. Park, J.-Y. Kim, and H.-W. Park. "Structural Analysis of Glyceraldehyde 3-Phosphate Dehydrogenase from Escherichia coli: Direct Evidence for Substrate Binding and Cofactor-Induced Conformational Changes." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 30 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1DC4>.
Cellular respiration
A three-surface aircraft or sometimes three-lifting-surface aircraft has a foreplane, a central wing and a tailplane. The central wing surface always provides lift and is usually the largest, while the functions of the fore and aft planes may vary between types and may include lift, control and/or stability.
In civil aircraft the three surface configuration may be used to give safe stalling characteristics and short takeoff and landing (STOL) performance. It is also claimed to allow minimizing the total wing surface area, reducing the accompanying skin drag. In combat aircraft this configuration may also be used to enhance maneuverability both before and beyond the stall, often in conjunction with vectored thrust.
History
An early designation used in 1911 was "three plane system". The Fernic designs of the 1920s were referred to as "tandem". While there are indeed two lifting wing surfaces in tandem, the tailplane forms a third horizontal surface.
Pioneer experiments
During the pioneer years of aviation a number of aircraft were flown with both fore and aft auxiliary surfaces. The issue of control vs. stability was poorly understood and typically pitch control was on the front surface with the rear surface also lifting, leading to instability in pitch. The Kress Drachenflieger of 1901 and Dufaux triplane of 1908 had insufficient power to take off. More successful types included the Voisin-Farman I (1907) and Curtiss No. 1 (1909). The Wright Brothers too experimented on the basic Flyer design in an effort to obtain both controllability and stability, flying it at various times in first canard, then three surface and finally conventional configurations. By the outbreak of the First World War in 1914, the main wing with smaller rear tail surface had become the conventional configuration and few three surface types would be flown for many years. The Fokker V.8 of 1917 and Caproni Ca.60 Noviplano of 1921 were both failures.
Soft stall and STOL
In the 1920s George Fernic developed the idea of two lifting surfaces in tandem, together with a conventional tailplane. The small foreplane was highly loaded, and as the angle of attack increased it was designed to stall first, causing the nose to drop and allowing the aircraft to recover safely without stalling the main wing. This "soft" stall provides a level of safety which is not usually present in conventional designs. The Fernic T-9, a three-surface monoplane, flew in 1929. Fernic was killed in an accident while flying its successor, the FT-10 Cruisaire.
It is possible to achieve such a soft stall with a pure canard design, but it is then difficult to control the pitching and oscillations can develop as the foreplane repeatedly lifts the nose, stalls and recovers. Also, care must be taken in the design that the turbulent wake from the stalled foreplane does not in itself disturb the airflow over the main wing sufficiently to cause significant loss of lift and cancel out the nose-down pitching moment. In the three-surface design the third, tail surface does not stall and provides better controllability.
In the 1950s James Robertson developed his experimental Skyshark. This was a broadly conventional design but with a variety of features, including a small canard foreplane, intended to give not only a safe stall but good Short takeoff and landing (STOL) performance. The foreplane allowed STOL performance to be achieved without the high angles of attack and accompanying dangers of stalling required by conventional STOL designs. The aircraft was evaluated by the US Army. Robertson's system was commercialised as the Wren 460, a modified Cessna light aircraft. This in turn was later licensed and produced during the 1980s as the Peterson 260SE and with the foreplane modification only as the 230SE. In 2006 a ruggedised variant, the Peterson Katmai, entered production. A broadly similar approach is taken by the 1988 Eagle-XTS and its derivatives, the Eagle 150 series.
Manoeuvrability beyond the stall
Around 1979, military jet designers began studying three-surface configurations as a way to provide enhanced manoeuvrability and control, especially at low speeds and high angles of attack such as during takeoff and combat. In the United States the experimental Grumman X-29 flew in 1984 and a modified McDonnell Douglas F-15, the F-15 STOL/MTD, in 1988 but these designs were not followed up. In the Soviet Union a Sukhoi Su-27 modified with canard foreplanes flew in 1985 and derivatives of this design became the only military three-surface types to enter production.
Minimum wing surface
Also in 1979, Piaggio began design studies on a three-surface civil twin turboprop which, in collaboration with Learjet, would emerge as the Piaggio P.180 Avanti. This type first flew in 1986 and entered service in 1990, with production continuing today. In the Avanti, the three-surface configuration is claimed to significantly reduce wing size, weight and drag compared to the conventional equivalent.
Two experimental aircraft adopting this configuration were subsequently built by Scaled Composites under the lead of Burt Rutan and flown in 1988. The Triumph was a twin-turbofan very light jet aircraft designed for Beechcraft. Flight testing validated the targeted performance range. The Catbird was a single-engined propeller-driven aircraft, envisioned by Rutan as a replacement for the Beechcraft Bonanza. It holds the world record for speed over a closed circuit without payload, set in 2014.
Fighter aircraft design
Some advanced jet aircraft have a three-surface configuration, often in conjunction with thrust vectoring. This is typically intended to enhance control and manoeuvrability, especially at very high angles of attack beyond the stall point of the main wing. Some advanced combat manoeuvres such as Pugachev's Cobra and the Kulbit were first performed on Sukhoi three-surface aircraft.
The experimental Grumman X-29 was of basic "tail-first" canard configuration, with unusual forward-swept wings and strakes extending rearwards from the main wing roots. Movable flaps at the ends of the strakes effectively made it a three-surface design. The X-29 demonstrated exceptional high-angle of attack manoeuvrability.
A more straightforward three-surface design is seen in several variants of the otherwise conventional Sukhoi Su-27. Following the successful addition of canard foreplanes to a development aircraft, these were incorporated into a number of subsequent production variants including the naval Su-33 (Su-27K), some Su-30s, the Su-35 and the Su-37. The Chinese Shenyang J-15 also inherits the configuration of the Su-33.
The McDonnell Douglas F-15 STOL/MTD was an F-15 airframe modified with canard foreplanes and thrust vectoring, designed to demonstrate these technologies for both STOL performance and high manoeuvrability.
Reduced surface area design
The three-surface configuration is claimed to reduce total aerodynamic surface area compared to the conventional and canard configurations, thus enabling drag and weight reductions.
Pitch equilibrium
On most aircraft, the wing centre of pressure moves forward and backward according to flight conditions. If it does not align with the centre of gravity, a corrective or trim force must be applied to prevent the aircraft pitching and thus to maintain equilibrium.
On a conventional aircraft this pitch trim force is applied by a tailplane. On many modern designs, the wing centre of pressure is normally aft of the centre of gravity, so the tailplane must exert a downward force. Any such negative lift generated by the tail must be compensated by additional lift from the main wing, thus increasing wing area, drag, and weight requirements.
On a three-surface aircraft, the pitch trim forces can be shared, as needed in flight, between the foreplane and tailplane. Equilibrium can be achieved with lift from the foreplane rather than downforce from the tailplane. Both effects, the reduced downforce and the extra lifting force, reduce the load on the main wing.
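As a minimal static sketch of this trim sharing (our own simplification, assuming small angles): let $L_f$, $L_w$ and $L_t$ be the foreplane, wing and tailplane lifts, $l_f$ and $l_t$ the foreplane and tailplane moment arms about the centre of gravity, and $x_w$ the distance of the wing centre of pressure aft of it. Then

$L_f + L_w + L_t = W \quad \text{(vertical equilibrium)}$

$L_f\,l_f - L_w\,x_w - L_t\,l_t = 0 \quad \text{(pitch equilibrium, nose-up positive)}$

With $L_f = 0$ (a conventional layout) and $x_w > 0$, the second equation forces $L_t < 0$, a tail downforce that the wing must carry in addition to the weight; a lifting foreplane ($L_f > 0$) lets the same balance be struck with $L_t$ near zero or positive, unloading the main wing.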
The Piaggio P.180 Avanti has flaps on both its forward wing and main wing. Both flaps deploy in concert to maintain pitch neutrality for take-off and landing.
Static stability and the stall
On a canard aircraft, to allow natural static pitch stability in normal flight, the foreplane must provide lift. Also, in order for the aircraft to have safe stall characteristics the foreplane must stall before the main wing, pitching the aircraft down and allowing the aircraft to recover. This means that a safety margin must be used on the main wing area so that its maximum lift coefficient and wing loading are never attained in practice. This in turn means that the main wing must be increased in size.
On a three-surface aircraft, the tailplane acts as a conventional horizontal stabiliser. In the stall condition, even if the main wing is stalled the tailplane can provide a pitch-down moment and allow recovery. The wing may thus be used up to its maximum lift coefficient, an advantage that may translate into a reduction of its area and weight.
A lifting foreplane is positioned ahead of the centre of gravity, so its lift moment acts in the same direction as any movement in pitch. If the aircraft is to be naturally stable, the foreplane's size, lift slope and moment arm must be chosen so that it does not overpower the stabilizing moment provided by the wing and tailplane. Stability constraints thus limit the foreplane's volume ratio (a measure of its effectiveness in trim and stability terms), which may in turn limit its ability to share pitch trim forces as described above.
Wing area reduction
The minimum size of the lifting wings of an aircraft is determined by: the weight of the aircraft, the force required to oppose the negative lift produced by the horizontal stabilizer, the targeted take off and landing speeds, and the coefficient of lift of the wings.
Most modern aircraft use trailing edge flaps on the main wing to increase the wing's lift coefficient during takeoff and landing, thus allowing the wing to be smaller than it would otherwise need to be. This may reduce the weight of the wing, and it always reduces the surface area of the wing. The reduction of surface area proportionately reduces skin drag at all speeds.
A drawback of the use of trailing edge flaps is that they produce significant negative pitching moment when in use. In order to balance this pitching moment the horizontal stabilizer must be somewhat larger than it would otherwise be, so that it can produce enough force to balance the negative pitching moment created by the trailing edge flaps. This, in turn, means that the main wing must be somewhat larger than it would otherwise have to be to balance the larger negative lift produced by the larger horizontal stabilizer.
On a canard aircraft the foreplane can provide positive lift at takeoff, reducing some of the down force the rear stabilizer would otherwise have to create. However, the main wing must be large enough to not only lift the aircraft's remaining weight at takeoff but also to provide adequate safety margin to prevent stalling. On a three-surface aircraft, neither of these handicaps is present and the main wing can be reduced in size, so also reducing weight and drag. It is claimed that the total area of all wing surfaces of a three-surface aircraft can be less than that of the equivalent two-surface aircraft, so reducing both weight and drag.
Minimum area in cruise can be further reduced through the use of conventional high-lift devices such as flaps, allowing a three-surface design to have minimum surface area at all points in the flight envelope.
Examples of reduced-area three-surface aircraft include the Piaggio P.180 Avanti, and the Scaled Composites Triumph and Catbird. These aircraft were designed to expose a minimum of total surface area to the slipstream; thus reducing surface drag for speed and fuel efficiency. Several reviews compare the Avanti's top speed and service ceiling to that of lower-end jet aircraft, and report significantly better fuel efficiency at cruise speed. Piaggio attributes this performance in part to the layout of the aircraft, claiming a 34% reduction in total wing area compared to a conventional layout.
List of three-surface aircraft
Aircraft | Country | Propulsion | Role | Date | Status | Number | Notes
Aceair AERIKS 200 | Switzerland | Propeller | Private | 2002 | Prototype | | Designed as a home-build kit.
Curtiss/AEA June Bug | US | Propeller | Experimental | 1908 | Prototype | |
Caproni Ca.60 Noviplano | Italy | Propeller | Transport | 1921 | Prototype | | Three triplane stacks, making nine wings in all. Flying boat.
Curtiss No. 1 | US | Propeller | Experimental | 1909 | Prototype | | Also known as the Curtiss Gold Bug or Curtiss Golden Flyer.
de la Farge Pulga | Argentina | Propeller | Private | circa 1990 | | | Modified Flying Flea.
Dufaux | Switzerland | Propeller | Experimental | 1908 | Prototype | | First Swiss aircraft to fly.
Eagle-XTS | Australia | Propeller | Private | 1988 | | |
Eagle Aircraft Eagle 150 | Australia | Propeller | Private | 1997 | | |
Farman three wing monoplane | France | Propeller | Experimental | 1908 | Prototype | |
Fernic T-9 | US | Propeller | Private | 1929 | | |
Fernic-Cruisaire FT-10 | US | Propeller | Private | 1930 | | |
Fokker V.8 | Germany | Propeller | Experimental | 1917 | Prototype | |
Grumman X-29 | US | Jet | Experimental | 1984 | Prototype | | Forward-swept wing with canard foreplane and tailboom flaps.
Herring-Burgess | US | Propeller | | 1910 | | | Biplane.
Kress Drachenflieger | Austria-Hungary | Propeller | Experimental | 1901 | Prototype | | Failed to fly: engine lacked sufficient power to take off.
McDonnell Douglas F-15 STOL/MTD | US | Jet | Experimental | 1988 | Prototype | | Technology demonstrator of enhanced manoeuvrability, including use of thrust vectoring.
Mikoyan-Gurevich Ye-8 | Soviet Union | Jet | Experimental | 1962 | Prototype | |
Miller-Bohannon JM-2 Pushy Galore | US | Propeller | Private | 1989 | Operational | 1 | Racer in pusher configuration.
NPO Molniya 1 | Russia | | Transport | 1992 | | |
Peterson 260SE and 230SE | US | Propeller | Private | 1986 | | |
Peterson Katmai | US | Propeller | Private | | | |
Piaggio P.180 Avanti | Italy | Propeller | Transport | 1986 | Production | |
Robertson Skyshark | US | Propeller | Private | | | |
Rutan Scaled Model 120 'Predator' | US | Propeller | Experimental | 1984 | Prototype | |
Scaled Composites ATTT (model 133) | US | Propeller | Experimental | 1987 | Prototype | |
Scaled Composites Triumph (model 143) | US | Jet | Experimental | 1988 | Prototype | |
Scaled Composites Catbird (model 181) | US | Propeller | Experimental | 1988 | Prototype | |
Shenyang J-15 | China | Jet | High-manoeuvrability combat | 2009 | | |
Short No.1 biplane | UK | Propeller | Experimental | 1910 | Prototype | | Not flown.
Sukhoi Su-27M | Soviet Union | Jet | High-manoeuvrability combat | | | | Some examples fitted with a foreplane in addition to the standard tailplane.
Sukhoi Su-30 MKI | India | Jet | Fighter | 1989 | Production | | License-built variant of the Sukhoi Su-30.
Sukhoi Su-33 | Soviet Union | Jet | Fighter | 1987 | Production | |
Sukhoi Su-34 | Russia | Jet | Attack | 1990 | Production | |
Sukhoi Su-37 | Russia | Jet | Fighter | 1996 | Prototype | |
Sukhoi Su-47 | Russia | Jet | Experimental | 1997 | Prototype | | Main wing is forward-swept.
Voisin-Farman I | France | Propeller | Experimental | 1907 | | |
Wren 460 | US | Propeller | Private | 1963 | | |
Wright Model A (Modified) | US | Propeller | Experimental | 1909 | | |
See also
Canard (aeronautics)
Tailplane
Tandem wing
Wing configuration
References
Notes
Bibliography
Garrison, P; TECHNICALITIES: Three's Company; Flying, December 2002, pp85–86
Aircraft configurations
Wing configurations
Scaled Composites
Rutan aircraft
Lists of aircraft by wing configuration | Three-surface aircraft | Engineering | 3,757 |
66,819 | https://en.wikipedia.org/wiki/Root%20mean%20square | In mathematics, the root mean square (abbrev. RMS, or rms) of a set of numbers is the square root of the set's mean square.
Given a set $x$, its RMS is denoted as either $x_\mathrm{RMS}$ or $\mathrm{RMS}_x$. The RMS is also known as the quadratic mean (denoted $M_2$), a special case of the generalized mean. The RMS of a continuous function $f(t)$ is denoted $f_\mathrm{RMS}$ and can be defined in terms of an integral of the square of the function.
In estimation theory, the root-mean-square deviation of an estimator measures how far the estimator strays from the data.
Definition
The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean of the squares of the values, or the square of the function that defines the continuous waveform.
In the case of a set of n values $\{x_1, x_2, \dots, x_n\}$, the RMS is
$x_\mathrm{RMS} = \sqrt{\frac{1}{n}\left(x_1^2 + x_2^2 + \cdots + x_n^2\right)}.$
The corresponding formula for a continuous function (or waveform) f(t) defined over the interval $T_1 \le t \le T_2$ is
$f_\mathrm{RMS} = \sqrt{\frac{1}{T_2 - T_1}\int_{T_1}^{T_2} [f(t)]^2\,\mathrm{d}t},$
and the RMS for a function over all time is
$f_\mathrm{RMS} = \lim_{T \to \infty}\sqrt{\frac{1}{2T}\int_{-T}^{T} [f(t)]^2\,\mathrm{d}t}.$
The RMS over all time of a periodic function is equal to the RMS of one period of the function. The RMS value of a continuous function or signal can be approximated by taking the RMS of a sample consisting of equally spaced observations. Additionally, the RMS value of various waveforms can also be determined without calculus, as shown by Cartwright.
In the case of the RMS statistic of a random process, the expected value is used instead of the mean.
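The discrete definition transcribes directly into code; a minimal Python sketch (the function name is our own):

    import math

    def rms(values):
        """Root mean square: square root of the arithmetic mean of the squares."""
        return math.sqrt(sum(v * v for v in values) / len(values))

    print(rms([1, 2, 3, 4, 5]))   # sqrt(55 / 5) = sqrt(11) ~ 3.3166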
In common waveforms
If the waveform is a pure sine wave, the relationships between amplitudes (peak-to-peak, peak) and RMS are fixed and known, as they are for any continuous periodic wave. However, this is not true for an arbitrary waveform, which may not be periodic or continuous. For a zero-mean sine wave, the relationship between RMS and peak-to-peak amplitude is:
Peak-to-peak $= 2\sqrt{2} \times \mathrm{RMS} \approx 2.828 \times \mathrm{RMS}.$
For other waveforms, the relationships are not the same as they are for sine waves. For example, for either a triangular or sawtooth wave:
Peak-to-peak $= 2\sqrt{3} \times \mathrm{RMS} \approx 3.464 \times \mathrm{RMS}.$
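These fixed ratios are easy to confirm numerically; a minimal sketch sampling one period of each unit-amplitude waveform (the sampling scheme is our own):

    import math

    n = 100_000
    sine     = [math.sin(2 * math.pi * k / n) for k in range(n)]
    sawtooth = [2 * (k / n) - 1 for k in range(n)]   # ramps from -1 to +1

    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))

    print(rms(sine), 1 / math.sqrt(2))       # both ~ 0.7071 (peak / sqrt(2))
    print(rms(sawtooth), 1 / math.sqrt(3))   # both ~ 0.5774 (peak / sqrt(3))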
In waveform combinations
Waveforms made by summing known simple waveforms have an RMS value that is the root of the sum of squares of the component RMS values, if the component waveforms are orthogonal (that is, if the average of the product of one simple waveform with another is zero for all pairs other than a waveform times itself).
Alternatively, for waveforms that are perfectly positively correlated, or "in phase" with each other, their RMS values sum directly.
Uses
In electrical engineering
Current
The RMS of an alternating electric current equals the value of constant direct current that would dissipate the same power in a resistive load.
Voltage
A special case of RMS of waveform combinations is:
$\mathrm{RMS}_\mathrm{total} = \sqrt{\mathrm{RMS}_\mathrm{DC}^2 + \mathrm{RMS}_\mathrm{AC}^2},$
where $\mathrm{RMS}_\mathrm{DC}$ refers to the direct current (or average) component of the signal, and $\mathrm{RMS}_\mathrm{AC}$ is the alternating current component of the signal.
Average electrical power
Electrical engineers often need to know the power, P, dissipated by an electrical resistance, R. It is easy to do the calculation when there is a constant current, I, through the resistance. For a load of R ohms, power is given by:
$P = I^2 R.$
However, if the current is a time-varying function, I(t), this formula must be extended to reflect the fact that the current (and thus the instantaneous power) is varying over time. If the function is periodic (such as household AC power), it is still meaningful to discuss the average power dissipated over time, which is calculated by taking the average power dissipation:
$P_\mathrm{avg} = \frac{1}{T}\int_0^T I(t)^2 R\,\mathrm{d}t = R\left(\frac{1}{T}\int_0^T I(t)^2\,\mathrm{d}t\right) = R\, I_\mathrm{RMS}^2.$
So, the RMS value, IRMS, of the function I(t) is the constant current that yields the same power dissipation as the time-averaged power dissipation of the current I(t).
Average power can also be found using the same method in the case of a time-varying voltage, V(t), with RMS value VRMS:
$P_\mathrm{avg} = \frac{V_\mathrm{RMS}^2}{R}.$
This equation can be used for any periodic waveform, such as a sinusoidal or sawtooth waveform, allowing us to calculate the mean power delivered into a specified load.
By taking the square root of both these equations and multiplying them together, the power is found to be:
$P_\mathrm{avg} = V_\mathrm{RMS} \times I_\mathrm{RMS}.$
Both derivations depend on voltage and current being proportional (that is, the load, R, is purely resistive). Reactive loads (that is, loads capable of not just dissipating energy but also storing it) are discussed under the topic of AC power.
In the common case of alternating current when I(t) is a sinusoidal current, as is approximately true for mains power, the RMS value is easy to calculate from the continuous case equation above. If Ip is defined to be the peak current, then:
$I(t) = I_p \sin(\omega t),$
where t is time and ω is the angular frequency (ω = 2π/T, where T is the period of the wave).
Since Ip is a positive constant that can be factored out of the square within the integral:
$I_\mathrm{RMS} = \sqrt{\frac{1}{T}\int_0^T \left[I_p \sin(\omega t)\right]^2\,\mathrm{d}t} = I_p \sqrt{\frac{1}{T}\int_0^T \sin^2(\omega t)\,\mathrm{d}t}.$
Using a trigonometric identity to eliminate the squaring of the trig function:
$I_\mathrm{RMS} = I_p \sqrt{\frac{1}{T}\int_0^T \frac{1 - \cos(2\omega t)}{2}\,\mathrm{d}t} = I_p \sqrt{\frac{1}{T}\left[\frac{t}{2} - \frac{\sin(2\omega t)}{4\omega}\right]_0^T},$
but since the interval is a whole number of complete cycles (per definition of RMS), the sine terms will cancel out, leaving:
$I_\mathrm{RMS} = I_p \sqrt{\frac{1}{T} \cdot \frac{T}{2}} = \frac{I_p}{\sqrt{2}}.$
A similar analysis leads to the analogous equation for sinusoidal voltage:
$V_\mathrm{RMS} = \frac{V_p}{\sqrt{2}},$
where IP represents the peak current and VP represents the peak voltage.
Because of their usefulness in carrying out power calculations, listed voltages for power outlets (for example, 120V in the US, or 230V in Europe) are almost always quoted in RMS values, and not peak values. Peak values can be calculated from RMS values from the above formula, which implies VP = VRMS × √2, assuming the source is a pure sine wave. Thus the peak value of the mains voltage in the USA is about 120 × √2, or about 170 volts. The peak-to-peak voltage, being double this, is about 340 volts. A similar calculation indicates that the peak mains voltage in Europe is about 325 volts, and the peak-to-peak mains voltage, about 650 volts.
RMS quantities such as electric current are usually calculated over one cycle. However, for some purposes the RMS current over a longer period is required when calculating transmission power losses. The same principle applies, and (for example) a current of 10 amps used for 12 hours each 24-hour day represents an average current of 5 amps, but an RMS current of 7.07 amps, in the long term.
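Worked through, that example is:

$I_\mathrm{avg} = \frac{10 \times 12 + 0 \times 12}{24} = 5\ \mathrm{A}, \qquad I_\mathrm{RMS} = \sqrt{\frac{10^2 \times 12 + 0^2 \times 12}{24}} = \sqrt{50} \approx 7.07\ \mathrm{A}.$

Since resistive transmission losses scale with $I_\mathrm{RMS}^2$, the intermittent 10 A current dissipates twice the energy per day that a steady 5 A current would.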
The term RMS power is sometimes erroneously used (e.g., in the audio industry) as a synonym for mean power or average power (it is proportional to the square of the RMS voltage or RMS current in a resistive load). For a discussion of audio power measurements and their shortcomings, see Audio power.
Speed
In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared-speed. The RMS speed of an ideal gas is calculated using the following equation:
where R represents the gas constant, 8.314 J/(mol·K), T is the temperature of the gas in kelvins, and M is the molar mass of the gas in kilograms per mole. In physics, speed is defined as the scalar magnitude of velocity. For a stationary gas, the average speed of its molecules can be in the order of thousands of km/h, even though the average velocity of its molecules is zero.
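As a quick worked evaluation of the formula (our illustrative choice of nitrogen at 300 K):

    import math

    R = 8.314      # gas constant, J/(mol*K)
    T = 300.0      # temperature, K
    M = 0.028      # molar mass of N2, kg/mol

    v_rms = math.sqrt(3 * R * T / M)
    print(v_rms)   # ~ 517 m/s, i.e. roughly 1860 km/h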
Error
When two data sets — one set from theoretical prediction and the other from actual measurement of some physical variable, for instance — are compared, the RMS of the pairwise differences of the two data sets can serve as a measure of how far on average the error is from 0. The mean of the absolute values of the pairwise differences could be a useful measure of the variability of the differences. However, the RMS of the differences is usually the preferred measure, probably due to mathematical convention and compatibility with other formulae.
In frequency domain
The RMS can be computed in the frequency domain, using Parseval's theorem. For a sampled signal $x[n] = x(nT)$, where $T$ is the sampling period,
$\sum_{n=1}^{N} x^2[n] = \frac{1}{N}\sum_{m=1}^{N} \left|X[m]\right|^2,$
where $X[m] = \mathrm{DFT}\{x[n]\}$ and N is the sample size, that is, the number of observations in the sample and DFT coefficients.
In this case, the RMS computed in the time domain is the same as in the frequency domain:
$\mathrm{RMS}\{x[n]\} = \sqrt{\frac{1}{N}\sum_{n} x^2[n]} = \sqrt{\frac{1}{N^2}\sum_{m} \left|X[m]\right|^2} = \frac{1}{N}\sqrt{\sum_{m} \left|X[m]\right|^2}.$
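A numerical check of this identity, sketched with NumPy's FFT convention (the test signal is an arbitrary choice):

    import numpy as np

    x = np.random.default_rng(0).normal(size=1024)   # any sampled signal
    X = np.fft.fft(x)
    N = len(x)

    rms_time = np.sqrt(np.mean(x ** 2))
    rms_freq = np.sqrt(np.sum(np.abs(X) ** 2)) / N   # Parseval: sum|X|^2 = N * sum x^2

    print(np.allclose(rms_time, rms_freq))           # True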
Relationship to other statistics
The standard deviation $\sigma_x$ of a population or a waveform $x$ is the RMS deviation of $x$ from its arithmetic mean $\bar{x}$. They are related to the RMS value of $x$ by
$x_\mathrm{RMS}^2 = \bar{x}^2 + \sigma_x^2.$
From this it is clear that the RMS value is always greater than or equal to the average, in that the RMS includes the squared deviation (error) as well.
Physical scientists often use the term root mean square as a synonym for standard deviation when it can be assumed the input signal has zero mean, that is, referring to the square root of the mean squared deviation of a signal from a given baseline or fit. This is useful for electrical engineers in calculating the "AC only" RMS of a signal. Standard deviation being the RMS of a signal's variation about the mean, rather than about 0, the DC component is removed (that is, RMS(signal) = stdev(signal) if the mean signal is 0).
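The identity relating RMS, mean, and standard deviation is likewise easy to confirm on sampled data (the distribution parameters are arbitrary):

    import numpy as np

    x = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=100_000)

    rms = np.sqrt(np.mean(x ** 2))
    mean, std = x.mean(), x.std()            # population std (ddof=0)

    print(rms ** 2, mean ** 2 + std ** 2)    # equal up to floating point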
See also
Average rectified value (ARV)
Central moment
Geometric mean
Glossary of mathematical symbols
L2 norm
Least squares
Mean squared displacement
Pythagorean addition
True RMS converter
Notes
References
External links
A case for why RMS is a misnomer when applied to audio power
A Java applet on learning RMS
Means
Statistical deviation and dispersion
| Root mean square | Physics,Mathematics | 1,971 |
998,456 | https://en.wikipedia.org/wiki/Chloralkali%20process | The chloralkali process (also chlor-alkali and chlor alkali) is an industrial process for the electrolysis of sodium chloride (NaCl) solutions. It is the technology used to produce chlorine and sodium hydroxide (caustic soda), which are commodity chemicals required by industry. Thirty five million tons of chlorine were prepared by this process in 1987. In 2022, this had increased to about 97 million tonnes. The chlorine and sodium hydroxide produced in this process are widely used in the chemical industry.
Usually the process is conducted on a brine (an aqueous solution of concentrated NaCl), in which case sodium hydroxide (NaOH), hydrogen, and chlorine result. When using calcium chloride or potassium chloride, the products contain calcium or potassium instead of sodium. Related processes are known that use molten NaCl to give chlorine and sodium metal or condensed hydrogen chloride to give hydrogen and chlorine.
The process has a high energy consumption per tonne of sodium hydroxide produced. Because the process yields equivalent amounts of chlorine and sodium hydroxide (two moles of sodium hydroxide per mole of chlorine), it is necessary to find a use for these products in the same proportion. For every mole of chlorine produced, one mole of hydrogen is produced. Much of this hydrogen is used to produce hydrochloric acid, ammonia, hydrogen peroxide, or is burned for power and/or steam production.
History
The chloralkali process has been in use since the 19th century and is a primary industry in the United States, Western Europe, and Japan. It has become the principal source of chlorine during the 20th century. The diaphragm cell process and the mercury cell process have been used for over 100 years but are environmentally unfriendly through their use of asbestos and mercury, respectively. The membrane cell process, which was only developed in the past 60 years, is a superior method with its improved energy efficiency and lack of harmful chemicals.
Although the first formation of chlorine by the electrolysis of brine was attributed to chemist William Cruikshank in 1800, it was 90 years later that the electrolytic method was used successfully on a commercial scale. Industrial scale production began in 1892. In 1833, Faraday formulated the laws that governed the electrolysis of aqueous solutions, and patents were issued to Cook and Watt in 1851 and to Stanley in 1853 for the electrolytic production of chlorine from brine.
Process systems
Three production methods are in use. While the mercury cell method produces chlorine-free sodium hydroxide, the use of several tonnes of mercury leads to serious environmental problems. In a normal production cycle a few hundred pounds of mercury per year are emitted, which accumulate in the environment. Additionally, the chlorine and sodium hydroxide produced via the mercury-cell chloralkali process are themselves contaminated with trace amounts of mercury. The membrane and diaphragm methods use no mercury, but the sodium hydroxide contains chlorine, which must be removed.
Membrane cell
The most common chloralkali process involves the electrolysis of aqueous sodium chloride (a brine) in a membrane cell. A membrane, such as Nafion, Flemion or Aciplex, is used to prevent the reaction between the chlorine and hydroxide ions.
Saturated brine is passed into the first chamber of the cell. Due to the higher concentration of chloride ions in the brine, the chloride ions are oxidised at the anode, losing electrons to become chlorine gas (A in figure):
2Cl− → Cl2 + 2e−
At the cathode, positive hydrogen ions pulled from water molecules are reduced by the electrons provided by the electrolytic current, to hydrogen gas, releasing hydroxide ions into the solution (C in figure):
2H2O + 2e− → H2 + 2OH−
The ion-permeable ion-exchange membrane at the center of the cell allows only the sodium ions (Na+) to pass to the second chamber where they react with the hydroxide ions to produce caustic soda (NaOH) (B in figure):
Na+ + OH− → NaOH
The overall reaction for the electrolysis of brine is thus:
2NaCl + 2H2O → Cl2 + H2 + 2NaOH
Diaphragm cell
In the diaphragm cell process, there are two compartments separated by a permeable diaphragm, often made of asbestos fibers. Brine is introduced into the anode compartment and flows into the cathode compartment. Similarly to the membrane cell, chloride ions are oxidized at the anode to produce chlorine, and at the cathode, water is split into caustic soda and hydrogen. The diaphragm prevents the reaction of the caustic soda with the chlorine. A diluted caustic brine leaves the cell. The caustic soda must usually be concentrated to 50% and the salt removed. This is done using an evaporative process with about three tonnes of steam per tonne of caustic soda. The salt separated from the caustic brine can be used to saturate diluted brine. The chlorine contains oxygen and must often be purified by liquefaction and evaporation.
Mercury cell
In the mercury-cell process, also known as the Castner–Kellner process, a saturated brine solution floats on top of a thin layer of mercury. The mercury is the cathode, where sodium is produced and forms an amalgam with the mercury. The amalgam is continuously drawn out of the cell and reacted with water which decomposes the amalgam into sodium hydroxide, hydrogen and mercury. The mercury is recycled into the electrolytic cell. Chlorine is produced at the anode and bubbles out of the cell. Mercury cells are being phased out due to concerns about the high toxicity of mercury and mercury poisoning from mercury cell pollution such as occurred in Canada (see Ontario Minamata disease) and Japan (see Minamata disease).
Unpartitioned cell
The initial overall reaction produces hydroxide and also hydrogen and chlorine gases:
2 NaCl + 2 H2O → 2 NaOH + H2 + Cl2
Without a membrane, the OH− ions produced at the cathode are free to diffuse throughout the electrolyte. As the electrolyte becomes more basic due to the production of OH−, less Cl2 emerges from the solution as it begins to disproportionate to form chloride and hypochlorite ions at the anode:
Cl2 + 2 NaOH → NaCl + NaClO + H2O
The more opportunity the Cl2 has to interact with NaOH in the solution, the less Cl2 emerges at the surface of the solution and the faster the production of hypochlorite progresses. This depends on factors such as solution temperature, the amount of time the Cl2 molecule is in contact with the solution, and concentration of NaOH.
Likewise, as hypochlorite increases in concentration, chlorates are produced from them:
3 NaClO → NaClO3 + 2 NaCl
This reaction is accelerated at temperatures above about 60 °C. Other reactions occur, such as the self-ionization of water and the decomposition of hypochlorite at the cathode, the rate of the latter depends on factors such as diffusion and the surface area of the cathode in contact with the electrolyte.
If current is interrupted while the cathode is submerged, cathodes that are attacked by hypochlorites, such as those made from stainless steel, will dissolve in unpartitioned cells.
If producing hydrogen and oxygen gases is not a priority, the addition of 0.18% sodium or potassium chromate to the electrolyte will improve the efficiency of producing the other products.
Electrodes
Due to the corrosive nature of chlorine production, the anode (where the chlorine is formed) must be non-reactive and has been made from materials such as platinum metal, graphite (called plumbago in Faraday's time), or platinized titanium. A mixed metal oxide clad titanium anode (also called a dimensionally stable anode) is the industrial standard today. Historically, platinum, magnetite, lead dioxide, manganese dioxide, and ferrosilicon (13–15% silicon) have also been used as anodes. Platinum alloyed with iridium is more resistant to corrosion from chlorine than pure platinum. Unclad titanium cannot be used as an anode because it anodizes, forming a non-conductive oxide and passivates. Graphite will slowly disintegrate due to internal electrolytic gas production from the porous nature of the material and carbon dioxide forming due to carbon oxidation, causing fine particles of graphite to be suspended in the electrolyte that can be removed by filtration. The cathode (where hydroxide forms) can be made from unalloyed titanium, graphite, or a more easily oxidized metal such as stainless steel or nickel.
Manufacturer associations
The interests of chloralkali product manufacturers are represented at regional, national and international levels by associations such as Euro Chlor and The World Chlorine Council.
See also
Electrochemical engineering
Gas diffusion electrode
Solvay process, a similar industrial method of making sodium carbonate from calcium carbonate and sodium chloride
References
Further reading
Bommaraju, Tilak V.; Orosz, Paul J.; Sokol, Elizabeth A.(2007). "Brine Electrolysis." Electrochemistry Encyclopedia. Cleveland: Case Western Reserve University.
External links
Animation showing the membrane cell process
Animation showing the diaphragm cell process
Chemical processes
Electrolysis
Industrial gases | Chloralkali process | Chemistry | 2,061 |
10,001,714 | https://en.wikipedia.org/wiki/Lightweight%20programming%20language | Lightweight programming languages are designed to have a small memory footprint, to be easy to implement (important when porting a language to different computer systems), and/or to have minimalist syntax and features.
These programming languages have simple syntax and semantics, so one can learn them quickly and easily. Some lightweight languages (for example Lisp, Forth, and Tcl) are so simple to implement that they have many implementations (dialects).
Compiled languages
BASIC
BASIC implementations like Tiny BASIC were designed to be lightweight so that they could run on the microcomputers of the 1980s, because of memory constraints.
Forth
Forth is a stack-based concatenative imperative programming language using reverse polish notation.
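Forth itself is not shown here; as a rough sketch of the postfix, stack-driven evaluation model it builds on, consider this minimal reverse-polish evaluator in Python (the function name and token set are our own):

    def eval_rpn(tokens):
        """Evaluate reverse-polish tokens with an explicit stack."""
        stack = []
        ops = {"+": lambda a, b: a + b,
               "-": lambda a, b: a - b,
               "*": lambda a, b: a * b,
               "/": lambda a, b: a / b}
        for tok in tokens:
            if tok in ops:
                b, a = stack.pop(), stack.pop()   # operands in push order
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))
        return stack.pop()

    print(eval_rpn("3 4 + 2 *".split()))   # (3 + 4) * 2 = 14.0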
Toy languages
FALSE
FALSE is a minimalist esoteric programming language, with a complete implementation done in 1024 bytes.
Brainfuck
Brainfuck is an extremely minimalist esoteric programming language.
FlipJump
FlipJump is a minimalistic One-instruction set computer.
Scripting languages
Io
Io is a prototype-based object-oriented scripting language.
Lisp
Lisp-like languages are very simple to implement, so there are many lightweight implementations of it.
There are some notable implementations:
newLISP
PicoLisp
Derivatives of Lisp:
Pico
Rebol
Red
Scheme
Tcl
Tcl-like languages can be easily implemented because of their simple syntax. Tcl itself may not be especially lightweight, but there exist several lightweight implementations of languages with Tcl-like syntax.
Ring
Ring is a lightweight multi-paradigm scripting language.
Embedded languages
ECMAScript
There are many embeddable implementations of ECMAScript, such as:
Espruino
JerryScript
QuickJS
Boa (JavaScript engine)
Derivatives of ECMAScript:
Squirrel
Lua
Lua is a small (C source is approx. 300 kB tarball, as of version 5.3.5), portable and embeddable scripting language (with LuaJIT as a JIT compiler improving speed). It can be embedded in applications such as computer games to provide runtime scripting capabilities.
Wren
Wren is a small, fast, object-oriented scripting language.
References
See also
Lightweight markup language
Lightweight software
Computer programming
Programming languages | Lightweight programming language | Technology,Engineering | 450 |
33,925,022 | https://en.wikipedia.org/wiki/J.%20W.%20Jenkinson%20Memorial%20Lectureship | John Wilfred Jenkinson (1871–1915) was a pioneer in the field of comparative developmental biology (the forerunner of evolutionary developmental biology) and one of the first to introduce experimental embryology to the UK at the start of the 20th century. He originally studied Classics as an undergraduate student at Oxford, before switching his attention to Zoology under the guidance of W. F. R. Weldon at University College London. He also travelled to Utrecht University in the Netherlands, to work with Ambrosius Hubrecht, and was exposed to new methods and approaches in embryology. In 1905, he was appointed the first lecturer in Embryology at the University of Oxford in England, and in 1909 published the first English textbook on experimental embryology in which he summarized recent work in the emerging scientific discipline and criticized neo-vitalist theories of Hans Driesch.
At the outbreak of war in 1914, Jenkinson joined the Oxford Volunteer Training Corps. In January 1915 he was assigned to the 12th Battalion of the Worcestershire Regiment and was soon promoted to the rank of captain. Jenkinson left England with his regiment in May, posted to the Dardanelles in Turkey. On 4 June 1915, just days after arriving on the Gallipoli peninsula, Jenkinson was killed. After Jenkinson's death at Gallipoli in June 1915, the University of Oxford established the John Wilfred Jenkinson Lectureship in his memory. The original statutes required the lecturer or lecturers, appointed annually, to deliver “one or more lectures or lecture demonstrations on comparative or experimental embryology”.
Each year, a Board of Electors selects one or two Jenkinson Lecturers who are invited to Oxford to present a lecture in the broad area of developmental biology. The list of Jenkinson Lecturers includes many distinguished names, including Nobel Laureates (marked with *).
Holders of the J. W. Jenkinson Lectureship
1961 Michail Fischberg
1962 P. H. Tuft
1963 Wolfgang Beerman
1964 Jean Brachet
1965 Rupert E. Billingham
1966 Jan Erik Edstrom
1966 Alberto Monroy
1967 *Bob Edwards
1968 Georg Klein
1969 R. M. Gaze
1969 H. Chantrenne
1970 *Sydney Brenner
1970 Niels Kaj Jerne
1971 Ernst Hadorn
1971 J. M. Mitchison
1972 Anne McLaren
1972 G. Gerisch
1973 Susumu Ohno
1974 Ruggero Ceppellini
1975 Andrzej Tarkowski
1976 No formal lecture was held
1977 Nils R. Ringertz
1978 Martin Luscher
1978 Armin C. Braun
1980 Pasko Rakic
1980 Walter Fiers
1980 Nicole Le Douarin
1981 Werner Reichardt
1981 Antonio Garcia-Bellido
1982 Lionel Jaffe
1982 Maurice Sussmann
1983 Stanley M. Crain
1984 Rudolf Jaenisch
1984 *Robert G. Edwards
1985 G. S. Dawes
1985 *François Jacob
1985 Hans G. Schweiger
1986 W. Maxwell Cowan
1986 Marc Kirschner
1986 Peter A. Lawrence
1987 *Gerald Edelman
1988 Corey Goodman
1989 Josef Schell
1989 *John Gurdon
1989 Webster K. Cavenee
1990 Kai Simons
1991 Carla Shatz
1991 Harold Weintraub
1992 Manfred Schartl
1992 Noriyuki Satoh
1992 Bruce Cattanach
1993 Chuck B. Kimmell
1993 Andrew Lumsden
1994 Peter Gruss
1995 Brigid Hogan
1996 Ray Guillery
1996 Susan K McConnell
1997 James C Smith
1997 Cliff Tabin
1997 *Tim Hunt
1998 Peter J. Bryant
1999 Davor Solter
1999 Françoise Dieterlan-Lievre
2000 Peter Holland
2000 Max Bear
2000 Eduardo Boncinelli
2001 Marc Tessier-Lavigne
2002 *Roger Tsien
2002 Enrico Coen
2003 Mike Bate
2004 Cheryll Tickle
2004 Rudy Raff
2005 Gerd Jürgens
2005 David Weisblat
2006 Stephen Cohen
2006 Michael Akam
2007 Shigeru Kuratani
2007 Janet Rossant
2008 Richard Gardner
2008 Didier Stainier
2009 Sean B. Carroll
2009 Wendy Bickmore
2010 Nick Hastie
2010 Paul Sternberg
2010 David Kingsley
2011 Jurgen Knoblich
2012 Caroline Dean
2012 Hopi Hoekstra
2013 Olivier Pourquie
2013 Nipam Patel
2014 Gero Miesenböck
2014 Alex Schier
2015 *John Gurdon
2016 Detlef Weigel
2016 Linda Partridge
2017 *Jennifer Doudna
2018 Liqun Luo
2018 Elizabeth Robertson
2019 Andrea Brand
2019 *Shinya Yamanaka
2022 Nancy Papalopulu
2023 Denis Duboule
2024 Ben Lehner
2024 Elly Tanaka
2025 Irene Miguel-Aliaga (forthcoming)
2025 James Sharpe (forthcoming)
2026 Cassandra Extavour (forthcoming)
Lectureship management
The lecturers are elected by an electoral board consisting of: the vice-chancellor of the University of Oxford; the rector of Exeter College, Oxford; the Regius Professor of Medicine; the Linacre Professor of Zoology; the Waynflete Professor of Physiology; Dr. Lee's Professor of Anatomy; and a member of the Mathematical, Physical and Life Sciences Board elected by that board.
References
1915 establishments in England
Jenkinson
Embryology
Developmental biology
Lists of biologists
Recurring events established in 1915 | J. W. Jenkinson Memorial Lectureship | Biology | 1,028 |
28,066,243 | https://en.wikipedia.org/wiki/Legal%20drug%20trade | The legal drug trade, in opposition to smuggling and the illegal drug trade, is the commerce in psychotropic substances conducted, as with other goods, under the control and taxation of world governments, regardless of the relative perceived danger of the goods that are the object of legislation.
Legal commerce in drugs can be categorized according to the purpose of consumption (therapeutic vs recreational), the type of drug involved, and the phase of the trading process, i.e. production, distribution or consumption.
Therapeutic use
Production
Drugs useful for treating diseases are the object of pharmaceutical research and medical practice, and are produced by the pharmaceutical industry, which in theory stands outside United Nations legislation prohibiting the use or mere possession of the drugs termed illegal for other purposes.
Distribution
Pharmaceutical companies, through authorized dealers, guarantee the distribution of their products to their customers.
Since antiquity, there have been stores specialized in drug selling. Today, they are called pharmacies or, straightforwardly, drugstores. Certain drugs officially considered trade-able without medical supervision are also sold in non-specialized stores, such as supermarkets.
Consumption
Therapeutic drug consumption can take place in an outpatient setting, or during hospitalization.
Recreational use
Production
Alcoholic beverages, containing psychoactive ethyl alcohol, are produced legally throughout the world. Their production supports a commercial alcohol industry. Consumption of alcohol is subject to regulation in most countries, namely by means of age restrictions.
Tobacco, a recreational drug containing nicotine, is produced legally in countries such as Cuba, China, and the United States. This also supports a tobacco industry and the production of a variety of tobacco products, which, like alcoholic beverages, are subject to age restrictions in most countries.
Caffeine, a stimulant drug, is extracted from plants including the coffee plant and the tea bush. It is the most widely consumed psychoactive substance in the world, remaining unregulated and generally recognized as safe by the U.S. Food and Drug Administration.
Distribution
There are authorized dealers which provide consumers with legal intoxicants, every industry developing a network of distribution to connect with its clients. Drug publicity by producers and distributors aims at increasing consumption.
Consumption
Legal drugs for recreation are taken by people in private and in public, including places devoted to drug selling for consumption in situ, such as bars and certain restaurants.
Cultural biases
The above-mentioned substances represent the most important products legally produced for the purposes of mind-altering effects.
Some critics argue that cannabis and opioids occupy the social position of alcohol and tobacco in some non-western countries, where alcohol and its users could be treated in a way resembling the harassment and prosecution of illegal drug users in western countries, in cultural defiance of the UN's worldwide prohibition. They are also the most important public health concern, as a result of prohibition of certain other drugs.
See also
Drug liberalization
Drug prohibition law
Grey market
Illegal drug trade
Pharmaceutical industry
Prohibition
Prohibitionism
References and sources
Drug control law | Legal drug trade | Chemistry | 591 |
3,284,448 | https://en.wikipedia.org/wiki/Symmetric%20Phase%20Recording | Symmetric Phase Recording is a tape recording (computer storage media) technology developed by Quantum Corporation that packs data across a tape's recording surface by writing adjacent tracks in a herringbone pattern:
track 0 = \\\\\, track 1 = /////, track 2 = \\\\\, track 3 = /////, etc.
This eliminates crosstrack interference and guard bands, so that more tracks of data can be stored on a tape.
See also
Azimuth recording, Slant Azimuth recording
Digital Equipment Corporation
Digital Linear Tape
Linear Tape-Open
Digital Tape Format
Helical scan
Magnetic tape
Magnetic tape data storage
Storage Technology Corporation
References
Further reading
Storage media | Symmetric Phase Recording | Technology | 142 |
9,002,006 | https://en.wikipedia.org/wiki/Conference%20room%20pilot | Conference room pilot (CRP) is a type of software procurement and software acceptance testing. A CRP may be used during the selection and implementation of a software application in an organization or company.
The purpose of the conference room pilot is to validate a software application against the business processes of end-users of the software, by allowing end-users to use the software to carry out typical or key business processes using the new software. A commercial advantage of a conference room pilot is that it may allow the customer to prove that the new software will do the job (meets business requirements and expectations) before committing to buying the software, thus avoiding buying an inappropriate application. The term is most commonly used in the context of 'out of the box' (OOTB) or 'commercial off-the-shelf' software (COTS).
Compared to user acceptance testing
Although a conference room pilot shares some features of user acceptance testing (UAT), it should not be considered a testing process – it validates that a design or solution is fit for purpose at a higher level than functional testing.
Shared features of CRP and UAT include:
End-to-end business processes are used as a "business input" for both
Functionality demonstrations
Non-functional validation (e.g. performance testing)
Differences between a conference room pilot and a formal UAT:
It is attempting to identify how well the application meets business needs, and identify gaps, whilst still in the design phase of the project
There is an expectation that changes will be required before acceptance of the solution
The software is ‘on trial’ and may be rejected completely in favour of another solution.
References
General
https://web.archive.org/web/20120306082951/http://www.ensync-corp.com/consulting/conference_room_pilot.cfm?section=consulting
https://web.archive.org/web/20100410184030/http://www-archive.ui-integrate.uillinois.edu/news_art_crp.asp
https://web.archive.org/web/20120306082951/http://www.bourkeconsulting.com/documents/POCCRPBCAWebsite020903.pdf
https://web.archive.org/web/20120101095056/http://www.smthacker.co.uk/conference_room_pilot.htm
Software testing | Conference room pilot | Engineering | 526 |
38,454,237 | https://en.wikipedia.org/wiki/CAMECA | CAMECA is a manufacturer of scientific instruments, namely material analysis instruments based on charged particle beam, ions, or electrons.
History
The company was founded as a subsidiary of Compagnie générale de la télégraphie sans fil (CSF), in 1929, as "Radio-cinéma", at the time of the emergence of the talkies. Its job was to design and manufacture movie projectors for big cinema screening rooms.
After World War II, spurred on by Maurice Ponte, director of CSF and a future member of the French Academy of Sciences, the company manufactured scientific instruments developed in French university laboratories: the Spark Spectrometer at the beginning of the 1950s, the Castaing Microprobe from 1958, and Secondary Ion Analysers from 1968. Also in the early 1950s the company settled into its factory in Courbevoie, boulevard Saint-Denis, where it remained for more than fifty years. The Spark Spectrometer was abandoned at the end of the 1950s.
The name of CAMECA was given in 1954. The business of movie projectors stopped soon after 1960, but in the 1960s there was a short-lived revival of the film business through the adventure of the Scopitone.
Since 1977, the year that the IMS3F was launched, CAMECA has had a virtual monopoly in the field of magnetic SIMS, but it shares the market for Castaing microprobe with Japanese competitors, including Jeol. The semiconductor industry is a very important outlet for magnetic SIMS. At the end of the 20th century, CAMECA gained a foothold in a third analytical technique, tomographic atom probe.
In 1987, CAMECA left the Thomson-CSF group and was subject to a leveraged buyout by its management and employees. In 2001, the company was sold to a small French private equity fund, and then to another private equity fund controlled by the Carlyle Group, which sold CAMECA to Ametek, which merged CAMECA with Imago Scientific Instruments in 2010.
From 1975, the number of employees has been about 200. Subsidiaries were created in the United States, Japan, Korea, Taiwan and Germany. These subsidiaries engaged in commercial and maintenance activities and employ a few dozen people.
The company in 2011
According to the website of the company, in 2011 its business was in two different markets: scientific instruments dedicated to research activities; and metrology for the semiconductor industry. The latter market addresses semiconductor fabrication cleanrooms with a dedicated version of the Castaing electron probe based on the LEXES technique (low energy electron induced X-ray emission spectrometry) developed at the beginning of the 21st century.
CAMECA instruments are well known in academic communities, including the fields of geochemistry and planetary science, and CAMECA has been cited dozens of times in scientific journals such as Nature and Science.
In 2010, Ametek purchased the Wisconsin start-up Imago Scientific Instruments and attached it to CAMECA. CAMECA therefore holds the monopoly in the manufacturing of atom probe instruments with the LEAP brand name.
References
External links
CAMECA website
The history of CAMECA (in French)
Instrument-making corporations
Manufacturing companies of France
Equipment semiconductor companies
Technology companies of France
French brands
The Carlyle Group companies
Manufacturing companies established in 1929 | CAMECA | Engineering | 663 |
3,327,704 | https://en.wikipedia.org/wiki/Peroneal%20strike | A peroneal strike is a temporarily disabling blow to the common fibular (peroneal) nerve of the leg, just above the knee. The attacker aims roughly a hand span above the exterior side of the knee, towards the back of the leg. This causes a temporary loss of motor control of the leg, accompanied by numbness and a painful tingling sensation from the point of impact all the way down the leg, usually lasting anywhere from 30 seconds to 5 hours in duration.
The strike is commonly made with the knee, a baton, or shin kick, but can be done by anything forcefully impacting the nerve. The technique is a part of the pressure point control tactics used in martial arts and by law enforcement agents.
The peroneal strike was used against detainees during the 2002 Bagram torture and prisoner abuse scandal.
See also
Charley horse
Pain compliance
Kubotan
References
Law enforcement techniques
Violence | Peroneal strike | Biology | 184 |
68,502,959 | https://en.wikipedia.org/wiki/Tentai%20Show | Tentai Show (Japanese: 天体ショー tentai shō), also known by the names Tentaisho, Galaxies, Spiral Galaxies, or Sym-a-Pix, is a binary-determination logic puzzle published by Nikoli.
Rules
Tentai Show is played on a rectangular grid of squares. On the grid are dots representing stars, which can be found on the grid either on the center of a cell, an edge, or a corner.
The objective of the puzzle is to draw lines along the dashed lines to divide the grid into regions representing galaxies.
In the resulting grid, all galaxies must have 180° rotational symmetry and contain exactly one dot located at its center.
The colors of the dots do not affect the logic of the puzzle and can be ignored when solving. In puzzles with multiple colored dots, the regions of the finished grid may be colored with the corresponding dot colors to reveal a picture.
Solution Methods
Tentai Show puzzles can be solved using the following steps.
Draw walls between adjacent cells that contain a dot or a fraction of a dot. These cells must belong to different galaxies.
Draw walls around the dot according to rotational symmetry. Borders of the grid also count as walls.
Find cells in areas that are 'captured' by a dot. These are cells which cannot be reached by any other dot. These cells can only belong to the galaxy for that dot.
The above steps can be repeated until the puzzle is solved.
In more advanced puzzles, it may be necessary to consider the image of rotational symmetry. Find cells which only have one valid dot when considering its rotationally symmetric cell. A cell can belong to a galaxy if its symmetric cell can also belong to that galaxy.
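The core constraint behind all of these steps is 180° rotational symmetry about the dot. A minimal sketch of that check in Python (the coordinate convention, storing dots in doubled grid units so that cell centres, edges, and corners are all integers, is our own):

    def is_galaxy(cells, dot):
        """Check 180-degree rotational symmetry of a region about its dot.

        cells: set of (row, col) grid cells in the region.
        dot:   (r, c) in doubled coordinates; the centre of cell (i, j)
               is (2*i + 1, 2*j + 1), while edges and corners use even values.
        """
        dr, dc = dot
        for (r, c) in cells:
            image = (dr - r - 1, dc - c - 1)   # 180-degree image of cell (r, c)
            if image not in cells:
                return False
        return True

    # Two cells sharing an edge, with the dot on that shared edge:
    print(is_galaxy({(0, 0), (0, 1)}, (1, 2)))   # True

A full validity check would also require the region to be connected and to contain its own dot; this sketch tests only the symmetry condition.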
History
The name of the puzzle, "Tentai Show", has a double meaning when interpreted in Japanese. "Ten" (点) stands for dot, while "tai shō" (対称) stands for symmetry. The Japanese word "Tentai" (天体) is used to refer to astronomical objects. When combined, "Tentai Show" can both mean rotational symmetry and astronomical show.
Computational Complexity
NP-Completeness
Determining whether a Tentai Show puzzle has a solution is known to be NP-complete. This was proven by Friedman (2002), who constructed puzzles equivalent to arbitrary Boolean circuits, so that solving them is as hard as the NP-complete Boolean satisfiability problem.
Fertin, Jamshidi, and Komusiewicz (2015) strengthened this result by proving the puzzle is NP-complete even when all galaxies have size at most seven. The proof is a reduction from Positive Planar 1-in-3-SAT, which is known to be NP-complete.
Demaine, Löffler, and Schmidt (2021) further strengthened this by proving NP-completeness even if all galaxies are restricted to be rectangles of sizes 1×1, 1×3, or 3×1.
They also showed that finding a minimal set of galaxies that exactly cover a given shape is NP-complete.
Solution Algorithms
Tentai Show puzzles can be solved in exponential time by going through all possible dissections of the grid and checking if it is a valid solution.
Fertin, Jamshidi, and Komusiewicz (2015) showed a polynomial-time algorithm that can solve the puzzle for various cases, such as: (a) when all galaxies have size at most two, (b) when all galaxies are squares, and (c) when all galaxies are trivially connected.
See also
List of Nikoli puzzle types
References
Logic puzzles
NP-complete problems
Japanese board games | Tentai Show | Mathematics | 731 |
22,118,406 | https://en.wikipedia.org/wiki/Nokia%206250 | The Nokia 6250 is a mobile phone made by Nokia. It has been available since 2000. It is a more rugged version of the Nokia 6210 phone. It has a monochrome graphic LCD display of resolution 96 x 60 pixels. Its memory can hold up to 500 phone book records with up to three numbers per name, and up to 150 text messages (SMS). It was being sold mainly in Asia-Pacific markets.
References
See also
List of Nokia products
Nokia 6210
6250
Mobile phones introduced in 2000
Mobile phones with infrared transmitter | Nokia 6250 | Technology | 110 |
31,114,557 | https://en.wikipedia.org/wiki/Pleurotus%20citrinopileatus | Pleurotus citrinopileatus, the golden oyster mushroom (tamogitake in Japanese), is an edible gilled fungus. Native to eastern Russia, northern China, and Japan, the golden oyster mushroom is very closely related to P. cornucopiae of Europe, with some authors considering them to be at the rank of subspecies. In far eastern Russia, where it is called il'mak, P. citrinopileatus is one of the most popular wild edible mushrooms.
Description
The fruiting bodies of P. citrinopileatus grow in clusters of bright yellow to golden brown caps with a velvety, dry surface texture. Caps range from in diameter. The flesh is thin and white, with a mild taste and without a strong smell. Stems are cylindrical, white in color, often curved or bent, and about long and in diameter. The gills are white, closely spaced, and run down the stem. The spores of the golden oyster mushroom are cylindrical or elliptical in shape, smooth, hyaline, amyloid, and measure 6-9 by 2–3.5 micrometres.
Ecology
The golden oyster mushroom, like other species of oyster mushroom, is a wood-decay fungus. In the wild, P. citrinopileatus most commonly decays hardwoods such as elm. The first recorded observation of naturalized golden oysters in the United States occurred in 2012 on Mushroom Observer, perhaps a decade after the cultivation of the species began in North America, and they have been found growing on oak, elm, beech, and other hardwoods. Naturalized golden oysters have been found in many states including: Delaware, Illinois, Iowa, Maryland, Massachusetts, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin. Their vigorous range expansion is comparable to invasive species. In a 2018 population genomics study comparing naturalized wild isolates with commercial strains, two of the commercial isolates showed high similarity to all of the wild isolates, representing possible source strains of the wild populations. The study also found highly similar wild isolates collected from geographically distant locations, in some cases over apart. This is strong evidence to suggest that the same cultivated strain has been re-introduced many times over in various parts of the United States, as opposed to a single introduction event and subsequent spread.
The golden oyster mushroom is also naturalized in several African countries: Cameroon, Tanzania, Kenya, Burundi, and Nigeria. It also occurs in the wild in some Asian countries outside its native territory: in Yemen, Korea, and India.
Uses
Golden oyster mushrooms are cultivated commercially, usually on a medium of grain, straw, or sawdust. Pleurotus species are some of the most commonly cultivated mushrooms, particularly in China, due to their ease of cultivation and their ability to convert 100 g of organic refuse into 50-70 g of fresh mushrooms.
Chemistry
Pleurotus citrinopileatus mushrooms are a source of antioxidants. Extracts from P. citrinopileatus have been studied for their antihyperglycemic properties, decreasing blood sugar levels in diabetic rats. They have also been studied as a source of lipid-lowering drugs; P. ostreatus, a related oyster mushroom, has been found to contain the cholesterol-lowering drug lovastatin.
In one study, among 11 other commonly cultivated or foraged mushroom species, Pleurotus citrinopileatus contained the second highest amount of the antioxidant and amino acid ergothioneine at 3.94mg per gram of dry weight, and fourth highest in glutathione at 1.39mg per gram of dry weight. Both compounds had their highest concentrations in the pileus tissue. It had the highest amount of ergothioneine among the other saprotrophs within the group.
See also
List of Pleurotus species
Ergothioneine
References
External links
Pleurotus citrinopileatus at Mushroom Observer
"Pleurotus citrinopileatus" at iNaturalist
Pleurotaceae
Fungi described in 1943
Fungi of Asia
Fungi of China
Fungi in cultivation
Edible fungi
Carnivorous fungi
Fungus species | Pleurotus citrinopileatus | Biology | 854 |
24,201,104 | https://en.wikipedia.org/wiki/C22H18O12 | {{DISPLAYTITLE:C22H18O12}}
The molecular formula C22H18O12 (molar mass: 474.371 g/mol) may refer to:
Chicoric acid
Succinprotocetraric acid
Molecular formulas | C22H18O12 | Physics,Chemistry | 58 |
651,031 | https://en.wikipedia.org/wiki/Toilet%20training | Toilet training (also potty training or toilet learning) is the process of training someone, particularly a toddler or infant, to use the toilet for urination and defecation. Attitudes toward training in recent history have fluctuated substantially, and may vary across cultures and according to demographics. Many of the contemporary approaches to toilet training favor a behaviorism and cognitive psychology-based approach.
Specific recommendations on techniques vary considerably, although a range of these are generally considered effective, and specific research on their comparative effectiveness is lacking. No single approach may be universally effective, either across learners or for the same learner across time, and trainers may need to adjust their techniques according to what is most effective in their situation. Training may begin shortly after birth in some cultures. However, in much of the developed world this occurs between the age of 18 months and two years, with the majority of children fully trained by age four, although many children may still experience occasional accidents.
Certain behavioral or medical disorders may affect toilet training, and extend the time and effort necessary for successful completion. In certain circumstances, these will require professional intervention by a medical professional. However, this is rare and even for those children who face difficulties in training, the vast majority of children can be successfully trained.
Children may face certain risks associated with training, such as slips or falling toilet seats, and toilet training may act in some circumstances as a trigger for abuse. Certain technologies have been developed for use in toilet training, some specialized and others commonly used.
History
Little is known about toilet training in pre-modern societies. Ancient Rome has been credited with the earliest known children's toilet. However, there is no evidence of what training techniques they may have employed. Later, during the European Middle Ages, according to one source "Recommended cures for 'pyssying the bedde'...included consumption of ground hedgehog or powdered goat claw and having dried rooster combs sprinkled on the bed."
Cultural beliefs and practices related to toilet training in recent times have varied. For example, beginning in the late 18th century parenting transitioned from the use of leaves or linens (or nothing) for the covering of a child's genitals, to the use of cloth diapers (or nappies), which needed to be washed by hand. This was followed by the advent of mechanical washing machines, and then to the popularisation of disposable diapers in the mid 20th century, each of which decreased the burden on parental time and resources needed to care for children who were not toilet trained, and changed expectations about the timeliness of training. This trend did not manifest equally in all parts of the world. Those living in poorer countries usually train as early as possible, as access to amenities such as disposable diapers may still pose a significant burden. Poorer families in developed countries also tend to train earlier than their more affluent peers.
Much of the 20th-century conceptualization of toilet training was dominated by psychoanalysis, with its emphasis on the unconscious, and warnings about potential psychological impacts in later life of toilet training experiences. For example, anthropologist Geoffrey Gorer attributed much of contemporary Japanese society in the 1940s to their method of toilet training, writing that "early and severe toilet training is the most important single influence in the formation of the adult Japanese character." Some German child-rearing theorists of the 1970s tied Nazism and the Holocaust to authoritarian, sadistic personalities produced by punitive toilet training.
Later in the 20th century, this was largely abandoned in favor of behaviorism, with an emphasis on the ways in which rewards and reinforcements increase the frequency of certain behaviors, and cognitive psychology, with an emphasis on meaning, cognitive ability, and personal values. Writers such as psychologist and pediatrician Arnold Gesell, along with pediatrician Benjamin Spock, were influential in re-framing the issue of toilet training as one of biology and child readiness.
Approaches
Approaches to toilet training have fluctuated between "passive child readiness" ("nature"-based approaches), which emphasize individual child readiness, and more "structured behaviorally based" ("nurture"-based approaches), which emphasize the need for parents to initiate a training regime as soon as possible. Among the more popular methods are the Brazelton child-oriented approach, the approach outlined in The Common Sense Book of Baby and Child Care by Benjamin Spock, the methods recommended by the American Academy of Pediatrics, and the "toilet training in a day" approach developed by Nathan Azrin and Richard M. Foxx. According to the American Academy of Family Physicians, both the Brazelton and the Azrin/Foxx approaches are effective for developmentally normal children, although the evidence has been limited, and no study has directly compared the effectiveness of the two. Recommendations by the American Academy of Pediatrics follow closely with Brazelton, and at least one study has suggested that the Azrin/Foxx method was more effective than that proposed by Spock.
Opinions may vary greatly among parents regarding what the most effective approach to toilet training is, and success may require multiple or varied techniques according to what a child is most responsive to. These may include the use of educational material, like children's books, regularly querying a child about their need to use the bathroom, demonstration by a parent, or some type of reward system. Some children may respond more positively to more brief but intense toilet training, while others may be more successful adjusting more slowly over a longer period of time. Regardless of the techniques used, the American Academy of Pediatrics recommends that the strategy utilize as much parental involvement and encouragement as possible, while avoiding negative judgement.
The Canadian Paediatric Society makes a number of specific recommendations for toilet training techniques. These include:
Using a toilet seat adapter, foot stool, or potty chair to ensure easy access for the child
Encouraging and praising the child when they inform caregivers of their need to evacuate, even when done after the fact
Being attentive to a child's behavioural cues that may indicate their need to evacuate
Preferring encouragement and praise, while avoiding punishment or negative reinforcement
Ensuring all caregivers are consistent in their approach
Considering a transition to cotton underwear or training pants once the child achieves repeated success
Timeline
As psychologist Johnny L. Matson observes, using the toilet can be a complex process to master, from the ability to recognize and control bodily functions, to the skills required to carry out proper hygiene practices, the requisite dexterity to dress and undress oneself, and the communication skills to inform others of the need to use the toilet. Usually around one year of age, a child will begin to recognize the need to evacuate, which might be observed through changes in behavior immediately prior to urination or defecation. Although they may recognize the need, children younger than 18 months may not yet be able to consciously control the muscles involved in elimination, and cannot yet begin toilet training. While they may use the toilet if placed there by a parent at an opportune time, this likely remains an involuntary, rather than a conscious process. This will gradually change over the course of many months or years, with nighttime bowel control usually the first to manifest, followed by daytime control, and nighttime bladder control normally last.
Toilet training practice may vary greatly across cultures. For example, researchers such as Mary Ainsworth have documented families in Chinese, Indian, and African cultures beginning toilet training as early as a few weeks or months of age. In Vietnam, toilet training begins shortly after birth, with toilet training complete by age 2. This may be mediated by a number of factors, including cultural values regarding excrement, the role of caregivers, the expectation that mothers work, and how soon they are expected to return to work following childbirth.
In 1932, the U.S. Government recommended that parents begin toilet training nearly immediately after birth, with the expectation that it would be complete by the time the child was six to eight months of age. However, this shifted over time, with parents in the early 20th century beginning training at 12–18 months of age, and shifting by the latter half of the century, to an average of greater than 18 months. In the US and Europe, training normally starts between 21 and 36 months, with only 40 to 60% of children trained by 36 months.
Both the American Academy of Pediatrics and the Canadian Paediatric Society recommend that parents begin toilet training around 18 months of age so long as the child is interested in doing so. There is some evidence to suggest that children who are trained after their second year may be at a higher risk for certain disorders, such as urological problems or daytime wetting. There is no evidence of any psychological problems resulting from initiating training too early. In a study of families in the United Kingdom, researchers found that 2.1% began training prior to six months, 13.8% between 6 and 15 months, 50.4% between 15 and 24 months, and 33.7% had not begun training at 24 months.
The majority of children will achieve complete bladder and bowel control between ages two and four. While four-year-olds are usually reliably dry during their waking hours, as many as one in five children aged five will occasionally wet themselves during the night. Girls tend to complete successful training at a somewhat younger age than their male peers, and the typical time period between the beginning and completion of training tends to vary between three and six months.
Accidents
Accidents, periodic episodes of urinary or fecal incontinence, are generally a normal part of toilet training and are usually not a sign of serious medical issues. Accidents that occur with additional problems, such as pain when urinating or defecating, chronic constipation, or blood in urine or feces, should be evaluated by a pediatrician. The prevalence of nocturnal enuresis, also known as bed wetting, may be as high as 9.7% of seven-year-olds, and 5.5% of ten-year-olds, eventually decreasing to a rate of about 0.5% in adults.
Complications
Toilet training can be increasingly difficult for parents of children who have certain developmental, behavioral or medical disorders. Children with autism, fetal alcohol spectrum disorder, oppositional defiant disorder, or attention deficit hyperactivity disorder may not be motivated to complete toilet training, may have difficulty appropriately responding to associated social reinforcements, or may have sensory sensitivities which make using the toilet unpleasant.
Children may have a range of physical issues related to the genitourinary system, that could require medical assessment and surgical or pharmacological intervention to ensure successful toilet training. Those with cerebral palsy may face a unique set of challenges related to bladder and bowel control, and those with visual or auditory problems may require adaptations in the parental approach to training to compensate, in addition to therapy or adaptive equipment.
Stool toileting refusal occurs when a child that has been toilet trained to urinate, refuses to use the toilet to defecate for a period lasting at least one month. This may affect as many as 22% of children and can result in constipation or pain during elimination. It usually resolves without the need for intervention. Children may exhibit stool withholding, or attempts to avoid defecation altogether. This can also result in constipation. Some children will hide their stool, which may be done out of embarrassment or fear, and is more likely to be associated with both toileting refusal and withholding.
Although some complications may increase the time needed to achieve successful bladder and bowel control, most children can be toilet trained nonetheless. Physiological causes of failure in toilet training are rare, as is the need for medical intervention. In most cases, children who struggle with training are most likely not yet ready.
In a 2014 survey of UK schools, primary school teachers and educational staff reported observing an increasing number of otherwise healthy schoolchildren who were not toilet trained. 15% of respondents reported that they had observed healthy children aged 5–7 wearing diapers to school in the past year; 5% reported the same for children aged 7–11. A health worker with the Kent Community Health NHS Foundation Trust said that she knew of medically healthy adolescents as old as 15 with toilet training issues. Commentators attributed the issue to parents being too busy to teach their children basic skills.
Risks
An examination of data from hospital emergency rooms in the US from 2002 to 2010 indicated that the most common form of toilet training related injury was caused by falling toilet seats, and occurred most often in children aged two to three. The second most common injury was from slipping on floors, and 99% of injuries of all types occurred in the home.
In abusive homes, toilet training may be a trigger for child maltreatment, especially in circumstances where a parent or caregiver feels the child is old enough that they should have already successfully mastered training, and yet the child continues to have accidents. This may be misinterpreted by the caregiver as willful disobedience on the part of the child.
Technologies and equipment
As early as 1938, one of the first technologies developed to address toilet training was the "bell and pad", in which a sensor detected when a child had wet themselves at night and triggered an alarm to act as a form of conditioning. Similar alarm systems that sense wetness in undergarments have been studied, especially as concerns the toilet training of those with intellectual disabilities. This has been applied more recently in the production of potties that play an audible cheer or other form of encouragement when used by a child.
Trainers may choose to employ different choices of undergarments to facilitate training. This includes switching from traditional diapers or nappies to training pants (pull-ups), or the use of non-absorbent cotton underwear of the type adults may wear. These are typically employed later in the training process, and not as an initial step. Children who experience repeated accidents after transitioning to cotton undergarments may be allowed to resume the use of diapers.
Most widely used techniques recommend the use of specialized children's potties, and some recommend that parents consider using snacks or drinks as rewards.
See also
Baby-led potty training, a method of toilet training
Elimination communication, an approach to parent-infant communication
Enuresis, the repeated inability to control urination
Housebreaking, the process of training a domesticated animal
Infant potty training method, method of training and book by Laurie Boucke
Open-crotch pants, clothing commonly worn by children in China that allows elimination without removal
Notes
References
Further reading
External links
Babycare
Pediatrics
Developmental psychology
Defecation
Urine | Toilet training | Biology | 2,986 |
997,409 | https://en.wikipedia.org/wiki/Deacon%20process | The Deacon process, invented by Henry Deacon, is a process used during the manufacture of alkalis (the initial end product was sodium carbonate) by the Leblanc process. Hydrogen chloride gas was converted to chlorine gas, which was then used to manufacture a commercially valuable bleaching powder, and at the same time the emission of waste hydrochloric acid was curtailed. To some extent this technically sophisticated process superseded the earlier manganese dioxide process.
Process
The process was based on the oxidation of hydrogen chloride:
4 HCl + O2 → 2 Cl2 + 2 H2O
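As a worked illustration of this stoichiometry, the short Python sketch below computes the theoretical chlorine yield from a given mass of hydrogen chloride. It assumes complete conversion, which the industrial reaction does not achieve (it is equilibrium-limited at operating temperature), and the names are illustrative only.

# Theoretical Cl2 yield from the Deacon reaction, 4 HCl + O2 -> 2 Cl2 + 2 H2O.
# Assumes 100% conversion; the industrial reaction is equilibrium-limited.
M_HCL = 36.46  # molar mass of HCl, g/mol
M_CL2 = 70.90  # molar mass of Cl2, g/mol

def max_cl2_yield_g(mass_hcl_g):
    """Return the mass of Cl2 (g) obtainable from mass_hcl_g grams of HCl."""
    mol_hcl = mass_hcl_g / M_HCL
    mol_cl2 = mol_hcl / 2  # 4 mol HCl give 2 mol Cl2
    return mol_cl2 * M_CL2

print(max_cl2_yield_g(1000.0))  # ~972 g of Cl2 per kilogram of HCl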
The reaction takes place at about 400 to 450 °C in the presence of a variety of catalysts, including copper chloride (CuCl2). Three companies developed commercial processes for producing chlorine based on the Deacon reaction:
The Kel-Chlor process developed by the M. W. Kellogg Company, which utilizes nitrosylsulfuric acid.
The Shell-Chlor process developed by the Shell Oil Company, which utilizes copper catalysts.
The MT-Chlor process developed by the Mitsui Toatsu Company, which utilizes chromium-based catalysts.
The Deacon process is now outdated technology. Most chlorine today is produced by using electrolytic processes. New catalysts based on ruthenium(IV) oxide have been developed by Sumitomo.
Leblanc-Deacon process
The Leblanc-Deacon process is a modification of the Leblanc process. The Leblanc process was notoriously environmentally unfriendly, and resulted in some of the first air and water pollution acts. In 1874, Henry Deacon devised a process to reduce HCl emissions as mandated by the Alkali Act. In this process, hydrogen chloride is oxidized by oxygen over a copper chloride catalyst, producing chlorine. The chlorine was widely used in the paper and textile industries as a bleaching agent; as a result, sodium carbonate was no longer the primary product of these plants and was henceforth sold at a loss.
See also
Alkali act
Leblanc process
Hydrochloric acid
Chlorine production
References
External links
http://www.che.lsu.edu/COURSES/4205/2000/Lim/paper.htm
http://www.electrochem.org/dl/interface/fal/fal98/IF8-98-Pages32-36.pdf
Deacon chemistry revisited: new catalysts for chlorine recycling. ETH (2013). https://dx.doi.org/10.3929/ethz-a-010055281
Chemical processes
Inorganic reactions
Chlorine | Deacon process | Chemistry | 560 |
14,897,216 | https://en.wikipedia.org/wiki/Episodic%20tremor%20and%20slip | Episodic tremor and slip (ETS) is a seismological phenomenon observed in some subduction zones that is characterized by non-earthquake seismic rumbling, or tremor, and slow slip along the plate interface. Slow slip events are distinguished from earthquakes by their propagation speed and focus. In slow slip events, there is an apparent reversal of crustal motion, although the fault motion remains consistent with the direction of subduction. ETS events themselves are imperceptible to human beings and do not cause damage.
Discovery
Nonvolcanic, episodic tremor was first identified in southwest Japan in 2002. Shortly afterwards, the Geological Survey of Canada coined the term "episodic tremor and slip" to characterize observations of GPS measurements in the Vancouver Island area. Vancouver Island lies in the eastern, North American region of the Cascadia subduction zone. ETS events in Cascadia were observed to recur cyclically with a period of approximately 14 months. Analysis of measurements led to the successful prediction of ETS events in following years (e.g., 2003, 2004, 2005, and 2007). In Cascadia, these events are marked by about two weeks of 1 to 10 Hz seismic trembling and non-earthquake ("aseismic") slip on the plate boundary equivalent to a magnitude 7 earthquake. (Tremor is a weak seismological signal only detectable by very sensitive seismometers.) Recent episodes of tremor and slip in the Cascadia region have occurred down-dip of the region ruptured in the 1700 Cascadia earthquake.
Since the initial discovery of this seismic mode in the Cascadia region, slow slip and tremor have been detected in other subduction zones around the world, including Japan and Mexico.
Slow slip is not accompanied by tremor in the Hikurangi Subduction Zone.
Every five years a year-long quake of this type occurs beneath the New Zealand capital, Wellington. It was first measured in 2003, and reappeared in 2008 and 2013.
Characteristics
Slip behaviour
In the Cascadia subduction zone, the Juan de Fuca plate, a relic of the ancient Farallon plate, is actively subducting eastward underneath the North American plate. The boundary between the Juan de Fuca and North American plates is generally "locked" due to interplate friction. A GPS marker on the surface of the North American plate above the locked region will trend eastward as it is dragged by the subduction process. Geodetic measurements show periodic reversals in the motion (i.e., westward movement) of the overthrusting North American plate. During these reversals, the GPS marker will be displaced to the west over a period of days to weeks. Because these events occur over a much longer duration than earthquakes, they are termed "slow slip events".
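The detection logic described above can be illustrated with a minimal sketch. The following Python fragment flags candidate slow-slip intervals in a synthetic daily GPS east-component series by looking for sustained westward (negative) net motion; the window length and threshold are illustrative assumptions, not values from the monitoring literature.

# Flag candidate slow-slip intervals in a daily GPS east-component series (mm)
# by finding windows with sustained westward (negative) net motion.
def find_reversals(east_mm, window=14, min_drop_mm=2.0):
    """Return start indices of windows whose net motion is westward."""
    return [i for i in range(len(east_mm) - window)
            if east_mm[i + window] - east_mm[i] < -min_drop_mm]

# Synthetic series: steady eastward creep of 0.05 mm/day, interrupted by a
# two-week reversal of -0.35 mm/day starting at day 100.
series = []
pos = 0.0
for day in range(200):
    pos += -0.35 if 100 <= day < 114 else 0.05
    series.append(pos)

print(find_reversals(series))  # start indices clustered around day 100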
Slow slip events have been observed to occur in the Cascadia, Japan, and Mexico subduction zones. Unique characteristics of slow slip events include periodicity on timescales of months to years, focus near or down-dip of the locked zone, and along-strike propagation of 5 to 15 km/d. In contrast, a typical earthquake rupture velocity is 70 to 90% of the S wave velocity, or approximately 3.5 km/s.
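To make the contrast concrete, a back-of-the-envelope unit conversion (added here for illustration) gives

\[
v_{\text{rupture}} = 3.5\ \text{km/s} \times 86\,400\ \text{s/d} \approx 3.0 \times 10^{5}\ \text{km/d},
\qquad
\frac{v_{\text{rupture}}}{v_{\text{slip}}} \approx \frac{3.0 \times 10^{5}}{5\ \text{to}\ 15} \approx 2 \times 10^{4}\ \text{to}\ 6 \times 10^{4},
\]

so slow slip events propagate roughly four orders of magnitude more slowly than an ordinary earthquake rupture.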
Because slow slip events occur in subduction zones, their relationship to megathrust earthquakes is of economic, human, and scientific importance. The seismic hazard posed by ETS events is dependent on their focus. If the slow slip event extends into the seismogenic zone, accumulated stress would be released, decreasing the risk of a catastrophic earthquake. However, if the slow slip event occurs down-dip of the seismogenic zone, it may "load" the region with stress. The probability of a great earthquake occurring has been suggested to be 30 times greater during an ETS event than otherwise, but more recent observations have shown this theory to be simplistic. One factor is that tremor occurs in many segments at different times along the plate boundary; another factor is that rarely have tremor and large earthquakes been observed to correlate in timing.
Tremor
Slow slip events are frequently linked to non-volcanic seismological "rumbling", or tremor. Tremor is distinguished from earthquakes in several key respects: frequency, duration, and origin. Seismic waves generated by earthquakes are high-frequency and short-lived. These characteristics allow seismologists to determine the hypocentre of an earthquake using first-arrival methods. In contrast, tremor signals are weak and extended in duration. Furthermore, while earthquakes are caused by the rupture of faults, tremor is generally attributed to underground movement of fluids (magmatic or hydrothermal). As well as in subduction zones, tremor has been detected in transform faults such as the San Andreas.
In both the Cascadia and Nankai subduction zones, slow slip events are directly associated with tremor. In the Cascadia subduction zone, slip events and seismological tremor signals are spatially and temporally coincident, but this relationship does not extend to the Mexican subduction zone. Furthermore, this association is not an intrinsic characteristic of slow slip events. In the Hikurangi Subduction Zone, New Zealand, episodic slip events are associated with distinct, reverse-faulted microearthquakes.
Two types of tremor have been identified: one associated with geodetic deformation (as described above), and one associated with 5 to 10 second bursts excited by distant earthquakes. The second type of tremor has been detected worldwide; for example, it has been triggered in the San Andreas Fault by the 2002 Denali earthquake and in Taiwan by the 2001 Kunlun earthquake.
Geological interpretation
Tremor is commonly associated with the underground movement of magmatic or hydrothermal fluids. As a plate is subducted into the mantle, it loses water from its pore space and through phase changes of hydrous minerals (such as amphibole). It has been proposed that this liberation of water generates a supercritical fluid at the plate interface, lubricating plate motion, and that this fluid may open fractures in the surrounding rock, with tremor being the seismological signal of this process. Mathematical modelling has successfully reproduced the periodicity of episodic tremor and slip in the Cascadia region by incorporating this dehydration effect. In this interpretation, tremor may be enhanced where the subducting oceanic crust is young, hot, and wet, as opposed to older and colder.
However, alternative models have also been proposed. Tremor has been demonstrated to be influenced by tides or variable fluid flow through a fixed volume. Tremor has also been attributed to shear slip at the plate interface. Recent contributions in mathematical modelling reproduce the sequences of Cascadia and Hikurangi (New Zealand), and suggest in-situ dehydration as the cause for the episodic tremor and slip events.
See also
Geodynamics
Plate tectonics
Seismology
Slow earthquake
References
External links
Natural Resources Canada
Geophysics
Seismology | Episodic tremor and slip | Physics | 1,444 |
39,701,308 | https://en.wikipedia.org/wiki/INT%20%28chemical%29 | INT (iodonitrotetrazolium or 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyl-2H-tetrazolium) is a commonly used tetrazolium salt (usually prepared with chloride ions), similar to tetrazolium chloride that on reduction produces a red formazan dye that can be used for quantitative redox assays. It is also toxic to prokaryotes.
INT is an artificial electron acceptor that can be utilized in a colorimetric assay to determine the concentration of protein in a solution. It can be reduced by succinate dehydrogenase to a formazan, the formation of which can be measured by absorbance at 490 nm. The activity of succinate dehydrogenase is readily observed by the naked eye as the solution turns from colorless to rusty red.
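A minimal sketch of the underlying Beer–Lambert calculation, in Python, is given below. The molar absorptivity used is a placeholder assumption rather than a published value for the INT formazan; in practice the coefficient comes from a calibration curve for the specific instrument, solvent, and conditions.

# Beer-Lambert estimate of formazan concentration from absorbance at 490 nm.
# A = epsilon * l * c, solved for c.
EPSILON = 1.5e4  # molar absorptivity, L mol^-1 cm^-1 (assumed placeholder)
PATH_CM = 1.0    # cuvette path length in cm (standard cuvette)

def formazan_concentration(a490):
    """Return formazan concentration (mol/L) from the measured absorbance."""
    return a490 / (EPSILON * PATH_CM)

print(formazan_concentration(0.45))  # 3e-05 mol/L under these assumptions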
See also
MTT assay
References
Biochemistry detection reactions
Chlorides
Dyes
Electrochemistry
Redox indicators
Tetrazoles | INT (chemical) | Chemistry,Biology | 217 |
9,481,589 | https://en.wikipedia.org/wiki/Photo%2051 | Photo 51 is an X-ray based fiber diffraction image of a paracrystalline gel composed of DNA fiber taken by Raymond Gosling, a postgraduate student working under the supervision of Maurice Wilkins and Rosalind Franklin at King's College London, while working in Sir John Randall's group. The image was tagged "photo 51" because it was the 51st diffraction photograph that Franklin had taken. It was critical evidence in identifying the structure of DNA.
Use in discovering structure of DNA
According to a later account by Raymond Gosling, although Photo 51 was an exceptionally clear diffraction pattern of the "B" form of DNA, Franklin was more interested in solving the diffraction pattern of the "A" form of DNA, so she put Gosling's Photo 51 to the side. When it had been decided that Franklin would leave King's College, Gosling showed the photograph to Maurice Wilkins (who would become Gosling's advisor after Franklin left).
A few days later, Wilkins showed the photo to James Watson after Gosling had returned to working under Wilkins' supervision. Franklin did not know this at the time because she was leaving King's College London. Randall, the head of the group, had asked Gosling to share all his data with Wilkins. Watson recognized the pattern as a helix because his co-worker Francis Crick had previously published a paper of what the diffraction pattern of a helix would be. Watson and Crick used characteristics and features of Photo 51, together with evidence from multiple other sources, to develop the chemical model of the DNA molecule. Their model, along with papers by Wilkins and colleagues, and by Gosling and Franklin, were first published, together, in 1953, in the same issue of Nature.
In 1962, the Nobel Prize in Physiology or Medicine was awarded to Watson, Crick and Wilkins. The prize was not awarded to Franklin; she had died four years earlier, and although there was not yet a rule against posthumous awards, the Nobel Committee generally did not make posthumous nominations. Gosling's work also was not cited by the prize committee.
The photograph provided key information that was essential for developing a model of DNA. The diffraction pattern established the helical nature of DNA's two antiparallel strands. The outside of the DNA chain has a backbone of alternating deoxyribose and phosphate moieties, and the base pairs, the order of which provides codes for protein building and thereby inheritance, are inside the helix. Watson and Crick's calculations from Gosling and Franklin's photography gave crucial parameters for the size and structure of the helix.
Photo 51 became a crucial data source that led to the development of the DNA model and confirmed the prior postulated double helical structure of DNA, which were presented in the series of three articles in the journal Nature in 1953.
As historians of science have re-examined the period during which this image was obtained, considerable controversy has arisen over both the significance of the contribution of this image to the work of Watson and Crick, as well as the methods by which they obtained the image. Franklin had been hired independently of Maurice Wilkins, who, taking over as Gosling's new supervisor, showed Photo 51 to Watson and Crick without Franklin's knowledge. Whether Franklin would have deduced the structure of DNA on her own, from her own data, had Watson and Crick not obtained Gosling's image, is a hotly debated topic, made more controversial by the negative caricature of Franklin presented in the early chapters of Watson's history of the research on DNA structure, The Double Helix. Watson admitted his distortion of Franklin in his book, noting in the epilogue: "Since my initial impressions about [Franklin], both scientific and personal (as recorded in the early pages of this book) were often wrong, I want to say something here about her achievements."
Cultural references
A 56-minute documentary, DNA – Secret of Photo 51, was broadcast in 2003 on PBS NOVA. Narrated by Sigourney Weaver, the program features interviews with Wilkins, Gosling, Aaron Klug, Brenda Maddox, including Franklin's friends Vittorio Luzzati, Donald Caspar, Anne Piper, and Sue Richley. The UK version produced by the BBC is titled Rosalind Franklin: DNA's Dark Lady.
The first episode of a PBS documentary serial, DNA, which aired on 4 January 2004 as "The Secret of Life", centres on and features the contributions of Franklin. Narrated by Jeff Goldblum, it features Watson, Wilkins, Gosling and Peter Pauling (son of Linus Pauling).
A play entitled Photograph 51 by Anna Ziegler focuses on the role of X-ray crystallographer Rosalind Franklin in the discovery of the structure of DNA. This play won the third STAGE International Script Competition in 2008. In 2015, the play was staged in London's West End, with Nicole Kidman playing Franklin.
A 107-minute documentary, Life Story, was broadcast in the BBC Horizon science series in 1987, starring Juliet Stevenson as Rosalind Franklin and Nicholas Fry as Raymond Gosling.
See also
List of photographs considered the most important
References
X-ray crystallography
DNA
Black-and-white photographs
Genetics in the United Kingdom
History of genetics
Works originally published in Nature (journal)
1952 works
1952 in art
1950s photographs | Photo 51 | Chemistry,Materials_science | 1,095 |
10,217,683 | https://en.wikipedia.org/wiki/Ampex%20601 | The Ampex 601 was a portable, analog, reel-to-reel tape recorder produced by the Ampex Corporation from the mid-1950s through the 1960s. Ampex manufactured a single-channel model (the 601) and a dual-channel version (the 601-2). The suitcase-sized, 26 lb. unit was designed for professional recording applications. It recorded to ¼-inch tape on 5-inch or 7-inch reels.
The Ampex 601 was preceded by the Ampex 600. Although there was no officially-released Ampex 600-2, there were factory bulletins available which detailed how to change the second electronics to support the equivalent of 600-2 mode, and this made use of the 601-2's head stack possible, thereby creating the functional equivalent of a 600-2.
The Ampex 601 was succeeded by the Ampex 602, which was available as 602 and 602-2 models. The Ampex 600 and 601 were housed in light brown Samsonite cases; optionally, the machine could be 19" rack-mounted using an adapter plate. The Ampex 602 was housed in a dark brown Samsonite case with similar rack-mounting provisions. Companion speaker-amplifiers (models 620, 621 and 622) were also available, housed in the same style of case.
Sources
https://web.archive.org/web/20100521094749/http://eshop1.chem.buffalo.edu/AMPEX.html
http://jproc.ca/rrp/ampex_601.html
https://web.archive.org/web/20060929035149/http://ftp.ampex.com/ampex/manuals/audio/601man/601schem.gif (schematic)
https://web.archive.org/web/20060929035309/http://ftp.ampex.com/ampex/manuals/audio/601man/601-man.pdf (owner's manual, PDF)
Sound recording technology
Tape recording | Ampex 601 | Technology | 464 |
38,323,622 | https://en.wikipedia.org/wiki/Snub%20trioctagonal%20tiling | In geometry, the order-3 snub octagonal tiling is a semiregular tiling of the hyperbolic plane. There are four triangles and one octagon on each vertex. It has a Schläfli symbol of sr{8,3}.
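A quick Euclidean angle check shows why this tiling needs the hyperbolic plane. Using the flat-plane interior angles of 60° for an equilateral triangle and 135° for a regular octagon,

\[
4 \times 60^\circ + 135^\circ = 375^\circ > 360^\circ,
\]

so the five faces cannot fit around a vertex in the Euclidean plane; in the hyperbolic plane the interior angles of regular polygons are smaller, and the vertex configuration (3.3.3.3.8) closes exactly.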
Images
Drawn in chiral pairs, with edges missing between black triangles:
Related polyhedra and tilings
This semiregular tiling is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, lying in the Euclidean plane for n=6 and in the hyperbolic plane for any higher n. The series can be considered to begin with n=2, with one set of faces degenerated into digons.
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based on the regular octagonal tiling.
With the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are 10 forms.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Snub hexagonal tiling
Floret pentagonal tiling
Order-3 heptagonal tiling
Tilings of regular polygons
List of uniform planar tilings
Kagome lattice
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Chiral figures
Hyperbolic tilings
Isogonal tilings
Semiregular tilings
Snub tilings | Snub trioctagonal tiling | Physics,Chemistry | 378 |
1,989,818 | https://en.wikipedia.org/wiki/Oxbow |
An oxbow is a U-shaped metal pole (or larger wooden frame) that fits the underside and the sides of the neck of an ox or bullock. A bow pin holds it in place.
The term "oxbow" is widely used to refer to a U-shaped meander in a river, sometimes cut off from the modern course of the river that formed it, creating an oxbow lake.
Developed form
Its upper ends pass through purpose-drilled holes in the bar of the yoke and are held in place with a metal screw or key, called a bow pin. Where wood is used, it is most often hardwood steamed into shape, especially elm, hickory or willow. A ring, enabling left/right movement controlled from the centre, is attached by a plate to the centre underside of a wooden yoke, allowing a pair of bullocks or oxen to be chained to any other pairs in a team and hitched to the load behind the animal team.
Uses of the yoke and oxbows
The load is a plough or any other dragged, non-motorised, field agricultural machinery.
Alternative
Wooden staves can be used instead with a yoke, which is then termed a withers yoke, named after animals with high backs (withers) (e.g. zebu cattle) which pull mostly on the yoke part of the equipment, not as greatly on the bow shape borne by the stronger front quarters of oxen and bullocks.
See also
Oxbow lake
Horse collar
Yoke
References
Animal equipment | Oxbow | Biology | 319 |
1,214,456 | https://en.wikipedia.org/wiki/Master%20control | Master control is the technical hub of a broadcast operation, common among most over-the-air television stations and television networks. It is distinct from a production control room (PCR) in television studios, where activities such as switching from camera to camera are coordinated. A transmission control room (TCR) is usually smaller in size and is a scaled-down version of centralcasting.
Master control is the final point before a signal is transmitted over-the-air for terrestrial television or cablecast, satellite provider for broadcast, or sent on to a cable television operator. Television master control rooms include banks of video monitors, satellite receivers, videotape machines, video servers, transmission equipment, and, more recently, computer broadcast automation equipment for recording and playback of television programming.
Master control is generally staffed with one or two master control operators around the clock to ensure continuous operation. Master control operators are responsible for monitoring the quality and accuracy of the on-air product, ensuring the transmission meets government regulations, troubleshooting equipment malfunctions, and preparing programming for playout. Regulations include both technical ones (such as those against over-modulation and dead air) and content ones (such as indecency and station ID).
Many television networks and radio networks or station groups have consolidated facilities and now operate multiple stations from one regional master control or centralcasting center. An example of this centralized broadcast programming system on a large scale is NBC's "hub-spoke project" that enables a single "hub" to have control of dozens of stations' automation systems and to monitor their air signals, thus reducing or eliminating some responsibilities of local employees at their owned-and-operated stations.
Outside the United States, the Canadian Broadcasting Corporation (CBC) manages four radio networks, two broadcast television networks, and several more cable/satellite radio and television services out of just two master control points (English language services at the Canadian Broadcasting Centre in Toronto and French language services at Maison Radio-Canada in Montreal). Many other public and private broadcasters in Canada have taken a similar approach.
Gallery
References
See also
Network operations center
Central apparatus room
Broadcast engineering
Broadcasting
Television terminology
Rooms | Master control | Engineering | 438 |
70,258,560 | https://en.wikipedia.org/wiki/Cairo%20Citadel%20Clock | The Cairo Citadel Clock is a 19th-century French clock tower situated at the Cairo Citadel, and Egypt's first public ticking clock.
For many decades, the clock was famous for not working. Various attempts were made to fix the clock in the 20th century; repairs were ordered by King Farouk in 1943 and in 1984 under President Hosni Mubarak, but both times the clock stopped working a few days later. In November and December 2020, French horologist Francois Simon-Fustier travelled to Cairo to examine the clock, and sent a report to the Egyptian Ministry of Tourism and Antiquities. The repair of the clock was carried out by an Egyptian expert from Luxor. On 16 September 2021, the Egyptian Ministry of Tourism and Antiquities announced the completion of the restoration and the restarting of the clock:
...the clock has been repaired by Egyptian craftsmen after years of nonoperation. The trials of the clock’s automatic winding [mechanism] have begun in order to ensure its continuous, uninterrupted operation. The restoration of the clock tower has been completed, and the colors have been enhanced to give it back its original luster. Maintenance and reinstallation of the stained glass panels and the rims of the circular iron columns located on the upper part of the column were also completed.
According to Osama Talaat, the clock had needed to be wound twice a day, but following the restoration now has a mechanism for "automatic winding of the clock without human assistance.”
The clock has been widely cited as having been sent by France in return for the Luxor obelisk now at the Place de la Concorde; however, this has been disputed. Secretary General of the Supreme Council of Antiquities Mostafa Waziri said that the obelisk "… has nothing to do with the [clock] that [Louis Philippe] gave to Mohammad Ali. What proves this is that in 1845 the clock arrived to Mohammad Ali, and when it did, the construction of the Mohammad Ali Mosque in the citadel had not yet been completed, so the clock was placed in the Mohammad Ali Palace in Shubra. During the reign of Said Pasha, the tower was made until the clock was placed in the Mohammad Ali Mosque in 1855 AD."
Original placement
The clock was manufactured in France, and sent to Egypt in 1846 as a gift from the King Louis Philippe I of France to Muhammad Ali of Egypt. It was intended to be placed inside Muhammad Ali's Shubra Palace, but it was not installed and was left in the palace in storage.
Installation at the Citadel
The clock was installed at the Citadel in 1856 under Abbas I of Egypt, following the construction of the Mosque of Muhammad Ali which had been completed in 1848. The clock was housed inside a locally-made metal tower, which was decorated with Arabic inscriptions and stained glass.
Gallery
References
Clock towers in Egypt
Clocks
Buildings and structures in Cairo | Cairo Citadel Clock | Physics,Technology,Engineering | 590 |
25,560,578 | https://en.wikipedia.org/wiki/Metamaterial%20cloaking | Metamaterial cloaking is the usage of metamaterials in an invisibility cloak. This is accomplished by manipulating the paths traversed by light through a novel optical material. Metamaterials direct and control the propagation and transmission of specified parts of the light spectrum and demonstrate the potential to render an object seemingly invisible. Metamaterial cloaking, based on transformation optics, describes the process of shielding something from view by controlling electromagnetic radiation. Objects in the defined location are still present, but incident waves are guided around them without being affected by the object itself.
Electromagnetic metamaterials
Electromagnetic metamaterials respond to chosen parts of radiated light, also known as the electromagnetic spectrum, in a manner that is difficult or impossible to achieve with natural materials. In other words, these metamaterials can be further defined as artificially structured composite materials, which exhibit interaction with light usually not available in nature (electromagnetic interactions). At the same time, metamaterials have the potential to be engineered and constructed with desirable properties that fit a specific need. That need will be determined by the particular application.
The artificial structure for cloaking applications is a lattice design – a sequentially repeating network – of identical elements. Additionally, for microwave frequencies, these materials are analogous to crystals for optics. Also, a metamaterial is composed of a sequence of elements and spacings, which are much smaller than the selected wavelength of light. The selected wavelength could be radio frequency, microwave, or other radiations, now just beginning to reach into the visible frequencies. Macroscopic properties can be directly controlled by adjusting characteristics of the rudimentary elements, and their arrangement on, or throughout the material. Moreover, these metamaterials are a basis for building very small cloaking devices in anticipation of larger devices, adaptable to a broad spectrum of radiated light.
Hence, although light consists of an electric field and a magnetic field, ordinary optical materials, such as optical microscope lenses, have a strong reaction only to the electric field. The corresponding magnetic interaction is essentially nil. This results in only the most common optical effects, such as ordinary refraction with common diffraction limitations in lenses and imaging.
Since the beginning of optical sciences, centuries ago, the ability to control the light with materials has been limited to these common optical effects. Metamaterials, on the other hand, are capable of a very strong interaction, or coupling, with the magnetic component of light. Therefore, the range of response to radiated light is expanded beyond the ordinary optical limitations that are described by the sciences of physical optics and optical physics. In addition, as artificially constructed materials, both the magnetic and electric components of the radiated light can be controlled at will, in any desired fashion as it travels, or more accurately propagates, through the material. This is because a metamaterial's behavior is typically formed from individual components, and each component responds independently to a radiated spectrum of light. At this time, however, metamaterials are limited. Cloaking across a broad spectrum of frequencies has not been achieved, including the visible spectrum. Dissipation, absorption, and dispersion are also current drawbacks, but this field is still in its optimistic infancy.
Metamaterials and transformation optics
The field of transformation optics is founded on the effects produced by metamaterials.
Transformation optics has its beginnings in the conclusions of two research endeavors. They were published on May 25, 2006, in the same issue of Science, a peer-reviewed journal. The two papers are tenable theories on bending or distorting light to electromagnetically conceal an object. Both papers notably map the initial configuration of the electromagnetic fields on to a Cartesian mesh. Twisting the Cartesian mesh, in essence, transforms the coordinates of the electromagnetic fields, which in turn conceal a given object. Hence, with these two papers, transformation optics is born.
Transformation optics subscribes to the capability of bending light, or electromagnetic waves and energy, in any preferred or desired fashion, for a desired application. Maxwell's equations do not vary even though coordinates transform. Instead it is the values of the chosen parameters of the materials which "transform", or alter, during a certain time period. So, transformation optics developed from the capability to choose the parameters for a given material. Hence, since Maxwell's equations retain the same form, it is the successive values of the parameters, permittivity and permeability, which change over time. Furthermore, permittivity and permeability are in a sense responses to the electric and magnetic fields of a radiated light source respectively, among other descriptions. The precise degree of electric and magnetic response can be controlled in a metamaterial, point by point. Since so much control can be maintained over the responses of the material, this leads to an enhanced and highly flexible gradient-index material. Conventionally predetermined refractive index of ordinary materials instead become independent spatial gradients in a metamaterial, which can be controlled at will. Therefore, transformation optics is a new method for creating novel and unique optical devices.
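In the index notation standard in this literature, with Λ the Jacobian matrix of the coordinate map, the transformed material parameters take the following form (a common statement of the transformation-optics rule, reproduced here for reference rather than drawn from this article's sources):

\[
\varepsilon'^{\,i'j'} = \frac{\Lambda^{i'}_{\ i}\,\Lambda^{j'}_{\ j}\,\varepsilon^{ij}}{\det\Lambda},
\qquad
\mu'^{\,i'j'} = \frac{\Lambda^{i'}_{\ i}\,\Lambda^{j'}_{\ j}\,\mu^{ij}}{\det\Lambda},
\qquad
\Lambda^{i'}_{\ i} = \frac{\partial x'^{\,i'}}{\partial x^{i}},
\]

which makes explicit how a purely geometric distortion is traded for spatially varying permittivity and permeability.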
Science of cloaking devices
The purpose of a cloaking device is to hide something, so that a defined region of space is invisibly isolated from passing electromagnetic fields (or sound waves), as with Metamaterial cloaking.
Cloaking objects, or making them appear invisible with metamaterials, is roughly analogous to a magician's sleight of hand, or his tricks with mirrors. The object or subject doesn't really disappear; the vanishing is an illusion. With the same goal, researchers employ metamaterials to create directed blind spots by deflecting certain parts of the light spectrum (electromagnetic spectrum). It is the light spectrum, as the transmission medium, that determines what the human eye can see.
In other words, light is refracted or reflected determining the view, color, or illusion that is seen. The visible extent of light is seen in a chromatic spectrum such as the rainbow. However, visible light is only part of a broad spectrum, which extends beyond the sense of sight. For example, there are other parts of the light spectrum which are in common use today. The microwave spectrum is employed by radar, cell phones, and wireless Internet. The infrared spectrum is used for thermal imaging technologies, which can detect a warm body amidst a cooler night time environment, and infrared illumination is combined with specialized digital cameras for night vision. Astronomers employ the terahertz band for submillimeter observations to answer deep cosmological questions.
Furthermore, electromagnetic energy is light energy, but only a small part of it is visible light. This energy travels in waves. Shorter wavelengths, such as visible light and infrared, carry more energy per photon than longer waves, such as microwaves and radio waves. For the sciences, the light spectrum is known as the electromagnetic spectrum.
The properties of optics and light
Prisms, mirrors, and lenses have a long history of altering the diffracted visible light that surrounds all. However, the control exhibited by these ordinary materials is limited. Moreover, the one material which is common among these three types of directors of light is conventional glass. Hence, these familiar technologies are constrained by the fundamental, physical laws of optics. With metamaterials in general, and the cloaking technology in particular, it appears these barriers disintegrate with advancements in materials and technologies never before realized in the natural physical sciences. These unique materials became notable because electromagnetic radiation can be bent, reflected, or skewed in new ways. The radiated light could even be slowed or captured before transmission. In other words, new ways to focus and project light and other radiation are being developed. Furthermore, the expanded optical powers presented in the science of cloaking objects appear to be technologically beneficial across a wide spectrum of devices already in use. This means that every device with basic functions that rely on interaction with the radiated electromagnetic spectrum could technologically advance. With these beginning steps a whole new class of optics has been established.
Interest in the properties of optics and light
Interest in the properties of optics and light dates back almost 2,000 years to Ptolemy (AD 85 – 165). In his work entitled Optics, he writes about the properties of light, including reflection, refraction, and color. He developed a simplified equation for refraction without trigonometric functions. About 800 years later, in AD 984, Ibn Sahl discovered a law of refraction mathematically equivalent to Snell's law. He was followed by the most notable Islamic scientist, Ibn Al-Haytham (c.965–1039), who is considered to be "one of the few most outstanding figures in optics in all times". He made significant advances in the science of physics in general, and optics in particular. He anticipated the universal laws of light articulated by seventeenth-century scientists by hundreds of years.
In the seventeenth century both Willebrord Snellius and Descartes were credited with discovering the law of refraction. It was Snellius who noted that Ptolemy's equation for refraction was inexact. Consequently, these laws have been passed along, unchanged for about 400 years, like the laws of gravity.
Perfect cloak and theory
Electromagnetic radiation and matter have a symbiotic relationship. Radiation does not simply act on a material, nor is it simply acted upon by a given material. Radiation interacts with matter. Cloaking applications which employ metamaterials alter how objects interact with the electromagnetic spectrum. The guiding vision for the metamaterial cloak is a device that directs the flow of light smoothly around an object, like water flowing past a rock in a stream, without reflection, rendering the object invisible. In reality, the simple cloaking devices of the present are imperfect, and have limitations.
One challenge up to the present date has been the inability of metamaterials, and cloaking devices, to interact at frequencies, or wavelengths, within the visible light spectrum.
Challenges presented by the first cloaking device
The principle of cloaking, with a cloaking device, was first proved (demonstrated) at frequencies in the microwave radiation band on October 19, 2006. This demonstration used a small cloaking device. Its height was less than one half inch (< 13 mm) and its diameter five inches (125 mm), and it successfully diverted microwaves around itself. The object to be hidden from view, a small cylinder, was placed in the center of the device. The invisibility cloak deflected microwave beams so they flowed around the cylinder inside with only minor distortion, making it appear almost as if nothing were there at all.
Such a device typically involves surrounding the object to be cloaked with a shell which affects the passage of light near it. There was reduced reflection of electromagnetic waves (microwaves), from the object. Unlike a homogeneous natural material with its material properties the same everywhere, the cloak's material properties vary from point to point, with each point designed for specific electromagnetic interactions (inhomogeneity), and are different in different directions (anisotropy). This accomplishes a gradient in the material properties. The associated report was published in the journal Science.
Although a successful demonstration, three notable limitations can be identified. First, since its effectiveness was only in the microwave spectrum, the small object is somewhat invisible only at microwave frequencies. This means invisibility had not been achieved for the human eye, which sees only within the visible spectrum, because the wavelengths of the visible spectrum are substantially shorter than those of microwaves. However, this was considered the first step toward a cloaking device for visible light, although more advanced nanotechnology-related techniques would be needed due to light's short wavelengths. Second, only small objects can be made to appear as the surrounding air. In the case of the 2006 proof-of-cloaking demonstration, the object hidden from view, a copper cylinder, would have to be less than five inches in diameter, and less than one half inch tall. Third, cloaking can only occur over a narrow frequency band for any given demonstration. This means that a broadband cloak, which works across the electromagnetic spectrum, from radio frequencies to microwave to the visible spectrum, and to x-ray, is not available at this time. This is due to the dispersive nature of present-day metamaterials. The coordinate transformation (transformation optics) requires extraordinary material parameters that are only approachable through the use of resonant elements, which are inherently narrow band and dispersive at resonance.
Usage of metamaterials
At the very beginning of the new millennium, metamaterials were established as an extraordinary new medium, which expanded control capabilities over matter. Hence, metamaterials are applied to cloaking applications for a few reasons. First, the parameter known as material response has broader range. Second, the material response can be controlled at will.
Third, optical components, such as lenses, respond within a certain defined range to light. As stated earlier – the range of response has been known, and studied, going back to Ptolemy – eighteen hundred years ago. The range of response could not be effectively exceeded, because natural materials proved incapable of doing so. In scientific studies and research, one way to communicate the range of response is the refractive index of a given optical material. Every natural material so far only allows for a positive refractive index. Metamaterials, on the other hand, are an innovation that are able to achieve negative refractive index, zero refractive index, and fractional values in between zero and one. Hence, metamaterials extend the material response, among other capabilities.
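To make the refractive-index claim precise: for a lossless medium the index follows from n² = εμ (in relative units), and when both parameters are negative, causality selects the negative branch of the square root, a standard result stated here for reference:

\[
n = -\sqrt{\varepsilon\mu}, \qquad \varepsilon < 0,\ \mu < 0,
\]

the negative-index regime first analyzed theoretically by Victor Veselago.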
However, negative refraction is not the effect that creates invisibility-cloaking. It is more accurate to say that gradations of refractive index, when combined, create invisibility-cloaking. Fourth, and finally, metamaterials demonstrate the capability to deliver chosen responses at will.
Device
Before actually building the device, theoretical studies were conducted. The following is one of two studies accepted simultaneously by a scientific journal, as well being distinguished as one of the first published theoretical works for an invisibility cloak.
Controlling electromagnetic fields
The exploitation of "light", the electromagnetic spectrum, is accomplished with common objects and materials which control and direct the electromagnetic fields. For example, a glass lens in a camera is used to produce an image, a metal cage may be used to screen sensitive equipment, and radio antennas are designed to transmit and receive daily FM broadcasts. Homogeneous materials, which manipulate or modulate electromagnetic radiation, such as glass lenses, are limited in the upper limit of refinements to correct for aberrations. Combinations of inhomogeneous lens materials are able to employ gradient refractive indices, but the ranges tend to be limited.
Metamaterials were introduced about a decade ago, and these expand control of parts of the electromagnetic spectrum; from microwave, to terahertz, to infrared. Theoretically, metamaterials, as a transmission medium, will eventually expand control and direction of electromagnetic fields into the visible spectrum. Hence, a design strategy was introduced in 2006, to show that a metamaterial can be engineered with arbitrarily assigned positive or negative values of permittivity and permeability, which can also be independently varied at will. Then direct control of electromagnetic fields becomes possible, which is relevant to novel and unusual lens design, as well as a component of the scientific theory for cloaking of objects from electromagnetic detection.
Each component responds independently to a radiated electromagnetic wave as it travels through the material, resulting in electromagnetic inhomogeneity for each component. Each component has its own response to the external electric and magnetic fields of the radiated source. Since these components are smaller than the radiated wavelength it is understood that a macroscopic view includes an effective value for both permittivity and permeability. These materials obey the laws of physics, but behave differently from normal materials. Metamaterials are artificial materials engineered to provide properties which "may not be readily available in nature". These materials usually gain their properties from structure rather than composition, using the inclusion of small inhomogeneities to enact effective macroscopic behavior.
The structural units of metamaterials can be tailored in shape and size. Their composition, and their form or structure, can be finely adjusted. Inclusions can be designed, and then placed at desired locations in order to vary the function of a given material. As the lattice is constant, the cells are smaller than the radiated light.
The design strategy has at its core inhomogeneous composite metamaterials which direct, at will, conserved quantities of electromagnetism. These quantities are, specifically, the electric displacement field D, the magnetic flux density B, and the Poynting vector S. Theoretically, when regarding the conserved quantities, or fields, the metamaterial exhibits a twofold capability. First, the fields can be concentrated in a given direction. Second, they can be made to avoid or surround objects, returning without perturbation to their original path. These results are consistent with Maxwell's equations and are more than only the ray approximation found in geometrical optics. Accordingly, in principle, these effects can encompass all forms of electromagnetic radiation phenomena on all length scales.
The hypothesized design strategy begins with intentionally choosing a configuration of an arbitrary number of embedded sources. These sources become localized responses of permittivity, ε, and magnetic permeability, μ. The sources are embedded in an arbitrarily selected transmission medium with dielectric and magnetic characteristics. As an electromagnetic system the medium can then be schematically represented as a grid.
The first requirement might be to move a uniform electric field through space, but in a definite direction that avoids an object or obstacle. Next, embed the system in an elastic medium that can be warped, twisted, pulled or stretched as desired. The initial condition of the fields is recorded on a Cartesian mesh. As the elastic medium is distorted in one, or a combination, of the described ways, the same pulling and stretching process is recorded by the Cartesian mesh. The same set of contortions can then be recorded as a coordinate transformation:
a (x,y,z), b (x,y,z), c (x,y,z), d (x,y,z) ....
Hence, the permittivity, ε, and permeability, μ, are proportionally calibrated by a common factor. Less precisely, the same applies to the refractive index. Renormalized values of permittivity and permeability are applied in the new coordinate system. For the renormalization equations see ref. #.
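The renormalization takes the standard transformation-optics form (a sketch of the widely used rule, in generic notation rather than that of the cited reference). For a transformation with Jacobian matrix \(\Lambda^{i'}{}_{i} = \partial x^{i'}/\partial x^{i}\), the material tensors become

$$\varepsilon' = \frac{\Lambda\,\varepsilon\,\Lambda^{\mathsf{T}}}{\det\Lambda}, \qquad \mu' = \frac{\Lambda\,\mu\,\Lambda^{\mathsf{T}}}{\det\Lambda}.$$

Because ε and μ acquire the same geometric factor, their ratio, and hence the wave impedance, is preserved, which is why such designs can in principle be reflection-free.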
Application to cloaking devices
Given the above parameters of operation, the system, a metamaterial, can now be shown to be able to conceal an object of arbitrary size. Its function is to manipulate incoming rays, which are about to strike the object. These incoming rays are instead electromagnetically steered around the object by the metamaterial, which then returns them to their original trajectory. As part of the design it can be assumed that no radiation leaves the concealed volume of space, and no radiation can enter the space. As illustrated by the function of the metamaterial, any radiation attempting to penetrate is steered around the space or the object within the space, returning to the initial direction. It appears to any observer that the concealed volume of space is empty, even with an object present there. An arbitrary object may be hidden because it remains untouched by external radiation.
A sphere with radius R1 is chosen as the object to be hidden. The cloaking region is to be contained within the annulus R1 < r < R2. A simple transformation that achieves the desired result can be found by taking all fields in the region r < R2 and compressing them into the region R1 < r < R2. The coordinate transformations do not alter Maxwell's equations; only the values of ε and μ are changed, becoming spatially varying within the shell.
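As an illustration, the following minimal sketch evaluates the material values this compression implies, assuming the simple linear radial map r' = R1 + r(R2 − R1)/R2 used in the original theoretical cloak design (the function name exists only for this example):

```python
import numpy as np

def spherical_cloak_parameters(r, R1, R2):
    """Relative permittivity (and, identically, permeability) components in
    the shell R1 < r < R2 for the linear map r' = R1 + r*(R2 - R1)/R2.
    The radial component falls to zero at the inner wall, which is what
    steers rays around the hidden sphere."""
    scale = R2 / (R2 - R1)
    eps_r = scale * ((r - R1) / r) ** 2   # radial component
    eps_t = np.full_like(r, scale)        # theta and phi components (constant)
    return eps_r, eps_t

R1, R2 = 1.0, 2.0
r = np.linspace(1.05, 2.0, 5)
for ri, er, et in zip(r, *spherical_cloak_parameters(r, R1, R2)):
    print(f"r = {ri:.2f}: eps_r = {er:.3f}, eps_theta = {et:.3f}")
```

Note that the radial component stays below 1 throughout the shell, anticipating the superluminal-phase-velocity issue discussed under "Cloaking hurdles" below.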
Cloaking hurdles
There are issues to be dealt with to achieve invisibility cloaking. One issue, related to ray tracing, is the anisotropic effects of the material on the electromagnetic rays entering the "system". Parallel bundles of rays headed directly for the center are abruptly curved and, along with neighboring rays, are forced into tighter and tighter arcs. This is due to rapid changes in the now shifting and transforming permittivity ε and permeability μ. The second issue is that, while the selected metamaterials are capable of working within the parameters of the anisotropic effects and the continual shifting of ε and μ, the values for ε and μ cannot be very large or very small. The third issue is that the selected metamaterials are currently unable to achieve broadband capability across the frequency spectrum. This is because the rays must curve around the "concealed" sphere and therefore have longer trajectories than they would in free space or air, yet they must arrive around the other side of the sphere in phase with the unimpeded radiated light. For this to happen, the phase velocity must exceed the velocity of light in a vacuum, the speed limit of the universe. (Note that this does not violate the laws of physics.) And, with a required absence of frequency dispersion, the group velocity would be identical with the phase velocity. In the context of this experiment, group velocity can never exceed the velocity of light, hence the analytical parameters are effective for only one frequency.
Optical conformal mapping and ray tracing in transformation media
The goal then is to create no discernible difference between a concealed volume of space and the propagation of electromagnetic waves through empty space. It would appear that achieving a perfectly concealed (100%) hole, where an object could be placed and hidden from view, is not possible. The problem is the following: in order to carry images, light propagates in a continuous range of directions. The scattering data of electromagnetic waves, after bouncing off an object or hole, is unique compared to that of light propagating through empty space, and is therefore easily perceived. Light propagating through empty space is consistent only with empty space. This includes microwave frequencies.
Although mathematical reasoning shows that perfect concealment is not possible because of the wave nature of light, this problem does not apply to electromagnetic rays, i.e., the domain of geometrical optics. Imperfections can be made arbitrarily and exponentially small for objects that are much larger than the wavelength of light.
Mathematically, this implies n < 1, because the rays follow the shortest path and hence in theory create a perfect concealment. In practice, a certain amount of acceptable visibility occurs, as noted above. The refractive index of the dielectric (optical material) needs to range across a wide spectrum of values to achieve concealment, with the illusion created by wave propagation across empty space. The places where n < 1 provide the shortest path for the ray around the object without phase distortion. Artificial propagation of empty space could be reached in the microwave-to-terahertz range. In stealth technology, impedance matching could result in absorption of beamed electromagnetic waves rather than reflection, hence evasion of detection by radar. These general principles can also be applied to sound waves, where the index n describes the ratio of the local phase velocity of the wave to the bulk value. Hence, it would be useful to protect a space from detection by any sound source, which also implies protection from sonar. Furthermore, these general principles are applicable in diverse fields such as electrostatics, fluid mechanics, classical mechanics, and quantum chaos.
Mathematically, it can be shown that the wave propagation is indistinguishable from empty space where light rays propagate along straight lines. The medium performs an optical conformal mapping to empty space.
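For intuition, one much-cited conformal map in this setting is the Zhukovski map w = z + a²/z (assumed here purely for illustration); the refractive-index profile a medium must supply to mimic it is n = |dw/dz|:

```python
import numpy as np

# Zhukovski map w = z + a**2/z, an often-cited example of an optical
# conformal map (an assumption for this sketch). The medium must realize
# the index profile n(z) = |dw/dz| = |1 - a**2/z**2|.
a = 1.0
x, y = np.meshgrid(np.linspace(-3, 3, 6), np.linspace(-3, 3, 6))
z = x + 1j * y                   # this grid avoids z = 0, where the map is singular
n = np.abs(1 - a**2 / z**2)      # note values below 1 near |z| = a
print(np.round(n, 2))
```

The regions where n < 1 correspond to the shortened optical paths discussed above.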
Microwave frequencies
The next step, then, is to actually conceal an object by controlling electromagnetic fields.
Now, the demonstrated and theoretical ability for controlled electromagnetic fields has opened a new field, transformation optics. This nomenclature is derived from coordinate transformations used to create variable pathways for the propagation of light through a material. This demonstration is based on previous theoretical prescriptions, along with the accomplishment of the prism experiment. One possible application of transformation optics and materials is electromagnetic cloaking for the purpose of rendering a volume or object undetectable to incident radiation, including radiated probing.
This was the first demonstration of actually concealing an object with electromagnetic fields. It used the method of purposely designed spatial variation, an effect of embedding purposely designed electromagnetic sources in the metamaterial.
As discussed earlier, the fields produced by the metamaterial are compressed into a shell (by coordinate transformations) surrounding the now concealed volume. Earlier this was supported by theory; this experiment demonstrated that the effect actually occurs. Maxwell's equations are form-invariant under the transformational coordinates; only the permittivity tensor and permeability tensor are affected, and these become spatially variant and directionally dependent along different axes.
Before the actual demonstration, the experimental limits of the transformational fields were determined computationally, in addition to simulations; both were used to determine the effectiveness of the cloak.
A month prior to this demonstration, the results of an experiment to spatially map the internal and external electromagnetic fields of a negative refractive metamaterial were published in September 2006. This was innovative because prior to this the microwave fields had been measured only externally. In this September experiment the permittivity and permeability of the microstructures (instead of the external macrostructure) of the metamaterial samples were measured, as well as the scattering by the two-dimensional negative index metamaterials. This gave an average effective refractive index, consistent with treating the metamaterial as homogeneous.
Employing this technique for this experiment, spatial mapping of phases and amplitudes of the microwave radiations interacting with metamaterial samples was conducted. The performance of the cloak was confirmed by comparing the measured field maps to simulations.
For this demonstration, the concealed object was a conducting cylinder at the inner radius of the cloak. As the largest possible object designed for this volume of space, it has the most substantial scattering properties. The conducting cylinder was effectively concealed in two dimensions.
Infrared frequencies
The definition of optical frequency, in the metamaterials literature, ranges from the far infrared, to the near infrared, through the visible spectrum, and includes at least a portion of the ultraviolet. To date, when the literature refers to optical frequencies, these are almost always frequencies in the infrared, which is below the visible spectrum. In 2009 a group of researchers announced cloaking at optical frequencies. In this case the cloaking frequency was centered at 1500 nm or 1.5 micrometers – the infrared.
Sonic frequencies
A laboratory metamaterial device applicable to ultrasound waves was demonstrated in January 2011. It can be applied to sound wavelengths corresponding to frequencies from 40 to 80 kHz.
The metamaterial acoustic cloak is designed to hide objects submerged in water. The metamaterial cloaking mechanism bends and twists sound waves by intentional design.
The cloaking mechanism consists of 16 concentric rings in a cylindrical configuration, each ring containing acoustic circuits. It is intentionally designed to guide sound waves in two dimensions.
Each ring has a different index of refraction. This causes sound waves to vary their speed from ring to ring. "The sound waves propagate around the outer ring, guided by the channels in the circuits, which bend the waves to wrap them around the outer layers of the cloak". It forms an array of cavities that slow the speed of the propagating sound waves. An experimental cylinder was submerged and then disappeared from sonar. Other objects of various shape and density were also hidden from the sonar. The acoustic cloak demonstrated effectiveness for frequencies of 40 kHz to 80 kHz.
In 2014 researchers created a 3D acoustic cloak from stacked plastic sheets dotted with repeating patterns of holes. The pyramidal geometry of the stack and the hole placement provide the effect.
Invisibility in diffusive light scattering media
In 2014, scientists demonstrated good cloaking performance in murky water, showing that an object shrouded in fog can disappear completely when appropriately coated with metamaterial. This is due to the random scattering of light, such as that which occurs in clouds, fog, milk, frosted glass, etc., combined with the properties of the metamaterial coating. When light is diffused, a thin coat of metamaterial around an object can make it essentially invisible under a range of lighting conditions.
Cloaking attempts
Broadband ground-plane cloak
If a transformation to quasi-orthogonal coordinates is applied to Maxwell's equations in order to conceal a perturbation on a flat conducting plane rather than a singular point, as in the first demonstration of a transformation optics-based cloak, then an object can be hidden underneath the perturbation. This is sometimes referred to as a "carpet" cloak.
As noted above, the original cloak demonstrated utilized resonant metamaterial elements to meet the effective material constraints. Utilizing a quasi-conformal transformation in this case, rather than the non-conformal original transformation, changed the required material properties. Unlike the original (singular expansion) cloak, the "carpet" cloak required less extreme material values. The quasi-conformal carpet cloak required nearly isotropic, inhomogeneous materials which varied only in permittivity. Moreover, the permittivity was always positive. This allowed the use of non-resonant metamaterial elements to create the cloak, significantly increasing the bandwidth.
An automated process, guided by a set of algorithms, was used to construct a metamaterial consisting of thousands of elements, each with its own geometry. Developing the algorithm allowed the manufacturing process to be automated, which resulted in fabrication of the metamaterial in nine days. The previous device used in 2006 was rudimentary in comparison, and the manufacturing process required four months in order to create the device. These differences are largely due to the different form of transformation: the original 2006 cloak transformed a singular point, while the ground-plane version transforms a plane, and the transformation in the carpet cloak was quasi-conformal, rather than non-conformal.
Other theories of cloaking
Other theories of cloaking discuss various science and research based theories for producing an electromagnetic cloak of invisibility. Theories presented employ transformation optics, event cloaking, dipolar scattering cancellation, tunneling light transmittance, sensors and active sources, and acoustic cloaking.
Institutional research
Research in the field of metamaterials has spread into American government science research departments, including the US Naval Air Systems Command, US Air Force, and US Army. Many scientific institutions are involved, including:
California Institute of Technology
Massachusetts Institute of Technology
Colorado State University
Pennsylvania State University
Duke University
Harvard University
Aalto University
Imperial College London
Max Planck Society
MSU Faculty of Physics
National Institute of Standards and Technology
Nederlandse Organisatie voor Wetenschappelijk Onderzoek
University College London
University of California, Berkeley
University of California, Irvine
University of California, Los Angeles
University of California, San Diego
University of Colorado
University of Delaware
University of Rochester
Funding for research into this technology is provided by the following American agencies:
Air Force Research Laboratory
Defense Advanced Research Projects Agency
Director of Central Intelligence
National Geospatial-Intelligence Agency
Naval Air Systems Command
Office of Naval Research
Through this research, it has been realized that developing a method for controlling electromagnetic fields can be applied to escape detection by radiated probing, or sonar technology, and to improve communications in the microwave range; that this method is relevant to superlens design and to the cloaking of objects within and from electromagnetic fields.
In the news
On October 20, 2006, the day after Duke University achieved enveloping and "disappearing" an object in the microwave range, the story was reported by Associated Press. Media outlets covering the story included USA Today, MSNBC's Countdown With Keith Olbermann: Sight Unseen, The New York Times with Cloaking Copper, Scientists Take Step Toward Invisibility, (London) The Times with Don't Look Now—Visible Gains in the Quest for Invisibility, Christian Science Monitor with Disappear Into Thin Air? Scientists Take Step Toward Invisibility, Australian Broadcasting, Reuters with Invisibility Cloak a Step Closer, and the (Raleigh) News & Observer with Invisibility Cloak a Step Closer.
On November 6, 2006, the Duke University research and development team was selected as part of the Scientific American best 50 articles of 2006.
In November 2009, "research into designing and building unique 'metamaterials' has received a £4.9 million funding boost. Metamaterials can be used for invisibility 'cloaking' devices, sensitive security sensors that can detect tiny quantities of dangerous substances, and flat lenses that can be used to image tiny objects much smaller than the wavelength of light."
In November 2010, scientists at the University of St Andrews in Scotland reported the creation of a flexible cloaking material they call "Metaflex", which may bring industrial applications significantly closer.
In 2014, the world's first 3D acoustic cloaking device was built by Duke engineers.
See also
History of metamaterials
Acoustic metamaterials
Chirality
Metamaterial
Metamaterial absorber
Metamaterial antennas
Nonlinear metamaterials
Photonic crystal
Photonic metamaterials
Plasmonic metamaterials
Seismic metamaterials
Split-ring resonator
Superlens
Terahertz metamaterials
Theories of cloaking
Transformation optics
Tunable metamaterials
Academic journals
Metamaterials (journal)
Metamaterials books
Metamaterials Handbook
Metamaterials: Physics and Engineering Explorations
References
Further reading
148 pages. "Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Physics in the Graduate School of Duke University 2009"
External links
Defining metamaterials
Manipulating the Near Field with Metamaterials Slide show, with audio available, by Dr. John Pendry, Imperial College, London
Researchers propose mimicking the cosmos with metamaterials
Electromagnetism
Metamaterials
Stealth technology | Metamaterial cloaking | Physics,Materials_science,Engineering | 7,025 |
47,490,358 | https://en.wikipedia.org/wiki/Solar%20phenomena | Solar phenomena are natural phenomena which occur within the atmosphere of the Sun. They take many forms, including solar wind, radio wave flux, solar flares, coronal mass ejections, coronal heating and sunspots.
These phenomena are believed to be generated by a helical dynamo, located near the center of the Sun's mass, which generates strong magnetic fields, as well as a chaotic dynamo, located near the surface, which generates smaller magnetic field fluctuations. All solar fluctuations together are referred to as solar variation, producing space weather within the Sun's gravitational field.
Solar activity and related events have been recorded since the eighth century BCE. Throughout history, observation technology and methodology advanced, and in the 20th century, interest in astrophysics surged and many solar telescopes were constructed. The 1931 invention of the coronagraph allowed the corona to be studied in full daylight.
Sun
The Sun is a star located at the center of the Solar System. It is almost perfectly spherical and consists of hot plasma and magnetic fields. It has a diameter of about 1.39 million kilometres, around 109 times that of Earth, and its mass (1.989 × 10^30 kilograms, approximately 330,000 times that of Earth) accounts for some 99.86% of the total mass of the Solar System. Chemically, about three quarters of the Sun's mass consists of hydrogen, while the rest is mostly helium. The remaining 1.69% (equal to 5,600 times the mass of Earth) consists of heavier elements, including oxygen, carbon, neon and iron.
The Sun formed about 4.567 billion years ago from the gravitational collapse of a region within a large molecular cloud. Most of the matter gathered in the center, while the rest flattened into an orbiting disk that became the balance of the Solar System. The central mass became increasingly hot and dense, eventually initiating thermonuclear fusion in its core.
The Sun is a G-type main-sequence star (G2V) based on spectral class, and it is informally designated as a yellow dwarf because its visible radiation is most intense in the yellow-green portion of the spectrum. It is actually white, but from the Earth's surface it appears yellow because of atmospheric scattering of blue light. In the spectral class label, G2 indicates its surface temperature of approximately 5770 K (in 2015 the IAU adopted a nominal value of 5772 K), and V indicates that the Sun, like most stars, is a main-sequence star, and thus generates its energy via fusing hydrogen into helium. In its core, the Sun fuses about 620 million metric tons of hydrogen each second.
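The quoted fusion rate can be checked against the Sun's luminosity with a back-of-the-envelope mass–energy estimate (assuming the standard figure that about 0.7% of the fused hydrogen mass is converted to energy):

$$P \approx 0.007 \times 6.2\times10^{11}\,\mathrm{kg\,s^{-1}} \times c^{2} \approx 4.3\times10^{9}\,\mathrm{kg\,s^{-1}} \times (3.0\times10^{8}\,\mathrm{m\,s^{-1}})^{2} \approx 3.9\times10^{26}\,\mathrm{W},$$

which agrees with the measured solar luminosity of about 3.8 × 10^26 W.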
The Earth's mean distance from the Sun is approximately 149.6 million kilometres (1 astronomical unit), though the distance varies as the Earth moves from perihelion in January to aphelion in July. At this average distance, light travels from the Sun to Earth in about 8 minutes, 19 seconds. The energy of this sunlight supports almost all life on Earth by photosynthesis, and drives Earth's climate and weather. As recently as the 19th century, scientists had little knowledge of the Sun's physical composition and source of energy. This understanding is still developing; a number of present-day anomalies in the Sun's behavior remain unexplained.
Solar cycle
Many solar phenomena change periodically over an average interval of about 11 years. This solar cycle affects solar irradiation and influences space weather, terrestrial weather, and climate.
The solar cycle also modulates the flux of short-wavelength solar radiation, from ultraviolet to X-ray and influences the frequency of solar flares, coronal mass ejections and other solar eruptive phenomena.
Types
Coronal mass ejections
A coronal mass ejection (CME) is a massive burst of solar wind and magnetic fields rising above the solar corona. Near solar maxima, the Sun produces about three CMEs every day, whereas solar minima feature about one every five days. CMEs, along with solar flares of other origin, can disrupt radio transmissions and damage satellites and electrical transmission line facilities, resulting in potentially massive and long-lasting power outages.
Coronal mass ejections often appear with other forms of solar activity, most notably solar flares, but no causal relationship has been established. Most weak flares do not have CMEs; however, most powerful ones do. Most ejections originate from active regions on the Sun's surface, such as sunspot groupings associated with frequent flares. Other forms of solar activity frequently associated with coronal mass ejections are eruptive prominences, coronal dimming, coronal waves and Moreton waves, also called solar tsunami.
Magnetic reconnection is responsible for CME and solar flares. Magnetic reconnection is the name given to the rearrangement of magnetic field lines when two oppositely directed magnetic fields are brought together. This rearrangement is accompanied with a sudden release of energy stored in the original oppositely directed fields.
When a CME impacts the Earth's magnetosphere, it temporarily deforms the Earth's magnetic field, changing the direction of compass needles and inducing large electrical ground currents in Earth itself; this is called a geomagnetic storm, and it is a global phenomenon. CME impacts can induce magnetic reconnection in Earth's magnetotail (the midnight side of the magnetosphere); this launches protons and electrons downward toward Earth's atmosphere, where they form the aurora.
Flares
A solar flare is a sudden flash of brightness observed over the Sun's surface or the solar limb, which is interpreted as an energy release of up to 6 × 10^25 joules (about a sixth of the Sun's total energy output each second, or 160 billion megatons of TNT equivalent, over 25,000 times more energy than released from the impact of Comet Shoemaker–Levy 9 with Jupiter). It may be followed by a coronal mass ejection. The flare ejects clouds of electrons, ions and atoms through the corona into space. These clouds typically reach Earth a day or two after the event. Similar phenomena in other stars are known as stellar flares.
Solar flares strongly influence space weather near the Earth. They can produce streams of highly energetic particles in the solar wind, known as a solar proton event. These particles can impact the Earth's magnetosphere in the form of a geomagnetic storm and present radiation hazards to spacecraft and astronauts.
Solar proton events
A solar proton event (SPE), or "proton storm", occurs when particles (mostly protons) emitted by the Sun become accelerated either close to the Sun during a flare or in interplanetary space by CME shocks. The events can include other nuclei such as helium ions and HZE ions. These particles cause multiple effects. They can penetrate the Earth's magnetic field and cause ionization in the ionosphere. The effect is similar to auroral events, except that protons rather than electrons are involved. Energetic protons are a significant radiation hazard to spacecraft and astronauts. Energetic protons can reach Earth within 30 minutes of a major flare's peak.
Prominences
A prominence is a large, bright, gaseous feature extending outward from the Sun's surface, often in the shape of a loop. Prominences are anchored to the Sun's surface in the photosphere and extend outwards into the corona. While the corona consists of high temperature plasma, which does not emit much visible light, prominences contain much cooler plasma, similar in composition to that of the chromosphere.
Prominence plasma is typically a hundred times cooler and denser than coronal plasma.
A prominence forms over timescales of about an earthly day and may persist for weeks or months. Some prominences break apart and form CMEs.
A typical prominence extends over many thousands of kilometers; the largest on record was estimated at over 800,000 km long – roughly the solar radius.
When a prominence is viewed against the Sun instead of space, it appears darker than the background. This formation is called a solar filament. It is possible for a projection to be both a filament and a prominence. Some prominences are so powerful that they eject matter at speeds ranging from 600 km/s to more than 1000 km/s. Other prominences form huge loops or arching columns of glowing gases over sunspots that can reach heights of hundreds of thousands of kilometers.
Sunspots
Sunspots are relatively dark areas on the Sun's radiating 'surface' (photosphere) where intense magnetic activity inhibits convection and cools the photosphere. Faculae are slightly brighter areas that form around sunspot groups as the flow of energy to the photosphere is re-established and both the normal flow and the sunspot-blocked energy elevate the radiating 'surface' temperature. Scientists began speculating on possible relationships between sunspots and solar luminosity in the 17th century. Luminosity decreases caused by sunspots (generally < −0.3%) are correlated with increases (generally < +0.05%) caused both by faculae that are associated with active regions as well as the magnetically active 'bright network'.
The net effect during periods of enhanced solar magnetic activity is increased radiant solar output because faculae are larger and persist longer than sunspots. Conversely, periods of lower solar magnetic activity and fewer sunspots (such as the Maunder Minimum) may correlate with times of lower irradiance.
Sunspot activity has been measured using the Wolf number for about 300 years. This index (also known as the Zürich number) uses both the number of sunspots and the number of sunspot groups to compensate for measurement variations. A 2003 study found that sunspots had been more frequent since the 1940s than in the previous 1150 years.
Sunspots usually appear as pairs with opposite magnetic polarity. Detailed observations reveal patterns, in yearly minima and maxima and in relative location. As each cycle proceeds, the latitude of spots declines, from 30 to 45° to around 7° after the solar maximum. This latitudinal change follows Spörer's law.
For a sunspot to be visible to the human eye it must be about 50,000 km in diameter, covering about 700 millionths of the visible area. Over recent cycles, approximately 100 sunspots or compact sunspot groups are visible from Earth.
Sunspots expand and contract as they move about and can travel at a few hundred meters per second when they first appear.
Wind
The solar wind is a stream of plasma released from the Sun's upper atmosphere. It consists mostly of electrons and protons with energies usually between 1.5 and 10 keV. The stream of particles varies in density, temperature and speed over time and over solar longitude. These particles can escape the Sun's gravity because of their high energy.
The solar wind is divided into the slow solar wind and the fast solar wind. The slow solar wind has a velocity of about 400 km/s, a temperature of about 2 × 10^6 K and a composition that is a close match to the corona. The fast solar wind has a typical velocity of 750 km/s, a temperature of 8 × 10^5 K and a composition that nearly matches the photosphere's. The slow solar wind is twice as dense and more variable in intensity than the fast solar wind. The slow wind has a more complex structure, with turbulent regions and large-scale organization.
Both the fast and slow solar winds can be interrupted by large, fast-moving bursts of plasma called interplanetary CMEs, or ICMEs. They cause shock waves in the thin plasma of the heliosphere, generating electromagnetic waves and accelerating particles (mostly protons and electrons) to form showers of ionizing radiation that precede the CME.
Effects
Space weather
Space weather is the environmental condition within the Solar System, including the solar wind. It is studied especially surrounding the Earth, including conditions from the magnetosphere to the ionosphere and thermosphere. Space weather is distinct from terrestrial weather of the troposphere and stratosphere. The term was not used until the 1990s. Prior to that time, such phenomena were considered to be part of physics or aeronomy.
Solar storms
Solar storms are caused by disturbances on the Sun, most often coronal clouds associated with solar flare CMEs emanating from active sunspot regions, or less often from coronal holes. The Sun can produce intense geomagnetic and proton storms capable of causing power outages, disruption or blackouts of communications (including GPS systems), and temporary or permanent disabling of satellites and other spaceborne technology. Solar storms may be hazardous to high-latitude, high-altitude aviation and to human spaceflight. Geomagnetic storms cause aurorae.
The most significant known solar storm occurred in September 1859 and is known as the Carrington event.
Aurora
An aurora is a natural light display in the sky, especially in the high latitude (Arctic and Antarctic) regions, in the form of a large circle around the pole. It is caused by the collision of solar wind and charged magnetospheric particles with the high altitude atmosphere (thermosphere).
Most auroras occur in a band known as the auroral zone, which is typically 3° to 6° wide in latitude and observed at 10° to 20° from the geomagnetic poles at all longitudes, but often most vividly around the spring and autumn equinoxes. The charged particles and solar wind are directed into the atmosphere by the Earth's magnetosphere. A geomagnetic storm expands the auroral zone to lower latitudes.
Auroras are associated with the solar wind. The Earth's magnetic field traps its particles, many of which travel toward the poles where they are accelerated toward Earth. Collisions between these ions and the atmosphere release energy in the form of auroras appearing in large circles around the poles. Auroras are more frequent and brighter during the solar cycle's intense phase when CMEs increase the intensity of the solar wind.
Geomagnetic storm
A geomagnetic storm is a temporary disturbance of the Earth's magnetosphere caused by a solar wind shock wave and/or cloud of magnetic field that interacts with the Earth's magnetic field. The increase in solar wind pressure compresses the magnetosphere and the solar wind's magnetic field interacts with the Earth's magnetic field to transfer increased energy into the magnetosphere. Both interactions increase plasma movement through the magnetosphere (driven by increased electric fields) and increase the electric current in the magnetosphere and ionosphere.
The disturbance in the interplanetary medium that drives a storm may be due to a CME or a high speed stream (co-rotating interaction region or CIR) of the solar wind originating from a region of weak magnetic field on the solar surface. The frequency of geomagnetic storms increases and decreases with the sunspot cycle. CME driven storms are more common during the solar maximum of the solar cycle, while CIR-driven storms are more common during the solar minimum.
Several space weather phenomena are associated with geomagnetic storms. These include Solar Energetic Particle (SEP) events, geomagnetically induced currents (GIC), ionospheric disturbances that cause radio and radar scintillation, disruption of compass navigation and auroral displays at much lower latitudes than normal. A 1989 geomagnetic storm energized ground induced currents that disrupted electric power distribution throughout most of the province of Quebec and caused aurorae as far south as Texas.
Sudden ionospheric disturbance
A sudden ionospheric disturbance (SID) is an abnormally high ionization/plasma density in the D region of the ionosphere caused by a solar flare. The SID results in a sudden increase in radio-wave absorption that is most severe in the upper medium frequency (MF) and lower high frequency (HF) ranges, and as a result, often interrupts or interferes with telecommunications systems.
Geomagnetically induced currents
Geomagnetically induced currents are a manifestation at ground level of space weather, which affect the normal operation of long electrical conductor systems. During space weather events, electric currents in the magnetosphere and ionosphere experience large variations, which manifest also in the Earth's magnetic field. These variations induce currents (GIC) in earthly conductors. Electric transmission grids and buried pipelines are common examples of such conductor systems. GIC can cause problems such as increased corrosion of pipeline steel and damaged high-voltage power transformers.
Carbon-14
The production of carbon-14 (radiocarbon: 14C) is related to solar activity. Carbon-14 is produced in the upper atmosphere when neutrons generated by cosmic-ray bombardment are captured by atmospheric nitrogen (14N), which then emits a proton and transforms into an unusual isotope of carbon with an atomic weight of 14 rather than the more common 12. Because galactic cosmic rays are partially excluded from the Solar System by the outward sweep of magnetic fields in the solar wind, increased solar activity reduces 14C production.
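In reaction form, the production and subsequent decay of radiocarbon are:

$$n + {}^{14}\mathrm{N} \rightarrow {}^{14}\mathrm{C} + p, \qquad {}^{14}\mathrm{C} \rightarrow {}^{14}\mathrm{N} + e^{-} + \bar{\nu}_{e} \quad (t_{1/2} \approx 5{,}730\ \text{years}).$$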
Atmospheric 14C concentration is lower during solar maxima and higher during solar minima. By measuring the captured 14C in wood and counting tree rings, production of radiocarbon relative to recent wood can be measured and dated. A reconstruction of the past 10,000 years shows that the 14C production was much higher during the mid-Holocene 7,000 years ago and decreased until 1,000 years ago. In addition to variations in solar activity, long-term trends in carbon-14 production are influenced by changes in the Earth's geomagnetic field and by changes in carbon cycling within the biosphere (particularly those associated with changes in the extent of vegetation between ice ages).
Observation history
Solar activity and related events have been regularly recorded since the time of the Babylonians. Early records described solar eclipses, the corona and sunspots.
Soon after the invention of telescopes, in the early 1600s, astronomers began observing the Sun. Thomas Harriot was the first to observe sunspots, in 1610. Observers confirmed the less-frequent sunspots and aurorae during the Maunder minimum. One of these observers was the renowned astronomer Johannes Hevelius who recorded a number of sunspots from 1653 to 1679 in the early Maunder minimum, listed in the book Machina Coelestis (1679).
Solar spectrometry began in 1817. Rudolf Wolf gathered sunspot observations as far back as the 1755–1766 cycle. He established a relative sunspot number formulation (the Wolf or Zürich sunspot number) that became the standard measure. Around 1852, Sabine, Wolf, Gautier and von Lamont independently found a link between the solar cycle and geomagnetic activity.
On 2 April 1845, Fizeau and Foucault first photographed the Sun. Photography assisted in the study of solar prominences, granulation, spectroscopy and solar eclipses.
On 1 September 1859, Richard C. Carrington and separately R. Hodgson first observed a solar flare. Carrington and Gustav Spörer discovered that the Sun exhibits differential rotation, and that the outer layer must be fluid.
In 1907–08, George Ellery Hale uncovered the Sun's magnetic cycle and the magnetic nature of sunspots. Hale and his colleagues later deduced Hale's polarity laws that described its magnetic field.
Bernard Lyot's 1931 invention of the coronagraph allowed the corona to be studied in full daylight.
The Sun was, until the 1990s, the only star whose surface had been resolved. Other major achievements included understanding of:
X-ray-emitting loops (e.g., by Yohkoh)
Corona and solar wind (e.g., by SoHO)
Variance of solar brightness with level of activity, and verification of this effect in other solar-type stars (e.g., by ACRIM)
The intense fibril state of the magnetic fields at the visible surface of a star like the Sun (e.g., by Hinode)
The presence of magnetic fields of 0.5 × 10^5 to 1 × 10^5 gauss at the base of the convection zone, presumably in some fibril form, inferred from the dynamics of rising azimuthal flux bundles.
Low-level electron neutrino emission from the Sun's core.
In the later twentieth century, satellites began observing the Sun, providing many insights. For example, modulation of solar luminosity by magnetically active regions was confirmed by satellite measurements of total solar irradiance (TSI) by the ACRIM1 experiment on the Solar Maximum Mission (launched in 1980).
See also
List of articles related to the Sun
Outline of astronomy
Radiative levitation
Solar cycle
Notes
References
Further reading
Hugh Hudson, "Solar activity", Scholarpedia, 3(3):3967. doi:10.4249/scholarpedia.3967
External links
NOAA / NESDIS / NGDC (2002) Solar Variability Affecting Earth NOAA CD-ROM NGDC-05/01. This CD-ROM contains over 100 solar-terrestrial and related global data bases covering the period through April 1990.
Recent Total Solar Irradiance data updated every Monday
Latest Space Weather Data – from the Solar Influences Data Analysis Center (Belgium)
Latest images from Big Bear Solar Observatory (California)
The Very Latest SOHO Images – from the ESA/NASA Solar & Heliospheric Observatory
Map of Solar Active Regions – from the Kislovodsk Mountain Astronomical Station
Space physics
Articles containing video clips | Solar phenomena | Physics,Astronomy | 4,404 |
51,703,470 | https://en.wikipedia.org/wiki/Ganaplacide | Ganaplacide (development codename KAF156) is a drug in development by Novartis for the purpose of treating malaria. It is an imidazolopiperazine derivative. It has shown activity against the Plasmodium falciparum and Plasmodium vivax forms of the malaria parasite.
Clinical development
The antimalarial activity of the imidazolopiperazine compound class was initially discovered through a series of sensitive phenotypic antimalarial screens that were developed and run in 2007 and 2008 by a group of biologists working at the Genomics Institute of the Novartis Research Foundation and the Scripps Research Institute. The lead compound was published in 2012 as the leading member of the imidazolopiperazine class. This was followed by studies in animal models published in 2014. Preclinical studies found no significant in vitro safety liabilities. A Phase 1 study in 70 healthy males found some gastrointestinal and neurological effects, but these were self-limited, and the study established dosing for a future Phase 2 trial.
The recently completed Phase 2 trial ran at four study locations in Thailand and one in Vietnam. This study looked at the effect of 400 mg given daily for 3 days as well as a single 800 mg dose. Of the 21 patients who received a single 800 mg dose, 67% cleared the infection, which is comparable to other antimalarial medications. More than half of the patients had some reported adverse event, and the rate was higher in patients who received a single 800 mg dose than in those who received three 400 mg doses. The most common effect was asymptomatic bradycardia, in which patients' heart rates fell below 60 beats per minute. Other reported events included hypokalemia, elevated liver enzymes and anemia.
Pharmacology
The mechanism of this drug is currently unknown. Resistance is conferred by mutations in PfCARL, a protein with 7 transmembrane domains, as well as by mutations in the P. falciparum acetyl-CoA transporter and the UDP-galactose transporter. None of these is thought to be the target of ganaplacide. Initial functional studies were performed with a closely related chemotype, GNF179, which differs from the clinical candidate by a single halogen.
Society and culture
Economics
Novartis is an international drug company based in Switzerland and is developing ganaplacide as a drug for the treatment of malaria. The drug was identified by a high-throughput screen of over 2 million compounds. It is being developed with support from the Bill and Melinda Gates Foundation via the Medicines for Malaria Venture. It will also be a part of the Novartis Malaria Initiative, which has provided 750 million treatments without producing any profit for the larger company.
Intellectual property
Ganaplacide is protected by the granted United States Patent 20130281403 held by the inventors, Arnab Chatterjee, Advait Nagle, Tao Wu, David Tully, and Kelli Kuhen, and filed June 7, 2013. There are previous US patent applications but only this one has been granted.
References
Antimalarial agents
Experimental drugs
Amines
Nitrogen heterocycles
4-Fluorophenyl compounds
Drugs with unknown mechanisms of action | Ganaplacide | Chemistry | 681 |
8,302,382 | https://en.wikipedia.org/wiki/Contraharmonic%20mean | In mathematics, a contraharmonic mean (or antiharmonic mean) is a function complementary to the harmonic mean. The contraharmonic mean is a special case of the Lehmer mean, \(L_p\), where p = 2.
Definition
The contraharmonic mean of a set of positive real numbers is defined as the arithmetic mean of the squares of the numbers divided by the arithmetic mean of the numbers:

$$C(x_1,\dots,x_n) = \frac{\tfrac{1}{n}\left(x_1^2+\cdots+x_n^2\right)}{\tfrac{1}{n}\left(x_1+\cdots+x_n\right)} = \frac{x_1^2+\cdots+x_n^2}{x_1+\cdots+x_n}.$$
Two-variable formulae
From the formulas for the arithmetic mean and harmonic mean of two variables we have:

$$A(a,b)=\frac{a+b}{2},\qquad H(a,b)=\frac{2ab}{a+b},\qquad C(a,b)=\frac{a^2+b^2}{a+b}.$$
Notice that for two variables the average of the harmonic and contraharmonic means is exactly equal to the arithmetic mean:

$$\frac{H(a,b)+C(a,b)}{2} = \frac{1}{2}\left(\frac{2ab}{a+b}+\frac{a^2+b^2}{a+b}\right) = \frac{(a+b)^2}{2(a+b)} = A(a,b).$$
As a approaches 0, H(a, b) also approaches 0. The harmonic mean is very sensitive to low values. On the other hand, the contraharmonic mean is sensitive to larger values, so as a approaches 0, C(a, b) approaches b (so their average remains A(a, b)).
There are two other notable relationships between 2-variable means. First, the geometric mean of the arithmetic and harmonic means is equal to the geometric mean of the two values:

$$\sqrt{A(a,b)\,H(a,b)} = \sqrt{\frac{a+b}{2}\cdot\frac{2ab}{a+b}} = \sqrt{ab} = G(a,b).$$
The second relationship is that the geometric mean of the arithmetic and contraharmonic means is the root mean square:

$$\sqrt{A(a,b)\,C(a,b)} = \sqrt{\frac{a+b}{2}\cdot\frac{a^2+b^2}{a+b}} = \sqrt{\frac{a^2+b^2}{2}} = R(a,b).$$
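These two-variable identities are straightforward to verify numerically (a minimal sketch; the single-letter function names are just local shorthand):

```python
from math import sqrt

def A(a, b): return (a + b) / 2                # arithmetic mean
def H(a, b): return 2 * a * b / (a + b)        # harmonic mean
def G(a, b): return sqrt(a * b)                # geometric mean
def R(a, b): return sqrt((a * a + b * b) / 2)  # root mean square
def C(a, b): return (a * a + b * b) / (a + b)  # contraharmonic mean

a, b = 3.0, 12.0
assert abs((H(a, b) + C(a, b)) / 2 - A(a, b)) < 1e-12  # (H + C)/2 = A
assert abs(sqrt(A(a, b) * H(a, b)) - G(a, b)) < 1e-12  # sqrt(A*H) = G
assert abs(sqrt(A(a, b) * C(a, b)) - R(a, b)) < 1e-12  # sqrt(A*C) = R
print(H(a, b), G(a, b), A(a, b), R(a, b), C(a, b))     # printed in increasing order
```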
The contraharmonic mean of two variables can be constructed geometrically using a trapezoid.
Additional constructions
The contraharmonic mean can be constructed on a circle similar to the way the Pythagorean means of two variables are constructed. The contraharmonic mean is the remainder of the diameter on which the harmonic mean lies.
History
The contraharmonic mean was discovered by the Greek mathematician Eudoxus in the 4th century BCE.
Properties
The contraharmonic mean satisfies characteristic properties of a mean of some list of positive values \(x_1, \dots, x_n\): it lies between the minimum and maximum of the list, and it is homogeneous, i.e. \(C(t x_1, \dots, t x_n) = t\,C(x_1, \dots, x_n)\) for all t > 0.
The first property implies the fixed point property, that for all k > 0,

$$C(k, k, \dots, k) = k.$$
It is not monotonic − increasing a value of \(x_k\) can decrease the value of the contraharmonic mean. For instance, C(1, 4) = 17/5 = 3.4, but C(2, 4) = 10/3 ≈ 3.33.
The contraharmonic mean is higher in value than the arithmetic mean and also higher than the root mean square:

$$H(\mathbf{x}) \le G(\mathbf{x}) \le L(\mathbf{x}) \le A(\mathbf{x}) \le R(\mathbf{x}) \le C(\mathbf{x}),$$

where x is a list of values, H is the harmonic mean, G is the geometric mean, L is the logarithmic mean, A is the arithmetic mean, R is the root mean square and C is the contraharmonic mean. Unless all values of x are the same, the ≤ signs above can be replaced by <.
The name contraharmonic may be due to the fact that when taking the mean of only two variables, the contraharmonic mean is as high above the arithmetic mean as the arithmetic mean is above the harmonic mean (i.e., the arithmetic mean of the two variables is equal to the arithmetic mean of their harmonic and contraharmonic means).
Relationship to arithmetic mean and variance
The contraharmonic mean of a random variable is equal to the arithmetic mean plus the variance divided by the arithmetic mean:

$$C = \mu + \frac{\sigma^2}{\mu} = \frac{\mu^2 + \sigma^2}{\mu}.$$
The ratio of the variance and the arithmetic mean was proposed as a test statistic by Clapham.
Since the variance is always ≥0 the contraharmonic mean is always greater than or equal to the arithmetic mean.
Other relationships
Any integer contraharmonic mean of two different positive integers is the hypotenuse of a Pythagorean triple, while any hypotenuse of a Pythagorean triple is a contraharmonic mean of two different positive integers.
It is also related to Katz's statistic

$$J_n = \sqrt{\frac{n}{2}}\left(\frac{s^2}{m} - 1\right),$$

where m is the mean, s² the variance and n is the sample size.
Jn is asymptotically normally distributed with a mean of zero and variance of 1.
Uses in statistics
The problem of size-biased samples was discussed by Cox in 1969 in the context of sampling fibres. The expectation of a size-biased sample is equal to the population's contraharmonic mean, and the contraharmonic mean is also used to estimate bias fields in multiplicative models, rather than the arithmetic mean as used in additive models.
The contraharmonic mean can be used to average the intensity values of neighbouring pixels in image processing, so as to reduce noise in images and make them clearer to the eye.
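In its image-filtering form the idea is usually generalized with an order parameter Q, where Q = 1 gives the plain contraharmonic mean (a sketch assuming a grayscale image stored as a nested list; the function name is illustrative):

```python
def contraharmonic_filter(img, Q=1.0, size=3):
    """Contraharmonic mean filter: replace each pixel with
    sum(v**(Q+1)) / sum(v**Q) over its size x size neighbourhood.
    Q > 0 suppresses pepper (dark) noise; Q < 0 suppresses salt (bright) noise."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    # clamp indices at the borders (replicate-edge padding)
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    v = img[ii][jj]
                    num += v ** (Q + 1)
                    den += v ** Q
            out[i][j] = num / den if den else 0.0
    return out

noisy = [[100, 100, 100], [100, 0, 100], [100, 100, 100]]  # one "pepper" pixel
print(contraharmonic_filter(noisy, Q=1.5)[1][1])           # restored to 100.0
```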
The probability of a fibre being sampled is proportional to its length. Because of this, the usual sample mean (arithmetic mean) is a biased estimator of the true mean. To see this, consider the length-weighted distribution

$$g(x) = \frac{x f(x)}{m},$$

where f(x) is the true population distribution, g(x) is the length-weighted distribution and m is the population mean. Taking the usual expectation of the mean here gives the contraharmonic mean rather than the arithmetic mean of the population. This problem can be overcome by taking instead the expectation of the harmonic mean (1/x). Under the length-weighted distribution, the expectation of 1/x is

$$\operatorname{E}\!\left(\frac{1}{x}\right) = \frac{1}{m},$$

and it has variance

$$\operatorname{var}\!\left(\frac{1}{x}\right) = \frac{m\operatorname{E}(1/x) - 1}{m^{2}},$$

where E is the expectation operator with respect to the original (unweighted) distribution. Asymptotically, the estimator based on 1/x is distributed normally.
The asymptotic efficiency of length-biased sampling, compared with random sampling, depends on the underlying distribution: if f(x) is log normal the efficiency is 1, while if the population is gamma distributed with index b, the efficiency is . This distribution has been used in modelling consumer behaviour as well as quality sampling.
It has been used alongside the exponential distribution in transport planning in the form of its inverse.
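A quick simulation illustrates the size-bias effect described above (a minimal sketch; the gamma population is an arbitrary choice):

```python
import random

random.seed(1)
population = [random.gammavariate(2.0, 3.0) for _ in range(200_000)]
m = sum(population) / len(population)                 # true mean, ~6
c = sum(x * x for x in population) / sum(population)  # contraharmonic mean, ~9

# Size-biased sampling: probability of drawing x is proportional to x.
biased = random.choices(population, weights=population, k=200_000)
biased_mean = sum(biased) / len(biased)

# The harmonic-mean correction recovers the true mean from the biased sample.
corrected = len(biased) / sum(1.0 / x for x in biased)

print(f"true mean {m:.2f}, contraharmonic mean {c:.2f}")
print(f"biased sample mean {biased_mean:.2f} (close to the contraharmonic mean)")
print(f"harmonic-corrected estimate {corrected:.2f} (close to the true mean)")
```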
See also
Harmonic mean
Lehmer mean
Pythagorean means
References
External links
Means | Contraharmonic mean | Physics,Mathematics | 1,112 |
35,861,790 | https://en.wikipedia.org/wiki/Mississippi%20River%20Basin%20Model | The Mississippi River Basin Model was a large-scale hydraulic model of the entire Mississippi River basin, covering an area of 200 acres. It is part of the Waterways Experiment Station, located near Clinton, Mississippi. The model was built from 1943 to 1966 and in operation from 1949 until 1973. By comparison, the better known San Francisco Bay Model covers 1.5 acres and the Chesapeake Bay Model covers 8 acres. The model is now derelict, but open to the public within Buddy Butts Park, Jackson.
Background
Large-scale, localised flood control measures such as levees had been constructed since the early 1900s, especially in the decade after the Great Mississippi Flood of 1927 and following the Flood Control Act of 1936. From 1928 onwards, the Army Corps of Engineers built a huge number of locks, run-off channels and extended and raised existing levees. These control measures only targeted single sites, and did not look at the entire river system.
There had already been extensive modelling of individual sections of the river at the Waterways Experiment Station in Vicksburg, including a 1060 ft long model of the 600 river miles from Helena, Arkansas to Donaldsonville, Louisiana, but in early 1937 it was clear that the control measures were not completely successful.
In 1941 Eugene Reybold proposed a large-scale hydraulic model which would allow the engineers to simulate weather, floods and evaluate the effect of flood control measures on the entire system. This would cover approximately 200 acres, include all existing and proposed control measures, and a network of streams nearly 8 miles in total length.
Design
The scale of the model was 1:100 vertical and 1:2000 horizontal. At this scale, the Appalachian Mountains are raised 20 ft above the Gulf of Mexico, the Rocky Mountains by 50 ft. The larger vertical scale was chosen to reduce surface-tension effects and therefore better simulate turbulence.
The model used individually cast 10 ft x 10 ft (approximate) concrete panels, contoured with the land shape and river bed, including tributaries, cliffs, lakes, flood plains, bridges, and levees. Metal plugs or divots in the river bed provided roughness to simulate different types of material, whilst folded metal mesh simulated dense foliage. With each gallon of water representing 1.5 million gallons, an entire day of river flow along the whole system could be simulated in 5 minutes.
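That speed-up is roughly what Froude-law similarity for a distorted model predicts (a back-of-the-envelope check under standard distorted-model scaling, not a figure from the Corps' documentation). Velocities scale with the square root of the vertical ratio, and times with the horizontal ratio divided by the velocity ratio:

$$\frac{t_{\text{prototype}}}{t_{\text{model}}} = \frac{2000}{\sqrt{100}} = 200, \qquad \frac{24\ \text{h}}{200} \approx 7\ \text{min},$$

the same order as the quoted five minutes.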
Construction
As wartime labour was short, it was proposed to make use of Italian and German prisoners of war. Construction at the site began in January 1943, commencing with housing for the WES personnel needed to direct work on the model, as well as an internment camp for 3,000 men at nearby Camp Clinton. The first POWs (200 of Rommel's Afrika Korps) arrived in August 1943, and by December, there were almost 1,800.
Enlisted men received 90 cents for 8 hours labor. Officers and non-commissioned officers were not required to work, but could volunteer. By May 1946, the last of the prisoners had been repatriated, and the site was almost ready for model construction.
Individual sections were in operation from 1949, but construction was not completed until 1966, partly due to the complexity of modelling such a vast area, but also due to irregular funding.
Operation
By 1952 the Missouri River segment was fully operational and used extensively to predict problems during that year's April floods, helping to avoid damage of an estimated $65 million. By 1959 the model was complete as far as Memphis and a comprehensive testing program was begun which coordinated the entire model.
In 1964, the site was opened to visitors for self-guided tours, and facilities included an assembly center, 40 ft observation tower, operation observation room, and elevated platforms, drawing about 5,000 visitors a year.
On completion in 1966, basin-wide tests examined the effectiveness of reservoirs and looked at maximizing flood protection. For the next three years, the historic floods of 1937, 1943, 1945 and 1952 were reproduced, as well as hypothetical floods at different periods of the year.
Tests on individual problems were conducted until 1971 but high costs and growth of computer modelling meant that the facility was put on standby. The last use was in 1973, when a potentially catastrophic failure arose at the Old River Control Structure. The model was used to show that the untested Morganza Spillway could be opened effectively, without diverting polluted water through New Orleans and Baton Rouge, as well as identifying levees that required topping up.
Current status
In 1993 the site was taken over by the City of Jackson, designated as a Mississippi Landmark and a city park was formed around the site. The cost of maintaining the site as a tourist attraction was too high, so the model was abandoned and became overgrown. In 2000, the model was included in the Mississippi Heritage Trusts' 10 Most Endangered List, featured in a Google Sightseeing post in 2007
, and thereafter was visited and blogged about by several urban explorers and photographers. In 2010, it was reported that the panels were still intact, and observation platforms and walkways still in place.
In 2011 students from Louisiana State University received an Honor Award from the American Society of Landscape Architects for their project to revitalise the park and relaunch the model as a tourist attraction.
Richard Coupe of the Jackson Free Press visited the site in 2013 and reported it as overgrown, but open to the public within Buddy Butts Park.
A team from 16 WAPT News surveyed the site using the Eagle Eye 16 drone and reported it as overgrown, defaced, and with several pieces of the grid collapsing.
Access is via the park entrance on McRaven Road, the model is next to the soccer fields.
Friends of the Mississippi River Basin Model, a group of local volunteers, has started clearing out the model with the hope of opening it to the public. The Libertarian Party of Hinds County also assists with this project.
The model was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2018.
References
External links
https://friendsofmrbm.org/
Mississippi River watershed
Hydrology models
Buildings and structures in Hinds County, Mississippi
Mississippi Landmarks
Mississippi embayment
Scale modeling
1943 establishments in Mississippi
Historic Civil Engineering Landmarks | Mississippi River Basin Model | Physics,Engineering,Biology,Environmental_science | 1,249 |
64,217,569 | https://en.wikipedia.org/wiki/Jacqueline%20Krim | Jacqueline Krim is an American condensed matter physicist specializing in nanotribology, the study of film growth, friction, and wetting of nanoscale surfaces. She is a Distinguished University Professor of Physics at North Carolina State University.
Education and career
Krim graduated from the University of Montana in 1978 and completed a Ph.D. in experimental condensed matter physics at the University of Washington in 1984. After postdoctoral research at Aix-Marseille University, she became a faculty member at Northeastern University, and moved to North Carolina State University in 1998.
Recognition
Krim is a fellow of the American Vacuum Society (1999) and the American Physical Society (2000). The Division of Materials Physics of the American Physical Society named her as their David Adler Lecturer for 2015. In 2019 she was elected as a fellow of the American Association for the Advancement of Science "for distinguished contributions to the understanding of atomic-scale friction, wetting and surface roughening and for exemplary efforts in scientific outreach and diversity". She received a National Science Foundation Presidential Young Investigator Award in 1986.
References
External links
Krim Nanoscale Tribology Group
Year of birth missing (living people)
Living people
American women physicists
University of Montana alumni
University of Washington alumni
Northeastern University faculty
North Carolina State University faculty
Fellows of the American Physical Society
Fellows of the American Association for the Advancement of Science
20th-century American physicists
20th-century American women scientists
21st-century American physicists
21st-century American women scientists
Condensed matter physicists
Tribologists
American women academics | Jacqueline Krim | Materials_science | 306 |
41,496 | https://en.wikipedia.org/wiki/Pseudo%20bit%20error%20ratio | Pseudo bit error ratio (PBER) in adaptive high-frequency (HF) radio, is a bit error ratio derived by a majority decoder that processes redundant transmissions.
Note: In adaptive HF radio automatic link establishment, PBER is determined by the extent of error correction, such as by using the fraction of non-unanimous votes in the 2-of-3 majority decoder.
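A toy illustration of that definition (a sketch only; real ALE modems vote over redundant word transmissions, and the bit-level framing here is an assumption):

```python
def majority_decode(a, b, c):
    """2-of-3 majority vote over three redundant copies of a bit stream.
    Returns the decoded bits and the PBER: the fraction of positions
    where the three copies did not agree unanimously."""
    decoded, non_unanimous = [], 0
    for x, y, z in zip(a, b, c):
        decoded.append(1 if x + y + z >= 2 else 0)
        if not (x == y == z):
            non_unanimous += 1
    return decoded, non_unanimous / len(decoded)

rx1 = [1, 0, 1, 1, 0, 0, 1, 0]
rx2 = [1, 0, 0, 1, 0, 0, 1, 0]   # one corrupted bit
rx3 = [1, 0, 1, 1, 0, 1, 1, 0]   # another corrupted bit
bits, pber = majority_decode(rx1, rx2, rx3)
print(bits, pber)                 # PBER = 2/8 = 0.25
```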
Engineering ratios
Error detection and correction | Pseudo bit error ratio | Mathematics,Engineering | 87 |
71,058,412 | https://en.wikipedia.org/wiki/International%20Liquid%20Mirror%20Telescope | The International Liquid Mirror Telescope is a 4-meter telescope in Uttarakhand, India. It is the first liquid-mirror telescope for astronomy in Asia and the largest liquid-mirror telescope in Asia.
History
The telescope achieved first light on 2 June 2022, came online on 12 June 2022, and was ready to observe by 21 June 2022. On 21 March 2023, it was inaugurated by Jitendra Singh.
Mechanism
The telescope uses elemental mercury as its mirror surface. The mercury is rotated about the axis of the telescope and, under the combined effect of gravity and rotation, its surface takes on a paraboloidal shape that focuses incoming light.
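The focusing property follows from classical mechanics rather than from the article itself. For a liquid rotating at constant angular speed, equating centrifugal and gravitational effects gives the surface height and hence the focal length; a sketch of this standard result, with symbols defined below (not taken from the source):

```latex
z(r) = \frac{\omega^{2} r^{2}}{2 g} = \frac{r^{2}}{4 f}
\quad\Longrightarrow\quad
f = \frac{g}{2 \omega^{2}}
```

Here z is the surface height at radius r from the rotation axis, ω the angular speed, g the gravitational acceleration, and f the focal length of the resulting paraboloid; spinning the mirror faster shortens its focal length.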
See also
Large Zenith Telescope
NASA Orbital Debris Observatory
References
Telescopes
Optical telescopes
Astronomical observatories in India
Liquid mirror telescopes | International Liquid Mirror Telescope | Astronomy | 157 |
75,216,955 | https://en.wikipedia.org/wiki/Stigmidium%20cerinae | Stigmidium cerinae is a species of lichenicolous (lichen-dwelling) fungus in the family Mycosphaerellaceae. It was formally described as a new species in 1994 by mycologists Claude Roux and Dagmar Triebel. The type specimen was collected in Austria from the apothecia of the muscicolous (moss-dwelling) species Caloplaca stillicidiorum. It infects lichens in the genus Caloplaca, and more generally, members of the family Teloschistaceae. Infection by the fungus results in bleaching of the host hymenium.
Description
Stigmidium cerinae is distinguished by its globular to slightly elongated ascomata, which are exceptionally dark, glossy, and appear in abundance, ranging from 6 to 60 on the apothecia of the lichen host. These ascomata partially or fully darken the host apothecia, appearing embedded in them to varying degrees. The wall of the ascomata has a deep rufous-brown hue, with the upper portion appearing darker than the lighter lower part. It measures between 5 and 10 μm in thickness and consists of cells with similarly coloured walls, internally coated with very fine brown pigment granules.
The cells of the ascomata wall are distinguishable, with sizes varying in the tangential and vertical planes. The internal tissues of the ascomata are well defined and visible. The asci, which house the spores, have a club-like shape and are almost sessile or bear a short stalk. The ascospores initially appear colourless, turning light brown towards the end of their lifecycle, possibly when they are dead. These spores are long and narrow, typically three to four times as long as they are wide. They possess a thin wall and a barely discernible outer layer that does not form a distinct halo. The cells within the spores are nearly equal in size and contain two large oil droplets.
In addition to the reproductive ascomata, Stigmidium cerinae also features conidiomata, albeit infrequently observed. These structures are globular and have a light brown cellular wall; the conidia they generate are small. Vegetative hyphae are present, colourless, and hardly visible without staining, scattered throughout the hymenium and subhymenium of the host.
Distribution
The fungus has been recorded from several localities: Austria, Germany, Italy, Switzerland, Taymyr Peninsula in the Far North of Russia, the East Siberian Lowland, Romania, and Slovenia. Although it was reported from North America in 2001, these sightings were later revised to represent the species Stigmidium epistigmellum.
References
Mycosphaerellaceae
Fungi described in 1994
Fungi of Europe
Lichenicolous fungi
Taxa named by Claude Roux
Taxa named by Dagmar Triebel
Fungus species | Stigmidium cerinae | Biology | 618 |
72,953,248 | https://en.wikipedia.org/wiki/Meritxell%20Huch | Meritxell Huch (born 1978 in Barcelona) is a stem cell biologist and director at the Max Planck Institute of Molecular Cell Biology and Genetics. Her research considers tissue regeneration and the development of tissue-specific disease models for human organs. She was awarded a European Research Council Consolidator Grant in 2023.
Early life and education
After college, Huch decided she wanted to work in science because of a desire to understand how aspirin works. She was an undergraduate student at the University of Barcelona, where she studied pharmaceutical sciences. She remained there for her graduate studies, earning a master's degree in 2003 and a doctorate in 2007. She completed her PhD research in the Centre for Genomic Regulation, where she worked alongside Cristina Fillat. After completing her doctoral research she spent a year as a postdoctoral fellow before moving to the Hubrecht Institute on a Marie Curie Fellowship. In Utrecht she worked in the laboratory of Hans Clevers, where she isolated the stem cells responsible for the turnover of the adult stomach.
Research and career
Huch was appointed a Sir Henry Dale Research Fellow at the Gurdon Institute at the University of Cambridge. She held a joint position with The Wellcome Trust and the Department of Physiology. After five years in Cambridge, Huch joined the Max Planck Institute of Molecular Cell Biology and Genetics as one of the first members of the Lise Meitner Excellence Program. She was appointed to the board of directors in 2022.
Inflammation and tissue damage are associated with chronic liver disease and cancer. Her group have extensively developed human organoid models to study the molecular basis of adult tissue regeneration. Having identified stem cells responsible for the rapid turnover of the adult stomach, Huch showed that they could be maintained in culture. Next she moved on to liver cells, demonstrating the replicative potential of progenitor cells during regeneration and showing they are promising candidates for future therapeutic interventions in liver diseases. Her research has the potential to reduce the use of animals in scientific research.
Awards and honours
2016 Hamdan Award for Medical Research Excellence
2017 The Women in Cell Biology Early Career Medal
2018 Dame Sheila Sherlock Prize
2018 Elected EMBO Young Investigator
2019 The BINDER Innovation Prize
2022 German Stem Cell Network Hilde Mangold Award
2023 European Research Council Consolidator Grant
2024 Otto Bayer Award
References
Scientists from Barcelona
1978 births
Stem cell researchers
Biologists from Catalonia
Women biologists
Spanish medical researchers
Women medical researchers
21st-century Spanish biologists
21st-century Spanish women scientists
University of Barcelona alumni
Living people
Max Planck Institute directors | Meritxell Huch | Biology | 510 |
42,327,977 | https://en.wikipedia.org/wiki/Comparator%20applications | A comparator is an electronic component that compares two input voltages. Comparators are closely related to operational amplifiers, but a comparator is designed to operate open-loop or with positive feedback, with its output saturated at one power rail or the other. If necessary, an op-amp can be pressed into service as a poorly performing comparator, but its limited slew rate and slow recovery from saturation will impair switching speed.
Comparator
Bistable output that indicates which of the two inputs has the higher voltage. That is,

Vout = VS+ when V+ > V−, and Vout = VS− when V+ < V−,

where VS+ and VS− are nominally the positive and negative supply voltages (which are not shown in the diagram).
Threshold detector
The threshold detector with hysteresis consists of an operational amplifier and a series of resistors that provide hysteresis. Like other detectors, this device functions as a voltage switch, but with an important difference. The state of the detector output is not directly determined by the input voltage, but by the voltage across its input terminals (here referred to as Va). By Kirchhoff's current law, Va is a weighted sum of Vin and the detector's own output voltage, each multiplied by a resistor ratio.
Unlike the zero crossing detector, the detector with hysteresis does not switch when Vin is zero; rather, the output becomes Vsat+ when Va becomes positive and Vsat− when Va becomes negative. Examination of this relationship for Va reveals that Vin can exceed zero (positive or negative) by a certain magnitude before the output of the detector switches. By adjusting the value of R1, the magnitude of Vin required to make the detector switch can be increased or decreased. Hysteresis is useful in various applications: it has better noise immunity than the level detector, which suits interface circuits; its positive feedback gives faster transitions, which suits timing applications such as frequency counters; and it is the basis of astable multivibrators found in instruments such as function generators.
Zero crossing detector
A zero crossing detector is a comparator with the reference level set at zero.
It is used for detecting the zero crossings of AC signals. It can be made from an operational amplifier with the input voltage applied to its non-inverting (positive) input.
When the input voltage is positive, the output voltage is a positive value; when the input voltage is negative, the output voltage is a negative value. The magnitude of the output voltage is a property of the operational amplifier and its power supply.
Applications include converting an analog signal into a form suitable for frequency measurements, in phase locked loops, or controlling power electronics circuits that must switch with a defined relationship to an alternating current waveform.
This detector exploits the property that the instantaneous frequency of an FM wave is approximately given by

f ≈ 1 / (2Δt),

where Δt is the time difference between adjacent zero crossings of the FM wave.
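A minimal numerical sketch of this estimator, assuming a uniformly sampled signal held in a NumPy array; crossing times are taken at the sample before each sign change, ignoring refinements such as interpolation between samples or hysteresis against noise:

```python
import numpy as np

def instantaneous_frequency(signal, fs):
    """Estimate the instantaneous frequency of an FM wave from zero crossings.

    Applies f ~ 1/(2*dt), with dt the spacing between adjacent zero
    crossings; returns one frequency estimate per crossing interval.
    """
    crossings = np.where(np.diff(np.sign(signal)) != 0)[0] / fs  # times (s)
    dt = np.diff(crossings)
    return 1.0 / (2.0 * dt)

# A 50 Hz sine sampled at 10 kHz yields estimates near 50 Hz:
t = np.arange(0, 0.1, 1e-4)
print(instantaneous_frequency(np.sin(2 * np.pi * 50 * t), 1e4))
```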
Schmitt trigger
A bistable multivibrator implemented as a comparator with hysteresis.
In this configuration, the input voltage is applied through the voltage divider formed by R1 and R2 (R1 may be the source internal resistance) to the non-inverting input, and the inverting input is grounded or referenced. The hysteresis curve is non-inverting and the switching thresholds are ±(R1/R2)M, where M is the greatest output magnitude of the operational amplifier.
Alternatively, the input source and the ground may be swapped. Now the input voltage is applied directly to the inverting input, and the non-inverting input is grounded or referenced. The hysteresis curve is inverting and the switching thresholds are ±(R1/(R1+R2))M. This configuration is used in the relaxation oscillator shown below.
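The two threshold formulas can be checked with a short simulation. This is an idealized sketch using the resistor labels above, assuming an instantaneous, perfectly saturating output:

```python
def schmitt_output(vin, m, r1, r2, inverting=False):
    """Trace an ideal Schmitt trigger over a sequence of input samples.

    m      : greatest output magnitude of the op-amp
    r1, r2 : resistors as labelled in the text
    The thresholds follow the two configurations described above.
    """
    t = m * r1 / (r1 + r2) if inverting else m * r1 / r2
    out, state = [], m  # assume the output starts saturated at +m
    for v in vin:
        if (v > t and inverting) or (v < -t and not inverting):
            state = -m
        elif (v < -t and inverting) or (v > t and not inverting):
            state = m
        out.append(state)
    return out

# Non-inverting, m = 10 V, R1/R2 = 0.2, thresholds at +/-2 V: the output
# only flips once the ramp passes a threshold, showing the hysteresis.
print(schmitt_output([-3, -1, 0, 1, 3, 1, -1, -3], 10, 2e3, 10e3))
```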
Relaxation oscillator
By using an RC network to add slow negative feedback to the inverting Schmitt trigger, a relaxation oscillator is formed. The feedback through the RC network causes the Schmitt trigger output to oscillate in an endless symmetric square wave (i.e., the Schmitt trigger in this configuration is an astable multivibrator).
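The oscillation period follows from the exponential charging of the capacitor between the two thresholds. This is a standard result rather than one stated in the article; the sketch assumes the inverting configuration above, with the RC network feeding the inverting input, so the saturation level M cancels out of the period:

```python
import math

def relaxation_period(r, c, r1, r2):
    """Period of the comparator-based relaxation oscillator.

    The capacitor charges toward +/-M while the Schmitt thresholds sit at
    +/-beta*M, beta = R1/(R1+R2), giving T = 2*R*C*ln((1+beta)/(1-beta)).
    """
    beta = r1 / (r1 + r2)
    return 2 * r * c * math.log((1 + beta) / (1 - beta))

# R = 10 kOhm, C = 100 nF, R1 = R2 (beta = 1/2): T = 2*R*C*ln(3), about 2.2 ms.
print(relaxation_period(10e3, 100e-9, 10e3, 10e3))
```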
References
Electronic amplifiers
Analog circuits | Comparator applications | Technology,Engineering | 817 |
1,126,111 | https://en.wikipedia.org/wiki/Photosystem%20I | Photosystem I (PSI, or plastocyanin–ferredoxin oxidoreductase) is one of two photosystems in the photosynthetic light reactions of algae, plants, and cyanobacteria. Photosystem I is an integral membrane protein complex that uses light energy to catalyze the transfer of electrons across the thylakoid membrane from plastocyanin to ferredoxin. Ultimately, the electrons that are transferred by Photosystem I are used to produce the moderate-energy hydrogen carrier NADPH. The photon energy absorbed by Photosystem I also produces a proton-motive force that is used to generate ATP. PSI is composed of more than 110 cofactors, significantly more than Photosystem II.
History
This photosystem is known as PSI because it was discovered before Photosystem II, although future experiments showed that Photosystem II is actually the first enzyme of the photosynthetic electron transport chain. Aspects of PSI were discovered in the 1950s, but the significance of these discoveries was not yet recognized at the time. Louis Duysens first proposed the concepts of Photosystems I and II in 1960, and, in the same year, a proposal by Fay Bendall and Robert Hill assembled earlier discoveries into a coherent theory of serial photosynthetic reactions. Hill and Bendall's hypothesis was later confirmed in experiments conducted in 1961 by the Duysens and Witt groups.
Components and action
Two main subunits of PSI, PsaA and PsaB, are closely related proteins involved in the binding of the vital electron transfer cofactors P700, Acc, A0, A1, and FX. PsaA and PsaB are both integral membrane proteins of 730 to 750 amino acids that contain 11 transmembrane segments. A [4Fe-4S] iron-sulfur cluster called FX is coordinated by four cysteines, two provided by PsaA and two by PsaB. In each subunit the two cysteines are proximal and located in a loop between the ninth and tenth transmembrane segments. A leucine zipper motif seems to be present downstream of the cysteines and could contribute to dimerisation of PsaA/PsaB. The terminal electron acceptors FA and FB, also [4Fe-4S] iron-sulfur clusters, are located in a 9-kDa protein called PsaC that binds to the PsaA/PsaB core near FX.
Photon
Photoexcitation of the pigment molecules in the antenna complex induces electron and energy transfer.
Antenna complex
The antenna complex is composed of molecules of chlorophyll and carotenoids mounted on two proteins. These pigment molecules transmit the resonance energy from photons when they become photoexcited. Antenna molecules can absorb all wavelengths of light within the visible spectrum. The number of these pigment molecules varies from organism to organism. For instance, the cyanobacterium Synechococcus elongatus (Thermosynechococcus elongatus) has about 100 chlorophylls and 20 carotenoids, whereas spinach chloroplasts have around 200 chlorophylls and 50 carotenoids. Located within the antenna complex of PSI are molecules of chlorophyll called P700 reaction centers. The energy passed around by antenna molecules is directed to the reaction center. There may be as many as 120 or as few as 25 chlorophyll molecules per P700.
P700 reaction center
The P700 reaction center is composed of modified chlorophyll a that best absorbs light at a wavelength of 700 nm. P700 receives energy from antenna molecules and uses the energy of each photon to raise an electron to a higher energy level (P700*). These electrons are moved, one per absorbed photon, in an oxidation/reduction process from P700* to the electron acceptors, leaving behind P700+. The P700*/P700+ couple has a redox potential of about −1.2 volts. The reaction center is made of two chlorophyll molecules and is therefore referred to as a dimer. The dimer is thought to be composed of one chlorophyll a molecule and one chlorophyll a′ molecule. However, if P700 forms a complex with other antenna molecules, it can no longer be a dimer.
Modified chlorophylls Acc and A0
The two modified chlorophyll molecules are early electron acceptors in PSI. They are present one per PsaA/PsaB side, forming two branches that electrons can take to reach FX. Acc accepts an electron from P700*, passing it to A0 of the same side, which then passes the electron to the quinone on the same side. Different species seem to have different preferences for the A or B branch.
Phylloquinone
A phylloquinone, sometimes called vitamin K1, is the next early electron acceptor in PSI. It oxidizes A0 in order to receive the electron and in turn is re-oxidized by FX, from which the electron is passed to FA and FB. The reduction of FX appears to be the rate-limiting step.
Iron–sulfur complex
Three proteinaceous iron–sulfur reaction centers are found in PSI. Labeled FX, FA, and FB, they serve as electron relays. FA and FB are bound to the PsaC subunit of the PSI complex, and FX is tied to the PsaA/PsaB core. Various experiments have shown some disparity between theories of iron–sulfur cofactor orientation and operation order. In one model, FX passes an electron to FA, which passes it on to FB to reach the ferredoxin.
Ferredoxin
Ferredoxin (Fd) is a soluble protein that facilitates the reduction of NADP+ to NADPH. Fd moves to carry an electron either to a lone thylakoid or to an enzyme that reduces NADP+. Thylakoid membranes have one binding site for each function of Fd. The main function of Fd is to carry an electron from the iron-sulfur complex to the enzyme ferredoxin–NADP+ reductase.
Ferredoxin–NADP+ reductase (FNR)
This enzyme transfers the electron from reduced ferredoxin to NADP+ to complete its reduction to NADPH. FNR may also accept an electron from NADPH by binding to it.
Plastocyanin
Plastocyanin is an electron carrier that transfers the electron from cytochrome b6f to the P700 cofactor of PSI in its ionized state, P700+.
Ycf4 protein domain
The Ycf4 protein domain found on the thylakoid membrane is vital to photosystem I. This thylakoid transmembrane protein helps assemble the components of photosystem I. Without it, photosynthesis would be inefficient.
Evolution
Molecular data show that PSI likely evolved from the photosystems of green sulfur bacteria. The photosystems of green sulfur bacteria and those of cyanobacteria, algae, and higher plants are not the same, but there are many analogous functions and similar structures. Three main features are similar between the different photosystems. First, redox potential is negative enough to reduce ferredoxin. Next, the electron-accepting reaction centers include iron–sulfur proteins. Last, redox centres in complexes of both photosystems are constructed upon a protein subunit dimer. The photosystem of green sulfur bacteria even contains all of the same cofactors of the electron transport chain in PSI. The number and degree of similarities between the two photosystems strongly indicates that PSI and the analogous photosystem of green sulfur bacteria evolved from a common ancestral photosystem.
See also
Biohybrid solar cell
References
External links
Photosystem I: Molecule of the Month in the Protein Data Bank
Photosystem I in A Companion to Plant Physiology
James Barber FRS Photosystems I & II
Photosynthesis
Light reactions
EC 1.97.1
Protein complexes | Photosystem I | Chemistry,Biology | 1,664 |
16,796,930 | https://en.wikipedia.org/wiki/HD%20213240%20b | HD 213240 b is an exoplanet in the constellation of Grus. It is a gas giant orbiting the G-type star HD 213240.
The planet was discovered in 2001 using Doppler spectroscopy, as part of the CORALIE extra-solar planet search program. It was described in a 2001 publication by astronomers N. C. Santos, M. Mayor, D. Naef, F. Pepe, D. Queloz, S. Udry, and M. Burnet of the Observatoire de Genève, Switzerland. In 2023, the inclination and true mass of HD 213240 b were determined via astrometry.
See also
HD 212301 b
Methods of detecting extrasolar planets
References
External links
Grus (constellation)
Giant planets
Exoplanets discovered in 2001
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | HD 213240 b | Astronomy | 189 |
34,992,890 | https://en.wikipedia.org/wiki/Franca%20IDL | Franca Interface Definition Language (Franca IDL) is a formally defined, text-based interface description language. It is part of the Franca framework, which is a framework for definition and transformation of software interfaces. Franca applies model transformation techniques to interoperate with various interface description languages (e.g., D-Bus Introspection language, Apache Thrift IDL, Fibex Services).
Franca is used for integrating software components from different suppliers, which are built on various runtime frameworks, platforms, and IPC mechanisms. Its core is the Franca IDL (Interface Definition Language), a textual language for the specification of APIs.
History
The initial version of Franca was developed in 2011 by the GENIVI Alliance, now called COVESA (Connected Vehicle Systems Alliance), as a common interface description language for the standardization of an In-Vehicle Infotainment (IVI) platform. The first public version of Franca was released in March 2012 under the Eclipse Public License, version 1.0.
In 2013, Franca was proposed as an official Eclipse Foundation project.
Franca is mainly developed by the German company Itemis.
Features
Franca IDL provides a range of features for the specification of software interfaces, several of which are illustrated in the sketch after this list:
declaration of interface elements: attributes, methods, broadcasts
major/minor versioning scheme
specification of the dynamic behaviour of interfaces based on finite-state machines (Protocol State Machines, short: PSM)
storage of meta-information (e.g., author, description, links) using structured comments
user-defined data types (i.e., array, enumeration, structure, union, map, type alias)
inheritance for interfaces, enumerations and structures
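A minimal, hypothetical interface definition illustrating attributes, methods, broadcasts, and the versioning scheme; the package and interface names are invented for this example, and the syntax follows the Franca IDL conventions described above:

```
package org.example.demo

interface Calculator {
    version { major 1 minor 0 }

    // Observable state, readable by clients
    attribute UInt32 lastResult

    // Request/response operation
    method add {
        in {
            Int32 a
            Int32 b
        }
        out {
            Int32 sum
        }
    }

    // One-way notification to subscribed clients
    broadcast resultChanged {
        out {
            UInt32 newValue
        }
    }
}
```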
Architecture
In addition to the text-based IDL for the specification of interfaces, Franca provides an HTML documentation generator.
Franca is implemented on the Eclipse (software) tool platform. The actual Franca IDL is defined using the Xtext framework; for users of Franca, this offers a number of benefits when reviewing and specifying software interfaces.
See also
Model transformation
Automatic programming
Eclipse (software)
Eclipse Modeling Framework
Xtext
References
External links
Specification languages
Data modeling languages
Inter-process communication
Component-based software engineering
Eclipse (software)
Object models
Remote procedure call
Object-oriented programming | Franca IDL | Technology,Engineering | 500 |
26,377,919 | https://en.wikipedia.org/wiki/Sequoioideae | Sequoioideae, commonly referred to as redwoods, is a subfamily of coniferous trees within the family Cupressaceae that occurs naturally in the Northern Hemisphere. It includes the largest and tallest trees in the world; the trees of the subfamily are among the most notable in the world and are common ornamental trees.
Description
The three redwood subfamily genera are Sequoia from coastal California and Oregon, Sequoiadendron from California's Sierra Nevada, and Metasequoia in China. The redwood species contains the largest and tallest trees in the world. These trees can live for thousands of years. Threats include logging, fire suppression, illegal marijuana cultivation, and burl poaching.
Only two of the genera, Sequoia and Sequoiadendron, are known for massive trees. Trees of Metasequoia, from the single living species Metasequoia glyptostroboides, are deciduous, grow much smaller (although are still large compared to most other trees) and can live in colder climates.
Taxonomy and evolution
Multiple studies of both morphological and molecular characters have strongly supported the assertion that the Sequoioideae are monophyletic. Most modern phylogenies place Sequoia as sister to Sequoiadendron, with Metasequoia as the outgroup. However, Yang et al., investigating the origin of a peculiar genetic feature of Sequoioideae, the polyploidy of Sequoia, generated a notable exception that calls the specifics of this relative consensus into question.
Cladistic tree
A 2006 paper based on non-molecular evidence supported the same relationship among extant species, with Sequoia and Sequoiadendron as sister taxa and Metasequoia as the outgroup.
A 2021 study using molecular evidence found the same relationships among Sequoioideae species, but found Sequoioideae to be the sister group to the Athrotaxidoideae (a subfamily presently known only from Tasmania) rather than to Taxodioideae. Sequoioideae and Athrotaxidoideae are thought to have diverged from each other during the Jurassic.
Possible reticulate evolution in Sequoioideae
Reticulate evolution refers to the origination of a taxon through the merging of ancestor lineages.
Polyploidy has come to be understood as quite common in plants, with estimates ranging from 47% to 100% of flowering plants and extant ferns having derived from ancient polyploidy. Within the gymnosperms, however, it is quite rare. Sequoia sempervirens is hexaploid (2n = 6x = 66). To investigate the origins of this polyploidy, Yang et al. used two single-copy nuclear genes, LFY and NLY, to generate phylogenetic trees. Other researchers have had success with these genes in similar studies on different taxa.
Several hypotheses have been proposed to explain the origin of Sequoia's polyploidy: allopolyploidy by hybridization between Metasequoia and some probably extinct taxodiaceous plant; Metasequoia and Sequoiadendron, or ancestors of the two genera, as the parental species of Sequoia; and autohexaploidy, autoallohexaploidy, or segmental allohexaploidy.
Yang et al. found that Sequoia clustered with Metasequoia in the tree generated using the LFY gene but with Sequoiadendron in the tree generated with the NLY gene. Further analysis strongly supported the hypothesis that Sequoia was the result of a hybridization event involving Metasequoia and Sequoiadendron. Thus, Yang et al. hypothesize that the inconsistent relationships among Metasequoia, Sequoia, and Sequoiadendron could be a sign of reticulate evolution by hybrid speciation (in which two species hybridize and give rise to a third) among the three genera. However, the long evolutionary history of the three genera (the earliest fossil remains date from the Jurassic) makes it difficult to resolve once and for all when and how Sequoia originated, especially since the answer depends in part on an incomplete fossil record.
Extant species
Metasequoia glyptostroboides - Dawn redwood; south-central China.
Sequoiadendron giganteum - Giant sequoia, Giant redwood; western slopes of the Sierra Nevada; California.
Sequoia sempervirens - Coastal Redwood, California redwood; Northern California coast and extreme Southern Oregon.
Paleontology
Sequoioideae is an ancient taxon, with the oldest described Sequoioideae species, Sequoia jeholensis, recovered from Jurassic deposits. The fossil wood Medulloprotaxodioxylon, reported from the late Triassic of China, resembles Sequoiadendron giganteum and may represent an ancestral form of the Sequoioideae; this supports the idea of a Late Triassic Norian origin for this subfamily.
The fossil record shows a massive expansion of range in the Cretaceous and dominance of the Arcto-Tertiary Geoflora, especially in northern latitudes. Genera of Sequoioideae were found in the Arctic Circle, Europe, North America, and throughout Asia and Japan. A general cooling trend beginning in the late Eocene and Oligocene reduced the northern ranges of the Sequoioideae, as did subsequent ice ages. Evolutionary adaptations to ancient environments persist in all three species despite changing climate, distribution, and associated flora, especially the specific demands of their reproduction ecology that ultimately forced each of the species into refugial ranges where they could survive.
The extinct genus Austrosequoia, known from the Late Cretaceous-Oligocene of the Southern Hemisphere, including Australia and New Zealand, has been suggested as a member of the subfamily.
Conservation
In 2024, it was estimated that there were about 500,000 redwoods in Britain, mostly brought as seeds and seedlings from the US in the Victorian era. The entire subfamily is endangered. The IUCN Red List Category & Criteria assesses Sequoia sempervirens as Endangered (A2acd), Sequoiadendron giganteum as Endangered (B2ab) and Metasequoia glyptostroboides as Endangered (B1ab). In 2024 it was reported that over a period of two years about one-fifth of all giant sequoias were destroyed in extreme wildfires in California.
See also
Temperate cloud forest of North American west coast (Sequoia forests)
References
Bibliography and links
Liste der dicksten Mammutbäume in Deutschland: list of large giant redwoods in Germany (German Wikipedia)
IUCN 2013. IUCN Red List of Threatened Species. Version 2013.2. Downloaded on 10 January 2014.
External links
Plant subfamilies | Sequoioideae | Biology | 1,419 |
18,079,073 | https://en.wikipedia.org/wiki/Mark%20W.%20Spong | Mark W. Spong (born November 5, 1952, in Warren, Ohio) is an American roboticist. He is a professor of systems engineering and electrical and computer engineering in the Erik Jonsson School of Engineering & Computer Science at the University of Texas at Dallas (UTD). He served as dean of the Jonsson School and the Lars Magnus Ericsson Chair in Electrical Engineering from 2008 to 2017. Before he joined UTD, he was the Donald Biggar Willett Professor of Engineering, professor of electrical engineering, research professor of Coordinated Science Laboratory and Information Trust Institute, and director of Center for Autonomous Engineering Systems and Robotics at the University of Illinois at Urbana-Champaign.
Spong is a Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He has received numerous awards for his research and teaching, including
the 2020 Rufus Oldenburger Medal from the American Society of Mechanical Engineers
the 2018 Bode Lecture Prize from the IEEE Control Systems Society
the 2016 Nyquist Lecture Prize from the Dynamical Systems and Control Division of the ASME
the 2007 IROS Fumio Harashima Award for Innovative Technologies
the O. Hugo Schuck Award in 2002 and 2009 from the American Automatic Control Council
the 2004 John R. Ragazzini Award from the American Automatic Control Council
the IEEE Third Millennium Medal.
Spong received his B.A. in mathematics and physics from Hiram College in 1975, an M.S. in mathematics from New Mexico State University in 1977, and an M.S. and a D.Sc. in systems science and mathematics from Washington University in St. Louis in 1979 and 1981, respectively.
Publications
2015. Passivity-Based Control in Networked Robotics, with Takeshi Hatanaka, Nikhil Chopra, and Masayuki Fujita
2007. The Reaction Wheel Pendulum, with D. J. Block and K. J. Astrom
2006. Robot modeling and control. with S. Hutchinson and M. Vidyasagar
1993. Robot control : dynamics, motion planning, and analysis with F.L. Lewis and C.T. Abdallah (ed.)
1989. Robot dynamics and control. with M. Vidyasagar
External links
Home page
1952 births
Control theorists
Hiram College alumni
New Mexico State University alumni
Washington University in St. Louis alumni
Washington University in St. Louis mathematicians
University of Illinois Urbana-Champaign faculty
University of Texas at Dallas faculty
Living people
American roboticists | Mark W. Spong | Engineering | 488 |
1,581,427 | https://en.wikipedia.org/wiki/Bereitschaftspotential | In neurology, the Bereitschaftspotential or BP (German for "readiness potential"), also called the pre-motor potential or readiness potential (RP), is a measure of activity in the motor cortex and supplementary motor area of the brain leading up to voluntary muscle movement. The BP is a manifestation of cortical contribution to the pre-motor planning of volitional movement. It was first recorded and reported in 1964 by Hans Helmut Kornhuber and Lüder Deecke at the University of Freiburg in Germany. In 1965 the full publication appeared after many control experiments.
Discovery
In the spring of 1964 Hans Helmut Kornhuber (then docent and chief physician at the department of neurology, head Professor Richard Jung, university hospital Freiburg im Breisgau) and Lüder Deecke (his doctoral student) went for lunch to the 'Gasthaus zum Schwanen' at the foot of the Schlossberg hill in Freiburg. Sitting alone in the beautiful garden they discussed their frustration with the passive brain research prevailing worldwide and their desire to investigate self-initiated action of the brain and the will. Consequently, they decided to look for cerebral potentials in man related to volitional acts and to take voluntary movement as their research paradigm.
The possibility to do research on electrical brain potentials preceding voluntary movements came with the advent of the 'computer of average transients' (CAT computer), invented by Manfred Clynes; a first, still simple instrument of this kind was available at that time in the Freiburg laboratory. In the electroencephalogram (EEG), little is to be seen preceding actions, except for an inconstant diminution of the α- (or μ-) rhythm. The young researchers stored the electroencephalogram and electromyogram of self-initiated movements (fast finger flexions) on tape and analyzed the cerebral potentials preceding the movements in time-reversed order, with the start of the movement as the trigger, literally turning the tape over for analysis since they had no reversal playback or programmable computer. A potential preceding human voluntary movement was discovered and published in the same year. After detailed investigation and control experiments, such as passive finger movements, the Citation Classic introducing the term Bereitschaftspotential was published.
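A minimal sketch of this reverse-triggered averaging in modern terms, assuming a single digitized EEG channel and a list of EMG-derived movement onset samples (the function name and parameters are illustrative):

```python
import numpy as np

def back_average(eeg, onset_samples, fs, pre_s=1.5, post_s=0.2):
    """Average EEG epochs time-locked to self-initiated movement onsets.

    eeg           : 1-D array, one EEG channel
    onset_samples : sample indices of EMG-defined movement onsets (t = 0)
    fs            : sampling rate in Hz
    """
    n_pre, n_post = int(pre_s * fs), int(post_s * fs)
    epochs = [eeg[t - n_pre:t + n_post] for t in onset_samples
              if t >= n_pre and t + n_post <= len(eeg)]
    # The readiness potential is far smaller than the background rhythms,
    # so it only emerges once many time-locked epochs are averaged.
    return np.mean(np.stack(epochs), axis=0)
```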
Mechanism
The BP is ten to a hundred times smaller than the α-rhythm of the EEG; it becomes apparent only by averaging, time-locking the electrical potentials to the onset of the movement. The figure shows the typical slow shifts of the cortical DC potential, called the Bereitschaftspotential, preceding volitional, rapid flexions of the right index finger. The vertical line indicates the instant of triggering, t = 0 (first activity in the EMG of the agonist muscle). Recording positions are left precentral (L prec, C3), right precentral (R prec, C4), and mid-parietal (Pz); these are unipolar recordings with linked ears as reference. The difference between the BP in C3 and in C4 is displayed in the lowest graph (L/R prec). Superimposed are the results of eight experiments obtained in the same subject (B.L.) on different days; see Deecke, L.; Grözinger, B.; Kornhuber, H.H. (1976).
Note that the BP has two components: the early one (BP1) lasting from about −1.2 to −0.5 s, and the late component (BP2) from −0.5 s to shortly before 0 s. The pre-motion positivity is even smaller, and the motor potential, which starts about fifty to sixty milliseconds before the onset of movement and has its maximum over the contralateral precentral hand area, is smaller still. Thus, it takes great care to see these potentials: exact triggering by the real onset of movement is important, which is especially difficult preceding speech movements. Furthermore, artifacts due to head, eye, lid and mouth movements and respiration have to be eliminated before averaging, because such artifacts may be of a magnitude that makes it difficult to render them negligible even after hundreds of sweeps. In the case of eye movements, eye muscle potentials have to be distinguished from cerebral potentials. In some cases animal experiments were necessary to clarify the origin of potentials such as the R-wave. Therefore, it took many years until some of the other laboratories were able to confirm the details of Kornhuber & Deecke's results. In addition to the finger and eye movements mentioned above, the BP has been recorded accompanying willful movements of the wrist, arm, shoulder, hip, knee, foot and toes. It has also been recorded prior to speaking, writing and swallowing.
The magnetoencephalographic (MEG) equivalent of the Bereitschaftspotential (BP), the 'Bereitschafts(magnetic)field' (BF) or readiness field (RF), was first recorded in Hal Weinberg's laboratory at Simon Fraser University, Burnaby, B.C., Canada, in 1982. It was confirmed that the early component, BP1 or BF1, is generated by the supplementary motor area (SMA), including the pre-SMA, while the late component, BP2 or BF2, is generated by the primary motor area, MI.
A very similar event-related potential (ERP) component had earlier been discovered by the British neurophysiologist William Grey Walter in 1962 and published in 1964: the contingent negative variation (CNV). The CNV also comprises two waves, the initial wave (the O wave) and the terminal wave (the E wave). The terminal CNV has characteristics similar to the BP, and many researchers have claimed that the BP and the terminal CNV are the same component. At least there is a consensus that both indicate a preparation of the brain for a following behavior.
Outcomes
The Bereitschaftspotential was received with great interest by the scientific community, as reflected by Sir John Eccles's comment: "There is a delightful parallel between these impressively simple experiments and the experiments of Galileo Galilei who investigated the laws of motion of the universe with metal balls on an inclined plane". The interest was even greater in psychology and philosophy, because volition is traditionally associated with human freedom (cf. Kornhuber 1984). The spirit of the time, however, was hostile to freedom in those years; it was believed that freedom is an illusion. The traditions of behaviourism and Freudianism were deterministic. While will and volition were frequently leading concepts in psychological research papers before and after the First World War, and even during the Second, after the end of the Second World War this declined, and by the mid-sixties these key words had completely disappeared and were abolished in the thesaurus of the American Psychological Association. The BP is an electrical sign of the participation of the supplementary motor area (SMA) prior to volitional movement; the SMA starts activity before the primary motor area. The BP has precipitated a worldwide discussion about free will (cf. the closing chapter in the book "The Bereitschaftspotential").
As said above, the activity of the SMA generates the early component of the Bereitschaftspotential (BP1 or BP early). The SMA has the starting function of the movement or action. The role of the SMA was further substantiated by Cunnington et al. 2003, showing that SMA proper and pre-SMA are active prior to volitional movement or action, as well as the cingulate motor area (CMA). This is now called ‘anterior mid-cingulate cortex (aMCC)’. Recently it has been shown by integrating simultaneously acquired EEG and fMRI that SMA and aMCC have strong reciprocal connections that act to sustain each other’s activity, and that this interaction is mediated during movement preparation according to the Bereitschaftspotential amplitude.
EEGs and EMGs are used in combination with Bayesian inference to construct Bayesian networks that attempt to predict general patterns of motor-intent neuron action potential firing. Researchers attempting to develop non-intrusive brain–computer interfaces are interested in this, as are researchers in systems analysis, operations research, and epistemology (e.g., the Smith predictor has been suggested in the discussion).
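As an illustration of the statistical idea (not of any specific published system), a hedged sketch of Bayesian classification of single epochs as "pre-movement" versus "rest", using one scalar feature, the mean amplitude over the last half second, with Gaussian class models:

```python
import numpy as np

def fit_gaussian(features):
    """Mean and variance of a 1-D feature across training epochs."""
    f = np.asarray(features)
    return f.mean(), f.var() + 1e-12

def p_premovement(x, rest_model, pre_model, prior_pre=0.5):
    """Posterior probability that an epoch precedes a movement.

    Applies Bayes' rule with Gaussian likelihoods for the two classes.
    """
    def gauss(x, m, v):
        return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    num = gauss(x, *pre_model) * prior_pre
    den = num + gauss(x, *rest_model) * (1 - prior_pre)
    return num / den

# Pre-movement epochs carry a small negative DC shift (the BP):
rest = fit_gaussian(np.random.normal(0.0, 1.0, 200))
pre = fit_gaussian(np.random.normal(-0.8, 1.0, 200))
print(p_premovement(-1.0, rest, pre))  # > 0.5: likely pre-movement
```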
BP and free will
In a series of neuroscience of free will experiments in the 1980s, Benjamin Libet studied the relationship between the conscious experience of volition and the BP, and found that the BP started about 0.35 s earlier than the subject's reported conscious awareness that "now he or she feels the desire to make a movement." Libet concluded that we have no free will in the initiation of our movements; though, since subjects were able to prevent intended movement at the last moment, we do have the ability to veto these actions ("free won't").
These studies have provoked widespread debate.
In 2016, a group around John-Dylan Haynes in Berlin, Germany, determined the time window after the BP in which an intended motion can still be cancelled upon command. The authors tested whether human volunteers could win a "duel" against a BCI (brain–computer interface) designed to predict their movements in real time from observations of their EEG activity (the BP). They aimed to determine the exact time at which cancellation (veto) of movements was no longer possible (the point of no return). The computer was trained to predict, by means of the BP, when a subject would move. The point of no return was at 200 ms before the movement. Even after that, when a pedal had already been set in motion, the subjects were able to reschedule their action by not completing the already started behavior. The authors pointed out in their report that cancellation of self-initiated movements had already been reported by Libet in 1985. Thus, the new achievement was a more precise determination of the point of no return.
Applications
An interesting use of the Bereitschaftspotential is in brain–computer interface (BCI) applications; this signal feature can be identified from scalp recordings (even from single-trial measurements) and interpreted for various uses, for example control of computer displays or control of peripheral motor units in spinal cord injuries. The most important BCI application is the 'mental' steering of artificial limbs in amputees.
See also
C1 and P1
Contingent negative variation
Difference due to memory
Early left anterior negativity
Epiphenomenalism
Error-related negativity
Late positive component
Lateralized readiness potential
Mismatch negativity
N2pc
N100
N170
N200
N400
P3a
P3b
P200
P300 (neuroscience)
P600
Somatosensory evoked potential
Visual N1
References
Further reading
Brunia CHM, van Boxtel GJM, Böcker KBE: Negative Slow Waves as Indices of Anticipation: The Bereitschaftspotential, the Contingent Negative Variation, and the Stimulus-Preceding Negativity. In: Steven J. Luck, Emily S. Kappenman (Eds.): The Oxford Handbook of Event-Related Potential Components. Oxford University Press, USA 2012, , p. 189-207.
Deecke, L.; Kornhuber, H.H. (2003). Human freedom, reasoned will, and the brain. The Bereitschaftspotential story. In: M Jahanshahi, M Hallett (Eds.): The Bereitschaftspotential, movement-related cortical potentials. Kluwer Academic / Plenum Publishers pp. 283–320.
Kornhuber HH; Deecke L (2012) The Will and Its Brain: An Appraisal of Reasoned Free Will. University Press of America, Lanham MD USA
Wise SP: Movement selection, preparation, and the decision to act: neurophysiological studies in nonhuman primates. In: Marjan Jahanshahi, Mark Hallett (Eds.): The Bereitschaftspotential: Movement-Related Cortical Potentials. Kluwer Academic / Plenum Publishers, New York 2003, , pp. 249–268.
Nann M, Cohen LG, Deecke L & Soekadar SR: To jump or not to jump – The Bereitschaftspotential required to jump into 192-meter abyss. Scientific Reports (2019) 9:2243 https://doi.org/10.1038/s41598-018-38447-w
External links
http://www.cmds.canterbury.ac.nz/documents/huckabee_swallowing.pdf
http://www.cs.washington.edu/homes/rao/shenoy_rao05.pdf
Somatic motor system
History of neuroscience
Brain–computer interface
Motor control
Electroencephalography
Evoked potentials | Bereitschaftspotential | Biology | 2,710 |
10,848,810 | https://en.wikipedia.org/wiki/Provider%20edge%20router | A provider edge router (PE router) is a router between one network service provider's area and areas administered by other network providers. The network provider is usually also, or exclusively, an Internet service provider.
The term PE router covers equipment capable of a broad range of routing protocols, notably:
Border Gateway Protocol (BGP) (PE to PE or PE to CE communication)
Open Shortest Path First (OSPF) (PE to CE router communication)
Multiprotocol Label Switching (MPLS) (PE to P router communication)
PE routers do not need to be aware of what kind of traffic is coming from the provider's network, as opposed to a P router, which functions as a transit router within the service provider's network. However, some PE routers also perform MPLS label operations.
See also
Customer edge router
Provider router
References
Routers (computing)
MPLS networking | Provider edge router | Technology | 190 |
1,246,227 | https://en.wikipedia.org/wiki/Security%20theater | Security theater is the practice of implementing security measures that are considered to provide the feeling of improved security while doing little or nothing to achieve it.
The term was originally coined by Bruce Schneier for his book Beyond Fear and has since been widely adopted by the media and the public, particularly in discussions surrounding the United States Transportation Security Administration (TSA).
Practices criticized as security theater include airport security measures, stop and frisk policies on public transportation, and clear bag policies at sports venues.
Etymology
The term security theater was coined by computer security specialist and writer Bruce Schneier for his book Beyond Fear, but has gained currency in security circles, particularly for describing airport security measures.
Examples
Some measures which have been called security theater include:
Airport security measures
Many procedures of the TSA have been criticized as security theater. Specific measures critiqued as security theater include the "patting down the crotches of children, the elderly and even infants as part of the post-9/11 airport security show" and the use of full body scanners, which "are ineffective and can be easily manipulated." Many measures are put in place in reaction to past threats and "are ineffective at actually stopping terrorism, as potential attackers can simply change tactics."
The use of Computer Assisted Passenger Prescreening System (CAPPS) and its successor, Secure Flight – programs which rely on static screening of airline passenger profiles to choose which people should be searched – has been criticized as ineffective security theater. The TSA's Registered Traveler Program and Trusted Traveler Program have been criticized on similar grounds. The CAPPS has been demonstrated to reduce the effectiveness of searching below that of random searches, since terrorists can test the system and use those who are searched least often for their operations.
A 2010 United States Government Accountability Office (GAO) report found that the TSA's $900 million Screening Passengers by Observation Techniques (SPOT) program, a behavioral-detection program introduced in 2007 that is aimed at detecting terrorists, had detected no terrorists and failed to detect at least 16 people who had traveled through airports where the program was in use and were later involved in terrorism cases. In 2013, a GAO report found that no evidence existed to support the idea that "behavioral indicators ... can be used to identify persons who may pose a risk to aviation security." A separate 2013 report by the Department of Homeland Security Office of Inspector General found that the TSA had failed to evaluate the SPOT program and could not "show that the program is cost effective." The SPOT program has been described as security theater.
With the aim of preventing individuals on a No Fly List from flying in commercial airliners, U.S. airports require all passengers to show valid picture ID (e.g. a passport or driver's license) along with their boarding pass before entering the boarding terminal. At this checkpoint, the name on the ID is matched to that on the boarding pass, but is not recorded. In order to be effective, this practice must assume that 1) the ticket was bought under the passenger's real name (at which point the name was recorded and checked against the No Fly List), 2) the boarding pass shown is real, and 3) the ID shown is real. However, the rise of print-at-home boarding passes, which can be easily forged, allows a potential attacker to buy a ticket under someone else's name, to go into the boarding terminal using a real ID and a fake boarding pass, and then to fly on the ticket that has someone else's name on it. Additionally, a 2007 investigation showed that obviously false IDs could be used when claiming a boarding pass and entering the departures terminal, so a person on the No Fly List can simply travel under a different name.
Facial recognition technology was introduced at Manchester Airport in August 2008. A journalist for The Register claimed that "the gates in Manchester were throwing up so many false results that staff effectively turned them off." Previously, a face had to match its passport picture at an 80% similarity threshold to go through; this was quickly lowered to 30%. According to Rob Jenkins, a facial recognition expert at Glasgow University, when testing similar machines at a 30% recognition level, the machines were unable to distinguish between the faces of Osama bin Laden and Winona Ryder, bin Laden and Kevin Spacey, or Gordon Brown and Mel Gibson.
Random search programs on public transit and sports venues
Random bag searches on subway systems – a practice that has been used on the Washington Metro and on New York City mass transit – have been condemned as ineffective security theater and a waste of resources. Such programs have also been criticized by members of the public and civil liberties groups. After eighteen months of random bag checks by the Metro Transit Police from December 2010, the Washington Metropolitan Area Transit Authority reported that the program, which was funded by a federal homeland security grant, had yielded zero arrests.
Similarly, the Chicago Transit Authority police's deployment of random explosive-residue-swabbing checkpoints at public transit stations has been criticized as an ineffective means of security. Pat-downs of fans entering arenas for National Football League games and metal detectors at Major League Baseball games have also been criticized as security theater. Additionally, the effectiveness of clear and large bag policies at many major sports venues in the United States has been questioned repeatedly.
Other
During the COVID-19 pandemic, some measures such as surface sanitation and temperature checks at airports have been criticized as being security theater or "hygiene theater".
Credit card signatures have been a longstanding subject of scrutiny and are generally regarded as a theatrical measure, as they have been notably criticized for having no true effect on deterring or stopping credit card fraud.
Benefits
While it may seem that security theater must always cause loss, it may be beneficial, at least in a localized situation. This is because perception of security is sometimes more important than security itself. If the potential victims of an attack feel more protected and safer as a result of the measures, then they may carry on activities they would have otherwise avoided, which could lead to economic benefits.
Disadvantages
By definition, security theater practices provide no measurable security benefits, or minimal benefits that do not outweigh the cost of such practices. Security theater typically involves restricting or modifying aspects of people's behavior or surroundings in very visible and highly specific ways, which could involve potential restrictions of personal liberty and privacy, ranging from mild inconveniences such as confiscating liquids over a limited amount, to sensitive issues, such as a full body strip search.
Critics such as the American Civil Liberties Union (ACLU) have argued that the benefits of security theater are temporary and illusory since after such security measures inevitably fail, not only is the feeling of insecurity increased, but there is also loss of belief in the competence of those responsible for security.
Organizations such as the US TSA, who have implemented security theater practices, have been found to be highly ineffective, with one 2015 investigation resulting in TSA agents failing to prevent illegal items in 95% of trials. A follow up study in 2017 found similar results, though the TSA did not release an exact rate of success or failure.
Researchers such as Edward Felten have described the airport security repercussions due to the September 11, 2001 attacks as security theater.
Increased casualties
In 2007, researchers at Cornell University studied the specific effects of a change to security practices instituted by the TSA in late 2002. They concluded that this change reduced the number of air travelers by 6%, and estimated that consequently, 129 more people died in car accidents in the fourth quarter of 2002. Extrapolating this rate of fatalities, New York Times contributor Nate Silver remarked that this is equivalent to "four fully loaded Boeing 737s crashing each year."
Economic costs
The 2007 Cornell University study also noted that strict airport security hurts the airline industry; it was estimated that the 6% reduction in the number of passengers in the fourth quarter of 2002 cost the industry $1.1 billion in lost business.
The ACLU has reported that between October 2008 and June 2010, over 6,500 people traveling to and from the United States had their electronic devices searched at the border. The Association of Corporate Travel Executives (ACTE), whose member companies are responsible for over 1 million travelers and represent over $300 billion in annual business travel expenditures, reported in February 2008 that 7% of their members had been subject to the seizure of a laptop or other electronic device. Electronic device seizure may have a severe economic and behavioral impact. Entrepreneurs for whom their laptop represents a mobile office can be deprived of their entire business. Fifty percent of the respondents to ACTE's survey indicated that having a laptop seizure could damage a traveler's professional standing within a company.
The executive director of the ACTE testified at a 2008 hearing of the Senate Judiciary Subcommittee on the Constitution that seizure of data or computers carrying proprietary business information has forced, and will force, companies to implement new and expensive internal travel policies.
Increased risk of targeted attacks
The direct costs of security theater may be lower than those of more elaborate security measures. However, it may divert portions of the budget for effective security measures without resulting in an adequate, measurable gain in security.
Because security theater measures are often so specific (such as concentrating on potential explosives in shoes), it allows potential attackers to divert to other methods of attack. This not only applies to the extremely specific measures, but can also involve possible tactics such as switching from using highly scrutinized airline passengers as attackers to getting attackers employed as airline or airport staff. Another alternate tactic would be simply avoiding attacking aircraft in favor of attacking other areas where sufficient damage would be done, such as check-in counters (as was done, for example, in the attacks on Brussels airport on 22 March 2016), or simply targeting other places where people gather in large numbers, such as cinemas.
Discriminatory practices
An additional disadvantage of security theater is the potential for biases to lead to negative outcomes and unequal treatment for certain groups. Airport racial profiling in the United States is an issue that largely began in the wake of the September 11 attacks on the United States, and persists today.
Documents uncovered by the ACLU found that until late 2012, the US TSA maintained training manuals that exclusively focused on examples of Arab or Muslim terrorists. In 2022, the US GAO found that advanced imaging technologies by the TSA disproportionately selected passengers of minority groups for additional screening, and a follow up report in 2023 found the same issue.
The ACLU also filed a 2015 lawsuit against the TSA's SPOT program, and was successful in obtaining thousands of pages of documents regarding the program. The ACLU dropped their lawsuit against the TSA in 2017, but a report published by the organization, as well as reports published by the US GAO and a scientific advisory group found that the SPOT program had no scientific basis for effectiveness.
See also
Christopher Soghoian – creator of a website that generated fake airline boarding passes
Hygiene theater
Placebo effect
Target hardening
Watching-eye effect
Dramaturgy (sociology)
References
External links
Crypto-Gram, Bruce Schneier's newsletter
Sometimes, Security Theater Really Works Gadi Evron and Imri Goldberg argue that security theater saves lives
Aviation security
Airport infrastructure
Deception | Security theater | Engineering | 2,297 |
60,144 | https://en.wikipedia.org/wiki/Wireless%20community%20network | Wireless community networks, also called wireless community projects or simply community networks, are decentralized, self-managed, and collaborative networks organized in a grassroots fashion by communities, non-governmental organizations, and cooperatives in order to provide a viable alternative to municipal wireless networks for consumers.
Many of these organizations set up wireless mesh networks which rely primarily on sharing of unmetered residential and business DSL and cable Internet. This sort of usage may be non-compliant with the terms of service of local Internet service providers (ISPs) that deliver their service via the consumer phone and cable duopoly. Wireless community networks sometimes advocate complete freedom from censorship, and this position may be at odds with the acceptable use policies of some commercial services used. Some ISPs do allow sharing or reselling of bandwidth.
The First Latin American Summit of Community Networks, held in Argentina in 2018, presented the following definition for the term "community network": "Community networks are networks collectively owned and managed by the community for non-profit and community purposes. They are constituted by collectives, indigenous communities or non-profit civil society organizations that exercise their right to communicate, under the principles of democratic participation of their members, fairness, gender equality, diversity and plurality".
According to the Declaration on Community Connectivity, elaborated through a multistakeholder process organized by the Internet Governance Forum's Dynamic Coalition on Community Connectivity, community networks are recognised by a list of characteristics: Collective ownership; Social management; Open design; Open participation; Promotion of peering and transit; Promotion of the consideration of security and privacy concerns while designing and operating the network; and promotion of the development and circulation of local content in local languages.
History
Wireless community networks started as projects that evolved from amateur radio using packet radio, and from the free software community, which substantially overlapped with the amateur radio community. Wireless neighborhood networks were established by technology enthusiasts in the early 2000s. The Redbricks Intranet Collective (RIC) started in 1999 in Manchester, UK, to allow about 30 flats in the Bentley House Estate to share the subscription cost of one leased line from British Telecom (BT). Wi-Fi was quickly adopted by technology enthusiasts and hobbyists, because it was an open standard and consumer Wi-Fi hardware was comparatively cheap.
Wireless community networks started out by turning wireless access points designed for short-range use in homes into multi-kilometre long-range Wi-Fi by building high-gain directional antennas. Rather than buying commercially available units, some of the early groups advocated home-built antennas. Examples include the cantenna and RONJA, an optical link that can be made from a smoke flue and LEDs. The circuitry and instructions for such DIY networking antennas were released under the GNU Free Documentation License (GFDL). Municipal wireless networks, funded by local governments, started being deployed from 2003 onward.
Regarding the international policy scenario, discussions on Community Networks have gained prominence over the last few years, especially since the creation of the Internet Governance Forum's Dynamic Coalition on Community Connectivity in 2016, providing "a much needed platform through which various individuals and entities interested in the advancement of CNs have the possibility to associate, organise and develop, in a bottom-up participatory fashion collective 'principles, rules, decision-making procedures and shared programs that give shape to the evolution and use of the Internet.'".
Early community projects
By 2003, a number of wireless community projects had established themselves in urban areas across North America, Europe and Australia. In June 2000, Melbourne Wireless Inc. was established in Melbourne, Australia, as a not-for-profit project to build a metropolitan area wireless network using off-the-shelf 802.11 wireless equipment. By 2003, it had 1,200 hotspots. In 2000, Seattle Wireless was founded with the stated aim of providing free Wi-Fi access and sharing the cost of Internet connectivity in Seattle, USA. By April 2011, it had 80 free wireless access points all over Seattle and was steadily growing.
In August 2000, Consume was founded in London, England, as a "collaborative strategy for the self provisioning of a broadband telecommunications infrastructure". Founded by Ben Laurie and others, Consume aimed to build a wireless infrastructure as an alternative to the monopoly-held wired metropolitan area network. Besides providing Wi-Fi access in East London, Consume installed a large antenna on the roof of the former Greenwich Town Hall and documented the state of wireless connections in London. Consume created political pressure on municipal authorities by staging public events and exhibitions, encouraging consumers to set up wireless equipment, and setting up temporary Wi-Fi hotspots at events in East London. While Consume generated sustained media attention, it did not establish a lasting wireless community network.
The Wireless Leiden hobbyist project was established in September 2001 and constituted as a non-profit foundation in 2003 with more than 300 active users. The Wireless Leiden foundation aimed to facilitate the cooperation of local government, businesses and residents to provide wireless networking in Leiden, Netherlands. The first wireless community network in Spain was RedLibre, founded in September 2001 in Madrid. By 2002, RedLibre coordinated the efforts of 15 local wireless groups and maintained free RedLibre Wi-Fi hotspots in five cities. RedLibre has been credited with facilitating the widespread availability of WLAN in the urban areas of Spain.
In Italy, Ninux.org was founded by students and hackers in 2001 to create a grassroots wireless network in Rome, similar to Seattle Wireless. A turning point for Ninux was the lowering of prices for consumer wireless equipment, such as antennas and routers, in 2008. Ninux volunteers installed an increasing number of antennas on the roofs of Rome. The network served as an example for other urban community wireless networks in Italy. By 2016, similar wireless networks had been installed in Florence, Bologna, Pisa and Cosenza. While they share common technical and organizational frameworks, the working groups supporting these urban wireless community networks are driven by the different needs of the city in which they operate.
Houston Wireless was founded in summer 2001 as the Houston Wireless Users Group. The telecommunications providers were slow to roll out third-generation wireless (3G), so Houston Wireless was established to promote high-speed wireless access across Houston and its suburbs. Houston Wireless experimented with network protocols such as IPsec, mobile IP and IPv6, as well as wireless technologies, including 802.11a, 802.11g and ultra-wideband (UWB). By 2003, it had 30 WLAN hotspots and 100 people on its mailing lists, and its monthly meetings were attended by about 25 people.
NYCwireless was established in New York City in May 2001 to provide public hotspots and promote the use of consumer-owned, unlicensed, low-cost wireless networking equipment. In order to get more public Wi-Fi hotspots installed, NYCwireless contracted with the for-profit company Cloud Networks, which was staffed by some of the founding members of the NYCwireless community project. In the aftermath of the September 11 attacks in 2001, NYCwireless helped to provide emergency communication by quickly assembling and deploying free Wi-Fi hotspots in areas of New York City that had no other telecommunications. In summer 2002, the Bryant Park wireless network became the flagship project of NYCwireless, with about 50 users every day. By 2003, NYCwireless had more than 100 active hotspots throughout New York City.
Early projects in rural areas
In 2000, guifi.net was founded because commercial internet service providers did not build a broadband Internet infrastructure in rural Catalonia. Guifi.net was conceived as a wireless mesh network, where households can become a node in the network by operating a radio transmitter. Not every node needs to be a wireless router, but the network relies on some volunteers being connected to the Internet and sharing that access with others. In 2017 guifi.net had 23,000 nodes and was described as the biggest mesh network in the world.
In 2001, BCWireless was founded to help communities in British Columbia, Canada, set up local Wi-Fi networks. BCWireless hobbyists experimented with IEEE 802.11b wireless networks and antennas to extend the range and strength of the signal, allow bandwidth sharing among local group members, and establish wireless mesh networks. The Lac Seul First Nation communities set up their own Wi-Fi network and constituted the non-profit K-Net to manage a wireless network based on IEEE 802.11g, providing the entire reserve with Wi-Fi using the unlicensed spectrum in combination with licensed spectrum at 3.5 GHz.
Co-operation between community networks
For the most part, early wireless community projects had a local scope, but many still had a global awareness. In 2003, wireless community networks initiated the Pico Peering Agreement (PPA) and the Wireless Commons Manifesto. The two initiatives defined attempts to build an infrastructure so that local wireless mesh networks could become extensive wireless ad hoc networks across local and national boundaries. In 2004, Freifunk released the OpenWrt-based firmware FFF for Wi-Fi devices that participate in a community network, which included a PPA, so that the owner of a node agrees to provide free transit across the network.
Technical approach
There are at least three technical approaches to building a wireless community network:
Cluster: Advocacy groups which simply encourage sharing of unmetered internet bandwidth via Wi-Fi, may also index nodes, suggest uniform SSID (for low-quality roaming), supply equipment, DNS services, etc.
Wireless mesh network: Technology groups which coordinate building a mesh network to provide Wi-Fi access to the internet
Device-as-infrastructure: In 2013 the Open Technology Institute released the Commotion Wireless mesh network firmware, which allows Wi-Fi enabled mobile phones and computers to join a wireless community network by establishing a peer-to-peer network that still works when not connected to the wide area network.
Firmware
Wireless equipment, like many other consumer electronics, comes with hard-to-alter firmware that is preinstalled by the manufacturer. When the Linksys WRT54G series was launched in 2003 with firmware based on an open-source Linux kernel, it immediately became the subject of hacks and became the most popular hardware among community wireless volunteers. In 2005, Linksys released the WRT54GL, a version of the router that retained the Linux-based firmware, making it even easier for customers to modify. Community network hackers experimented with increasing the transmission power of the Linksys WRT54G or increasing the clock speed of the CPU to speed up data transmission.
Hobbyists got another boost when, in 2004, the OpenWrt firmware was released as an open-source alternative to proprietary firmware. The Linux-based embedded operating system could be used on embedded devices to route network traffic. Through successive versions, OpenWrt eventually came to work on several hundred types of wireless devices and Wi-Fi routers. OpenWrt was named in honor of the WRT54G. The OpenWrt developers provided extensive documentation and the ability to include one's own code in the OpenWrt source code and compile the firmware.
In 2004, Freifunk released the FFF firmware for wireless community projects, which modified OpenWrt so that the node could be configured via a web interface, and added features to better support a wireless ad hoc network: traffic shaping, statistics, Internet gateway support and an implementation of the Optimized Link State Routing Protocol (OLSR). A Wi-Fi access point booting the FFF firmware joined the network by automatically announcing its Internet gateway capabilities to other nodes using OLSR HNA4. When a node disappeared, the other nodes registered the change in the network topology through the discontinuation of its HNA4 announcements. At the time, Freifunk in Berlin had 500 Wi-Fi access points and about 2,200 Berlin residents used the network free of charge. The Freifunk FFF firmware is among the oldest approaches to establishing a wireless mesh network at significant scale. Other early attempts at developing an operating system for wireless devices that supported large-scale wireless community projects were Open-Mesh and Netsukuku.
In 2006, Meraki Networks Inc was founded. The Meraki hardware and firmware had been developed as part of a PhD research project at the Massachusetts Institute of Technology to provide wireless access to graduate students. For years, the low-cost Meraki products fueled the growth of wireless mesh networks in 25 countries. Early Meraki-based wireless community networks included the Free-the-Net Meraki mesh in Vancouver, Canada. The Vancouver Open Network Initiatives Cooperative, constituted in 2006 as a legal co-operative, charged its members five Canadian dollars per month to access the community wireless network provided by individuals who attached Meraki nodes to their home wireless connection, sharing bandwidth with any cooperative members nearby and participating in a meshed wireless network.
Community network software
By 2003, the Sydney Wireless community project had launched the NodeDB software to facilitate the work of community networks by mapping the nodes participating in a wireless mesh network. Nodes needed to be registered in the database, and the software generated a list of adjacent nodes. When registering a node that participated in a community network, the maintainer of the node could leave a note on the hardware, antenna reach and firmware in operation, and so find other network community members who were willing to participate in a mesh.
Organization
Organizationally, a wireless community network requires either a set of affordable commercial technical solutions or a critical mass of hobbyists willing to tinker to maintain operations. Mesh networks require that a high level of community participation and commitment be maintained for the network to be viable. The mesh approach currently requires uniform equipment. One market-driven aspect of the mesh approach is that users who receive a weak mesh signal can often convert it to a strong signal by obtaining and operating a repeater node, thus extending the network.
Such volunteer organizations focusing on technology that is rapidly advancing sometimes have schisms and mergers. The Wi-Fi service provided by such groups is usually free and without the stigma of piggybacking. An alternative to the voluntary model is to use a co-operative structure.
Business models
Wireless community projects made volunteer bandwidth-sharing technically feasible and have been credited with contributing to the emergence of alternative business models in the consumer Wi-Fi market. The commercial Wi-Fi provider Fon was established in 2006 in Spain. Fon customers were equipped with a Linksys Wi-Fi access point running a modified OpenWrt firmware, so that Fon customers shared Wi-Fi access among each other. Public Wi-Fi provisioning through Fon customers was broadened when Fon entered a 50% revenue-sharing agreement with customers who made their entire unused bandwidth available for resale. In 2009, this business model gained broader acceptance when British Telecom allowed its own home customers to sell unused bandwidth to BT and Fon roamers.
Wireless community projects for the most part provide best-effort Wi-Fi coverage. However, since the mid-2000s, local authorities have contracted with wireless community networks to provide municipal wireless networks or stable Wi-Fi access in a defined urban area, such as a park. Wireless community networks started to participate in a variety of public-private partnerships. The non-profit community network ZAP Sherbrooke has partnered with public and private entities to provide Wi-Fi access and received financial support from the University of Sherbrooke and Bishop's University to extend the coverage of its wireless mesh throughout the city of Sherbrooke, Canada.
Regulation
Certain countries regulate the selling of internet access, requiring a license to sell internet access over a wireless network. In South Africa, this is regulated by the Independent Communications Authority of South Africa (ICASA), which requires that WISPs apply for a VANS or ECNS/ECS license before being allowed to resell internet access over a wireless link. The Internet Society's publication "Community Networks in Latin America: Challenges, Regulations and Solutions" provides a summary of regulations regarding community networks among Latin American countries, the United States and Canada.
See also
AWMN
Bryggenet
Community Broadband Network
Computer network
DD-WRT
List of wireless community networks by region
Multiple-input multiple-output communications (MIMO)
Neighborhood Internet service provider
South African wireless community networks
OpenWireless.org, a project by the Electronic Frontier Foundation (EFF)
Roofnet
Wireless LAN Security
nodewatcher, an open-source node database project
References
Wireless Internet service providers | Wireless community network | Technology | 3,342 |
41,213,990 | https://en.wikipedia.org/wiki/Neutrino%20minimal%20standard%20model | The neutrino minimal standard model (often abbreviated as νMSM) is an extension of the Standard Model of particle physics, by the addition of three right-handed neutrinos with masses smaller than the electroweak scale. Introduced by Takehiko Asaka and Mikhail Shaposhnikov in 2005, it has provided a highly constrained model for many topics in physics and cosmology, such as baryogenesis and neutrino oscillations.
References
External links
Brief technical description
Neutrinos
Physics beyond the Standard Model | Neutrino minimal standard model | Physics | 111 |
55,917,496 | https://en.wikipedia.org/wiki/Spt-Ada-Gcn5%20acetyltransferase | Spt-Ada-Gcn5 acetyltransferase (SAGA) complex is a multicomponent regulator of acetylation. This complex has been found to be highly conserved between different organisms, such as humans, Drosophila, and yeast. The 15-subunit complex has been best characterized for its histone acetyltransferase (HAT) activity. The acetylating activity has been found to occur on lysine residues of the N-terminal tails of the H3 and H2B histones. More recently, the complex's activity has been characterized as both the deubiquitination of a monoubiquitin at residue Lys123 of the H2B histone and the acetylation of the H3 histone. The histone acetylation is mediated by the GCN5 histone acetyltransferase, while the deubiquitinating activity is mediated by a deubiquitinating module (DUBm), which is composed of 4 proteins: the Ubp8 ubiquitin hydrolase, Sgf11, Sus1, and Sgf73. This DUB module is an independently folding subcomplex that is connected to the rest of the complex through the C-terminal tail of Sgf73. Sgf73, as well as Sus1, also has a role in facilitating the SAGA complex's role in nuclear export by binding to components of the nuclear pore complex. Even though Ubp8 has a ubiquitin-specific hydrolase (USP) domain, the protein remains inactive unless it is in complex with the other 3 DUBm proteins.
References
Transferases | Spt-Ada-Gcn5 acetyltransferase | Chemistry | 342 |
24,324,535 | https://en.wikipedia.org/wiki/Armillaria%20affinis | Armillaria affinis is a species of agaric fungus in the family Physalacriaceae. This species is found in Central America.
See also
List of Armillaria species
References
affinis
Fungi described in 1989
Fungi of Central America
Fungal tree pathogens and diseases
Fungus species | Armillaria affinis | Biology | 61 |
59,202 | https://en.wikipedia.org/wiki/Object%E2%80%93relational%20mapping | Object–relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between a relational database and the memory (usually the heap) of an object-oriented programming language. This creates, in effect, a virtual object database that can be used from within the programming language.
In object-oriented programming, data-management tasks act on objects that combine scalar values into structured entities. For example, consider an address book entry that represents a single person along with zero or more phone numbers and zero or more addresses. This could be modeled in an object-oriented implementation by a "Person object" with an attribute/field to hold each data item that the entry comprises: the person's name, a list of phone numbers, and a list of addresses. The list of phone numbers would itself contain "PhoneNumber objects" and so on. Each such address-book entry is treated as a single object by the programming language (it can be referenced by a single variable containing a pointer to the object, for instance). Various methods can be associated with the object, such as methods to return the preferred phone number, the home address, and so on.
By contrast, relational databases, such as those queried with SQL, group scalars into tuples, which are then enumerated in tables. Tuples and objects have some general similarity, in that they are both ways to collect values into named fields such that the whole collection can be manipulated as a single compound entity. They have many differences, though, in particular: lifecycle management (row insertion and deletion, versus garbage collection or reference counting), references to other entities (object references, versus foreign key references), and inheritance (non-existent in relational databases). As well, objects are managed on-heap and are under full control of a single process, while database tuples are shared and must incorporate locking, merging, and retry. Object–relational mapping provides automated support for mapping tuples to objects and back, while accounting for all of these differences.
The heart of the problem involves translating the logical representation of the objects into an atomized form that is capable of being stored in the database while preserving the properties of the objects and their relationships so that they can be reloaded as objects when needed. If this storage and retrieval functionality is implemented, the objects are said to be persistent.
Overview
Implementation-specific details of storage drivers are generally wrapped in an API in the programming language in use, exposing methods to interact with the storage medium in a way which is simpler and more in line with the paradigms of surrounding code.
The following is a simple example, written in C#, of executing a query written in SQL using a database engine.
var sql = "SELECT id, first_name, last_name, phone, birth_date, sex, age FROM persons WHERE id = 10";
var result = context.Persons.FromSqlRaw(sql).ToList();
var name = result[0].FirstName; // FromSqlRaw (Entity Framework Core) materializes Person entities, so columns map to typed properties
In contrast, the following makes use of an ORM API, which makes it possible to write code that naturally uses the features of the language.
var person = repository.GetPerson(10);
var firstName = person.GetFirstName();
The case above makes use of an object representing the storage repository and methods of that object. Other frameworks might provide code as static methods, as in the example below, and yet other methods may not implement an object-oriented system at all. Often the choice of paradigm is made for the best fit of the ORM into the surrounding language's design principles.
var person = Person.Get(10);
Comparison with traditional data access techniques
Compared to traditional techniques of exchange between an object-oriented language and a relational database, ORM often reduces the amount of code that needs to be written.
Disadvantages of ORM tools generally stem from the high level of abstraction obscuring what is actually happening in the implementation code. Also, heavy reliance on ORM software has been cited as a major factor in producing poorly designed databases.
Object-oriented databases
Another approach is to use an object-oriented database management system (OODBMS) or document-oriented databases such as native XML databases that provide more flexibility in data modeling. OODBMSs are databases designed specifically for working with object-oriented values. Using an OODBMS eliminates the need for converting data to and from its SQL form, as the data is stored in its original object representation and relationships are directly represented, rather than requiring join tables/operations. The equivalent of ORMs for document-oriented databases are called object-document mappers (ODMs).
Document-oriented databases also prevent the user from having to "shred" objects into table rows. Many of these systems also support the XQuery query language to retrieve datasets.
Object-oriented databases tend to be used in complex, niche applications. One of the arguments against using an OODBMS is that it may not be able to execute ad-hoc, application-independent queries. For this reason, many programmers find themselves more at home with an object-SQL mapping system, even though most object-oriented databases are able to process SQL queries to a limited extent. Other OODBMS provide replication to SQL databases, as a means of addressing the need for ad-hoc queries, while preserving well-known query patterns.
Challenges
A variety of difficulties arise when considering how to match an object system to a relational database. These difficulties are referred to as the object–relational impedance mismatch.
An alternative to implementing ORM is the use of the native procedural languages provided with every major database. These can be called from the client using SQL statements. The Data Access Object (DAO) design pattern is used to abstract these statements and offer a lightweight object-oriented interface to the rest of the application, as sketched below.
ORMs are limited to their predefined functionality, which may not cover all edge cases or database features. Many ORMs, such as the Django ORM, mitigate this limitation by providing users with an interface for writing raw queries.
See also
List of object–relational mapping software
Comparison of object–relational mapping software
AutoFetch – automatic query tuning
Common Object Request Broker Architecture (CORBA)
Object database
Object persistence
Object–relational database
Object–relational impedance mismatch
Relational model
SQL (Structured Query Language)
Java Data Objects (JDO)
Java Persistence API (JPA), now Jakarta Persistence
Service Data Objects
Entity Framework
Active record pattern
Data mapper pattern
Single Table Inheritance
References
External links
About ORM by Anders Hejlsberg
Mapping Objects to Relational Databases: O/R Mapping In Detail by Scott W. Ambler
Data mapping
Articles with example C Sharp code | Object–relational mapping | Engineering | 1,399 |
4,271,664 | https://en.wikipedia.org/wiki/List%20of%20atmospheric%20dispersion%20models | Atmospheric dispersion models are computer programs that use mathematical algorithms to simulate how pollutants in the ambient atmosphere disperse and, in some cases, how they react in the atmosphere.
US Environmental Protection Agency models
Many of the dispersion models developed by or accepted for use by the U.S. Environmental Protection Agency (U.S. EPA) are accepted for use in many other countries as well. Those EPA models are grouped below into four categories.
Preferred and recommended models
AERMOD – An atmospheric dispersion model based on atmospheric boundary layer turbulence structure and scaling concepts, including treatment of multiple ground-level and elevated point, area and volume sources. It handles flat or complex, rural or urban terrain and includes algorithms for building effects and plume penetration of inversions aloft. It uses Gaussian dispersion for stable atmospheric conditions (i.e., low turbulence) and non-Gaussian dispersion for unstable conditions (high turbulence). Algorithms for plume depletion by wet and dry deposition are also included in the model. This model was in development for approximately 14 years before being officially accepted by the U.S. EPA.
CALPUFF – A non-steady-state puff dispersion model that simulates the effects of time- and space-varying meteorological conditions on pollution transport, transformation, and removal. CALPUFF can be applied for long-range transport and for complex terrain.
BLP – A Gaussian plume dispersion model designed to handle unique modelling problems associated with industrial sources where plume rise and downwash effects from stationary line sources are important.
CALINE3 – A steady-state Gaussian dispersion model designed to determine pollution concentrations at receptor locations downwind of highways located in relatively uncomplicated terrain.
CAL3QHC and CAL3QHCR – CAL3QHC is a CALINE3 based model with queuing calculations and a traffic model to calculate delays and queues that occur at signalized intersections. CAL3QHCR is a more refined version based on CAL3QHC that requires local meteorological data.
CTDMPLUS – A complex terrain dispersion model (CTDM) plus algorithms for unstable situations (i.e., highly turbulent atmospheric conditions). It is a refined point source Gaussian air quality model for use in all stability conditions (i.e., all conditions of atmospheric turbulence) for complex terrain.
OCD – Offshore and coastal dispersion model (OCD) is a Gaussian model developed to determine the impact of offshore emissions from point, area or line sources on the air quality of coastal regions. It incorporates overwater plume transport and dispersion as well as changes that occur as the plume crosses the shoreline.
Alternative models
ADAM – Air force dispersion assessment model (ADAM) is a modified box and Gaussian dispersion model which incorporates thermodynamics, chemistry, heat transfer, aerosol loading, and dense gas effects.
ADMS 5 – Atmospheric Dispersion Modelling System (ADMS 5) is an advanced dispersion model developed in the United Kingdom for calculating concentrations of pollutants emitted both continuously from point, line, volume and area sources, or discretely from point sources.
AFTOX – A Gaussian dispersion model that handles continuous or puff, liquid or gas, elevated or surface releases from point or area sources.
DEGADIS – Dense gas dispersion (DEGADIS) is a model that simulates the dispersion at ground level of area source clouds of denser-than-air gases or aerosols released with zero momentum into the atmosphere over flat, level terrain.
HGSYSTEM – A collection of computer programs developed by Shell Research Ltd. and designed to predict the source-term and subsequent dispersion of accidental chemical releases with an emphasis on dense gas behavior.
HOTMAC and RAPTAD – HOTMAC is a model for weather forecasting used in conjunction with RAPTAD, which is a puff model for pollutant transport and dispersion. These models are used for complex terrain, coastal regions, urban areas, and around buildings where other models fail.
HYROAD – The hybrid roadway model integrates three individual modules simulating the pollutant emissions from vehicular traffic and the dispersion of those emissions. The dispersion module is a puff model that determines concentrations of carbon monoxide (CO) or other gaseous pollutants and particulate matter (PM) from vehicle emissions at receptors within 500 meters of the roadway intersections.
ISC3 – A Gaussian model used to assess pollutant concentrations from a wide variety of sources associated with an industrial complex. This model accounts for: settling and dry deposition of particles; downwash; point, area, line, and volume sources; plume rise as a function of downwind distance; separation of point sources; and limited terrain adjustment. ISC3 operates in both long-term and short-term modes.
OBODM – A model for evaluating the air quality impacts of the open burning and detonation (OB/OD) of obsolete munitions and solid propellants. It uses dispersion and deposition algorithms taken from existing models for instantaneous and quasi-continuous sources to predict the transport and dispersion of pollutants released by the open burning and detonation operations.
PANACHE – Fluidyn-PANACHE is an Eulerian (and Lagrangian for particulate matter), 3-dimensional finite volume fluid mechanics model designed to simulate continuous and short-term pollutant dispersion in the atmosphere, in simple or complex terrain.
PLUVUEII – A model that estimates atmospheric visibility degradation and atmospheric discoloration caused by plumes resulting from the emissions of particles, nitrogen oxides, and sulfur oxides. The model predicts the transport, dispersion, chemical reactions, optical effects and surface deposition of such emissions from a single point or area source.
SCIPUFF – A puff dispersion model that uses a collection of Gaussian puffs to predict three-dimensional, time-dependent pollutant concentrations. In addition to the average concentration value, SCIPUFF predicts the statistical variance in the concentrations resulting from the random fluctuations of the wind.
SDM – Shoreline dispersion model (SDM) is a Gaussian dispersion model used to determine ground-level concentrations from tall stationary point source emissions near a shoreline.
SLAB – A model for denser-than-air gaseous plume releases that utilizes the one-dimensional equations of momentum, conservation of mass and energy, and the equation of state. SLAB handles point source ground-level releases, elevated jet releases, releases from volume sources and releases from the evaporation of volatile liquid spill pools.
Screening models
These are models that are often used before applying a refined air quality model to determine if refined modelling is needed.
AERSCREEN – The screening version of AERMOD. It produces estimates of concentrations, without the need for meteorological data, that are equal to or greater than the estimates produced by AERMOD with a full set of meteorological data. The U.S. EPA released version 11060 of AERSCREEN on 11 March 2010 with a subsequent update, version 11076, on 17 March 2010. The U.S. EPA published the "Clarification memorandum on AERSCREEN as the recommended screening model" on 11 April 2011.
CTSCREEN – The screening version of CTDMPLUS.
SCREEN3 – The screening version of ISC3.
TSCREEN – Toxics screening model (TSCREEN) is a Gaussian model for screening toxic air pollutant emissions and their subsequent dispersion from possible releases at superfund sites. It contains 3 modules: SCREEN3, PUFF, and RVD (Relief Valve Discharge).
VALLEY – A screening, complex terrain, Gaussian dispersion model for estimating 24-hour or annual concentrations resulting from up to 50 point and area emission sources.
COMPLEX1 – A multiple point source screening model with terrain adjustment that uses the plume impaction algorithm of the VALLEY model.
RTDM3.2 – Rough terrain diffusion model (RTDM3.2) is a Gaussian model for estimating ground-level concentrations of one or more co-located point sources in rough (or flat) terrain.
VISCREEN – A model that calculates the impact of specified emissions for specific transport and dispersion conditions.
Photochemical models
Photochemical air quality models have become widely utilized tools for assessing the effectiveness of control strategies adopted by regulatory agencies. These models are large-scale air quality models that simulate the changes of pollutant concentrations in the atmosphere by characterizing the chemical and physical processes in the atmosphere. These models are applied at multiple geographical scales ranging from local and regional to national and global.
Models-3/CMAQ – The latest version of the community multi-scale air quality (CMAQ) model has state-of-the-science capabilities for conducting urban to regional scale simulations of multiple air quality issues, including tropospheric ozone, fine particles, toxics, acid deposition, and visibility degradation.
CAMx – The comprehensive air quality model with extensions (CAMx) simulates air quality over many geographic scales. It handles a variety of inert and chemically active pollutants, including ozone, particulate matter, inorganic and organic PM2.5/PM10, and mercury and other toxics.
REMSAD – The regional modeling system for aerosols and deposition (REMSAD) calculates the concentrations of both inert and chemically reactive pollutants by simulating the atmospheric processes that affect pollutant concentrations over regional scales. It includes processes relevant to regional haze, particulate matter and other airborne pollutants, including soluble acidic components and mercury.
UAM-V – The urban airshed model was a pioneering effort in photochemical air quality modelling in the early 1970s and has been used widely for air quality studies focusing on ozone.
Other models developed in the United States
CHARM – A model capable of simulating dispersion of toxics and particles. It can calculate impacts of thermal radiation from fires, overpressures from mechanical failures and explosions, and nuclear radiation from radionuclide releases. CHARM is capable of handling effects of complex terrain and buildings. A Lagrangian puff screening version and Eulerian full-function version are available.
HYSPLIT – Hybrid Single Particle Lagrangian Integrated Trajectory Model. Developed at NOAA's Air Resources Laboratory. The HYSPLIT model is a complete system for computing simple air parcel trajectories to complex dispersion and deposition simulations.
PUFF-PLUME – A Gaussian chemical/radionuclide dispersion model that includes wet and dry deposition, real-time input of meteorological observations and forecasts, dose estimates from inhalation and gamma shine, and puff or plume dispersion modes. It is the primary model for emergency response use for atmospheric releases of radioactive materials at the Savannah River Site of the United States Department of Energy. It was first developed by the Pacific Northwest National Laboratory (PNNL) in the 1970s.
Puff model – Puff is a volcanic ash tracking model developed at the University of Alaska Fairbanks. It requires NWP wind field data on a geographic grid covering the area over which ash may be dispersed. Representative ash particles are initiated at the volcano's location and then allowed to advect, diffuse, and settle within the atmosphere. The location of the particles at any time after the eruption can be viewed using the post-processing software included with the model. Output data is in netCDF format and can also be viewed with a variety of software.
Models developed in the United Kingdom
ADMS-5 – See the description of this model in the alternative models section of the models accepted by the U.S. EPA.
ADMS-URBAN – A model for simulating dispersion on scales ranging from a street scale to citywide or county-wide scale, handling most relevant emission sources such as traffic, industrial, commercial, and domestic sources. It is also used for air quality management and assessments of current and future air quality vis-a-vis national and regional standards in Europe and elsewhere.
ADMS-Roads – A model for simulating dispersion of vehicular pollutant emissions from small road networks in combination with emissions from industrial plants. It handles multiple road sources as well as multiple point, line or area emission sources and the model operation is similar to the other ADMS models
ADMS-Screen – A screening model for rapid assessment of the air quality impact of a single industrial stack to determine if more detailed modelling is needed. It combines the dispersion modelling algorithms of the ADMS models with a user interface requiring minimal input data.
GASTAR – A model for simulating accidental releases of denser-than-air flammable and toxic gases. It handles instantaneous and continuous releases, releases from jet sources, releases from evaporation of volatile liquid pools, variable terrain slopes and ground roughness, obstacles such as fences and buildings, and time-varying releases.
NAME – Numerical atmospheric-dispersion modelling environment (NAME) is a local to global scale model developed by the UK's Met Office. It is used for: forecasting of air quality, air pollution dispersion, and acid rain; tracking radioactive emissions and volcanic ash discharges; analysis of accidental air pollutant releases and assisting in emergency response; and long-term environmental impact analysis. It is an integrated model that includes boundary layer dispersion modelling.
UDM – Urban dispersion model is a Gaussian puff based model for predicting the dispersion of atmospheric pollutants in the range of 10 m to 25 km throughout the urban environment. It was developed by the Defence Science and Technology Laboratory for the UK Ministry of Defence. It handles instantaneous, continuous, and pool releases, and can model gases, particulates, and liquids. The model has a three-regime structure: single building (area density < 5%), urban array (area density > 5%) and open. The model can be coupled with the US model SCIPUFF to replace the open regime and extend the model's prediction range.
Models developed in continental Europe
The European Topic Centre on Air and Climate Change, which is part of the European Environment Agency (EEA), maintains an online Model Documentation System (MDS) that includes descriptions and other information for almost all of the dispersion models developed by the countries of Europe. The MDS currently (July 2012) contains 142 models, mostly developed in Europe. Of those 142 models, some were subjectively selected for inclusion here.
Some of the European models listed in the MDS are public domain and some are not. Many of them include a pre-processor module for the input of meteorological and other data, and many also include a post-processor module for graphing the output data and/or plotting the area impacted by the air pollutants on maps.
The country of origin is included for each of the European models listed below.
AEROPOL (Estonia) – The AERO-POLlution model developed at the Tartu Observatory in Estonia is a Gaussian plume model for simulating the dispersion of continuous, buoyant plumes from stationary point, line and area sources over flat terrain on a local to regional scale. It includes plume depletion by wet and/or dry deposition as well as the effects of buildings in the plume path.
Airviro Gauss (Sweden) – A gaussian dispersion model that handles point, road, area and grid sources developed by SMHI. Plumes follow trajectories from a wind model and each plume has a cutoff dependent on wind speed. The model also support irregular calculation grids.
Airviro Grid (Sweden) – A simplified eulerian model developed by SMHI. Can handle point, road, area and grid sources. Includes dry and wet deposition and sedimentation.
Airviro Heavy Gas (Sweden) – A model for heavy gas dispersion developed by SMHI.
Airviro receptor model (Sweden) – An inverse dispersion model developed by SMHI, used to find emission sources.
ATSTEP (Germany) – Gaussian puff dispersion and deposition model used in the decision support system RODOS (real-time on-line decision support) for nuclear emergency management. RODOS is operated in Germany by the Federal Office for Radiation Protection (BfS) and is test-operational in many other European countries.
AUSTAL2000 (Germany) – The official air dispersion model to be used in the permitting of industrial sources by the German Federal Environmental Agency. The model accommodates point, line, area and volume sources of buoyant plumes. It has capabilities for building effects, complex terrain, plume depletion by wet or dry deposition, and first order chemical reactions. It is based on the LASAT model developed by Ingenieurbüro Janicke Gesellschaft für Umweltphysik.
BUO-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) specifically for estimating the atmospheric dispersion of neutral or buoyant plume gases and particles emitted from fires in warehouses and chemical stores. It is a hybrid of a local scale Gaussian plume model and another model type. Plume depletion by dry deposition is included but wet deposition is not included.
CAR-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) for evaluating atmospheric dispersion and chemical transformation of vehicular emissions of inert (CO, NOx) and reactive (NO, NO2, O3) gases from a road network of line sources on a local scale. It is a Gaussian line source model which includes an analytical solution for the chemical cycle NO-O3-NO2.
CAR-International (The Netherlands) – Calculation of air pollution from road traffic (CAR-International) is an atmospheric dispersion model developed by the Netherlands Organisation for Applied Scientific Research. It is used for simulating the dispersion of vehicular emissions from roadway traffic.
DIPCOT (Greece) – Dispersion over complex terrain (DIPCOT) is a model developed in the National Centre of Scientific Research "DEMOKRITOS" of Greece that simulates dispersion of buoyant plumes from multiple point sources over complex terrain on a local to regional scale. It does not include wet deposition or chemical reactions.
DISPERSION21 (Sweden) – This model was developed by the Swedish Meteorological and Hydrological Institute (SMHI) for evaluating air pollutant emissions from existing or planned industrial or urban sources on a local scale. It is a Gaussian plume model for point, area, line and vehicular traffic sources. It includes plume penetration of inversions aloft, building effects, NOx chemistry and it can handle street canyons. It does not include wet or dry deposition, complex atmospheric chemistry, or the effects of complex terrain.
DISPLAY-2 (Greece) – A vapour cloud dispersion model for neutral or denser-than-air pollution plumes over irregular, obstructed terrain on a local scale. It accommodates jet releases as well as two-phase (i.e., liquid-vapor mixtures) releases. This model was also developed at the National Centre of Scientific Research "DEMOKRITOS" of Greece.
EK100W (Poland) – A Gaussian plume model used for air quality impact assessments of pollutants from industrial point sources as well as for urban air quality studies on a local scale. It includes wet and dry deposition. The effects of complex terrain are not included.
FARM (Italy) – The Flexible Air quality Regional Model (FARM) is a multi-grid Eulerian model for dispersion, transformation and deposition of airborne pollutants in gas and aerosol phases, including photo-oxidants, aerosols, heavy metals and other toxics. It is suited for case studies, air quality assessments, scenarios analyses and pollutants forecast.
FLEXPART (Austria/Germany/Norway) – An efficient and flexible Lagrangian particle transport and diffusion model for regional to global applications, with capability for forward and backward mode. Freely available. Developed at BOKU Vienna, Technical University of Munich, and NILU.
GRAL (Austria) – The GRAz Lagrangian model was initially developed at the Graz University of Technology and it is a dispersion model for buoyant plumes from multiple point, line, area and tunnel portal sources. It handles flat or complex terrain (mesoscale prognostic flow field model) including building effects (microscale prognostic flow field model) but it has no chemistry capabilities. The model is freely available: http://lampz.tugraz.at/~gral/
HAVAR (Czech Republic) – A Gaussian plume model integrated with a puff model and a hybrid plume-puff model, developed by the Czech Academy of Sciences, is intended for routine and/or accidental releases of radionuclides from single point sources within nuclear power plants. The model includes radioactive plume depletion by dry and wet deposition as well as by radioactive decay. For the decay of some nuclides, the creation of daughter products that then grow into the plume is taken into account.
IFDM (Belgium) – The immission frequency distribution model, developed at the Flemish Institute for Technological Research (VITO), is a Gaussian dispersion model used for point and area sources dispersing over flat terrain on a local scale. The model includes plume depletion by dry or wet deposition and has been updated to handle building effects and the O3-NOx-chemistry. It is not designed for complex terrain or other chemically reactive pollutants.
INPUFF-U (Romania) – This model was developed by the National Institute of Meteorology and Hydrology in Bucharest, Romania. It is a Gaussian puff model for calculating the dispersion of radionuclides from passive emission plumes on a local to urban scale. It can simulate accidental or continuous releases from stationary or mobile point sources. It includes wet and dry deposition. Building effects, buoyancy effects, chemical reactions and effects of complex terrain are not included.
LAPMOD (Italy) – The LAPMOD (LAgrangian Particle MODel) modeling system was developed by Enviroware and is available for free. LAPMOD is a Lagrangian particle model fully coupled to the diagnostic meteorological model CALMET and can be used to simulate the dispersion of inert pollutants as well as odors and radioactive substances. It includes dry and wet deposition algorithms and advanced numerical schemes for plume rise (Janicke and Janicke, Webster and Thomson). It is part of ARIES, the official Italian modeling system for nuclear emergencies operated by ISPRA and by the regional environmental protection agency of Emilia-Romagna, Italy.
LOTOS-EUROS (The Netherlands) – the long term ozone simulation – European operational smog (LOTOS-EUROS) model was developed by the Netherlands Organisation for Applied Scientific Research (TNO) and Netherlands National Institute for Public Health and the Environment (RIVM) in The Netherlands. It is designed for modelling the dispersion of pollutants (such as: photo-oxidants, aerosols, heavy metals) over all of Europe. It includes simple reaction chemistry as well as wet and dry deposition.
MATCH (Sweden) – A multi-scale atmospheric transport and chemistry (MATCH). A three-dimensional, Eulerian model, suitable from urban to global scale.
MEMO (Greece) – A Eulerian non-hydrostatic prognostic mesoscale model for wind flow simulation. It was developed by the Aristotle University of Thessaloniki in collaboration with the Universität Karlsruhe. This model is designed for describing atmospheric transport phenomena in the local-to-regional scale, often referred to as mesoscale air pollution models.
MERCURE (France) – An atmospheric dispersion modeling CFD code developed by Electricite de France (EDF) and distributed by ARIA Technologies, a French company. The code is a version of the CFD software ESTET, developed by EDF's Laboratoire National d'Hydraulique.
MODIM (Slovak Republic) – A model for calculating the dispersion of continuous, neutral or buoyant plumes on a local to regional scale. It integrates a Gaussian plume model for single or multiple point and area sources with a numerical model for line sources, street networks and street canyons. It is intended for regulatory and planning purposes.
MSS (France) – Micro-swift-spray is a Lagrangian particle model used to predict the transport and dispersion of contaminants in urban environments. The SWIFT portion of this model predicts a mass-consistent wind field that considers terrain; no-penetration conditions for building boundaries; Rockle zones for recirculation, edge, and rooftop separation; and background and locally generated turbulence. The spray portion of the tool handles the dispersion of passive gases, dense gases, and particulates. Spray also accounts for plume buoyancy effects, wet and dry depositions, and calculates microscale pressure fields for integration with building models. The MSS development team is found at ARIA Technologies (France) and U.S. integration activities are led by Leidos. Validation testing of MSS has been done in conjunction with JEM and HPAC tool releases and the model is coupled with SCIPUFF/UDM to create a nested dispersion capability inside HPAC. For more information on MSS see http://www.aria.fr.
MUSE (Greece) – A photochemical atmospheric dispersion model developed by Professor Nicolas Moussiopoulos at the Aristotle University of Thessaloniki in Greece. It is intended for the study of photochemical smog formation in urban areas and assessment of control strategies on a local to regional scale. It can simulate dry deposition and transformation of pollutants can be treated using any suitable chemical reaction mechanism.
OML (Denmark) – A model for dispersion calculations of continuous neutral or buoyant plumes from single or multiple, stationary point and area sources. It has some simple methods for handling photochemistry (primarily for NO2) and for handling complex terrain. The model was developed by the National Environmental Research Institute of Denmark. It is now maintained by the Department of Environmental Science, Aarhus University. For further reference see as well: OML home page
ONM9440 (Austria) – A Gaussian dispersion model for continuous, buoyant plumes from stationary sources for use in flat terrain areas. It includes plume depletion by dry deposition of solid particulates.
OSPM (Denmark) – The operational street pollution model (OSPM) is a practical street pollution model, developed by the National Environmental Research Institute of Denmark. It is now maintained by the Department of Environmental Science, Aarhus University. For almost 20 years, OSPM has been routinely used in many countries for studying traffic pollution, performing analyses of field campaign measurements, studying efficiency of pollution abatement strategies, carrying out exposure assessments and as reference in comparisons to other models. OSPM is generally considered as state-of-the-art in applied street pollution modelling. For further reference see as well: OSPM home page
PANACHE (France) – fluidyn-PANACHE is a self-contained fully 3D fluid dynamics software package designed to simulate accidental or continuous industrial and urban pollutant dispersion into the atmosphere. It simulates release and toxic/flammables pollutants dispersion in various weather conditions in calculated 3D complex winds and turbulence fields. Gas, particles, droplets induced flow and transport/diffusion is simulated with Navier-Stokes equations for jet-like, dense, cold, cryogenic or hot, buoyant releases. The application covers the very short scale (tens of meters) and the local scale (ten kilometers) where the complex flow pattern as related to obstacles, variable land uses, topography is calculated explicitly.
PROKAS-V (Germany) – A Gaussian dispersion model for evaluating the atmospheric dispersion of air pollutants emitted from vehicular traffic on a road network of line sources on a local scale.
PLUME (Bulgaria) – A conventional Gaussian plume model used in many regulatory applications. The basis of the model is a single simple formula which assumes constant wind speed and reflection from the ground surface. The horizontal and vertical dispersion parameters are a function of downwind distance and stability. The model was developed for routine applications in air quality assessment, regulatory purposes and policy support.
POLGRAPH (Portugal) – This model was developed at the University of Aveiro, Portugal by Professor Carlos Borrego. It was designed for evaluating the impact of industrial pollutant releases and for air quality assessments. It is a Gaussian plume dispersion model for continuous, elevated point sources to be used on a local scale over flat or gently rolling terrain.
RADM (France) – The random-walk advection and dispersion model (RADM) was developed by ACRI-ST, an independent research and development organization in France. It can model gas plumes and particles (including pollutants with exponential decay or formation rates) from single or multiple stationary, mobile or area sources. Chemical reaction, radioactive decay, deposition, complex terrain, and inversion conditions are accommodated.
RIMPUFF (Denmark) – A local and regional scale real-time puff diffusion model developed by Risø National Laboratory for Sustainable Energy, Technical University of Denmark (Risø DTU). RIMPUFF is an operational emergency response model used to assist emergency management organisations dealing with chemical, nuclear, biological and radiological (CBRN) releases to the atmosphere. RIMPUFF is in operation in several European national emergency centres for preparedness and prediction of nuclear accidental releases (RODOS, EURANOS, ARGOS) and chemical gas releases (ARGOS), and it also serves as a decision support tool during active combatting of airborne transmission of various biological infections, including e.g. Foot-and-Mouth Disease outbreaks.
SAFE AIR II (Italy) – The simulation of air pollution from emissions II (SAFE AIR II) was developed at the Department of Physics, University of Genoa, Italy to simulate the dispersion of air pollutants above complex terrain at local and regional scales. It can handle point, line, area and volume sources and continuous plumes as well as puffs. It includes first-order chemical reactions and plume depletion by wet and dry deposition, but it does not include any photochemistry.
SEVEX (Belgium) – The Seveso expert model simulates the accidental release of toxic and/or flammable material over flat or complex terrain from multiple pipe and vessel sources or from evaporation of volatile liquid spill pools. The accidental releases may be continuous, transient or catastrophic. The integrated model can handle denser-than-air gases as well as neutral gases (i.e., neither denser than or lighter than air). It does not include handling of multi-component material, nor does it provide for chemical transformation of the releases. The model's name is derived from the major disaster caused by the accidental release of highly toxic gases that occurred in Seveso, Italy in 1976.
SNAP (Norway) – The Severe Nuclear Accident Programme (SNAP) model is a Lagrangian-type atmospheric dispersion model specialized in modelling the dispersion of radioactive debris.
SPRAY (Italy, France) – A Lagrangian particle dispersion model (LPDM) which simulates the transport, dispersion and deposition of pollutants emitted from sources of different kinds over complex terrain and in the presence of obstacles. The model readily handles complex situations, such as breeze cycles, strong meteorological inhomogeneities, non-stationary low-wind calm conditions, and recirculations. Simulations can cover areas ranging from the very local (less than one kilometre) to regional (hundreds of kilometres) scales. Plume rise of hot emissions from stacks is taken into account using a Briggs formulation. Algorithms for particle-oriented dry/wet deposition processes and for gravitational settling are included. Dry deposition can be computed on the ground as well as on the roofs and lateral faces of obstacles. Dispersion under generalized geometries such as arches, tunnels and walkways can be simulated. Dense gas dispersion is simulated using five conservation equations (mass, energy, vertical momentum and two horizontal momenta) based on Glandening et al. (1984) and Hurley and Manins (1995). Plume spread at the ground due to gravity is also simulated by a method (Anfossi et al., 2009) based on Eidsvik (1980).
STACKS (The Netherlands) – A Gaussian plume dispersion model for point and area buoyant plumes to be used over flat terrain on a local scale. It includes building effects, NO2 chemistry and plume depletion by deposition. It is used for environmental impact studies and evaluation of emission reduction strategies.
STOER.LAG (Germany) – A dispersion model designed to evaluate accidental releases of hazardous and/or flammable materials from point or area sources in industrial plants. It can handle neutral and denser-than-air gases or aerosols from ground-level or elevated sources. The model accommodates building and terrain effects, evaporation of volatile liquid spill pools, and combustion or explosion of flammable gas-air mixtures (including the impact of heat and pressure waves caused by a fire or explosion).
SYMOS'97 (Czech Republic) – A model developed by the Czech Hydrometeorological Institute for dispersion calculations of continuous neutral or buoyant plumes from single or multiple point, area or line sources. It can handle complex terrain and it can also be used to simulate the dispersion of cooling tower plumes.
TCAM (Italy) – A multiphase three-dimensional Eulerian grid model designed by the ESMA group of the University of Brescia for modelling the dispersion of pollutants (in particular photochemical pollutants and aerosols) at the mesoscale.
UDM-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) as an integrated Gaussian urban scale model intended for regulatory pollution control. It handles multiple point, line, area and volume sources and it includes chemical transformation (for NO2), wet and dry deposition (for SO2), and downwash phenomena (but no building effects).
VANADIS (Poland) – A 3D unsteady-state Eulerian-type dispersion model.
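As referenced in the PLUME entry above, the sketch below illustrates the kind of single-formula calculation on which Gaussian plume models of this type are built. It is a minimal illustration, not code from any of the models listed: the power-law coefficients for the dispersion parameters are invented placeholders, whereas real regulatory models tabulate σy and σz by atmospheric stability class.

```python
import numpy as np

def gaussian_plume(Q, u, x, y, z, H, a=0.08, p=0.9, b=0.06, q=0.85):
    """Steady-state Gaussian plume concentration (g/m^3) with ground reflection.

    Q: emission rate (g/s); u: constant wind speed (m/s);
    x, y, z: downwind, crosswind and vertical receptor coordinates (m);
    H: effective stack height (m).
    a, p, b, q: illustrative power-law fits for the dispersion parameters.
    """
    sigma_y = a * x**p                  # horizontal dispersion parameter (m)
    sigma_z = b * x**q                  # vertical dispersion parameter (m)
    crosswind = np.exp(-y**2 / (2 * sigma_y**2))
    # Reflection from the ground surface is modelled by an image source at z = -H.
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Ground-level centreline concentration 1 km downwind of a 50 m stack:
print(gaussian_plume(Q=100.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0))
```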
Models developed in Australia
AUSPLUME – A dispersion model that was long designated as the primary model accepted by the Environmental Protection Authority (EPA) of the Australian state of Victoria. As of 1 January 2014, AUSPLUME V6 is no longer Victoria's regulatory air pollution dispersion model; it has been replaced in that role by AERMOD.
pDsAUSMOD – Australian graphical user interface for AERMOD
pDsAUSMET – Australian meteorological data processor for AERMOD
LADM – An advanced model developed by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) for simulating the dispersion of buoyant pollution plumes and predicting the photochemical formation of smog over complex terrain on a local to regional scale. The model can also handle fumigated plumes (see the books listed below in the "Further reading" section for an explanation of fumigated plumes).
TAPM – An advanced dispersion model integrated with a pre-processor for providing meteorological data inputs. It can handle multiple pollutants, and point, line, area and volume sources on a local, city or regional scale. The model capabilities include building effects, plume depletion by deposition, and a photochemistry module. This model was also developed by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO).
DISPMOD – A Gaussian atmospheric dispersion model for point sources located in coastal regions. It was designed specifically by the Western Australian Department of Environment to simulate the plume fumigation that occurs when an elevated onshore pollution plume intersects a growing thermal internal boundary layer (TIBL) contained within offshore air flow coming onshore.
AUSPUFF – A Gaussian puff model designed for regulatory use by CSIRO. It includes some simple algorithms for the chemical transformation of reactive air pollutants.
Models developed in Canada
MLCD – Modèle Lagrangien à courte distance is a Lagrangian particle dispersion model (LPDM) developed in collaboration by Environment Canada's Canadian Meteorological Centre (CMC) and by the Department of Earth and Atmospheric Sciences of University of Alberta. This atmospheric dispersion and deposition model is designed to estimate air concentrations and surface deposition of pollutants for very short range emergency problems (less than ~10 km from the source).
MLDPn – Modèle Lagrangien de dispersion de particules d'ordre n is a Lagrangian particle dispersion model (LPDM) developed by Environment Canada's Canadian Meteorological Centre (CMC). This atmospheric and aquatic transport and dispersion model is designed to estimate air and water concentrations and ground deposition of pollutants for various emergency response problems at different scales (local to global). It is used to forecast and track volcanic ash, radioactive material, forest fire smoke, hazardous chemical substances, as well as oil slicks.
Trajectory – The trajectory model, developed by Environment Canada's Canadian Meteorological Centre (CMC), is a simple tool designed to calculate the trajectory of a few air parcels moving in the 3D wind field of the atmosphere. The model provides a quick estimate of the expected trajectory of an air parcel by the advection transport mechanism, originating from (forward trajectory) or arriving at (backward trajectory) a specified geographical location and a vertical level.
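As an illustration of what such a trajectory calculation involves, the sketch below advects one air parcel through a steady, analytically prescribed 2D wind field with fixed-step Euler integration. The wind field, step size and coordinates are invented for the example; the real Trajectory model interpolates gridded numerical-weather-prediction winds in three dimensions and in time.

```python
def wind(x, y):
    """Hypothetical steady 2D wind field (m/s): a slow solid-body rotation."""
    return -0.0005 * y, 0.0005 * x

def trajectory(x, y, hours, dt=60.0, backward=False):
    """Advect one air parcel with fixed-step Euler integration.

    backward=True reverses the integration, answering "where did this
    parcel come from?" instead of "where is it going?".
    """
    sign = -1.0 if backward else 1.0
    path = [(x, y)]
    for _ in range(int(hours * 3600 / dt)):
        u, v = wind(x, y)
        x += sign * u * dt
        y += sign * v * dt
        path.append((x, y))
    return path

forward = trajectory(10000.0, 0.0, hours=2.0)
print(forward[-1])   # parcel position (m) after two hours of advection
```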
Models developed in India
HAMS-GPS – Software used for management of environment, health and safety (EHS). It can be used for training as well as research involving dispersion modeling, accident analysis, fires, explosions, risk assessments and other related subjects.
Air pollution dispersion models
ADMS 5
AERMOD
CALPUFF
DISPERSION21
PUFF-PLUME
MERCURE
NAME
OSPM
SAFE AIR
RIMPUFF
HAMS-GPS EIA modeling
Others
Air pollution dispersion terminology
Atmospheric dispersion modeling
Bibliography of atmospheric dispersion modeling
Roadway air dispersion modeling
Wind profile power law
References
Further reading
For those who would like to learn more about atmospheric dispersion models, introductory textbooks on dispersion modeling are available from publishers such as CRC Press (www.crcpress.com).
External links
Air Quality Modeling – From the website of Stuff in the Air
The Model Documentation System (MDS) of the European Topic Centre on Air and Climate Change (part of the European Environment Agency)
USA EPA: Preferred/Recommended Models, Alternative Models, Screening Models and Photochemical Models
Wiki on Atmospheric Dispersion Modelling. Addresses the international community of atmospheric dispersion modellers – primarily researchers, but also users of models. Its purpose is to pool experiences gained by dispersion modellers during their work.
The ADMS models and the GASTAR model
The AUSPLUME model
The CHARM model
Fluidyn-PANACHE: 3D Computational Fluid Dynamics (CFD) model for dispersion analysis
The HAMS-GPS software
The LADM, DISPMOD and AUSPUFF models
The LAPMOD model
The NAME model
The RIMPUFF model
The SPRAY model
The TAPM model
Validation of the Urban Dispersion Model (UDM)
Atmospheric dispersion modeling | List of atmospheric dispersion models | Chemistry,Engineering,Environmental_science | 8,417 |
3,018,168 | https://en.wikipedia.org/wiki/Kleptolagnia | Kleptolagnia (from Greek kleptein meaning "to steal", and lagnia meaning "sexual excitement") is the state of being sexually aroused by theft. A kleptolagniac is a person aroused by the act of theft. It is also known as kleptophilia, and is a sexual form of kleptomania.
See also
Chremastistophilia
References
Sexual fetishism
Theft | Kleptolagnia | Biology | 96 |
11,774,223 | https://en.wikipedia.org/wiki/Function%20%28engineering%29 | In engineering, a function is interpreted as a specific process, action or task that a system is able to perform.
In engineering design
In the lifecycle of an engineering project, a requirements document and a functional specification document are usually distinguished. The requirements document specifies the most important attributes of the requested system. In design specification documents, the requested functions are frequently realized as physical or software processes and systems.
In products
For advertising and marketing of technical products, the number of functions they can perform is often counted and used for promotion. For example, a calculator capable of the basic mathematical operations of addition, subtraction, multiplication, and division would be called a "four-function" model; when other operations are added, for example for scientific, financial, or statistical calculations, advertisers speak of "57 scientific functions", etc. A wristwatch with stopwatch and timer facilities would similarly claim a specified number of functions. To maximise the claim, trivial operations which do not significantly enhance the functionality of a product may be counted.
References
See also
Process
System
Utility
Engineering concepts | Function (engineering) | Engineering | 217 |
11,997,299 | https://en.wikipedia.org/wiki/Nonode | A nonode is a type of thermionic valve that has nine active electrodes. The term most commonly applies to a seven-grid vacuum tube, also sometimes called an enneode. An example was the EQ80/UQ80, which was used as an FM quadrature detector. It was developed during the introduction of TV and FM radio and delivered an output voltage large enough to directly drive an output pentode while still allowing for some negative feedback. As most of the grids were tied together, even an 8-pin Rimlock base was sufficient in the case of the EQ40.
See also
References
Vacuum tubes | Nonode | Physics | 144 |
29,426,653 | https://en.wikipedia.org/wiki/Wolstenholme%20prime | In number theory, a Wolstenholme prime is a special type of prime number satisfying a stronger version of Wolstenholme's theorem. Wolstenholme's theorem is a congruence relation satisfied by all prime numbers greater than 3. Wolstenholme primes are named after mathematician Joseph Wolstenholme, who first described this theorem in the 19th century.
Interest in these primes first arose due to their connection with Fermat's Last Theorem. Wolstenholme primes are also related to other special classes of numbers, studied in the hope of generalizing a proof of the theorem to all positive integers greater than two.
The only two known Wolstenholme primes are 16843 and 2124679. There are no other Wolstenholme primes less than 10^11.
Definition
A Wolstenholme prime can be defined in a number of equivalent ways.
Definition via binomial coefficients
A Wolstenholme prime is a prime number p > 7 that satisfies the congruence

$$\binom{2p}{p} \equiv 2 \pmod{p^4},$$

where the expression on the left-hand side denotes a binomial coefficient.
In comparison, Wolstenholme's theorem states that for every prime p > 3 the following congruence holds:

$$\binom{2p}{p} \equiv 2 \pmod{p^3}.$$
A Wolstenholme prime is a prime p that divides the numerator of the Bernoulli number $B_{p-3}$. The Wolstenholme primes therefore form a subset of the irregular primes.
Definition via irregular pairs
A Wolstenholme prime is a prime p such that (p, p–3) is an irregular pair.
Definition via harmonic numbers
A Wolstenholme prime is a prime p such that

$$H_{p-1} = \sum_{k=1}^{p-1} \frac{1}{k} \equiv 0 \pmod{p^3},$$

i.e. the numerator of the harmonic number $H_{p-1}$ expressed in lowest terms is divisible by p^3.
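Both criteria can be checked directly with Python's arbitrary-precision integers. The brute-force sketch below verifies the binomial-coefficient and harmonic-number definitions for the first Wolstenholme prime and for an ordinary prime; it is an illustration of the definitions, not an efficient way to search for such primes (the harmonic sum at p = 16843 takes a while to evaluate).

```python
from fractions import Fraction
from math import comb

def wolstenholme_via_binomial(p):
    # Does C(2p, p) leave remainder 2 modulo p^4?
    return comb(2 * p, p) % p**4 == 2

def wolstenholme_via_harmonic(p):
    # Is the numerator of H_{p-1} = 1 + 1/2 + ... + 1/(p-1),
    # in lowest terms, divisible by p^3?
    h = sum(Fraction(1, k) for k in range(1, p))
    return h.numerator % p**3 == 0

print(wolstenholme_via_binomial(16843))   # True  (first Wolstenholme prime)
print(wolstenholme_via_harmonic(16843))   # True  (equivalent criterion)
print(wolstenholme_via_binomial(13))      # False (an ordinary prime)
```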
Search and current status
The search for Wolstenholme primes began in the 1960s and continued over the following decades, with the latest results published in 2022. The first Wolstenholme prime 16843 was found in 1964, although it was not explicitly reported at that time. The 1964 discovery was later independently confirmed in the 1970s. This remained the only known example of such a prime for almost 20 years, until the discovery announcement of the second Wolstenholme prime 2124679 in 1993. Up to 1.2×10^7, no further Wolstenholme primes were found. This bound was later extended to 2×10^8 by McIntosh in 1995, and Trevisan & Weber were able to reach 2.5×10^8. The latest result as of 2022 is that there are only those two Wolstenholme primes up to 10^11.
Expected number of Wolstenholme primes
It is conjectured that infinitely many Wolstenholme primes exist, and that the number of Wolstenholme primes ≤ x is about ln ln x, where ln denotes the natural logarithm. For each prime p ≥ 5, the Wolstenholme quotient is defined as

$$W_p = \frac{\binom{2p}{p} - 2}{p^3} \bmod p.$$

Clearly, p is a Wolstenholme prime if and only if W_p ≡ 0 (mod p). Empirically one may assume that the remainders of W_p modulo p are uniformly distributed in the set {0, 1, ..., p−1}. By this reasoning, the probability that the remainder takes on a particular value (e.g., 0) is about 1/p.
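A minimal sketch of the quotient: by Wolstenholme's theorem the division by p³ below is exact for every prime p ≥ 5, and p is a Wolstenholme prime exactly when the printed remainder is 0.

```python
from math import comb

def wolstenholme_quotient(p):
    """W_p = ((C(2p, p) - 2) / p^3) mod p; the division is exact for prime p >= 5."""
    return ((comb(2 * p, p) - 2) // p**3) % p

for p in (5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
    print(p, wolstenholme_quotient(p))
# The remainders behave like uniform draws from {0, ..., p-1}, which is
# what gives the 1/p heuristic for hitting a Wolstenholme prime.
```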
See also
Wieferich prime
Wall–Sun–Sun prime
Wilson prime
Table of congruences
Notes
References
Further reading
External links
Caldwell, Chris K. Wolstenholme prime from The Prime Glossary
McIntosh, R. J. Wolstenholme Search Status as of March 2004 e-mail to Paul Zimmermann
Bruck, R. Wolstenholme's Theorem, Stirling Numbers, and Binomial Coefficients
Conrad, K. The p-adic Growth of Harmonic Sums interesting observation involving the two Wolstenholme primes
Classes of prime numbers
Unsolved problems in number theory | Wolstenholme prime | Mathematics | 856 |
77,142,648 | https://en.wikipedia.org/wiki/PH-797804 | PH-797804 is a drug which acts as a selective inhibitor of the enzyme p38 mitogen-activated protein kinase (p38 MAPK). It has anti-inflammatory effects and has been researched for the treatment of inflammatory lung conditions such as chronic obstructive pulmonary disease and COVID-19. While it has not been adopted for clinical use, it remains widely used in research.
See also
NJK14047
Pamapimod
References
Benzamides
2-Pyridones
Fluoroarenes
Bromoarenes
Aromatic ethers | PH-797804 | Chemistry | 119 |
27,228,915 | https://en.wikipedia.org/wiki/Nodal%20precession | Nodal precession is the precession of the orbital plane of a satellite around the rotational axis of an astronomical body such as Earth. This precession is due to the non-spherical nature of a rotating body, which creates a non-uniform gravitational field. The following discussion relates to low Earth orbit of artificial satellites, which have no measurable effect on the motion of Earth. The nodal precession of more massive, natural satellites like the Moon is more complex.
Around a spherical body, an orbital plane would remain fixed in space around the gravitational primary body. However, most bodies rotate, which causes an equatorial bulge. This bulge creates a gravitational effect that causes orbits to precess around the rotational axis of the primary body.
The direction of precession is opposite the direction of revolution. For a typical prograde orbit around Earth (that is, in the direction of primary body's rotation), the longitude of the ascending node decreases, that is the node precesses westward. If the orbit is retrograde, this increases the longitude of the ascending node, that is the node precesses eastward. This nodal precession enables heliosynchronous orbits to maintain a nearly constant angle relative to the Sun.
Description
A non-rotating body of planetary scale or larger would be pulled by gravity into a spherical shape. Virtually all bodies rotate, however. The centrifugal force deforms the body so that it has an equatorial bulge. Because of the bulge of the central body, the gravitational force on a satellite is not directed toward the center of the central body, but is offset toward its equator. Whichever hemisphere of the central body the satellite lies over, it is preferentially pulled slightly toward the equator of the central body. This creates a torque on the satellite. This torque does not reduce the inclination; rather, it causes a torque-induced gyroscopic precession, which causes the orbital nodes to drift with time.
Equation
The rate of precession depends on the inclination of the orbital plane to the equatorial plane, as well as the orbital eccentricity.
For a satellite in a prograde orbit around Earth, the precession is westward (nodal regression), that is, the node and satellite move in opposite directions. A good approximation of the precession rate is

$$\omega_p = -\frac{3}{2} \, \frac{R_E^2}{\left(a (1-e^2)\right)^2} \, J_2 \, \omega \cos i,$$

where
$\omega_p$ is the precession rate (in rad/s),
$R_E$ is the body's equatorial radius (6378137 m for Earth),
$a$ is the semi-major axis of the satellite's orbit,
$e$ is the eccentricity of the satellite's orbit,
$\omega$ is the angular velocity of the satellite's motion ($2\pi$ radians divided by its period in seconds),
$i$ is its inclination,
$J_2$ is the body's second dynamic form factor (about $1.08263 \times 10^{-3}$ for Earth)
The nodal precession of low Earth orbits is typically a few degrees per day to the west (negative). For a satellite in a circular (e = 0) 800 km altitude orbit at 56° inclination about Earth:
The orbital period is about 6052 s (roughly 101 minutes), so the angular velocity is $\omega \approx 1.038 \times 10^{-3}$ rad/s. The precession rate is therefore

$$\omega_p \approx -7.44 \times 10^{-7} \ \text{rad/s}.$$
This is equivalent to −3.683° per day, so the orbit plane will make one complete turn (in inertial space) in 98 days.
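The worked example can be reproduced in a few lines of Python; the gravitational parameter, equatorial radius and J2 value below are standard Earth constants, quoted here for illustration.

```python
import math

mu  = 3.986004418e14    # Earth's gravitational parameter (m^3/s^2)
R_E = 6378137.0         # Earth's equatorial radius (m)
J2  = 1.08263e-3        # Earth's second dynamic form factor

a = R_E + 800e3         # semi-major axis of an 800 km circular orbit (m)
e = 0.0                 # eccentricity
i = math.radians(56.0)  # inclination

omega = math.sqrt(mu / a**3)   # angular velocity of the circular orbit (rad/s)
omega_p = -1.5 * J2 * (R_E / (a * (1 - e**2)))**2 * omega * math.cos(i)

print(omega_p)                                   # ~ -7.44e-07 rad/s
print(math.degrees(omega_p) * 86400)             # ~ -3.68 degrees per day
print(360 / abs(math.degrees(omega_p) * 86400))  # ~ 98 days per full turn
```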
The apparent motion of the Sun is approximately +1° per day (360° per year / 365.2422 days per tropical year ≈ 0.9856473° per day), so the apparent motion of the Sun relative to the orbit plane is about 2.8° per day, resulting in a complete cycle in about 127 days. For retrograde orbits $\omega$ is negative, so the precession becomes positive. (Alternatively, $\omega$ can be thought of as positive but the inclination is greater than 90°, so the cosine of the inclination is negative.) In this case it is possible to make the precession approximately match the apparent motion of the Sun, resulting in a heliosynchronous orbit.
The $J_2$ used in this equation is the dimensionless coefficient from the geopotential model or gravity field model for the body.
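Running the same equation in reverse gives the heliosynchronous inclination: choose i so that the nodal precession matches the Sun's apparent motion of +360° per tropical year. A sketch for the same 800 km circular orbit, using the constants above:

```python
import math

mu, R_E, J2 = 3.986004418e14, 6378137.0, 1.08263e-3
a = R_E + 800e3                 # 800 km circular orbit (e = 0)
omega = math.sqrt(mu / a**3)

# Required precession rate: +360 degrees per tropical year, in rad/s.
target = math.radians(360.0 / 365.2422) / 86400.0

cos_i = target / (-1.5 * J2 * (R_E / a)**2 * omega)
print(math.degrees(math.acos(cos_i)))   # ~ 98.6 degrees, i.e. retrograde
```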
See also
Axial precession, or "precession of the equinoxes" for Earth
Apsidal precession, another kind of orbital precession (the change in the argument of periapsis)
Lunar standstill, in which the Moon's declination on the lunistices depends on the precession of its orbital nodes
Lunar node
References
External links
Nodal regression description from USENET
Discussion of nodal regression from Analytical Graphics
Astrodynamics
Precession | Nodal precession | Physics,Engineering | 928 |