id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
2,178,278 | https://en.wikipedia.org/wiki/Jet%20%28particle%20physics%29 | A jet is a narrow cone of hadrons and other particles produced by the hadronization of quarks and gluons in a particle physics or heavy ion experiment. Particles carrying a color charge, i.e. quarks and gluons, cannot exist in free form because of quantum chromodynamics (QCD) confinement, which only allows for colorless states. When protons collide at high energies, their color-charged components each carry away some of the color charge. In accordance with confinement, these fragments create other colored objects around them to form colorless hadrons. The ensemble of these objects is called a jet, since the fragments all tend to travel in the same direction, forming a narrow "jet" of particles. Jets are measured in particle detectors and studied in order to determine the properties of the original quarks.
A jet definition includes a jet algorithm and a recombination scheme. The former defines how some inputs, e.g. particles or detector objects, are grouped into jets, while the latter specifies how a momentum is assigned to a jet. For example, jets can be characterized by the thrust. The jet direction (jet axis) can be defined as the thrust axis. In particle physics experiments, jets are usually built from clusters of energy depositions in the detector calorimeter. When studying simulated processes, the calorimeter jets can be reconstructed based on a simulated detector response. However, in simulated samples, jets can also be reconstructed directly from stable particles emerging from fragmentation processes. Particle-level jets are often referred to as truth-jets. A good jet algorithm usually allows for obtaining similar sets of jets at different levels in the event evolution. Typical jet reconstruction algorithms include the anti-kT algorithm, the kT algorithm, and the cone algorithm. A typical recombination scheme is the E-scheme, or 4-vector scheme, in which the 4-vector of a jet is defined as the sum of 4-vectors of all its constituents.
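The following minimal Python sketch (an illustration, not a production implementation such as FastJet) shows how a jet definition combines these two ingredients: the anti-kT pairwise and beam distances define the clustering order, and E-scheme recombination sums constituent 4-vectors. The particle dictionaries, the radius parameter R = 0.4, and the naive O(n³) loop are all illustrative assumptions.

```python
import math

# Illustrative anti-kT clustering with E-scheme recombination. Particle
# dictionaries hold (pt, eta, phi, e); the naive O(n^3) loop is for
# clarity only -- production analyses use libraries such as FastJet.

def antikt_pair_distance(p1, p2, R=0.4):
    """Anti-kT pairwise distance: min(1/kt_i^2, 1/kt_j^2) * dR_ij^2 / R^2."""
    dphi = abs(p1["phi"] - p2["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    dr2 = (p1["eta"] - p2["eta"]) ** 2 + dphi ** 2
    return min(p1["pt"] ** -2, p2["pt"] ** -2) * dr2 / R ** 2

def beam_distance(p):
    """Anti-kT beam distance: 1/kt_i^2."""
    return p["pt"] ** -2

def e_scheme_merge(p1, p2):
    """E-scheme recombination: the merged 4-vector is the sum of the two."""
    px = p1["pt"] * math.cos(p1["phi"]) + p2["pt"] * math.cos(p2["phi"])
    py = p1["pt"] * math.sin(p1["phi"]) + p2["pt"] * math.sin(p2["phi"])
    pz = p1["pt"] * math.sinh(p1["eta"]) + p2["pt"] * math.sinh(p2["eta"])
    pt = math.hypot(px, py)
    return {"pt": pt, "eta": math.asinh(pz / pt),
            "phi": math.atan2(py, px), "e": p1["e"] + p2["e"]}

def cluster(particles, R=0.4):
    """Repeatedly act on the smallest distance until all inputs are jets."""
    objs, jets = [dict(p) for p in particles], []
    while objs:
        candidates = [(beam_distance(p), i, None) for i, p in enumerate(objs)]
        candidates += [(antikt_pair_distance(objs[i], objs[j], R), i, j)
                       for i in range(len(objs)) for j in range(i + 1, len(objs))]
        _, i, j = min(candidates, key=lambda c: c[0])
        if j is None:                 # beam distance smallest: i becomes a jet
            jets.append(objs.pop(i))
        else:                         # pair distance smallest: merge i and j
            merged = e_scheme_merge(objs[i], objs[j])
            objs = [p for k, p in enumerate(objs) if k not in (i, j)] + [merged]
    return jets
```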
In relativistic heavy ion physics, jets are important because the originating hard scattering is a natural probe for the QCD matter created in the collision, and can indicate its phase. When the QCD matter undergoes a phase crossover into quark–gluon plasma, the energy loss in the medium grows significantly, effectively quenching (reducing the intensity of) the outgoing jet.
Examples of jet analysis techniques are:
jet correlation
flavor tagging (e.g., b-tagging)
jet substructure.
The Lund string model is an example of a jet fragmentation model.
Jet production
Jets are produced in QCD hard scattering processes, which create quarks and gluons (collectively called partons in the partonic picture) with high transverse momentum.
The probability of creating a certain set of jets is described by the jet production cross section, which is an average of elementary perturbative QCD quark, antiquark, and gluon processes, weighted by the parton distribution functions. For the most frequent jet pair production process, the two particle scattering, the jet production cross section in a hadronic collision is given by

$$\sigma_{ab \to \mathrm{jets}} = \sum_{i,j} \int \mathrm{d}x_a\, \mathrm{d}x_b\; f_i^{(a)}(x_a, Q^2)\, f_j^{(b)}(x_b, Q^2)\; \hat{\sigma}_{ij \to k}$$

with
x, Q2: longitudinal momentum fraction and momentum transfer
$\hat{\sigma}_{ij \to k}$: perturbative QCD cross section for the reaction ij → k
$f_i^{(a)}(x, Q^2)$: parton distribution function for finding particle species i in beam a.
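To make the weighting by parton distribution functions concrete, the sketch below numerically evaluates the factorized double integral. The PDF shape, the partonic cross section, and all numerical values (collider energy, scale, cuts) are toy assumptions invented for illustration, not physical parameterizations.

```python
import numpy as np

# Schematic numeric evaluation of the factorized jet cross section:
#   sigma = sum_{ij} int dx_a dx_b f_i(x_a,Q^2) f_j(x_b,Q^2) sigma_hat(ij->k).
# Every functional form and number below is a toy choice.

def toy_pdf(x, q2):
    """Toy parton distribution, roughly valence-like: f(x) ~ x^-0.5 (1-x)^3."""
    return x ** -0.5 * (1.0 - x) ** 3

def toy_sigma_hat(s_hat):
    """Toy partonic cross section, falling with the partonic energy squared."""
    return 1.0 / s_hat

def toy_jet_cross_section(s=4.0e6, q2=1.0e4, s_hat_min=1.0e3, n=400):
    """Midpoint-rule double integral over momentum fractions x_a, x_b."""
    xs = (np.arange(n) + 0.5) / n        # grid on (0, 1)
    dx = 1.0 / n
    total = 0.0
    for xa in xs:
        s_hat = xa * xs * s              # partonic invariant mass^2 per x_b
        mask = s_hat > s_hat_min         # require a hard scattering
        total += np.sum(
            toy_pdf(xa, q2) * toy_pdf(xs[mask], q2) * toy_sigma_hat(s_hat[mask])
        ) * dx * dx
    return total

print(f"toy cross section: {toy_jet_cross_section():.3e} (arbitrary units)")
```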
Elementary cross sections are calculated, for example, to leading order of perturbation theory in Peskin & Schroeder (1995), section 17.4. Various parameterizations of parton distribution functions, and the calculation in the context of Monte Carlo event generators, are reviewed in T. Sjöstrand et al. (2003), section 7.4.1.
Jet fragmentation
Perturbative QCD calculations may have colored partons in the final state, but only the colorless hadrons that are ultimately produced are observed experimentally. Thus, to describe what is observed in a detector as a result of a given process, all outgoing colored partons must first undergo parton showering and then combination of the produced partons into hadrons. The terms fragmentation and hadronization are often used interchangeably in the literature to describe soft QCD radiation, formation of hadrons, or both processes together.
As the parton which was produced in a hard scatter exits the interaction, the strong coupling constant will increase with its separation. This increases the probability for QCD radiation, which is predominantly shallow-angled with respect to the progenitor parton. Thus, one parton will radiate gluons, which will in turn radiate quark–antiquark pairs and so on, with each new parton nearly collinear with its parent. This can be described by convolving the spinors with fragmentation functions, in a similar manner to the evolution of parton density functions. This is described by a Dokshitzer–Gribov–Lipatov–Altarelli–Parisi (DGLAP) type equation.
Parton showering produces partons of successively lower energy, and must therefore exit the region of validity for perturbative QCD. Phenomenological models must then be applied to describe how long showering occurs, and then the combination of the colored partons into bound states of colorless hadrons, which is inherently non-perturbative. One example is the Lund string model, which is implemented in many modern event generators.
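The successive energy degradation can be caricatured in a toy shower, sketched below, which repeatedly splits a parton with an energy fraction z drawn from a kernel of the DGLAP-like shape (1 + z²)/(1 − z) down to an assumed 1 GeV cutoff. Angles, flavors, and the running coupling, which any real shower tracks, are deliberately ignored here.

```python
import random

# Cartoon of parton showering: a parton of energy E repeatedly splits into
# fragments of energy z*E and (1-z)*E, with z drawn from a kernel shaped
# like P(z) ~ (1 + z^2)/(1 - z), until every fragment falls below a cutoff
# where perturbation theory stops being valid. Purely illustrative.

Z_MAX = 0.999   # regulates the soft divergence at z -> 1
CUTOFF = 1.0    # GeV (illustrative); below this, hadronization models take over

def sample_z():
    """Rejection-sample z from P(z) ~ (1+z^2)/(1-z) on (0, Z_MAX)."""
    while True:
        z = 1.0 - (1.0 - Z_MAX) ** random.random()  # inversion of 1/(1-z) envelope
        if random.random() < (1.0 + z * z) / 2.0:   # accept against the full kernel
            return z

def shower(energy):
    """Split a parton until all fragments are below the cutoff; energy is conserved."""
    pending, fragments = [energy], []
    while pending:
        e = pending.pop()
        if e < CUTOFF:
            fragments.append(e)
        else:
            z = sample_z()
            pending.extend((z * e, (1.0 - z) * e))
    return fragments

random.seed(1)
fragments = shower(100.0)   # shower a 100 GeV parton
print(len(fragments), "fragments; total energy =", round(sum(fragments), 6))
```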
Infrared and collinear safety
A jet algorithm is infrared safe if it yields the same set of jets after modifying an event to add soft radiation. Similarly, a jet algorithm is collinear safe if the final set of jets is not changed after introducing a collinear splitting of one of the inputs. There are several reasons why a jet algorithm must fulfill these two requirements. Experimentally, jets are useful if they carry information about the seed parton. When produced, the seed parton is expected to undergo a parton shower, which may include a series of nearly-collinear splittings before the hadronization starts. Furthermore, the jet algorithm must be robust with respect to fluctuations in the detector response. Theoretically, if a jet algorithm is not infrared and collinear safe, it cannot be guaranteed that a finite cross-section can be obtained at any order of perturbation theory.
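These two requirements can be probed numerically, as in the sketch below, which assumes the hypothetical `cluster` function from the anti-kT sketch earlier in this article: the hard-jet structure is compared before and after adding a very soft particle or splitting one input collinearly, with a small tolerance absorbing the physically irrelevant soft-momentum shift.

```python
# Numerical probe of IRC safety for a clustering function such as the
# `cluster` sketch above. IRC safety means the *hard* jet structure is
# unchanged when a very soft particle is added (infrared) or one input is
# split into two collinear pieces (collinear).

def hard_jets(particles, pt_min=10.0):
    jets = [j for j in cluster(particles) if j["pt"] > pt_min]
    return sorted(jets, key=lambda j: -j["pt"])

def same_structure(a, b, tol=0.05):
    """Same number of hard jets, with axes and pT matching within a tolerance."""
    return len(a) == len(b) and all(
        abs(x["pt"] - y["pt"]) < tol and abs(x["eta"] - y["eta"]) < tol
        for x, y in zip(a, b)
    )

def irc_probe(particles):
    baseline = hard_jets(particles)

    # infrared test: add a 10 MeV particle somewhere in the event
    soft = {"pt": 0.01, "eta": 1.3, "phi": 2.0, "e": 0.02}
    ir_safe = same_structure(hard_jets(particles + [soft]), baseline)

    # collinear test: replace the first input by two exactly collinear halves
    p = particles[0]
    half = dict(p, pt=0.5 * p["pt"], e=0.5 * p["e"])
    co_safe = same_structure(hard_jets(particles[1:] + [half, dict(half)]), baseline)
    return ir_safe, co_safe
```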
See also
Dijet event
References
M. Gyulassy et al., "Jet Quenching and Radiative Energy Loss in Dense Nuclear Matter", in R.C. Hwa & X.-N. Wang (eds.), Quark Gluon Plasma 3 (World Scientific, Singapore, 2003).
J. E. Huth et al., in E. L. Berger (ed.), Proceedings of Research Directions For The Decade: Snowmass 1990, (World Scientific, Singapore, 1992), 134. (Preprint at Fermilab Library Server)
M. E. Peskin, D. V. Schroeder, "An Introduction to Quantum Field Theory" (Westview, Boulder, CO, 1995).
T. Sjöstrand et al., "Pythia 6.3 Physics and Manual", Report LU TP 03-38 (2003).
G. Sterman, "QCD and Jets", Report YITP-SB-04-59 (2004).
External links
The Pythia/Jetset Monte Carlo event generator
The FastJet jet clustering program
Experimental particle physics | Jet (particle physics) | [
"Physics"
] | 1,528 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
2,178,320 | https://en.wikipedia.org/wiki/Milwaukee%20Metropolitan%20Sewerage%20District | The Milwaukee Metropolitan Sewerage District (MMSD) is a regional government agency that provides water reclamation and flood management services for about 1.1 million people in 28 communities in the Greater Milwaukee Area. A recipient of the U.S. Water Prize and many other awards, the District has captured and cleaned 98.4 percent of the wastewater entering its regional system since 1994; the national goal is to capture and clean 85 percent of all the rain and wastewater that enters such sewer systems.
With headquarters and a central laboratory along the Menomonee River near downtown Milwaukee, it has two wastewater treatment plants: the Jones Island Water Reclamation Facility, located at Jones Island in Milwaukee, and a second facility at the South Shore in Oak Creek. These facilities were operated by United Water under a 10-year agreement ending March 1, 2008. Veolia Water is the current operator.
"The world’s first large scale wastewater treatment plant was constructed on Jones Island, near the shore of Lake Michigan." The primary wastewater treatment plant at Jones Island was one of the first of its kind when the original activated sludge plant was constructed in 1925. MMSD was the first to market biosolids created through this process as a fertilizer under the name "Milorganite." The Jones Island Plant was among the first sewage treatment plants in the United States to succeed in using the activated sludge treatment process. "It was the first treatment facility to economically dispose of the recovered sludge by producing an organic fertilizer." In the early 1980s the plant needed extensive reworking, "this does not detract from its historic significance as a pioneering facility in the field of pollution control technology." It had the largest capacity of any plant in the world when constructed. Its present treatment capacity is 390 million gallons per day, but average flow was only 105 million gallons per day between 2015 and 2019. The 1925 plant has been designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers. MMSD has maintained an inline storage system (ISS) based on tunnels to store and convey wet weather flows, including combined sewage, since 1994. The ISS tunnels have a total capacity of and a combined length of over . Since 1994, the ISS tunnels have prevented more than of combined sewer overflows (CSOs) and sanitary sewer overflows (SSOs) from entering area waterways, including Lake Michigan. Between 1994 and 2000, CSOs decreased from 40 to 60 events per year to an average of 2.5 events per year (WDNR 2001).
MMSD Initiatives
Flood Management
Flooding and erosion of the watersheds in the greater Milwaukee area threaten public health and private property. Because watershed boundaries do not necessarily follow municipal boundaries, reducing the risk of flooding requires looking at the watershed as a whole, including the complete river system and its tributaries. The Milwaukee Metropolitan Sewerage District has discretionary authority to maintain the watersheds in the Greater Milwaukee Area; its authority to reduce the risk of flooding is established in Wisconsin Statutes, Section 200.31(1). In the past, work has included: rehabilitation and removal of concrete, removal of sediment and flow-impeding objects, and widening floodplains for flood management purposes.
Fresh Coast Guardians Resource Center - Green Infrastructure
The Fresh Coast Resource Center (FCRC) helps southeastern Wisconsin improve the health of Lake Michigan through smart use of green infrastructure. The FCRC assists the community by providing the inspiration, education, and tools needed to create successful green infrastructure projects.
In 2017, MMSD opened the FCRC to empower homeowners, businesses, nonprofits, and government to take an active role in protecting the most precious natural resource: water. By helping the community to protect area rivers and Lake Michigan, MMSD works to achieve its goal of capturing the initial portion of each rainfall across its service area; keeping that first flush of rain out of the sewers helps to prevent sewer overflows and reduces runoff pollution.
Combined Sewer Overflows
Diverting combined sewer overflows (CSOs) to waterways is an emergency measure to prevent sewage backups into basements when wastewater treatment facilities reach capacity. MMSD follows the 2014 State of Wisconsin Department of Natural Resources discharge permit for sewer overflows. Combined sewers are sewers designed to collect rainwater runoff, domestic sewage, and industrial wastewater in the same pipe. When a CSO happens, MMSD posts it on its website and has five days to report it to the DNR. According to scientists at the UW-Milwaukee School of Freshwater Sciences, bacteria from CSOs survive for only up to 10 days due to the frigid temperatures of Lake Michigan. Combined sewer overflows are 90 to 95 percent stormwater and groundwater.
MMSD's CSOs are smaller than those of other cities on the Great Lakes, including Cleveland's and Detroit's, and are similar to those of the smaller city of Grand Rapids. Year-to-year CSO volumes vary depending on local rainfall; as a recent example, in 2014 MMSD CSOs amounted to only about 0.5 percent of the total flow through the municipal sewer system, meaning that 99.5 percent was captured and treated. MMSD's permit requires that CSOs be limited to no more than six overflows per year, consistent with the presumption approach in the CSO Control Policy.
Separating the sanitary and storm sewers would decrease the amount of water captured and treated; however, the amount of pollutants going into area rivers and Lake Michigan would increase. In urban areas with many impervious surfaces (buildings, parking lots, and streets), there is little opportunity for stormwater to be absorbed into green areas. The result is runoff with a high pollutant load that further erodes water quality.
Deep Tunnel
David Biello of Scientific American writes, "Since 1994, a more than 26-mile- (42-kilometer-) long tunnel has been keeping Milwaukee's sewage from spilling into Lake Michigan. This deep water tunnel—a holding tank on steroids—comprises two legs roughly 300 feet (90 meters) below ground that can hold nearly 500 million gallons (1.9 billion liters) of sewage and storm water during a downpour. And for the last 14 years it has kept 74 billion gallons (280 billion liters) of wastewater out of Lake Michigan, according to Bill Graffin, a spokesman for the Milwaukee Metropolitan Sewerage District."
The Deep Tunnel has kept billions of gallons of pollution out of Lake Michigan. Thanks to the tunnel and many other improvements, MMSD has captured and cleaned 98.4 percent of all the stormwater and wastewater that has entered the regional sewer system since 1994. The national goal is to capture and clean 85 percent for the more than 700 cities with systems like Milwaukee's.
See also
2010 Milwaukee flood
References
External links
Milwaukee metropolitan area
Water management authorities in the United States
Historic American Engineering Record in Wisconsin
Historic Civil Engineering Landmarks | Milwaukee Metropolitan Sewerage District | [
"Engineering"
] | 1,413 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
2,178,380 | https://en.wikipedia.org/wiki/Induced%20gamma%20emission | In physics, induced gamma emission (IGE) refers to the process of fluorescent emission of gamma rays from excited nuclei, usually involving a specific nuclear isomer. It is analogous to conventional fluorescence, which is defined as the emission of a photon (unit of light) by an excited electron in an atom or molecule. In the case of IGE, nuclear isomers can store significant amounts of excitation energy for times long enough for them to serve as nuclear fluorescent materials. There are over 800 known nuclear isomers, but almost all are too intrinsically radioactive to be considered for applications. Two proposed nuclear isomers appeared to be physically capable of IGE fluorescence in safe arrangements: tantalum-180m and hafnium-178m2.
History
Induced gamma emission is an example of interdisciplinary research bordering on both nuclear physics and quantum electronics. Viewed as a nuclear reaction it would belong to a class in which only photons were involved in creating and destroying states of nuclear excitation. It is a class usually overlooked in traditional discussions. In 1939 Pontecorvo and Lazard reported the first example of this type of reaction. Indium was the target and in modern terminology describing nuclear reactions it would be written 115In(γ,γ')115mIn. The product nuclide carries an "m" to denote that it has a long enough half life (4.5 h in this case) to qualify as being a nuclear isomer. That is what made the experiment possible in 1939 because the researchers had hours to remove the products from the irradiating environment and then to study them in a more appropriate location.
With projectile photons, momentum and energy can be conserved only if the incident photon, X-ray or gamma, has precisely the energy corresponding to the difference in energy between the initial state of the target nucleus and some excited state that is not too different in terms of quantum properties such as spin. There is no threshold behavior: the incident projectile disappears and its energy is transferred into internal excitation of the target nucleus. It is a resonant process that is uncommon in nuclear reactions but normal in the excitation of fluorescence at the atomic level. Only as recently as 1988 was the resonant nature of this type of reaction finally proven. Such resonant reactions are more readily described by the formalism of atomic fluorescence, and further development was facilitated by an interdisciplinary approach to IGE.
There is little conceptual difference in an IGE experiment when the target is a nuclear isomer. Such a reaction as mX(γ,γ')X, where mX is one of the candidate isomers mentioned above, is only different because there are lower energy states for the product nuclide to enter after the reaction than there were at the start. Practical difficulties arise from the need to ensure safety from the spontaneous radioactive decay of nuclear isomers in quantities sufficient for experimentation. Lifetimes must be long enough that doses from the spontaneous decay from the targets always remain within safe limits. In 1988, Collins and coworkers reported the first excitation of IGE from a nuclear isomer. They excited fluorescence from the nuclear isomer tantalum-180m with X-rays produced by an external beam radiotherapy linac. Results were surprising and considered to be controversial until the resonant states excited in the target were identified.
Distinctive features
If an incident photon is absorbed by an initial state of a target nucleus, that nucleus will be raised to a more energetic state of excitation. If that state can radiate its energy only during a transition back to the initial state, the result is a scattering process as seen in the schematic figure. That is not an example of IGE.
If an incident photon is absorbed by an initial state of a target nucleus, that nucleus will be raised to a more energetic state of excitation. If there is a nonzero probability that sometimes that state will start a cascade of transitions as shown in the schematic, that state has been called a "gateway state" or "trigger level" or "intermediate state". One or more fluorescent photons are emitted, often with different delays after the initial absorption and the process is an example of IGE.
If the initial state of the target nucleus is its ground (lowest energy) state, then the fluorescent photons will have less energy than that of the incident photon (as seen in the schematic figure). Since the scattering channel is usually the strongest, it can "blind" the instruments being used to detect the fluorescence and early experiments preferred to study IGE by pulsing the source of incident photons while detectors were gated off and then concentrating upon any delayed photons of fluorescence when the instruments could be safely turned back on.
If the initial state of the target nucleus is a nuclear isomer (starting with more energy than the ground) it can also support IGE. However, in that case the schematic diagram is not simply the example seen for 115In but read from right to left with the arrows turned the other way. Such a "reversal" would require simultaneous (to within <0.25 ns) absorption of two incident photons of different energies to get from the 4 h isomer back up to the "gateway state". Usually the study of IGE from a ground state to an isomer of the same nucleus teaches little about how the same isomer would perform if used as the initial state for IGE. In order to support IGE an energy for an incident photon would have to be found that would "match" the energy needed to reach some other gateway state not shown in the schematic that could launch its own cascade down to the ground state.
If the target is a nuclear isomer storing a considerable amount of energy then IGE might produce a cascade that contains a transition that emits a photon with more energy than that of the incident photon. This would be the nuclear analog of upconversion in laser physics.
If the target is a nuclear isomer storing a considerable amount of energy then IGE might produce a cascade through a pair of excited states whose lifetimes are "inverted" so that in a collection of such nuclei, population would build up in the longer lived upper level while emptying rapidly from the shorter lived lower member of the pair. The resulting inversion of population might support some form of coherent emission analogous to amplified spontaneous emission (ASE) in laser physics. If the physical dimensions of the collection of target isomer nuclei were long and thin, then a form of gamma-ray laser might result.
Potential applications
Energy-specific dosimeters
Since the IGE from ground state nuclei requires the absorption of very specific photon energies to produce delayed fluorescent photons that are easily counted, there is the possibility to construct energy-specific dosimeters by combining several different nuclides. This was demonstrated for the calibration of the radiation spectrum from the DNA-PITHON pulsed nuclear simulator. Such a dosimeter could be useful in radiation therapy where X-ray beams may contain many energies. Since photons of different energies deposit their effects at different depths in the tissue being treated, it could help calibrate how much of the total dose would be deposited in the actual target volume.
Aircraft power
In February 2003, the non-peer reviewed New Scientist wrote about the possibility of an IGE-powered airplane, a variant on nuclear propulsion. The idea was to utilize 178m2Hf (presumably due to its high energy to weight ratio) which would be triggered to release gamma rays that would heat air in a chamber for jet propulsion. This power source is described as a "quantum nucleonic reactor", although it is not clear if this name exists only in reference to the New Scientist article.
Nuclear weaponry
It is partly this theoretical energy density that has made the entire IGE field so controversial. It has been suggested that the materials might be constructed to allow all of the stored energy to be released very quickly in a "burst". The possible energy release of the gammas alone would make IGE a potential high power "explosive" on its own, or a potential radiological weapon.
Fusion bomb ignition
The density of gammas produced in this reaction would be high enough that it might allow them to be used to compress the fusion fuel of a fusion bomb. If this turns out to be the case, it might allow a fusion bomb to be constructed with no fissile material inside (i.e. a pure fusion weapon); it is the control of the fissile material and the means for making it that underlies most attempts to stop nuclear proliferation.
See also
Particle-induced gamma emission
References
Literature
External links
"Scary Things Come in Small Packages", Washington Post article of 2004 by Sharon Weinberger |
Hf-isomer Summary Page of Results , C.B. Collins, University of Texas, Dallas
"Atomic Powered Global Hawk Jet Reving For Take-Off?", a SciScoop weblog entry |
Conflicting Results on a Long-Lived Nuclear Isomer of Hafnium Have Wider Implications This Physics Today article provides a balanced view from 2004.
Reprints of articles about nuclear isomers in peer reviewed journals. - The Center for Quantum Electronics, The University of Texas at Dallas.
Nuclear interdisciplinary topics | Induced gamma emission | [
"Physics"
] | 1,881 | [
"Nuclear interdisciplinary topics",
"Nuclear physics"
] |
2,178,421 | https://en.wikipedia.org/wiki/Straw%20chamber | A straw chamber is a type of gaseous ionization detector. It is a long tube with a wire down the center, filled with a gas that becomes ionized when a particle passes through. A potential difference is maintained between the wire and the walls of the tube, so that once the gas is ionized, electrons move in one direction and ions in the other. This produces a current which indicates that a particle has passed through the chamber.
Many straws together can be used to track particles in a straw tracker. A straw tracker is a type of particle detector which uses many straw chambers to track the path of a particle. The path of a particle is determined by the best fit to all the straws with hits. Since the time for a particular straw to produce a signal is proportional to the distance of the particle's closest approach to that chamber's wire, if a particle on a predictable path (e.g. a helix in a magnetic field) passes through many straws, the path of the particle can be determined more precisely than the size of any particular straw.
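A minimal sketch of such a fit is shown below; the straw geometry, drift velocity, and resolution are invented for illustration. Each hit contributes a drift radius (drift velocity times drift time), and a straight line is fitted by minimizing the squared differences between line-to-wire distances and drift radii.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a straw-tracker track fit. Each hit straw contributes a drift
# radius r = v_drift * t_drift, the distance of closest approach of the
# track to that straw's wire; fitting a line y = m*x + b through many
# straws beats the single-straw resolution. All numbers are hypothetical.

DRIFT_VELOCITY = 0.05   # mm per ns (hypothetical)

def line_wire_distance(m, b, wx, wy):
    """Perpendicular distance from the wire at (wx, wy) to the line y = m*x + b."""
    return np.abs(m * wx - wy + b) / np.sqrt(m * m + 1.0)

def fit_track(wires, drift_times, sigma=0.15):
    """Least-squares fit: residual = line-to-wire distance minus drift radius."""
    wx, wy = wires[:, 0], wires[:, 1]
    radii = DRIFT_VELOCITY * drift_times

    def chi2(params):
        m, b = params
        return np.sum(((line_wire_distance(m, b, wx, wy) - radii) / sigma) ** 2)

    return minimize(chi2, x0=[0.0, 0.0], method="Nelder-Mead").x

# Toy event: staggered straws around the true track y = 0.3*x + 1.0 (mm)
rng = np.random.default_rng(0)
xs = np.arange(10.0, 100.0, 10.0)
offsets = np.where(np.arange(len(xs)) % 2 == 0, 0.6, -0.6)   # alternate sides
wires = np.column_stack([xs, 0.3 * xs + 1.0 + offsets])
true_d = np.abs(0.3 * wires[:, 0] - wires[:, 1] + 1.0) / np.sqrt(1.0 + 0.3 ** 2)
drift_times = np.clip(true_d + rng.normal(0.0, 0.15, len(xs)), 0.0, None) / DRIFT_VELOCITY
print("fitted (slope, intercept):", fit_track(wires, drift_times))
```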
Specific uses
There are about 298,000 drift tubes (straws) in the Transition Radiation Tracker (TRT) of the ATLAS experiment at the Large Hadron Collider.
References
Particle detectors | Straw chamber | [
"Physics",
"Technology",
"Engineering"
] | 257 | [
"Particle physics stubs",
"Particle detectors",
"Particle physics",
"Measuring instruments"
] |
2,178,474 | https://en.wikipedia.org/wiki/Particle%20identification | Particle identification is the process of using information left by a particle passing through a particle detector to identify the type of particle. Particle identification reduces backgrounds and improves measurement resolutions, and is essential to many analyses at particle detectors.
Charged particles
Charged particles have been identified using a variety of techniques. All methods rely on a measurement of the momentum in a tracking chamber combined with a measurement of the velocity to determine the charged particle's mass, and therefore its identity.
Specific ionization
A charged particle loses energy in matter by ionization at a rate determined in part by its velocity. The energy loss per unit distance is typically called dE/dx. The energy loss is measured either in dedicated detectors, or in tracking chambers designed to also measure energy loss. The energy lost in a thin layer of material is subject to large fluctuations, and therefore accurate determination requires a large number of measurements. Individual measurements in the low- and high-energy tails are excluded.
Time of flight
Time-of-flight detectors determine charged particle velocity by measuring the time required to travel from the interaction point to the time-of-flight detector, or between two detectors. The ability to distinguish particle types diminishes as the particle velocity approaches its maximum allowed value, the speed of light, and thus is efficient only for particles with a small Lorentz factor.
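A worked example of the underlying arithmetic, with illustrative numbers: the flight-time difference between a pion and a kaon of the same momentum over an assumed 2 m path, and the mass recovered by inverting the velocity measurement (natural units, masses in GeV).

```python
import math

# Time-of-flight particle identification: measure momentum p and flight
# time t over a known path, then recover the mass. beta = p / sqrt(p^2 + m^2).

C = 0.299792458                     # m/ns, speed of light
M_PION, M_KAON = 0.1396, 0.4937     # GeV

def flight_time(p, mass, path_m):
    beta = p / math.hypot(p, mass)
    return path_m / (beta * C)      # ns

def mass_from_tof(p, t_ns, path_m):
    """Invert the velocity measurement: m = p * sqrt(1/beta^2 - 1)."""
    beta = path_m / (C * t_ns)
    return p * math.sqrt(1.0 / beta ** 2 - 1.0)

p, L = 1.0, 2.0                     # 1 GeV/c momentum over a 2 m flight path
t_pi, t_K = flight_time(p, M_PION, L), flight_time(p, M_KAON, L)
print(f"pion: {t_pi:.3f} ns, kaon: {t_K:.3f} ns, "
      f"difference: {(t_K - t_pi) * 1000:.0f} ps")
print(f"mass recovered from kaon TOF: {mass_from_tof(p, t_K, L):.4f} GeV")
```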
Cherenkov detectors
Cherenkov radiation is emitted by a charged particle when it passes through a material with a speed greater than c/n, where n is the index of refraction of the material. The angle of the photons with respect to the charged particle's direction depends on velocity. A number of Cherenkov detector geometries have been used.
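The velocity dependence can be made concrete with the emission-angle relation cos θ = 1/(nβ); in the sketch below, the aerogel index n = 1.05 and the chosen momenta are illustrative assumptions.

```python
import math

# Cherenkov-angle arithmetic: radiation requires beta > 1/n, and photons
# are emitted at cos(theta) = 1/(n * beta). With an assumed radiator index
# n = 1.05, pions radiate while kaons of the same momentum can sit below
# threshold, which is the basis of the particle separation.

M_PION, M_KAON = 0.1396, 0.4937     # GeV

def beta(p, mass):
    return p / math.hypot(p, mass)  # beta = p / sqrt(p^2 + m^2)

def cherenkov_angle_deg(b, n=1.05):
    cos_t = 1.0 / (n * b)
    return None if cos_t > 1.0 else math.degrees(math.acos(cos_t))

def show(angle):
    return "below threshold" if angle is None else f"{angle:.1f} deg"

for p in (0.5, 1.0, 2.0, 4.0):
    print(f"p = {p} GeV/c   pi: {show(cherenkov_angle_deg(beta(p, M_PION)))}"
          f"   K: {show(cherenkov_angle_deg(beta(p, M_KAON)))}")
```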
Photons
Photons are identified because they leave all their energy in a detector's electromagnetic calorimeter, but do not appear in the tracking chamber (see, for example, ATLAS Inner Detector) because they are neutral. A neutral pion which decays inside the EM calorimeter can replicate this effect.
Electrons
Electrons appear as tracks in the inner detector and deposit all their energy in the electromagnetic calorimeter. The energy deposited in the calorimeter must match the momentum measured in the tracking chamber.
Muons
Muons penetrate more material than other charged particles, and can therefore be identified by their presence in the outermost detectors.
Tau particles
Tau identification requires differentiating the narrow "jet" produced by the hadronic decay of the tau from ordinary quark jets.
Neutrinos
Neutrinos do not interact in particle detectors, and therefore escape undetected. Their presence can be inferred by the momentum imbalance of the visible particles in an event. In electron-positron colliders, both the neutrino momentum in all three dimensions and the neutrino energy can be reconstructed. Neutrino energy reconstruction requires accurate charged particle identification. In colliders using hadrons, only the momentum transverse to the beam direction can be determined.
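A minimal sketch of the transverse-momentum-imbalance computation used at hadron colliders; the visible-particle list is invented for illustration.

```python
import math

# Inferring an "invisible" neutrino from momentum imbalance: at a hadron
# collider only the transverse components are constrained, so the missing
# transverse momentum (MET) is minus the vector sum of the visible pT.

def missing_transverse_momentum(visible):
    """visible: list of (pt, phi) pairs; returns (MET magnitude, MET phi)."""
    px = -sum(pt * math.cos(phi) for pt, phi in visible)
    py = -sum(pt * math.sin(phi) for pt, phi in visible)
    return math.hypot(px, py), math.atan2(py, px)

event = [(45.0, 0.3), (38.0, 2.9), (12.0, -1.4)]   # hypothetical jets/leptons
met, met_phi = missing_transverse_momentum(event)
print(f"MET = {met:.1f} GeV at phi = {met_phi:.2f}")
```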
Neutral hadrons
Neutral hadrons can sometimes be identified in calorimeters. In particular, antineutrons and neutral kaons can be identified. Neutral hadrons can also be identified at electron-positron colliders in the same way as neutrinos.
Heavy quarks
Quark flavor tagging identifies the flavor of quark that a jet comes from. B-tagging, the identification of bottom quarks, is the most important example. B-tagging relies on the bottom quark being the heaviest quark involved in a hadronic decay (top quarks are heavier, but to have a top in a decay it is necessary to produce some even heavier particle that subsequently decays into a top). Because the bottom quark is heavy, the bottom hadron decays weakly with a lifetime long enough to travel a measurable distance yet short enough to decay inside the detector, so it is possible to look for its displaced decay vertex in the inner tracker. Additionally, its decay products carry large momentum transverse to the jet axis, resulting in a high track multiplicity. Charm tagging using similar techniques is also possible, but extremely difficult due to the lower mass. Tagging jets from lighter quarks is simply impossible; due to QCD background, there are simply too many indistinguishable jets.
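The displaced-vertex logic can be caricatured as below, with invented thresholds and toy numbers; real taggers combine many more observables in multivariate classifiers.

```python
# Toy illustration of vertex-based b-tagging: a jet is tagged if enough of
# its tracks have a large transverse impact parameter significance
# (d0 / sigma_d0), the signature of a displaced b-hadron decay vertex.
# The cut values and the example numbers are invented for illustration.

def is_b_tagged(track_d0, track_sigma, sig_cut=3.0, n_required=2):
    """track_d0, track_sigma: per-track impact parameters and errors (mm)."""
    significant = sum(
        1 for d0, sig in zip(track_d0, track_sigma) if abs(d0) / sig > sig_cut
    )
    return significant >= n_required

# hypothetical jet: two clearly displaced tracks out of four -> tagged
print(is_b_tagged([0.08, 0.002, 0.12, 0.01], [0.01, 0.01, 0.01, 0.01]))  # True
```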
See also
Spark chamber
Wire chamber
References
Experimental particle physics | Particle identification | [
"Physics"
] | 848 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
2,178,487 | https://en.wikipedia.org/wiki/Epitope%20mapping | In immunology, epitope mapping is the process of experimentally identifying the binding site, or epitope, of an antibody on its target antigen (usually, on a protein). Identification and characterization of antibody binding sites aid in the discovery and development of new therapeutics, vaccines, and diagnostics. Epitope characterization can also help elucidate the binding mechanism of an antibody and can strengthen intellectual property (patent) protection. Experimental epitope mapping data can be incorporated into robust algorithms to facilitate in silico prediction of B-cell epitopes based on sequence and/or structural data.
Epitopes are generally divided into two classes: linear and conformational/discontinuous. Linear epitopes are formed by a continuous sequence of amino acids in a protein. Conformational epitopes are formed by amino acids that are nearby in the folded 3D structure but distant in the protein sequence. Note that conformational epitopes can include some linear segments. B-cell epitope mapping studies suggest that most interactions between antigens and antibodies, particularly autoantibodies and protective antibodies (e.g., in vaccines), rely on binding to discontinuous epitopes.
Importance for antibody characterization
By providing information on mechanism of action, epitope mapping is a critical component in therapeutic monoclonal antibody (mAb) development. Epitope mapping can reveal how a mAb exerts its functional effects - for instance, by blocking the binding of a ligand or by trapping a protein in a non-functional state. Many therapeutic mAbs target conformational epitopes that are only present when the protein is in its native (properly folded) state, which can make epitope mapping challenging. Epitope mapping has been crucial to the development of vaccines against prevalent or deadly viral pathogens, such as chikungunya, dengue, Ebola, and Zika viruses, by determining the antigenic elements (epitopes) that confer long-lasting immunization effects.
Complex target antigens, such as membrane proteins (e.g., G protein-coupled receptors [GPCRs]) and multi-subunit proteins (e.g., ion channels) are key targets of drug discovery. Mapping epitopes on these targets can be challenging because of the difficulty in expressing and purifying these complex proteins. Membrane proteins frequently have short antigenic regions (epitopes) that fold correctly only when in the context of a lipid bilayer. As a result, mAb epitopes on these membrane proteins are often conformational and, therefore, are more difficult to map.
Importance for intellectual property (IP) protection
Epitope mapping has become prevalent in protecting the intellectual property (IP) of therapeutic mAbs. Knowledge of the specific binding sites of antibodies strengthens patents and regulatory submissions by distinguishing between current and prior art (existing) antibodies. The ability to differentiate between antibodies is particularly important when patenting antibodies against well-validated therapeutic targets (e.g., PD1 and CD20) that can be drugged by multiple competing antibodies. In addition to verifying antibody patentability, epitope mapping data have been used to support broad antibody claims submitted to the United States Patent and Trademark Office.
Epitope data have been central to several high-profile legal cases involving disputes over the specific protein regions targeted by therapeutic antibodies. In this regard, the Amgen v. Sanofi/Regeneron Pharmaceuticals PCSK9 inhibitor case hinged on the ability to show that both the Amgen and Sanofi/Regeneron therapeutic antibodies bound to overlapping amino acids on the surface of PCSK9.
Methods
There are several methods available for mapping antibody epitopes on target antigens:
X-ray co-crystallography and cryogenic electron microscopy (cryo-EM). X-ray co-crystallography has historically been regarded as the gold-standard approach for epitope mapping because it allows direct visualization of the interaction between the antigen and antibody. Cryo-EM can similarly provide high-resolution maps of antibody-antigen interactions. However, both approaches are technically challenging, time-consuming, and expensive, and not all proteins are amenable to crystallization. Moreover, these techniques are not always feasible due to the difficulty in obtaining sufficient quantities of correctly folded and processed protein. Finally, neither technique can distinguish key epitope residues (energetic "hot spots") for mAbs that bind to the same group of amino acids.
Array-based oligopeptide scanning. Also known as overlapping peptide scan or pepscan analysis, this technique uses a library of oligopeptide sequences from overlapping and non-overlapping segments of a target protein, and tests for their ability to bind the antibody of interest. This method is fast, relatively inexpensive, and specifically suited to profile epitopes for large numbers of candidate antibodies against a defined target. The epitope mapping resolution depends on the number of overlapping peptides that are used. The main disadvantage of this approach is that discontinuous epitopes are deconstructed into smaller peptides, which can cause lower binding affinities. However, advances have been made with technologies such as constrained peptides, which can be used to mimic conformational as well as discontinuous epitopes. For example, an antibody against CD20 was mapped in a study using array-based oligopeptide scanning by combining non-adjacent peptide sequences from different parts of the target protein and enforcing conformational rigidity onto this combined peptide (e.g., by using CLIPS scaffolds). Replacement analysis on peptides also allows single amino acid resolution, and can therefore pinpoint key epitope residues. A minimal sketch of how such an overlapping-peptide library is generated appears after this list of methods.
Site-directed mutagenesis mapping. The molecular biological technique of site-directed mutagenesis (SDM) can be used to enable epitope mapping. In SDM, systematic mutations of amino acids are introduced into the sequence of the target protein. Binding of an antibody to each mutated protein is tested to identify the amino acids that comprise the epitope. This technique can be used to map both linear and conformational epitopes but is labor-intensive and time-consuming, typically limiting analysis to a small number of amino-acid residues.
High-throughput shotgun mutagenesis epitope mapping. Shotgun mutagenesis is a high-throughput approach for mapping the epitopes of mAbs. The shotgun mutagenesis technique begins with the creation of a mutation library of the entire target antigen, with each clone containing a unique amino acid mutation (typically an alanine substitution). Hundreds of plasmid clones from the library are individually arrayed in 384-well microplates, expressed in human cells, and tested for antibody binding. Amino acids of the target required for antibody binding are identified by a loss of immunoreactivity. These residues are mapped onto structures of the target protein to visualize the epitope. Benefits of high-throughput shotgun mutagenesis epitope mapping include: 1) the ability to identify both linear and conformational epitopes, 2) a shorter assay time than other methods, 3) the presentation of properly folded and post-translationally modified proteins, and 4) the ability to identify key amino acids that drive the energetic interactions (energetic "hot spots" of the epitope).
Hydrogen–deuterium exchange (HDX). This method gives information about the solvent accessibility of various parts of the antigen and antibody, demonstrating reduced solvent accessibility in regions of protein-protein interactions. One of its advantages is that it determines the interaction site of the antigen-antibody complex in its native solution, and does not introduce any modifications (e.g. mutation) to either the antigen or the antibody. HDX epitope mapping has also been demonstrated to be an effective method to rapidly supply complete information for epitope structure. It does not usually provide data at the level of individual amino acids, but this limitation is being improved by new technology advancements. It has recently been recommended as a fast and cost-effective epitope mapping approach, using the complex protein system influenza hemagglutinin as an example.
Cross-linking-coupled mass spectrometry. Antibody and antigen are bound to a labeled cross-linker, and complex formation is confirmed by high-mass MALDI detection. The binding location of the antibody to the antigen can then be identified by mass spectrometry (MS). The cross-linked complex is highly stable and can be exposed to various enzymatic and digestion conditions, allowing many different peptide options for detection. MS or MS/MS techniques are used to detect the amino-acid locations of the labelled cross-linkers and the bound peptides (both epitope and paratope are determined in one experiment). The key advantage of this technique is the high sensitivity of MS detection, which means that very little material (hundreds of micrograms or less) is needed.
Other methods, such as yeast display, phage display, and limited proteolysis, provide high-throughput monitoring of antibody binding but lack resolution, especially for conformational epitopes.
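For the array-based oligopeptide scanning approach above, the overlapping-peptide library is built by sliding a fixed window along the antigen sequence. The sketch below uses an illustrative 15-mer length, a 4-residue offset, and a hypothetical antigen sequence; actual peptide lengths and offsets vary by study.

```python
# Minimal sketch of building the overlapping-peptide library used in
# array-based oligopeptide scanning: slide a fixed-length window along the
# antigen with a chosen offset. Window length and offset are illustrative
# choices, not a fixed standard; the antigen sequence is hypothetical.

def overlapping_peptides(sequence, length=15, offset=4):
    """15-mers every 4 residues -> 11-residue overlap between neighbors."""
    peptides = [
        sequence[i:i + length]
        for i in range(0, len(sequence) - length + 1, offset)
    ]
    # keep a final window flush with the C-terminus so no residue is missed
    if (len(sequence) - length) % offset:
        peptides.append(sequence[-length:])
    return peptides

antigen = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"  # hypothetical
library = overlapping_peptides(antigen)
print(len(library), "peptides; mapping resolution ~ offset = 4 residues")
```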
See also
Epitope binning
References
External links
Immunologic tests
Antigenic determinant | Epitope mapping | [
"Biology"
] | 1,933 | [
"Immunologic tests"
] |
2,178,511 | https://en.wikipedia.org/wiki/Ephebos | Ephebos (plural epheboi), latinized as ephebus (pl. ephebi) and anglicised as ephebe (pl. ephebes), is a term for a male adolescent in Ancient Greece. The term was particularly used to denote one who was doing military training and preparing to become an adult. From about 335 BC, ephebes from Athens (aged between 18–20) underwent two years of military training under supervision, during which time they were exempt from civic duties and deprived of most civic rights. During the 3rd century BC, ephebic service ceased to be compulsory and its time was reduced to one year. By the 1st century BC, the ephebia became an institution reserved for wealthy individuals and, besides military training, it also included philosophic and literary studies.
History
Though the word ephebos (from epi "upon" + hebe "youth", "early manhood") can simply refer to the adolescent age of young men of training age, its main use is for the members, exclusively from that age group, of an official institution (ephebia) that saw to building them into citizens, but especially to training them as soldiers, sometimes already sent into the field; the Greek city states (poleis) mainly depended (like the Roman Republic) on its militia of citizens for defense.
In the time of Aristotle (384–322 BC), Athens engraved the names of the enrolled ephebi on a bronze pillar (formerly on wooden tablets) in front of the council-chamber. After admission to the college, the ephebus took the oath of allegiance (as recorded in histories by Pollux and Stobaeus—but not in Aristotle) in the temple of Aglaurus and was sent to Munichia or Acte as a member of the garrison. At the end of the first year of training the ephebi were reviewed; if their performance was satisfactory, the state provided each with a spear and a shield, which, together with the chlamys (cloak) and petasos (broad-brimmed hat), made up their equipment. In their second year they were transferred to other garrisons in Attica, patrolled the frontiers, and on occasion took an active part in war. During these two years they remained free from taxation, and were generally not allowed to appear in the law courts as plaintiffs or defendants. The ephebi took part in some of the most important Athenian festivals. Thus during the Eleusinian Mysteries they were sent to fetch the sacred objects from Eleusis and to escort the image of Iacchus on the sacred way. They also performed police duty at the meetings of the ecclesia.
After the end of the 4th century BC, the institution underwent a radical change. Enrolment ceased to be obligatory, lasted only for a year, and the limit of age was dispensed with. Inscriptions attest a continually decreasing number of ephebi, and with the admission of foreigners the college lost its representative national character. This was mainly due to the weakening of the military spirit and to the progress of intellectual culture. The military element was no longer all-important, and the ephebia became a sort of university for well-to-do young men of good family, whose social position has been compared with that of the Athenian "knights" of earlier times. The institution lasted till the end of the 3rd century AD.
In the Hellenistic and Roman periods, foreigners, including Romans, began to be admitted as ephebes. At this period the college of ephebi was a miniature city, which possessed an archon, strategos, herald and other officials, after the model of the city of Athens.
Sculpture
In Ancient Greek sculpture, an ephebe is a sculptural type depicting a nude ephebos (Archaic examples of the type are also often known as the kouros type, or kouroi in the plural). This typological name often occurs in the form "the ... Ephebe", where the omitted qualifier is the collection to which the object belongs or belonged, or the site on which it was found (e.g. the Agrigento Ephebe).
Gallery
See also
Bishōnen
Ephebe, a fictional nation in Terry Pratchett's Discworld
Ephebic oath
Ephebophilia
Kóryos
Kouros
Pauly-Wissowa
References
Further reading
External links
Ephebarchic Law of Amphipolis
Ancient Greek sculptures
Human development
Social classes in ancient Greece
Society of ancient Greece
Society of ancient Rome
Pederasty in ancient Greece | Ephebos | [
"Biology"
] | 1,175 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
2,178,530 | https://en.wikipedia.org/wiki/Tetraamminecopper%28II%29%20sulfate | Tetraamminecopper(II) sulfate monohydrate, or more precisely tetraammineaquacopper(II) sulfate, is the salt with the formula [Cu(NH3)4]SO4·H2O, or more precisely [Cu(NH3)4(H2O)]SO4. This dark blue to purple solid is a sulfuric acid salt of the metal complex [Cu(NH3)4(H2O)]2+ (the tetraammineaquacopper(II) cation). It is closely related to Schweizer's reagent, which is used for the production of cellulose fibers in the production of rayon.
Synthesis
This compound can be prepared by adding a concentrated aqueous solution of ammonia to a saturated aqueous solution of copper(II) sulfate pentahydrate, followed by precipitation of the product with ethanol or isopropanol.
Chemical reaction and solubility
The deep blue crystalline solid tends to hydrolyse and evolve (release) ammonia upon standing in air. It is fairly soluble in water. The brilliant dark blue-violet color of a tetraamminecopper(II) sulfate solution is due to the presence of the tetraamminecopper(II) cation, [Cu(NH3)4]2+. Often, the dark blue-violet color is used as a positive test to verify the presence of Cu2+ in a solution.
Structure and properties
The solid state structure of tetraamminecopper(II) sulfate monohydrate confirms that the compound is a salt. The complex cation is [Cu(NH3)4(H2O)]2+ (the tetraammineaquacopper(II) cation), which has a square pyramidal molecular geometry. The Cu–N and Cu–O bond lengths are about 210 and 233 pm, respectively, as determined by X-ray crystallography. The correct concentrations of ammonia and copper(II) sulfate solution needed to synthesize the complex can be determined by colorimetry. The combination of the correct concentrations will produce the highest absorbance reading on the colorimeter, and as a result the formula of the complex can be verified.
Corrosion
The characteristic deep blue colour of the tetraammine complex is found in brass and copper alloys where attack from ammonia has occurred, leading to cracking. The problem was first found in ammunition cartridge cases when they were stored near animal waste, which produced trace amounts of ammonia. This type of corrosion is known as season cracking.
Uses
The closely related Schweizer's reagent is used for the production of cuprammonium rayon.
References
Copper complexes
Sulfates
Ammine complexes
Tetraamminecopper(II) compounds | Tetraamminecopper(II) sulfate | [
"Chemistry"
] | 505 | [
"Sulfates",
"Salts"
] |
2,178,570 | https://en.wikipedia.org/wiki/Cosmic%20dust | Cosmic dust, also called extraterrestrial dust, space dust, or star dust, is dust that occurs in outer space or has fallen onto Earth. Most cosmic dust particles measure between a few molecules and 0.1 mm (100 micrometers), such as micrometeoroids (<30 μm) and meteoroids (>30 μm). Cosmic dust can be further distinguished by its astronomical location: intergalactic dust, interstellar dust, interplanetary dust (as in the zodiacal cloud), and circumplanetary dust (as in a planetary ring). There are several methods to obtain space dust measurements.
In the Solar System, interplanetary dust causes the zodiacal light. Solar System dust includes comet dust, planetary dust (like from Mars), asteroidal dust, dust from the Kuiper belt, and interstellar dust passing through the Solar System. Thousands of tons of cosmic dust are estimated to reach Earth's surface every year, with most grains having a mass between 10−16 kg (0.1 pg) and 10−4 kg (0.1 g). The density of the dust cloud through which the Earth is traveling is approximately 10−6 dust grains/m3.
Cosmic dust contains some complex organic compounds (amorphous organic solids with a mixed aromatic–aliphatic structure) that could be created naturally, and rapidly, by stars. A smaller fraction of dust in space is "stardust" consisting of larger refractory minerals that condensed as matter left by stars.
Interstellar dust particles were collected by the Stardust spacecraft and samples were returned to Earth in 2006.
Study and importance
Cosmic dust was once solely an annoyance to astronomers, as it obscures objects they wished to observe. When infrared astronomy began, the dust particles were observed to be significant and vital components of astrophysical processes. Their analysis can reveal information about phenomena like the formation of the Solar System. For example, cosmic dust can drive the mass loss when a star is nearing the end of its life, play a part in the early stages of star formation, and form planets. In the Solar System, dust plays a major role in the zodiacal light, Saturn's B Ring spokes, the outer diffuse planetary rings at Jupiter, Saturn, Uranus and Neptune, and comets.
The interdisciplinary study of dust brings together different scientific fields: physics (solid-state, electromagnetic theory, surface physics, statistical physics, thermal physics), fractal mathematics, surface chemistry on dust grains, meteoritics, as well as every branch of astronomy and astrophysics. These disparate research areas can be linked by the following theme: the cosmic dust particles evolve cyclically; chemically, physically and dynamically. The evolution of dust traces out paths in which the Universe recycles material, in processes analogous to the daily recycling steps with which many people are familiar: production, storage, processing, collection, consumption, and discarding.
Observations and measurements of cosmic dust in different regions provide an important insight into the Universe's recycling processes; in the clouds of the diffuse interstellar medium, in molecular clouds, in the circumstellar dust of young stellar objects, and in planetary systems such as the Solar System, where astronomers consider dust as in its most recycled state. The astronomers accumulate observational ‘snapshots’ of dust at different stages of its life and, over time, form a more complete movie of the Universe's complicated recycling steps.
Parameters such as the particle's initial motion, material properties, and the intervening plasma and magnetic field determine the dust particle's arrival at the dust detector. Slightly changing any of these parameters can give significantly different dust dynamical behavior. Therefore, one can learn about where that object came from and about the intervening medium.
Detection methods
A wide range of methods is available to study cosmic dust. Cosmic dust can be detected by remote sensing methods that utilize the radiative properties of cosmic dust particles, cf. zodiacal light measurement.
Cosmic dust can also be detected directly ('in-situ') using a variety of collection methods and from a variety of collection locations. Estimates of the daily influx of extraterrestrial material entering the Earth's atmosphere range between 5 and 300 tonnes.
NASA collects samples of star dust particles in the Earth's atmosphere using plate collectors under the wings of stratospheric-flying airplanes. Dust samples are also collected from surface deposits on the large Earth ice-masses (Antarctica and Greenland/the Arctic) and in deep-sea sediments.
Don Brownlee at the University of Washington in Seattle first reliably identified the extraterrestrial nature of collected dust particles in the latter 1970s. Another source is the meteorites, which contain stardust extracted from them. Stardust grains are solid refractory pieces of individual presolar stars. They are recognized by their extreme isotopic compositions, which match isotopic compositions found only within evolved stars, prior to any mixing with the interstellar medium. These grains condensed from the stellar matter as it cooled while leaving the star.
In interplanetary space, dust detectors on planetary spacecraft have been built and flown, some are presently flying, and more are presently being built to fly. The large orbital velocities of dust particles in interplanetary space (typically 10–40 km/s) make intact particle capture problematic. Instead, in-situ dust detectors are generally devised to measure parameters associated with the high-velocity impact of dust particles on the instrument, and then derive physical properties of the particles (usually mass and velocity) through laboratory calibration (i.e., impacting accelerated particles with known properties onto a laboratory replica of the dust detector). Over the years dust detectors have measured, among others, the impact light flash, acoustic signal and impact ionisation. Recently the dust instrument on Stardust captured particles intact in low-density aerogel.
Dust detectors in the past flew on the HEOS 2, Helios, Pioneer 10, Pioneer 11, Giotto, Galileo, Ulysses and Cassini space missions, on the Earth-orbiting LDEF, EURECA, and Gorid satellites, and some scientists have utilized the Voyager 1 and 2 spacecraft as giant Langmuir probes to directly sample the cosmic dust. Presently dust detectors are flying on the Ulysses, Proba, Rosetta, Stardust, and the New Horizons spacecraft. The collected dust at Earth or collected further in space and returned by sample-return space missions is then analyzed by dust scientists in their respective laboratories all over the world. One large storage facility for cosmic dust exists at the NASA Houston JSC.
Infrared light can penetrate cosmic dust clouds, allowing us to peer into regions of star formation and the centers of galaxies. NASA's Spitzer Space Telescope was the largest infrared space telescope, before the launch of the James Webb Space Telescope. During its mission, Spitzer obtained images and spectra by detecting the thermal radiation emitted by objects in space between wavelengths of 3 and 180 micrometres. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground. Findings from the Spitzer have revitalized the studies of cosmic dust. One report showed some evidence that cosmic dust is formed near a supermassive black hole.
Another detection mechanism is polarimetry. Dust grains are not spherical and tend to align to interstellar magnetic fields, preferentially polarizing starlight that passes through dust clouds. In nearby interstellar space, where interstellar reddening is not intense enough to be detected, high precision optical polarimetry has been used to glean the structure of dust within the Local Bubble.
In 2019, researchers found interstellar dust in Antarctica which they relate to the Local Interstellar Cloud. The detection of interstellar dust in Antarctica was done by measuring the radionuclides iron-60 and manganese-53 with highly sensitive accelerator mass spectrometry.
Radiation properties
A dust particle interacts with electromagnetic radiation in a way that depends on its cross section, the wavelength of the electromagnetic radiation, and on the nature of the grain: its refractive index, size, etc. The radiation process for an individual grain is called its emissivity, dependent on the grain's efficiency factor. Further specifications regarding the emissivity process include extinction, scattering, absorption, or polarisation. In the radiation emission curves, several important signatures identify the composition of the emitting or absorbing dust particles.
Dust particles can scatter light nonuniformly. Forward scattered light is light that is redirected slightly off its path by diffraction, and back-scattered light is reflected light.
The scattering and extinction ("dimming") of the radiation gives useful information about the dust grain sizes. For example, if the dust in one's data is many times brighter in forward-scattered visible light than in back-scattered visible light, then it is understood that a significant fraction of the particles are about a micrometer in diameter.
The scattering of light from dust grains in long exposure visible photographs is quite noticeable in reflection nebulae, and gives clues about the individual particle's light-scattering properties. In X-ray wavelengths, many scientists are investigating the scattering of X-rays by interstellar dust, and some have suggested that astronomical X-ray sources would possess diffuse haloes, due to the dust.
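The size inference above follows from the Mie size parameter x = 2πa/λ: grains with x much greater than 1 scatter predominantly forward, while grains with x much less than 1 (the Rayleigh regime) scatter nearly symmetrically. A back-of-envelope illustration, with assumed grain radii and a visible wavelength:

```python
import math

# Grain-size inference from scattering asymmetry: the Mie size parameter
# x = 2*pi*a / lambda controls the forward/back asymmetry. x >> 1 gives
# strongly forward-peaked scattering; x << 1 (Rayleigh regime) gives
# nearly symmetric scattering. Grain radii below are illustrative.

def size_parameter(radius_um, wavelength_um=0.55):   # 0.55 um ~ visible light
    return 2.0 * math.pi * radius_um / wavelength_um

for a in (0.01, 0.1, 1.0):
    x = size_parameter(a)
    regime = "Rayleigh (symmetric)" if x < 1 else "Mie (forward-peaked)"
    print(f"a = {a} um -> x = {x:.2f}: {regime}")
```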
Presolar grains
Presolar grains are contained within meteorites, from which they are extracted in terrestrial laboratories. The term "stardust" or "presolar stardust" is sometimes used to distinguish grains from a single star in comparison to aggregated interstellar dust particles, though this distinction is not universally applied. Presolar material was a component of the dust in the interstellar medium before its incorporation into meteorites. The meteorites have stored those presolar grains ever since the meteorites first assembled within the planetary accretion disk more than four billion years ago. Carbonaceous chondrites are especially fertile reservoirs of presolar material. Presolar grains definitionally existed before the Earth was formed. Presolar grain (and, less frequently, "stardust" or "presolar stardust") is the scientific term referring to refractory dust grains that condensed from cooling ejected gases from individual presolar stars and incorporated into the cloud from which the Solar System condensed.
Many different types of presolar grains have been identified by laboratory measurements of the highly unusual isotopic composition of the chemical elements that comprise each presolar grain. These refractory mineral grains may earlier have been coated with volatile compounds, but those are lost in the dissolving of meteorite matter in acids, leaving only insoluble refractory minerals. Finding the grain cores without dissolving most of the meteorite has been possible, but difficult and labor-intensive.
Many new aspects of nucleosynthesis have been discovered from the isotopic ratios within the presolar grains. An important property of presolar grains is their hard, refractory, high-temperature nature. Prominent are silicon carbide, graphite, aluminium oxide, aluminium spinel, and other such solids that would condense at high temperature from a cooling gas, such as in stellar winds or in the decompression of the inside of a supernova. They differ greatly from the solids formed at low temperature within the interstellar medium.
Also important are their extreme isotopic compositions, which are expected to exist nowhere in the interstellar medium. This also suggests that the presolar grains condensed from the gases of individual stars before the isotopes could be diluted by mixing with the interstellar medium. These allow the source stars to be identified. For example, the heavy elements within the silicon carbide (SiC) grains are almost pure s-process isotopes, fitting their condensation within AGB star red giant winds inasmuch as the AGB stars are the main source of s-process nucleosynthesis and have atmospheres observed by astronomers to be highly enriched in dredged-up s-process elements.
Another dramatic example is given by supernova condensates, usually shortened by acronym to SUNOCON (from SUperNOva CONdensate) to distinguish them from other grains condensed within stellar atmospheres. SUNOCONs contain in their calcium an excessively large abundance of 44Ca, demonstrating that they condensed containing abundant radioactive 44Ti, which has a half-life of about 60 years. The outflowing 44Ti nuclei were thus still "alive" (radioactive) when the SUNOCON condensed, within about a year, in the expanding supernova interior, but would have become an extinct radionuclide (specifically 44Ca) after the time required for mixing with the interstellar gas. Its discovery proved the prediction from 1975 that it might be possible to identify SUNOCONs in this way. The SiC SUNOCONs (from supernovae) are only about 1% as numerous as are SiC stardust from AGB stars.
Stardust itself (SUNOCONs and AGB grains that come from specific stars) is but a modest fraction of the condensed cosmic dust, forming less than 0.1% of the mass of total interstellar solids. The high interest in presolar grains derives from new information that it has brought to the sciences of stellar evolution and nucleosynthesis.
Laboratories have studied solids that existed before the Earth was formed. This was once thought impossible, especially in the 1970s when cosmochemists were confident that the Solar System began as a hot gas virtually devoid of any remaining solids, which would have been vaporized by high temperature. The existence of presolar grains proved this historic picture incorrect.
Some bulk properties
Cosmic dust is made of dust grains and their aggregates, called dust particles. These particles are irregularly shaped, with porosity ranging from fluffy to compact. The composition, size, and other properties depend on where the dust is found, and conversely, a compositional analysis of a dust particle can reveal much about the dust particle's origin. General diffuse interstellar medium dust, dust grains in dense clouds, planetary rings dust, and circumstellar dust are each different in their characteristics. For example, grains in dense clouds have acquired a mantle of ice and on average are larger than dust particles in the diffuse interstellar medium. Interplanetary dust particles (IDPs) are generally larger still.
Most of the influx of extraterrestrial matter that falls onto the Earth is dominated by meteoroids with diameters in the range 50 to 500 micrometers, of average density 2.0 g/cm3 (with porosity about 40%). The densities of most IDPs captured in the Earth's stratosphere range between 1 and 3 g/cm3, with an average density of about 2.0 g/cm3.
Other specific dust properties: in circumstellar dust, astronomers have found molecular signatures of CO, silicon carbide, amorphous silicate, polycyclic aromatic hydrocarbons, water ice, and polyformaldehyde, among others (in the diffuse interstellar medium, there is evidence for silicate and carbon grains). Cometary dust is generally different (with overlap) from asteroidal dust. Asteroidal dust resembles carbonaceous chondritic meteorites. Cometary dust resembles interstellar grains which can include silicates, polycyclic aromatic hydrocarbons, and water ice.
In September 2020, evidence was presented of solid-state water in the interstellar medium, and particularly, of water ice mixed with silicate grains in cosmic dust grains.
Dust grain formation
The large grains in interstellar space are probably complex, with refractory cores that condensed within stellar outflows topped by layers acquired during incursions into cold dense interstellar clouds. That cyclic process of growth and destruction outside of the clouds has been modeled to demonstrate that the cores live much longer than the average lifetime of dust mass. Those cores mostly start with silicate particles condensing in the atmospheres of cool, oxygen-rich red giants and carbon grains condensing in the atmospheres of cool carbon stars. Red giants have evolved off the main sequence and entered the giant phase of their evolution; they are the major source of refractory dust grain cores in galaxies. Those refractory cores are also called stardust (section above), which is a scientific term for the small fraction of cosmic dust that condensed thermally within stellar gases as they were ejected from the stars. Several percent of refractory grain cores have condensed within expanding interiors of supernovae, a type of cosmic decompression chamber. Meteoriticists who study refractory stardust (extracted from meteorites) often call it presolar grains, but that within meteorites is only a small fraction of all presolar dust. Stardust condenses within the stars via considerably different condensation chemistry than that of the bulk of cosmic dust, which accretes cold onto preexisting dust in dark molecular clouds of the galaxy. Those molecular clouds are very cold, typically less than 50 K, so that ices of many kinds may accrete onto grains, in some cases only to be destroyed or split apart by radiation and sublimated back into a gas component. Finally, as the Solar System formed, many interstellar dust grains were further modified by coalescence and chemical reactions in the planetary accretion disk. The history of the various types of grains in the early Solar System is complicated and only partially understood.
Astronomers know that the dust is formed in the envelopes of late-evolved stars from specific observational signatures. In infrared light, emission at 9.7 micrometres is a signature of silicate dust in cool evolved oxygen-rich giant stars. Emission at 11.5 micrometres indicates the presence of silicon carbide dust in cool evolved carbon-rich giant stars. These help provide evidence that the small silicate particles in space came from the ejected outer envelopes of these stars.
Conditions in interstellar space are generally not suitable for the formation of silicate cores. This would take excessive time to accomplish, even if it might be possible. The argument is that, given an observed typical grain diameter a and the temperature of the interstellar gas, the time required for a grain to grow to diameter a would considerably exceed the age of the Universe. On the other hand, grains are seen to have recently formed in the vicinity of nearby stars, in nova and supernova ejecta, and in R Coronae Borealis variable stars which seem to eject discrete clouds containing both gas and dust. So mass loss from stars is unquestionably where the refractory cores of grains formed.
Most dust in the Solar System is highly processed dust, recycled from the material out of which the Solar System formed, subsequently collected in the planetesimals and in leftover solid material such as comets and asteroids, and reworked over each of those bodies' collisional lifetimes. During the Solar System's formation history, the most abundant element was (and still is) hydrogen, present chiefly as H2. The metallic elements: magnesium, silicon, and iron, which are the principal ingredients of rocky planets, condensed into solids at the highest temperatures of the planetary disk. Some molecules such as CO, N2, NH3, and free oxygen, existed in a gas phase. Some solids, for example, graphite (C) and SiC, would condense into grains in the planetary disk; but carbon and SiC grains found in meteorites are presolar based on their isotopic compositions, rather than from the planetary disk formation. Some molecules also formed complex organic compounds and some molecules formed frozen ice mantles, either of which could coat the "refractory" (Mg, Si, Fe) grain cores. Stardust once more provides an exception to the general trend, as it appears to be totally unprocessed since its thermal condensation within stars as refractory crystalline minerals. The condensation of graphite occurs within supernova interiors as they expand and cool, and does so even in gas containing more oxygen than carbon, a surprising carbon chemistry made possible by the intense radioactive environment of supernovae. This special example of dust formation has merited specific review.
Planetary disk formation of precursor molecules was determined, in large part, by the temperature of the solar nebula. Since the temperature of the solar nebula decreased with heliocentric distance, scientists can infer a dust grain's place of origin from knowledge of the grain's materials. Some materials could only have been formed at high temperatures, while other grain materials could only have been formed at much lower temperatures. The materials in a single interplanetary dust particle often show that the grain elements formed in different locations and at different times in the solar nebula. Most of the matter present in the original solar nebula has since disappeared; drawn into the Sun, expelled into interstellar space, or reprocessed, for example, as part of the planets, asteroids or comets.
Due to their highly processed nature, IDPs (interplanetary dust particles) are fine-grained mixtures of thousands to millions of mineral grains and amorphous components. We can picture an IDP as a "matrix" of material with embedded elements which were formed at different times and places in the solar nebula and before the solar nebula's formation. Examples of embedded elements in cosmic dust are GEMS, chondrules, and CAIs.
From the solar nebula to Earth
The arrows in the adjacent diagram show one possible path from a collected interplanetary dust particle back to the early stages of the solar nebula.
We can follow the trail to the right in the diagram to the IDPs that contain the most volatile and primitive elements. The trail takes us first from interplanetary dust particles to chondritic interplanetary dust particles. Planetary scientists classify chondritic IDPs in terms of their diminishing degree of oxidation, so that they fall into three major groups: the carbonaceous, the ordinary, and the enstatite chondrites. As the name implies, the carbonaceous chondrites are rich in carbon, and many have anomalies in the isotopic abundances of H, C, N, and O. From the carbonaceous chondrites, we follow the trail to the most primitive materials. They are almost completely oxidized and contain the lowest condensation temperature elements ("volatile" elements) and the largest amount of organic compounds. Therefore, dust particles with these elements are thought to have been formed in the early life of the Solar System. The volatile elements have never seen temperatures above about 500 K, and therefore the IDP grain "matrix" consists of some very primitive Solar System material. Such a scenario is true in the case of comet dust. The provenance of the small fraction that is stardust (see above) is quite different; these refractory interstellar minerals thermally condense within stars, become a small component of interstellar matter, and therefore remain in the presolar planetary disk. IDPs also record their exposure history: nuclear damage tracks are caused by the ion flux from solar flares, solar wind ions impacting on the particle's surface produce amorphous radiation-damaged rims on the particle's surface, and spallogenic nuclei are produced by galactic and solar cosmic rays. A dust particle that originates in the Kuiper Belt at 40 AU would have many times the track density, thicker amorphous rims and higher integrated doses than a dust particle originating in the main asteroid belt.
Based on 2012 computer model studies, the complex organic molecules necessary for life (extraterrestrial organic molecules) may have formed in the protoplanetary disk of dust grains surrounding the Sun before the formation of the Earth. According to the computer studies, this same process may also occur around other stars that acquire planets.
In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics – "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."
In February 2014, NASA announced a greatly upgraded database for detecting and monitoring polycyclic aromatic hydrocarbons (PAHs) in the universe. According to NASA scientists, over 20% of the carbon in the Universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are abundant in the Universe, and are associated with new stars and exoplanets.
In March 2015, NASA scientists reported that, for the first time, complex organic compounds of life found in DNA and RNA, including uracil, cytosine and thymine, had been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemicals found in the Universe, may have been formed in red giants or in interstellar dust and gas clouds, according to the scientists.
Some "dusty" clouds in the universe
The Solar System has its own interplanetary dust cloud, as do extrasolar systems. There are different types of nebulae with different physical causes and processes: diffuse nebulae, infrared (IR) reflection nebulae, supernova remnants, molecular clouds, H II regions, photodissociation regions, and dark nebulae.
Distinctions between those types of nebula are that different radiation processes are at work. For example, H II regions, like the Orion Nebula, where a lot of star-formation is taking place, are characterized as thermal emission nebulae. Supernova remnants, on the other hand, like the Crab Nebula, are characterized as nonthermal emission (synchrotron radiation).
Some of the better known dusty regions in the Universe are the diffuse nebulae in the Messier catalog, for example: M1, M8, M16, M17, M20, M42, M43.
Some larger dust catalogs are Sharpless (1959) A Catalogue of HII Regions, Lynds (1965) Catalogue of Bright Nebulae, Lynds (1962) Catalogue of Dark Nebulae, van den Bergh (1966) Catalogue of Reflection Nebulae, Green (1988) Rev. Reference Cat. of Galactic SNRs, The National Space Sciences Data Center (NSSDC), and CDS Online Catalogs.
Dust sample return
The Discovery program's Stardust mission was launched on 7 February 1999 to collect samples from the coma of comet Wild 2, as well as samples of cosmic dust. It returned samples to Earth on 15 January 2006. In 2007, the recovery of particles of interstellar dust from the samples was announced.
Dust particles on Earth
In 2017, Genge et al. published a paper about "urban collection" of dust particles on Earth. The team were able to collect 500 micrometeorites from rooftops. Dust was collected in Oslo and in Paris, and "all particles are silicate-dominated (S type) cosmic spherules with subspherical shapes that form by melting during atmospheric entry and consist of quench crystals of magnesian olivine, relict crystals of forsterite, and iron-bearing olivine within glass". In the UK, scientists look for micrometeorites on the rooftops of cathedrals, like Canterbury Cathedral and Rochester Cathedral. Currently, an estimated 40,000 tons of cosmic dust falls to Earth each year.
See also
Accretion
Astrochemistry
Atomic and molecular astrophysics
Cosmochemistry
Extraterrestrial materials
List of interstellar and circumstellar molecules
Micrometeoroid
Tanpopo, a mission that collected cosmic dust in low Earth orbit
WR 140, a prototypical cosmic dust factory
References
Further reading
External links
Cosmic Dust Group
Evidence for interstellar origin of seven dust particles collected by the Stardust spacecraft
Astrobiology
Astrochemistry
Extragalactic astronomy
Galactic astronomy
Planetary science | Cosmic dust | [
"Chemistry",
"Astronomy",
"Biology"
] | 5,825 | [
"Origin of life",
"Cosmic dust",
"Outer space",
"Speculative evolution",
"Galactic astronomy",
"Astrobiology",
"Astrochemistry",
"Planetary science",
"nan",
"Biological hypotheses",
"Extragalactic astronomy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
2,178,587 | https://en.wikipedia.org/wiki/Rake%20%28theatre%29 | A rake or raked stage is a theatre stage that slopes upwards, away from the audience. Such a design was typical of English theatre in the Middle Ages and early Modern era, and improves the view and sound for spectators. It also helps with the illusion of perspective. When features of the scenery are made to align with a notional vanishing point beyond the rear of the stage, the rake supports the illusion. These elements of scenery are termed raking pieces.
Raked seating refers to seating which is positioned on an upwards slope away from the stage, in order to give those in the audience at the back a better view than if the seats were all on the same level.
Calculating incline
The slope of the rake is measured by the number of horizontal units it takes for one vertical unit measured in the direction of the slope, or by the equivalent percentage. A rake of one horizontal unit to one vertical unit (1 in 1, or a 100% slope) would give an angle of 45° from the horizontal. Rakes of 1 in 18 (5.56%) to 1 in 48 (2.08%) were more common.
Converting the rake ratio to an angle requires the application of some basic trigonometry. The angle in degrees = arctan(vertical/horizontal), where "vertical" is the vertical rise in distance and "horizontal" is the horizontal distance over which this rise occurs. For example, for a rake of 1:18, the corresponding angle is arctan(1/18) = 3.18°.
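The same conversion is easy to script; a small illustrative helper (not from any cited source):

```python
import math

def rake_angle(vertical: float, horizontal: float) -> tuple[float, float]:
    """Return (percentage slope, angle in degrees) for a rake of
    `vertical` units of rise per `horizontal` units of run."""
    slope_percent = 100.0 * vertical / horizontal
    angle_deg = math.degrees(math.atan(vertical / horizontal))
    return slope_percent, angle_deg

for run in (1, 18, 48):
    pct, deg = rake_angle(1, run)
    print(f"1 in {run}: {pct:.2f}% slope, {deg:.2f} degrees")
# 1 in 1: 100.00% slope, 45.00 degrees
# 1 in 18: 5.56% slope, 3.18 degrees
# 1 in 48: 2.08% slope, 1.19 degrees
```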
History
Theatres constructed after the beginning of the 20th century typically feature a flat stage and a raked audience section. This change back to the method of construction seen in Greek and Roman theatres (flat stage and terraced audience) came about because of the difficulty of walking across a sloped surface, which had forced performers into unnatural movement patterns to avoid the appearance of limping caused by the non-level surface.
Raked stages can still be seen in some opera, Broadway and West End productions, where a temporary raked acting surface is built over a theatre's permanent flat stage. Creating a raked stage can also assist set designs which are designed with forced perspective.
Upstaging
On a raked stage an actor who is further from the audience is higher than an actor who is closer to the audience. This led to the theatre positions "upstage" and "downstage", meaning, respectively, farther from or closer to the audience.
The term "upstaging" refers to one actor moving to a more elevated position on the rake (stage), causing the upstaged actor (who stays more downstage, closer to the audience) to turn their back to the audience to address the cast member. The term "upstaging" has since taken on the figurative meaning of an actor unscrupulously drawing the audience's attention away from another actor.
References
Scenic design
Stage terminology
Parts of a theatre | Rake (theatre) | [
"Technology",
"Engineering"
] | 590 | [
"Scenic design",
"Parts of a theatre",
"Design",
"Components"
] |
2,178,720 | https://en.wikipedia.org/wiki/Koopmans%27%20theorem | Koopmans' theorem states that in closed-shell Hartree–Fock theory (HF), the first ionization energy of a molecular system is equal to the negative of the orbital energy of the highest occupied molecular orbital (HOMO). This theorem is named after Tjalling Koopmans, who published this result in 1934.
Koopmans' theorem is exact in the context of restricted Hartree–Fock theory if it is assumed that the orbitals of the ion are identical to those of the neutral molecule (the frozen orbital approximation). Ionization energies calculated this way are in qualitative agreement with experiment – the first ionization energy of small molecules is often calculated with an error of less than two electron volts. Therefore, the validity of Koopmans' theorem is intimately tied to the accuracy of the underlying Hartree–Fock wavefunction. The two main sources of error are orbital relaxation, which refers to the changes in the Fock operator and Hartree–Fock orbitals when changing the number of electrons in the system, and electron correlation, referring to the validity of representing the entire many-body wavefunction using the Hartree–Fock wavefunction, i.e. a single Slater determinant composed of orbitals that are the eigenfunctions of the corresponding self-consistent Fock operator.
Empirical comparisons with experimental values and higher-quality ab initio calculations suggest that in many cases, but not all, the energetic corrections due to relaxation effects nearly cancel the corrections due to electron correlation.
A similar theorem (Janak's theorem) exists in density functional theory (DFT) for relating the exact first vertical ionization energy and electron affinity to the HOMO and LUMO energies, although both the derivation and the precise statement differ from that of Koopmans' theorem. Ionization energies calculated from DFT orbital energies are usually poorer than those of Koopmans' theorem, with errors much larger than two electron volts possible depending on the exchange-correlation approximation employed. The LUMO energy shows little correlation with the electron affinity with typical approximations. The error in the DFT counterpart of Koopmans' theorem is a result of the approximation employed for the exchange correlation energy functional so that, unlike in HF theory, there is the possibility of improved results with the development of better approximations.
Generalizations
While Koopmans' theorem was originally stated for calculating ionization energies from restricted (closed-shell) Hartree–Fock wavefunctions, the term has since taken on a more generalized meaning as a way of using orbital energies to calculate energy changes due to changes in the number of electrons in a system.
Ground-state and excited-state ions
Koopmans’ theorem applies to the removal of an electron from any occupied molecular orbital to form a positive ion. Removal of the electron from different occupied molecular orbitals leads to the ion in different electronic states. The lowest of these states is the ground state and this often, but not always, arises from removal of the electron from the HOMO. The other states are excited electronic states.
For example, the electronic configuration of the H2O molecule is (1a1)2 (2a1)2 (1b2)2 (3a1)2 (1b1)2, where the symbols a1, b2 and b1 are orbital labels based on molecular symmetry. From Koopmans’ theorem the energy of the 1b1 HOMO corresponds to the ionization energy to form the H2O+ ion in its ground state (1a1)2 (2a1)2 (1b2)2 (3a1)2 (1b1)1. The energy of the second-highest MO 3a1 refers to the ion in the excited state (1a1)2 (2a1)2 (1b2)2 (3a1)1 (1b1)2, and so on. In this case the order of the ion electronic states corresponds to the order of the orbital energies. Excited-state ionization energies can be measured by photoelectron spectroscopy.
For H2O, the near-Hartree–Fock orbital energies (with sign changed) of these orbitals are 1a1 559.5, 2a1 36.7, 1b2 19.5, 3a1 15.9 and 1b1 13.8 eV. The corresponding ionization energies are 539.7, 32.2, 18.5, 14.7 and 12.6 eV. As explained above, the deviations are due to the effects of orbital relaxation as well as differences in electron correlation energy between the molecular and the various ionized states.
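In practice such Koopmans estimates are read directly off the converged orbital energies of a Hartree–Fock calculation. A minimal sketch using the PySCF package (the geometry and the modest cc-pVDZ basis are illustrative choices, so the numbers will differ somewhat from the near-Hartree–Fock values quoted above):

```python
from pyscf import gto, scf

HARTREE_TO_EV = 27.2114

# Water at an approximate experimental geometry (illustrative values).
mol = gto.M(
    atom="""O  0.0000  0.0000  0.1173
            H  0.0000  0.7572 -0.4692
            H  0.0000 -0.7572 -0.4692""",
    basis="cc-pvdz",
)
mf = scf.RHF(mol).run()

# Koopmans' theorem: ionization energies = minus the occupied orbital energies.
n_occ = mol.nelectron // 2
for i, eps in enumerate(mf.mo_energy[:n_occ]):
    print(f"orbital {i}: IP (Koopmans) = {-eps * HARTREE_TO_EV:.1f} eV")
```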
For N2 in contrast, the order of orbital energies is not identical to the order of ionization energies. Near-Hartree–Fock calculations with a large basis set indicate that the 1πu bonding orbital is the HOMO. However the lowest ionization energy corresponds to removal of an electron from the 3σg bonding orbital. In this case the deviation is attributed primarily to the difference in correlation energy between the two orbitals.
For electron affinities
It is sometimes claimed that Koopmans' theorem also allows the calculation of electron affinities as the energy of the lowest unoccupied molecular orbitals (LUMO) of the respective systems. However, Koopmans' original paper makes no claim with regard to the significance of eigenvalues of the Fock operator other than that corresponding to the HOMO. Nevertheless, it is straightforward to generalize the original statement of Koopmans' to calculate the electron affinity in this sense.
Calculations of electron affinities using this statement of Koopmans' theorem have been criticized on the grounds that virtual (unoccupied) orbitals do not have well-founded physical interpretations, and that their orbital energies are very sensitive to the choice of basis set used in the calculation. As the basis set becomes more complete, more and more "molecular" orbitals that are not really on the molecule of interest will appear, and care must be taken not to use these orbitals for estimating electron affinities.
Comparisons with experiment and higher-quality calculations show that electron affinities predicted in this manner are generally quite poor.
For open-shell systems
Koopmans' theorem is also applicable to open-shell systems; however, the orbital energies (eigenvalues of the Roothaan equations) must be corrected, as was shown in the 1970s. Despite this early work, application of Koopmans' theorem to open-shell systems continued to cause confusion, e.g., it was stated that Koopmans' theorem can only be applied for removing the unpaired electron. Later, the validity of Koopmans' theorem for ROHF was revisited and several procedures for obtaining meaningful orbital energies were reported. The spin up (alpha) and spin down (beta) orbital energies do not necessarily have to be the same.
Counterpart in density functional theory
Kohn–Sham (KS) density functional theory (KS-DFT) admits its own version of Koopmans' theorem (sometimes called the DFT-Koopmans' theorem) very similar in spirit to that of Hartree–Fock theory. The theorem equates the first (vertical) ionization energy $I$ of a system of $N$ electrons to the negative of the corresponding KS HOMO energy $\varepsilon_{\mathrm{HOMO}}$. More generally, this relation is true even when the KS system describes a zero-temperature ensemble with a non-integer number of electrons $N + \delta$ for integer $N$ and $0 < \delta < 1$. When considering $N + \delta$ electrons the infinitesimal excess charge enters the KS LUMO of the $N$-electron system, but then the exact KS potential jumps by a constant known as the "derivative discontinuity". It can be argued that the vertical electron affinity is equal exactly to the negative of the sum of the LUMO energy and the derivative discontinuity.
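In equations (writing $I$ and $A$ for the vertical ionization energy and electron affinity of the $N$-electron system, $\varepsilon_{\mathrm{HOMO}}$ and $\varepsilon_{\mathrm{LUMO}}$ for the KS frontier-orbital energies, and $\Delta_{xc}$ for the derivative discontinuity; the symbols are introduced here for concreteness):
$$I(N) = -\varepsilon_{\mathrm{HOMO}}(N), \qquad A(N) = -\left( \varepsilon_{\mathrm{LUMO}}(N) + \Delta_{xc} \right) .$$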
Unlike the approximate status of Koopmans' theorem in Hartree–Fock theory (because of the neglect of orbital relaxation), in the exact KS mapping the theorem is exact, including the effect of orbital relaxation. A sketchy proof of this exact relation goes in three stages. First, for any finite system the ionization energy $I$ determines the asymptotic form of the density, which decays as $n(r) \sim e^{-2\sqrt{2I}\,r}$. Next, as a corollary (since the physically interacting system has the same density as the KS system), both must have the same ionization energy. Finally, since the KS potential is zero at infinity, the ionization energy of the KS system is, by definition, the negative of its HOMO energy, i.e., $I = -\varepsilon_{\mathrm{HOMO}}$.
While these are exact statements in the formalism of DFT, the use of approximate exchange-correlation potentials makes the calculated energies approximate and often the orbital energies are very different from the corresponding ionization energies (even by several eV!).
A tuning procedure is able to "impose" Koopmans' theorem on DFT approximations, thereby improving many of its related predictions in actual applications. In approximate DFTs one can estimate to a high degree of accuracy the deviance from Koopmans' theorem using the concept of energy curvature. This also provides excitation energies to zeroth order.
Orbital picture within many-body formalisms
The concept of molecular orbitals and a Koopmans-like picture of ionization or electron attachment processes can be extended to correlated many-body wavefunctions by introducing Dyson orbitals. Dyson orbitals are defined as the generalized overlap between an $N$-electron molecular wavefunction and the $(N-1)$-electron wavefunction of the ionized system (or the $(N+1)$-electron wavefunction of an electron-attached system):
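$$\phi^{\mathrm{d}}(x_1) = \sqrt{N} \int \Psi_N(x_1, x_2, \ldots, x_N)\, \Psi_{N-1}^{*}(x_2, \ldots, x_N)\; dx_2 \cdots dx_N ,$$
in one common convention (normalization factors vary between authors); the analogous overlap between the $N$-electron and $(N+1)$-electron wavefunctions defines the electron-attachment Dyson orbital.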
Hartree–Fock canonical orbitals are Dyson orbitals computed for the Hartree–Fock wavefunction of the $N$-electron system within the Koopmans approximation of the ionized (or electron-attached) system. When correlated wavefunctions are used, Dyson orbitals include correlation and orbital relaxation effects. Dyson orbitals contain all information about the initial and final states of the system needed to compute experimentally observable quantities, such as total and differential photoionization/photodetachment cross sections.
References
External links
Quantum chemistry
Computational chemistry
Theoretical chemistry | Koopmans' theorem | [
"Physics",
"Chemistry"
] | 2,063 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
2,178,786 | https://en.wikipedia.org/wiki/Laumontite | Laumontite is a mineral, one of the zeolite group. Its molecular formula is , a hydrated calcium-aluminium silicate. Potassium or sodium may substitute for the calcium but only in very small amounts.
It is monoclinic, space group C2/m. It forms prismatic crystals with a diamond-shaped cross-section and an angled termination. When pure, the color is colorless or white. Impurities may color it orange, brownish, gray, yellowish, pink, or reddish. It has perfect cleavage on [010] and [110] and its fracture is conchoidal. It is very brittle. The Mohs scale hardness is 3.5-4. It has a vitreous luster and a white streak.
It is found in hydrothermal deposits left in calcareous rocks, often formed as a result of secondary mineralization. Host rock types include basalt, andesite, metamorphic rocks and granites. It forms at comparatively low hydrothermal temperatures and becomes unstable on further heating, and so its presence in sedimentary rocks indicates that these have experienced intermediate diagenesis.
The identification of laumontite goes back to the early days of mineralogy. It was first named lomonite by R. Jameson (System of Mineralogy) in 1805, and laumonite by René Just Haüy in 1809. The current name was given by K.C. von Leonhard (Handbuch der Oryktognosie) in 1821. It is named after Gillet de Laumont who collected samples from lead mines in Huelgoat, Brittany, making them the type locality.
Laumontite easily dehydrates when stored in a low humidity environment. When freshly collected, if it has not already been exposed to the environment, it can be translucent or transparent. Over a period of hours to days the loss of water turns it opaque white. In the past, this variety has been called leonhardite, though this is not a valid mineral species. The dehydrated laumontite is very friable, often falling into a powder at the slightest touch.
It is a common mineral, found worldwide. It can be locally abundant, forming seams and veins. It is frequently associated with other zeolites, including stilbite and heulandite. Notable occurrences are India; Paterson, New Jersey; Pine Creek, California; Iceland; Scotland; and the Bay of Fundy, Nova Scotia. Prehnite pseudomorphs after laumontite (epimorphs) have been found in India.
References
External links
Structure type LAU
Calcium minerals
Aluminium minerals
Zeolites
Monoclinic minerals
Minerals in space group 12
Luminescent minerals | Laumontite | [
"Chemistry"
] | 567 | [
"Luminescence",
"Luminescent minerals"
] |
2,178,874 | https://en.wikipedia.org/wiki/Leadhillite | Leadhillite is a lead sulfate carbonate hydroxide mineral, often associated with anglesite. It has the formula Pb4SO4(CO3)2(OH)2. Leadhillite crystallises in the monoclinic system, but develops pseudo-hexagonal forms due to crystal twinning. It forms transparent to translucent variably coloured crystals with an adamantine lustre. It is quite soft with a Mohs hardness of 2.5 and a relatively high specific gravity of 6.26 to 6.55.
It was discovered in 1832 in the Susannah Mine, Leadhills in the county of Lanarkshire, Scotland. It is trimorphous with susannite and macphersonite (these three minerals have the same formula, but different structures). Leadhillite is monoclinic, susannite is trigonal and macphersonite is orthorhombic. Leadhillite was named in 1832 after the locality.
Unit cell
Leadhillite belongs to the monoclinic crystal class 2/m, which is the class with the highest symmetry in the monoclinic system. It has a two-fold axis of symmetry perpendicular to a mirror plane, and the general form is an open-ended prism. The space group is P21/a, meaning that the two-fold axis is a screw axis and the mirror plane is a glide plane.
There are 8 formula units per unit cell (Z = 8) and the angle β is very nearly equal to 90°. The side-lengths of the unit cell are a = 9.11 Å, b = 20.82 Å and c = 11.59 Å.
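As a consistency check, these cell constants reproduce the measured density. A minimal sketch in plain Python (the molar mass is assembled here from standard atomic weights for Pb4SO4(CO3)2(OH)2; since β is nearly 90° the cell is treated as orthogonal):

```python
AVOGADRO = 6.022e23  # formula units per mole

# Molar mass of Pb4(SO4)(CO3)2(OH)2 from standard atomic weights (g/mol):
# 4 Pb + 1 S + 2 C + 12 O + 2 H
M = 4 * 207.2 + 32.07 + 2 * 12.011 + 12 * 15.999 + 2 * 1.008

# Unit cell edges in cm (1 angstrom = 1e-8 cm); volume of a ~orthogonal cell.
a, b, c = 9.11e-8, 20.82e-8, 11.59e-8
V = a * b * c

Z = 8  # formula units per unit cell
density = Z * M / (AVOGADRO * V)
print(f"{density:.2f} g/cm^3")  # ~6.5, matching the measured 6.26-6.55
```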
Structure
Leadhillite has a layered structure. The mineral contains both carbonate and sulfate groups, and these are arranged in separate sheets. Pairs of carbonate sheets 8(PbCO3) alternate with pairs of sulfate sheets 8[Pb(SO4)0.5OH]. The carbonate sheets virtually have trigonal symmetry, but the sulfate sheets do not. All the lead (Pb) atoms in the carbonate sheets are surrounded by 9 oxygens from carbonate groups and by one hydroxyl from an adjacent sulfate sheet. The Pb atoms in the sulfate sheets are bonded to 9 or 10 oxygens.
Appearance
Crystals are usually small to microscopic, and nearly always pseudo-hexagonal, being tabular with a hexagonal outline. Prismatic forms also occur. The simplest form with faces parallel to the b axis and cutting the a and c axes (represented as {101}) may develop. When it does it may be striated or curved. The colour is white or pale shades of green, blue or yellow, but the commonest is clear to white. Leadhillite is transparent to translucent, with a white streak and a resinous to adamantine lustre, pearly on faces parallel to the plane containing the a and b axes. Tabular forms of susannite are very similar.
Optical properties
Leadhillite is biaxial (-) with the optical Z axis parallel to the crystallographic b axis, and the optical X axis inclined to the crystallographic c axis at an angle of −5.5°.
The refractive indices are large, giving the mineral its high lustre, nα = 1.87, nβ = 2.00 and nγ = 2.01. Compare these values with that for ordinary window glass, which is only 1.5. The refractive index depends upon the direction of travel of light through the crystal. The maximum birefringence δ is the difference between the highest and the lowest values. For leadhillite δ = 0.140.
An important characteristic of a biaxial material is the angle between the two optic axes, called the optic angle and designated 2V. It is possible to calculate this value from the refractive indices, and also to measure it. For leadhillite both the measured and calculated values of 2V are 10°.
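One standard form of that relation (one of several equivalent conventions; here $V_z$ is the angle between an optic axis and the optical Z direction, with $2V_x = 180° - 2V_z$ the corresponding angle about X) is
$$\cos^2 V_z = \frac{n_\beta^{-2} - n_\gamma^{-2}}{n_\alpha^{-2} - n_\gamma^{-2}} .$$
Because the result is extremely sensitive to the last quoted decimal place of the indices, values rounded as coarsely as those above can give a noticeably different 2V than careful measurement does.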
When the colour of the incident light is changed the refractive indices change, and so does the value of 2V. This effect is known as dispersion of the optic axes. In leadhillite this effect is strong, with 2V smaller for red light than for violet light (r < v). The mineral may fluoresce yellowish under longwave or shortwave ultraviolet light.
Physical properties
Leadhillite is a soft mineral, with hardness only 2.5 to 3, a little less than that of calcite. It breaks with an irregular to conchoidal fracture and it is somewhat sectile. That is, thin shavings can be pared off it. It is heavy, due to the lead content, with specific gravity 6.55, similar to other lead minerals such as cerussite (6.5) and anglesite (6.3).
Cleavage is perfect on a plane perpendicular to the c crystal axis.
The mineral is usually twinned, according to a variety of twin laws, forming contact, penetration and lamellar twins. The typical habit is platy or tabular pseudohexagonal cyclic twinned crystals.
Leadhillite is soluble with effervescence in nitric acid HNO3, leaving lead sulfate.
Occurrence
The type locality is the Susanna Mine at Leadhills, Strathclyde, Scotland, UK.
Leadhillite is a secondary mineral found in the oxidised zone of lead deposits associated with cerussite, anglesite, lanarkite, caledonite, linarite and pyromorphite. It may form pseudomorphs after galena or calcite, and conversely calcite and cerussite may form pseudomorphs after leadhillite. Heating leadhillite causes it to reversibly transform into susannite.
References
Lead minerals
Carbonate minerals
Sulfate minerals
Monoclinic minerals
Minerals in space group 14
Luminescent minerals | Leadhillite | [
"Chemistry"
] | 1,210 | [
"Luminescence",
"Luminescent minerals"
] |
2,178,880 | https://en.wikipedia.org/wiki/Diabatic%20representation | The diabatic representation as a mathematical tool for theoretical calculations of atomic collisions and of molecular interactions.
One of the guiding principles in modern chemical dynamics and spectroscopy is that the motion of the nuclei in a molecule is slow compared to that of its electrons. This is justified by the large disparity between the mass of an electron and the typical mass of a nucleus, and it leads to the Born–Oppenheimer approximation and the idea that the structure and dynamics of a chemical species are largely determined by nuclear motion on potential energy surfaces.
The potential energy surfaces are obtained within the adiabatic or Born–Oppenheimer approximation. This corresponds to a representation of the molecular wave function where the variables corresponding to the molecular geometry and the electronic degrees of freedom are separated. The non-separable terms are due to the nuclear kinetic energy terms in the molecular Hamiltonian and are said to couple the potential energy surfaces. Near an avoided crossing or conical intersection, these terms cannot be neglected. Therefore, a unitary transformation is performed from the adiabatic representation to the so-called diabatic representation, in which the nuclear kinetic energy operator is diagonal. In this representation, the coupling is due to the electronic energy and is a scalar quantity that is significantly easier to estimate numerically.
In the diabatic representation, the potential energy surfaces are smoother, so that low-order Taylor series expansions of the surface capture much of the complexity of the original system. However, strictly diabatic states do not exist in the general case. Hence, diabatic potentials generated from transforming multiple electronic energy surfaces together are generally not exact. These can be called pseudo-diabatic potentials, but generally the term is not used unless it is necessary to highlight this subtlety. Hence, pseudo-diabatic potentials are synonymous with diabatic potentials.
Applicability
The motivation to calculate diabatic potentials often occurs when the Born–Oppenheimer approximation breaks down, or is not justified for the molecular system under study. For these systems, it is necessary to go beyond the Born–Oppenheimer approximation. This is often the terminology used to refer to the study of nonadiabatic systems.
A well-known approach involves recasting the molecular Schrödinger equation into a set of coupled eigenvalue equations. This is achieved by expanding the exact wave function in terms of products of electronic and nuclear wave functions (adiabatic states) followed by integration over the electronic coordinates. The coupled operator equations thus obtained depend on nuclear coordinates only. Off-diagonal elements in these equations are nuclear kinetic energy terms. A diabatic transformation of the adiabatic states replaces these off-diagonal kinetic energy terms by potential energy terms. Sometimes, this is called the "adiabatic-to-diabatic transformation", abbreviated ADT.
Diabatic transformation of two electronic surfaces
In order to introduce the diabatic transformation, assume that only two Potential Energy Surfaces (PES), 1 and 2, approach each other and that all other surfaces are well separated; the argument can be generalized to more surfaces. Let the collection of electronic coordinates be indicated by $\mathbf{r}$, while $\mathbf{R}$ indicates dependence on nuclear coordinates. Thus, we assume near-degenerate adiabatic energies $E_1(\mathbf{R}) \approx E_2(\mathbf{R})$ with corresponding orthonormal electronic eigenstates $\chi_1(\mathbf{r};\mathbf{R})$ and $\chi_2(\mathbf{r};\mathbf{R})$. In the absence of magnetic interactions these electronic states, which depend parametrically on the nuclear coordinates, may be taken to be real-valued functions.
The nuclear kinetic energy is a sum over nuclei A with mass $M_A$ (atomic units are used here):
$$T_\mathrm{n} = \sum_A T_A = - \sum_A \frac{1}{2 M_A} \nabla_A^2 .$$
By applying the Leibniz rule for differentiation, the matrix elements of $T_\mathrm{n}$ are (where coordinates are suppressed for clarity):
$$T_\mathrm{n}(\mathbf{R})_{kp} \equiv \langle \chi_k | T_\mathrm{n} | \chi_p \rangle_{(\mathbf{r})} = \delta_{kp} T_\mathrm{n} - \sum_A \frac{1}{M_A} \langle \chi_k | \nabla_A \chi_p \rangle_{(\mathbf{r})} \cdot \nabla_A - \sum_A \frac{1}{2 M_A} \langle \chi_k | \nabla_A^2 \chi_p \rangle_{(\mathbf{r})} .$$
The subscript $(\mathbf{r})$ indicates that the integration inside the bracket is over electronic coordinates only. Let us further assume that all off-diagonal matrix elements $T_\mathrm{n}(\mathbf{R})_{kp}$ may be neglected except for k = 1 and p = 2. Upon making the expansion
$$\Psi(\mathbf{r}, \mathbf{R}) = \chi_1(\mathbf{r};\mathbf{R})\, \phi_1(\mathbf{R}) + \chi_2(\mathbf{r};\mathbf{R})\, \phi_2(\mathbf{R}) ,$$
the coupled Schrödinger equations for the nuclear parts $\phi_1(\mathbf{R})$ and $\phi_2(\mathbf{R})$ take the form given in the article Born–Oppenheimer approximation.
In order to remove the problematic off-diagonal kinetic energy terms, define two new orthonormal states by a diabatic transformation of the adiabatic states $\chi_1$ and $\chi_2$:
$$\begin{pmatrix} \tilde\chi_1 \\ \tilde\chi_2 \end{pmatrix} = \begin{pmatrix} \cos\gamma & \sin\gamma \\ - \sin\gamma & \cos\gamma \end{pmatrix} \begin{pmatrix} \chi_1 \\ \chi_2 \end{pmatrix} ,$$
where $\gamma(\mathbf{R})$ is the diabatic angle. Transformation of the matrix of nuclear momentum $\langle \chi_k | \nabla_A \chi_p \rangle_{(\mathbf{r})}$ for $k, p = 1, 2$ gives for the diagonal matrix elements
$$\langle \tilde\chi_k | \nabla_A \tilde\chi_k \rangle_{(\mathbf{r})} = 0, \qquad k = 1, 2 .$$
These elements are zero because $\tilde\chi_k$ is real-valued and normalized, and the momentum operator is Hermitian and pure-imaginary. The off-diagonal elements of the momentum operator satisfy
$$\langle \tilde\chi_1 | \nabla_A \tilde\chi_2 \rangle_{(\mathbf{r})} = \langle \chi_1 | \nabla_A \chi_2 \rangle_{(\mathbf{r})} - \nabla_A \gamma .$$
Assume that a diabatic angle $\gamma(\mathbf{R})$ exists such that, to a good approximation,
$$\nabla_A \gamma(\mathbf{R}) = \langle \chi_1 | \nabla_A \chi_2 \rangle_{(\mathbf{r})} ,$$
i.e., $\tilde\chi_1$ and $\tilde\chi_2$ diagonalize the 2 x 2 matrix of the nuclear momentum. By the definition of Smith $\tilde\chi_1$ and $\tilde\chi_2$ are diabatic states. (Smith was the first to define this concept; earlier the term diabatic was used somewhat loosely by Lichten).
By a small change of notation these differential equations for $\gamma(\mathbf{R})$ can be rewritten in the following more familiar form:
$$F_A(\mathbf{R}) = - \nabla_A V(\mathbf{R}), \qquad \text{with} \quad V(\mathbf{R}) \equiv -\gamma(\mathbf{R}) \quad \text{and} \quad F_A(\mathbf{R}) \equiv \langle \chi_1 | \nabla_A \chi_2 \rangle_{(\mathbf{r})} .$$
It is well known that the differential equations have a solution (i.e., the "potential" V exists) if and only if the vector field ("force") $F_A(\mathbf{R})$ is irrotational,
$$\nabla_A \times F_A(\mathbf{R}) = 0 .$$
It can be shown that these conditions are rarely ever satisfied, so that a strictly diabatic transformation rarely ever exists. It is common to use approximate functions $\gamma(\mathbf{R})$ leading to pseudo diabatic states.
Under the assumption that the momentum operators are represented exactly by 2 x 2 matrices, which is consistent with neglect of off-diagonal elements other than the (1,2) element and the assumption of "strict" diabaticity, it can be shown that
$$\langle \tilde\chi_k | T_\mathrm{n} | \tilde\chi_p \rangle_{(\mathbf{r})} = \delta_{kp} T_\mathrm{n} .$$
On the basis of the diabatic states the nuclear motion problem takes the following generalized Born–Oppenheimer form:
$$\left[ T_\mathrm{n} + \begin{pmatrix} \tilde V_{11}(\mathbf{R}) & \tilde V_{12}(\mathbf{R}) \\ \tilde V_{12}(\mathbf{R}) & \tilde V_{22}(\mathbf{R}) \end{pmatrix} \right] \begin{pmatrix} \tilde\phi_1 \\ \tilde\phi_2 \end{pmatrix} = E \begin{pmatrix} \tilde\phi_1 \\ \tilde\phi_2 \end{pmatrix} .$$
It is important to note that the off-diagonal elements depend on the diabatic angle and electronic energies only. The surfaces $E_1(\mathbf{R})$ and $E_2(\mathbf{R})$ are adiabatic PESs obtained from clamped nuclei electronic structure calculations and $T_\mathrm{n}$ is the usual nuclear kinetic energy operator defined above. Finding approximations for $\gamma(\mathbf{R})$ is the remaining problem before a solution of the Schrödinger equations can be attempted. Much of the current research in quantum chemistry is devoted to this determination. Once $\gamma(\mathbf{R})$ has been found and the coupled equations have been solved, the final vibronic wave function in the diabatic approximation is
$$\Psi(\mathbf{r}, \mathbf{R}) = \tilde\chi_1(\mathbf{r};\mathbf{R})\, \tilde\phi_1(\mathbf{R}) + \tilde\chi_2(\mathbf{r};\mathbf{R})\, \tilde\phi_2(\mathbf{R}) .$$
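With the 2 x 2 rotation written above, the diabatic potential matrix elements can be written out explicitly (a direct evaluation of $\tilde V_{kp} = \langle \tilde\chi_k | H_\mathrm{e} | \tilde\chi_p \rangle$ using the adiabatic energies $E_1$ and $E_2$):
$$\tilde V_{11} = E_1 \cos^2\gamma + E_2 \sin^2\gamma, \qquad \tilde V_{22} = E_1 \sin^2\gamma + E_2 \cos^2\gamma, \qquad \tilde V_{12} = \left( E_2 - E_1 \right) \sin\gamma \cos\gamma ,$$
which makes explicit that the coupling $\tilde V_{12}$ involves only the diabatic angle and the electronic energies, and vanishes wherever the two surfaces are degenerate.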
Adiabatic-to-diabatic transformation
Here, in contrast to previous treatments, the non-Abelian case is considered.
Felix Smith in his article considers the adiabatic-to-diabatic transformation (ADT) for a multi-state system but a single coordinate. In Diabatic, the ADT is defined for a system of two nuclear coordinates, but it is restricted to two states. Such a system is defined as Abelian and the ADT matrix is expressed in terms of an angle (see Comment below), known also as the ADT angle. In the present treatment a system is assumed that is made up of M (> 2) states defined for an N-dimensional configuration space, where N = 2 or N > 2. Such a system is defined as non-Abelian. To discuss the non-Abelian case the equation for the just mentioned ADT angle, $\gamma$ (see Diabatic), is replaced by an equation for the M×M ADT matrix, $\mathbf{A}$:
$$\nabla \mathbf{A} + \boldsymbol{\tau}\, \mathbf{A} = 0 , \qquad (1)$$
where $\boldsymbol{\tau}$ is the force-matrix operator, introduced in Diabatic, also known as the Non-Adiabatic Coupling Transformation (NACT) matrix:
$$\tau_{jk} = \langle \zeta_j \mid \nabla \zeta_k \rangle ; \qquad j, k = 1, \ldots, M .$$
Here $\nabla$ is the N-dimensional (nuclear) grad-operator:
$$\nabla = \left( \frac{\partial}{\partial q_1}, \frac{\partial}{\partial q_2}, \ldots, \frac{\partial}{\partial q_N} \right),$$
and $|\zeta_k(\mathbf{r} \mid \mathbf{q})\rangle$, $k = 1, \ldots, M$, are the electronic adiabatic eigenfunctions which depend explicitly on the electronic coordinates $\mathbf{r}$ and parametrically on the nuclear coordinates $\mathbf{q}$.
To derive the matrix $\mathbf{A}$ one has to solve the above given first order differential equation along a specified contour $\Gamma$. This solution is then applied to form the diabatic potential matrix $\mathbf{W}$:
$$\mathbf{W} = \mathbf{A}^{*}\, \mathbf{u}\, \mathbf{A} ,$$
where $\mathbf{u}$ is the diagonal matrix of the Born–Oppenheimer adiabatic potentials $u_j$; $j = 1, \ldots, M$. In order for $\mathbf{W}$ to be single-valued in configuration space, $\mathbf{A}$ has to be analytic, and in order for $\mathbf{A}$ to be analytic (excluding the pathological points), the components of the vector matrix, $\boldsymbol{\tau}$, have to satisfy the following equation:
$$F_{pq} \equiv \frac{\partial \tau_q}{\partial x_p} - \frac{\partial \tau_p}{\partial x_q} - \left[ \tau_q , \tau_p \right] = 0 ,$$
where $F_{pq}$ is a tensor field. This equation is known as the non-Abelian form of the Curl Equation. A solution of the ADT matrix $\mathbf{A}(\mathbf{q} \mid \Gamma)$ along the contour $\Gamma$ can be shown to be of the form:
$$\mathbf{A}(\mathbf{q} \mid \Gamma) = \hat{P} \exp \left( - \int_{\mathbf{q}_0}^{\mathbf{q}} \boldsymbol{\tau}(\mathbf{q}' \mid \Gamma) \cdot d\mathbf{q}' \right)$$
(see also Geometric phase). Here $\hat{P}$ is an ordering operator, the dot stands for a scalar product and $\mathbf{q}_0$ and $\mathbf{q}$ are two points on $\Gamma$.
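In practice the path-ordered exponential is built up numerically from short-segment matrix exponentials along the contour. A minimal sketch in Python with NumPy and SciPy (the two-state Lorentzian-shaped NACT is an arbitrary illustrative model, not taken from any specific system):

```python
import numpy as np
from scipy.linalg import expm

def propagate_adt(tau_of_s, s_grid):
    """Propagate dA/ds = -tau(s) A along a contour parameterized by s,
    accumulating short-segment matrix exponentials; this is a discrete
    version of the path-ordered exponential."""
    A = np.eye(tau_of_s(s_grid[0]).shape[0])
    for s0, s1 in zip(s_grid[:-1], s_grid[1:]):
        A = expm(-tau_of_s(0.5 * (s0 + s1)) * (s1 - s0)) @ A
    return A

def tau(s):
    """Antisymmetric two-state NACT matrix with a Lorentzian element."""
    t12 = 1.0 / (1.0 + s**2)
    return np.array([[0.0, t12], [-t12, 0.0]])

s = np.linspace(-20.0, 20.0, 4001)
A = propagate_adt(tau, s)
# All tau(s) here commute, so A is exactly a plane rotation by the angle
# theta = integral of t12 ds = 2*arctan(20) ~ 3.04 rad; compare:
print(A)
```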
A different type of solution is based on quasi-Euler angles according to which any $\mathbf{A}$-matrix can be expressed as a product of Euler matrices. For instance in case of a tri-state system this matrix can be presented as a product of three such matrices, $\mathbf{Q}_{ij}(\gamma_{ij})$ (i < j = 2, 3), where e.g. $\mathbf{Q}_{13}$ is of the form:
$$\mathbf{Q}_{13} = \begin{pmatrix} \cos \gamma_{13} & 0 & \sin \gamma_{13} \\ 0 & 1 & 0 \\ - \sin \gamma_{13} & 0 & \cos \gamma_{13} \end{pmatrix} .$$
The product $\mathbf{A} = \mathbf{Q}_{12} \mathbf{Q}_{23} \mathbf{Q}_{13}$, which can be written in any order, is substituted in Eq. (1) to yield three first order differential equations for the three $\gamma_{ij}$-angles, where two of these equations are coupled and the third stands on its own. Thus, with this ordering, the two coupled equations are those for $\gamma_{12}$ and $\gamma_{23}$, whereas the third equation (for $\gamma_{13}$) becomes an ordinary (line) integral expressed solely in terms of $\gamma_{12}$ and $\gamma_{23}$.
Similarly, in case of a four-state system $\mathbf{A}$ is presented as a product of six 4 x 4 Euler matrices (for the six quasi-Euler angles) and the relevant six differential equations form one set of three coupled equations, whereas the other three become, as before, ordinary line integrals.
A comment concerning the two-state (Abelian) case
Since the treatment of the two-state case as presented in Diabatic raised numerous doubts we consider it here as a special case of the non-Abelian case just discussed. For this purpose we assume the 2 × 2 ADT matrix $\mathbf{A}$ to be of the form:
$$\mathbf{A} = \begin{pmatrix} \cos \gamma & \sin \gamma \\ - \sin \gamma & \cos \gamma \end{pmatrix} .$$
Substituting this matrix in the above given first order differential equation (for $\mathbf{A}$), we get, following a few algebraic rearrangements, that the angle $\gamma$ fulfills the corresponding first order differential equation as well as the subsequent line integral:
$$\gamma(\mathbf{q} \mid \Gamma) = - \int_{\mathbf{q}_0}^{\mathbf{q}} \boldsymbol{\tau}_{12}(\mathbf{q}' \mid \Gamma) \cdot d\mathbf{q}' ,$$
where $\tau_{12}$ is the relevant NACT matrix element, the dot stands for a scalar product and $\Gamma$ is a chosen contour in configuration space (usually a planar one) along which the integration is performed. The line integral yields meaningful results if and only if the corresponding (previously derived) Curl-equation is zero for every point in the region of interest (ignoring the pathological points).
References
Quantum chemistry | Diabatic representation | [
"Physics",
"Chemistry"
] | 2,110 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
2,178,942 | https://en.wikipedia.org/wiki/Conical%20intersection | In quantum chemistry, a conical intersection of two or more potential energy surfaces is the set of molecular geometry points where the potential energy surfaces are degenerate (intersect) and the non-adiabatic couplings between these states are non-vanishing. In the vicinity of conical intersections, the Born–Oppenheimer approximation breaks down and the coupling between electronic and nuclear motion becomes important, allowing non-adiabatic processes to take place. The location and characterization of conical intersections are therefore essential to the understanding of a wide range of important phenomena governed by non-adiabatic events, such as photoisomerization, photosynthesis, vision and the photostability of DNA.
Conical intersections are also called molecular funnels or diabolic points as they have become an established paradigm for understanding reaction mechanisms in photochemistry as important as transition states in thermal chemistry. This comes from the very important role they play in non-radiative de-excitation transitions from excited electronic states to the ground electronic state of molecules. For example, the stability of DNA with respect to the UV irradiation is due to such conical intersection. The molecular wave packet excited to some electronic excited state by the UV photon follows the slope of the potential energy surface and reaches the conical intersection from above. At this point the very large vibronic coupling induces a non-radiative transition (surface-hopping) which leads the molecule back to its electronic ground state. The singularity of vibronic coupling at conical intersections is responsible for the existence of Geometric phase, which was discovered by Longuet-Higgins in this context.
Degenerate points between potential energy surfaces lie in what is called the intersection or seam space, with a dimensionality of 3N−8 (where N is the number of atoms): of the 3N−6 internal degrees of freedom, the two branching-space directions described below lift the degeneracy and are excluded from the seam. Any critical points in this space of degeneracy are characterised as minima, transition states or higher-order saddle points and can be connected to each other through the analogue of an intrinsic reaction coordinate in the seam. In benzene, for example, there is a recurrent connectivity pattern where permutationally isomeric seam segments are connected by intersections of a higher symmetry point group. The remaining two dimensions that lift the energetic degeneracy of the system are known as the branching space.
Experimental observation
In order to observe the process, it would need to be slowed from femtoseconds to milliseconds. A novel 2023 quantum experiment, involving a trapped-ion quantum computer, slowed the interference dynamics of a single atom (caused by a conical intersection) by a factor of 100 billion, making a direct observation possible.
Local characterization
Conical intersections are ubiquitous in both trivial and non-trivial chemical systems. In an idealized system of two dimensions, this can occur at one molecular geometry. If the potential energy surfaces are plotted as functions of the two coordinates, they form a cone centered at the degeneracy point. This is shown in the adjacent picture, where the upper and lower potential energy surfaces are plotted in different colors. The name conical intersection comes from this observation.
In diatomic molecules, the number of vibrational degrees of freedom is 1. Without the necessary two dimensions required to form the cone shape, conical intersections cannot exist in these molecules. Instead, the potential energy curves experience avoided crossings if they have the same point group symmetry, otherwise they can cross.
In molecules with three or more atoms, the number of degrees of freedom for molecular vibrations is at least 3. In these systems, when spin–orbit interaction is ignored, the degeneracy of conical intersection is lifted through first order by displacements in a two dimensional subspace of the nuclear coordinate space.
The two-dimensional degeneracy lifting subspace is referred to as the branching space or branching plane. This space is spanned by two vectors, the difference of energy gradient vectors of the two intersecting electronic states (the g vector), and the non-adiabatic coupling vector between these two states (the h vector). Because the electronic states are degenerate, the wave functions of the two electronic states are subject to an arbitrary rotation. Therefore, the g and h vectors are also subject to a related arbitrary rotation, despite the fact that the space spanned by the two vectors is invariant. To enable a consistent representation of the branching space, the set of wave functions that makes the g and h vectors orthogonal is usually chosen. This choice is unique up to the signs and switchings of the two vectors, and allows these two vectors to have proper symmetry when the molecular geometry is symmetric.
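The cone shape can be illustrated with a two-state linear vibronic coupling model in the branching-plane coordinates x (along the g vector) and y (along the h vector); a small sketch in Python with NumPy, with arbitrary illustrative coupling constants:

```python
import numpy as np

def adiabatic_surfaces(x, y, g=1.0, h=0.8):
    """Eigenvalues of the linear two-state model H = [[g*x, h*y], [h*y, -g*x]].
    The gap 2*sqrt((g*x)**2 + (h*y)**2) vanishes only at (0, 0), the
    conical intersection, and grows linearly along any ray leaving it."""
    half_gap = np.sqrt((g * x) ** 2 + (h * y) ** 2)
    return -half_gap, +half_gap  # lower and upper adiabatic surfaces

for x in np.linspace(-1.0, 1.0, 5):
    lower, upper = adiabatic_surfaces(x, 0.0)
    print(f"x={x:+.2f}, y=0: gap = {upper - lower:.2f}")  # linear in |x|: a cone
```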
The degeneracy is preserved through first order by differential displacements that are perpendicular to the branching space. The space of non-degeneracy-lifting displacements, which is the orthogonal complement of the branching space, is termed the seam space. Movement within the seam space will take the molecule from one point of conical intersection to an adjacent point of conical intersection. The degeneracy space connecting different conical intersections can be explored and characterised using band and molecular dynamics methods.
For an open-shell molecule, when spin–orbit interaction is added to the Hamiltonian, the dimensionality of the seam space is reduced.
The presence of conical intersections can be detected experimentally. It has been proposed that two-dimensional spectroscopy can be used to detect their presence through the modulation of the frequency of the vibrational coupling mode. A more direct spectroscopy of conical intersections, based on ultrafast X-ray transient absorption spectroscopy, was proposed, offering new approaches to their study.
Categorization by symmetry of intersecting electronic states
Conical intersections can occur between electronic states with the same or different point group symmetry, with the same or different spin symmetry. When restricted to a non-relativistic Coulomb Hamiltonian, conical intersections can be classified as symmetry-required, accidental symmetry-allowed, or accidental same-symmetry, according to the symmetry of the intersecting states.
A symmetry-required conical intersection is an intersection between two electronic states carrying the same multidimensional irreducible representation. For example, intersections between a pair of E states at a geometry that has a non-abelian group symmetry (e.g. C3h, C3v or D3h). It is named symmetry-required because these electronic states will always be degenerate as long as the symmetry is present. Symmetry-required intersections are often associated with Jahn–Teller effect.
An accidental symmetry-allowed conical intersection is an intersection between two electronic states that carry different point group symmetry. It is called accidental because the states may or may not be degenerate when the symmetry is present. Movement along one of the dimensions along which the degeneracy is lifted, the direction of the difference of the energy gradients of the two electronic states, will preserve the symmetry while displacements along the other degeneracy lifting dimension, the direction of the non-adiabatic couplings, will break the symmetry of the molecule. Thus, by enforcing the symmetry of the molecule, the degeneracy lifting effect caused by inter-state couplings is prevented. Therefore, the search for a symmetry-allowed intersection becomes a one-dimensional problem and does not require knowledge of the non-adiabatic couplings, significantly simplifying the effort. As a result, all the conical intersections found through quantum mechanical calculations during the early years of quantum chemistry were symmetry-allowed intersections.
An accidental same-symmetry conical intersection is an intersection between two electronic states that carry the same point group symmetry. While this type of intersection was traditionally more difficult to locate, a number of efficient searching algorithms and methods to compute non-adiabatic couplings have emerged in the past decade. It is now understood that same-symmetry intersections play as important a role in non-adiabatic processes as symmetry-allowed intersections.
See also
Born–Oppenheimer approximation
Potential energy surface
Geometric phase
Christopher Longuet-Higgins
Diabatic Representation
Jahn–Teller effect
Avoided crossing
Bond softening
Bond hardening
Vibronic coupling
Surface hopping
Ab initio multiple spawning
References
External links
Computational Organic Photochemistry
Potential Energy Surfaces and Conical Intersections
Quantum chemistry | Conical intersection | [
"Physics",
"Chemistry"
] | 1,653 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
2,179,127 | https://en.wikipedia.org/wiki/Treasury%20tag | A treasury tag, India tag, or string tag is an item of stationery used to fasten sheets of paper together or to a folder. It consists of a short length of string, with metal or plastic cross-pieces at each end that are orthogonal to the string. They are threaded through holes in paper or card made with a hole punch, a lawyer's bodkin, or an electric drill, and the cross-pieces are sufficiently wide that they do not slip back through the holes.
The names Treasury tag and India tag are first found on record in a list of stationery items published by His Majesty's Stationery Office (HMSO) in 1912, and, both being capitalised, probably refer to HM Treasury and the India Office. While the terms are now equivalent, a Treasury tag was originally a lace with a sharp metal tag at one end, which could be threaded through the holes in a stack of documents or cards and inserted into a corresponding tag at the other end, thus forming a loop and binding the documents. The tags, in that case, were in line with the string, similar to aglets on a shoelace.
References
Stationery
Fasteners | Treasury tag | [
"Engineering"
] | 235 | [
"Construction",
"Fasteners"
] |
2,179,144 | https://en.wikipedia.org/wiki/Glass%20with%20embedded%20metal%20and%20sulfides | Glass with embedded metal and sulfides (GEMS) are tiny spheroids in cosmic dust particles with bulk compositions that are approximately chondritic. They form the building blocks of anhydrous interplanetary dust particles (IDPs) in general, and cometary IDPs, in particular. Their compositions, mineralogy and petrography appear to have been shaped by exposure to ionizing radiation. Since the exposure occurred prior to the accretion of cometary IDPs, and therefore comets themselves, GEMS are likely either solar nebula or presolar interstellar grains. The properties of GEMS (size, shape, mineralogy) bear a strong resemblance to those of interstellar silicate grains as inferred from astronomical observations.
References
Footnotes
Planetary science
Solar System
Meteoroids
Glass compositions
Glass in nature | Glass with embedded metal and sulfides | [
"Chemistry",
"Astronomy"
] | 167 | [
"Glass chemistry",
"Outer space",
"Glass compositions",
"Astronomy stubs",
"Planetary science stubs",
"Planetary science",
"Solar System",
"Astronomical sub-disciplines"
] |
2,179,433 | https://en.wikipedia.org/wiki/Supersecondary%20structure | A supersecondary structure is a compact three-dimensional protein structure of several adjacent elements of a secondary structure that is smaller than a protein domain or a subunit. Supersecondary structures can act as nucleations in the process of protein folding.
Examples
Helix supersecondary structures
Helix hairpin
A helix hairpin, also known as an alpha-alpha hairpin, is composed of two antiparallel alpha helices connected by a loop of two or more residues. True to its name, it resembles a hairpin. A longer loop has a greater number of possible conformations. If a short loop connects the helices, the individual helices will pack together through their hydrophobic residues. The function of a helix hairpin is unknown; however, a four helix bundle is composed of two helix hairpins, which have important ligand binding sites.
Helix corner
A helix corner, also called an alpha-alpha corner, has two alpha helices almost at right angles to each other connected by a short 'loop'. This loop is formed from a hydrophobic residue. The function of a helix corner is unknown.
Helix-loop-helix
The helix-loop-helix structure has two helices connected by a 'loop'. These are fairly common and usually bind ligands. For example, calcium binds with the carboxyl groups of the side chains within the loop region between the helices.
Helix-turn-helix
The helix-turn-helix motif is important for DNA binding and is therefore in many DNA binding proteins.
Beta sheet supersecondary structures
Beta hairpin
A beta hairpin is a common supersecondary motif composed of two anti-parallel beta strands connected by a loop. The structure resembles a hairpin and is often found in globular proteins.
The loop between the beta strands can range anywhere from 2 to 16 residues. However, most loops contain less than seven residues. Residues in beta hairpins with loops of 2, 3, or 4 residues have distinct conformations. However, a wide range of conformations can be seen in longer loops, which are sometimes referred to as 'random coils'. A beta-meander consists of consecutive antiparallel-beta strands linked by hairpins.
Two residue loops are called beta turns or reverse turns. Type I' and Type II' reverse turns occur most frequently because they have less steric hindrance than Type I and Type II turns. The function of beta hairpins is unknown.
Beta corner
A beta corner has two antiparallel beta strands that are at about a 90 degree angle to each other. It is formed by a beta hairpin changing direction, with one strand having a glycine residue and the other strand having a beta bulge. Beta corners have no known function.
Greek key motif
A Greek key motif has four features:
Four sequentially connected beta strands are adjacent to, but not necessarily geometrically aligned with, each other.
The beta sheet is anti-parallel, and alternate strands run in the same directions.
The first strand and last strand are next to each other and bonded by hydrogen bonds.
Connecting loops can be long and include other secondary structures.
The Greek key motif has its name because the structure looks like the pattern seen on Greek urns. This motif has no known function.
Other
β-sheets (composed of multiple hydrogen-bonded individual β-strands) are sometimes considered a secondary or supersecondary structure.
Mixed supersecondary structures
Beta-alpha-beta motifs
A beta-alpha-beta motif is composed of two beta strands joined by an alpha helix through connecting loops. The beta strands are parallel, and the helix is also almost parallel to the strands. This structure can be seen in almost all proteins with parallel strands. The loops connecting the beta strands and alpha helix can vary in length and often binds ligands.
Beta-alpha-beta motifs can be either left-handed or right-handed. When viewed from the N-terminal side of the beta strands, so that one strand is on top of the other, a left-handed beta-alpha-beta motif has the alpha helix on the left side of the plane containing the beta strands. The more common right-handed motif has the alpha helix on the right side of that plane.
Rossman fold
Rossman folds, named after Michael Rossman, consist of 3 beta strands and 2 helices in an alternating fashion: beta strand, helix, beta strand, helix, beta strand. This motif tends to reverse the direction of the chain within a protein. Rossman folds have an important biological function in binding nucleotides such as NAD within most dehydrogenases.
See also
Protein folding
Secondary structure
Structural motif
References
Further reading
Protein structural motifs | Supersecondary structure | [
"Biology"
] | 967 | [
"Protein structural motifs",
"Protein classification"
] |
2,179,440 | https://en.wikipedia.org/wiki/Blended%20wing%20body | A blended wing body (BWB), also known as blended body, hybrid wing body (HWB) or a lifting aerofoil fuselage, is a fixed-wing aircraft having no clear dividing line between the wings and the main body of the craft. The aircraft has distinct wing and body structures, which are smoothly blended together with no clear dividing line. This contrasts with a flying wing, which has no distinct fuselage, and a lifting body, which has no distinct wings. A BWB design may or may not be tailless.
The main advantage of the BWB is to reduce wetted area and the accompanying form drag associated with a conventional wing-body junction. It may also be given a wide airfoil-shaped body, allowing the entire craft to generate lift and thus reducing the size and drag of the wings.
The BWB configuration is used for both aircraft and underwater gliders.
History
In the early 1920s Nicolas Woyevodsky developed a theory of the BWB and, following wind tunnel tests, the Westland Dreadnought was built. It stalled on its first flight in 1924, severely injuring the pilot, and the project was cancelled. The idea was proposed again in the early 1940s for a Miles M.26 airliner project and the Miles M.30 "X Minor" research prototype was built to investigate it. The McDonnell XP-67 prototype interceptor also flew in 1944 but did not meet expectations. The 1944 Burnelli CBY-3 Loadmaster was a blended wing design intended for Canadian bush operations.
NASA and McDonnell Douglas returned to the concept in the 1990s with an artificially stabilized model (6% scale) called BWB-17, built by Stanford University, which was flown in 1997 and showed good handling qualities. From 2000 NASA went on to develop a remotely controlled research model with a wingspan.
NASA has also jointly explored BWB designs for the Boeing X-48 unmanned aerial vehicle. Studies suggested that a BWB airliner carrying from 450 to 800 passengers could achieve fuel savings of over 20 percent.
Airbus is studying a BWB design as a possible replacement for the A320neo family. A sub-scale model flew for the first time in June 2019 as part of the MAVERIC (Model Aircraft for Validation and Experimentation of Robust Innovative Controls) programme, which Airbus hopes will help it reduce CO2 emissions by up to 50% relative to 2005 levels.
The N3-X NASA concept uses a number of superconducting electric motors to drive the distributed fans to lower the fuel burn, emissions, and noise. The power to drive these electric fans is generated by two wingtip-mounted gas-turbine-driven superconducting electric generators. This idea for a possible future aircraft is called a "hybrid wing body" or sometimes a blended wing body. In this design, the wing blends seamlessly into the body of the aircraft, which makes it extremely aerodynamic and holds great promise for dramatic reductions in fuel consumption, noise and emissions. NASA develops concepts like these to test in computer simulations and as models in wind tunnels to prove whether the possible benefits would actually occur.
2020s
In 2020, Airbus presented a BWB concept as part of its ZEROe initiative and demonstrated a small-scale aircraft.
In 2022, Bombardier announced its EcoJet project.
In 2023, California startup JetZero announced its Z5 project, designed to carry 250 passengers, targeting the New Midmarket Airplane category, expecting to use existing CFM International LEAP or Pratt & Whitney PW1000G engines. In August 2023, the U.S. Air Force announced a $235-million contract awarded over a four-year period to JetZero, culminating in first flight of the full-scale demonstrator by the first quarter of 2027. The goal of the contract is to demonstrate the capabilities of BWB technology, giving the Department of Defense and commercial industry more options for their future air platforms.
Following this development, JetZero has received FAA clearance for test flights of its Pathfinder, a 'blended-wing' demonstrator plane designed to significantly reduce drag and fuel consumption. This innovative design could potentially lower emissions by 50%. Scheduled for full-scale development by 2030, JetZero plans to create variants for passengers, cargo, and military use. The project faces challenges in certification and integration with current airport infrastructures.
Characteristics
The wide interior spaces created by the blending pose novel structural challenges. NASA has been studying foam-clad stitched-fabric carbon fiber composite skinning to create uninterrupted cabin space.
The BWB form minimizes the total wetted area – the surface area of the aircraft skin, thus reducing skin drag to a minimum. It also creates a thickening of the wing root area, allowing a more efficient structure and reduced weight compared to a conventional craft. NASA also plans to integrate Ultra High Bypass (UHB) ratio jet engines with the hybrid wing body.
A conventional tubular fuselage carries 12–13% of the total lift compared to 31–43% carried by the centerbody in a BWB, where an intermediate lifting-fuselage configuration better suited to narrowbody-sized airliners would carry 25–32% for a 6.1–8.2% increase in fuel efficiency.
Potential advantages
Significant payload advantages in strategic airlift, air freight, and aerial refueling roles
Increased fuel efficiency — 10.9% better than a conventional widebody, and over 20% better than a comparable conventional aircraft. A 2022 US Air Force report shows a BWB "increases aerodynamic efficiency by at least 30% over current air force tanker and mobility aircraft".
Lower noise — NASA audio simulations show a 15 dB reduction of Boeing 777-class aircraft, while other studies show reduction below Stage 4 level, depending on configuration.
Potential disadvantages
Evacuating a BWB in an emergency could be a challenge. Because of the aircraft's shape, the seating layout would be theater-style instead of tubular. This imposes inherent limits on the number of exit doors.
It has been suggested that BWB interiors would be windowless; more recent information shows that windows may be positioned differently but involve the same weight penalties as a conventional aircraft.
It has been suggested that passengers at the edges of the cabin may feel uncomfortable during wing roll; however, passengers in large conventional aircraft like the 777 are equally susceptible to such roll.
The center wingbox needs to be tall to be used as a passenger cabin, requiring a larger wing span to balance out.
A BWB has more empty weight for a given payload, and may not be economical for short missions of around four or fewer hours.
A larger wing span may be incompatible with some airport infrastructure, requiring folding wings similar to the Boeing 777X.
It is more expensive to modify the design to create differently-sized variants compared to a conventional fuselage and wing which can be stretched or shrunk easily.
Pitch control and lift capability at low speed have presented challenges for blended-wing designs. JetZero has proposed a novel landing gear design to address these issues for its Z-5 BWB concept.
List of blended wing body aircraft
Aircraft | Country | Type | Role | First flight | Status | Number built | Notes
Airbus Maveric | Multinational | UAV | Experimental | 2019 | Prototype | 1 |
Boeing X-45 | US | UAV | Experimental | 2002 | Prototype | 2 |
Boeing X-48 (C) | US | UAV | Experimental | 2013 | Prototype | 2 | Two engines
Boeing X-48 (B) | US | UAV | Experimental | 2007 | Prototype | 2 | Three engines
Lockheed A-12, M-21 and YF-12 | US | Jet | Reconnaissance | 1962 | Production | 18 | YF-12 was a prototype interceptor
Lockheed SR-71 Blackbird | US | Jet | Reconnaissance | 1964 | Production | 32 |
Northrop Grumman Bat | US | Prop/electric | Reconnaissance | 2006 | Production | 10 |
McDonnell XP-67 | US | Propeller | Fighter | 1944 | Prototype | 1 | Aerofoil profile maintained throughout
McDonnell / NASA BWB-17 | US | UAV | Experimental | 1997 | Prototype | 1 |
Miles M.30 | UK | Propeller | Experimental | 1942 | Prototype | 1 |
Rockwell B-1 Lancer | US | Jet | Bomber | 1974 | Production | 104 | Variable-sweep wing
Tupolev Tu-160 | USSR | Jet | Bomber | 1981 | Production | 36 | Variable-sweep wing
Tupolev Tu-404 | Russia | Propeller | Airliner | 1991 | Project | 0 | One of two alternatives studied
Westland Dreadnought | UK | Propeller | Transport | 1924 | Prototype | 1 | Mail plane. Aerofoil profile maintained throughout
In popular culture
Popular Science concept art
A concept photo of a blended wing body commercial aircraft appeared in the November 2003 issue of Popular Science magazine. Artists Neill Blomkamp and Simon van de Lagemaat from The Embassy Visual Effects created the photo for the magazine using computer graphics software to depict the future of aviation and air travel. In 2006 the image was used in an email hoax claiming that Boeing had developed a 1000-passenger jetliner (the "Boeing 797") with a "radical Blended Wing design" and Boeing refuted the claim.
See also
Aurora D8
Flying-V jet
List of flying wings
Lifting body
Silent Aircraft Initiative, a BWB study
References
Further reading
Aircraft aerodynamics
Aircraft configurations
Aircraft wing design
Lists of aircraft by wing configuration | Blended wing body | [
"Engineering"
] | 2,069 | [
"Aircraft configurations",
"Aerospace engineering"
] |
2,179,555 | https://en.wikipedia.org/wiki/Airport%20problem | In mathematics and especially game theory, the airport problem is a type of fair division problem in which it is decided how to distribute the cost of an airport runway among different players who need runways of different lengths. The problem was introduced by S. C. Littlechild and G. Owen in 1973. Their proposed solution is:
Divide the cost of providing the minimum level of required facility for the smallest type of aircraft equally among the number of landings of all aircraft
Divide the incremental cost of providing the minimum level of required facility for the second smallest type of aircraft (above the cost of the smallest type) equally among the number of landings of all but the smallest type of aircraft. Continue thus until finally the incremental cost of the largest type of aircraft is divided equally among the number of landings made by the largest aircraft type.
The authors note that the resulting set of landing charges is the Shapley value for an appropriately defined game.
Introduction
In an airport problem there is a finite population N and a nonnegative cost function C: N → R. For technical reasons it is assumed that the population is taken from the set of the natural numbers: players are identified with their 'ranking number'. The cost function satisfies the inequality C(i) < C(j) whenever i < j. It is typical for airport problems that the cost C(i) is assumed to be a part of the cost C(j) if i < j, i.e. a coalition S is confronted with costs c(S) := max {C(i) : i ∈ S}. In this way an airport problem generates an airport game (N, c). As the value of each one-person coalition {i} equals C(i), we can rediscover the airport problem from the airport game.
Nash Equilibrium
Nash equilibrium, also known as non-cooperative game equilibrium, is an essential concept in game theory, described by John Nash in 1951. During a game, a strategy that a player would choose regardless of the opponents' strategy choices is called a dominant strategy. If every participant's chosen strategy is optimal given the strategies of all other participants, that combination of strategies is a Nash equilibrium. A game may have multiple Nash equilibria or none. In a Nash equilibrium, each player's equilibrium strategy maximizes that player's expected return, given that all other players also follow their equilibrium strategies.
Shapley value
The Shapley value is a solution concept used in cooperative game theory. It applies to situations in which the contributions of the actors are unequal, but the participants cooperate with each other to obtain a profit or return. The resulting allocation is widely regarded as efficient and fair, and it reflects the process of mutual bargaining among the coalition members. However, the benefit-distribution plan given by the Shapley value does not consider the risk borne by individual members; it implicitly assumes that risk is shared equally. Where risk sharing is unequal, the Shapley-value allocation needs to be amended accordingly.
Example
An airport needs to build a runway for 4 different aircraft types. The building cost associated with each aircraft type is 8, 11, 13 and 18 for aircraft A, B, C and D respectively. Applying the scheme above (assuming one landing per aircraft type), the first 8 units are split equally among all four, the increment of 3 among B, C and D, the increment of 2 between C and D, and the final 5 to D alone, giving Shapley-value cost shares of A = 2, B = 3, C = 4 and D = 9, as the sketch below reproduces.
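The allocation follows mechanically from the incremental-cost rule above. A minimal Python sketch (the function name is illustrative; it assumes one landing per aircraft type):

```python
def airport_cost_shares(costs):
    """Littlechild-Owen allocation (the Shapley value of the airport game).

    `costs` maps each player to its runway cost; each increment of cost is
    split equally among all players that need at least that much runway.
    Assumes one landing per player.
    """
    shares = {p: 0.0 for p in costs}
    players = sorted(costs, key=costs.get)
    prev = 0.0
    for i, p in enumerate(players):
        increment = costs[p] - prev
        for q in players[i:]:  # players needing this much runway or more
            shares[q] += increment / (len(players) - i)
        prev = costs[p]
    return shares

print(airport_cost_shares({"A": 8, "B": 11, "C": 13, "D": 18}))
# {'A': 2.0, 'B': 3.0, 'C': 4.0, 'D': 9.0}
```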
See also
Introduction video of confrontation analysis.
List of games in game theory.
References
Fair division
Cooperative games | Airport problem | [
"Mathematics"
] | 727 | [
"Recreational mathematics",
"Fair division",
"Game theory",
"Cooperative games"
] |
2,179,639 | https://en.wikipedia.org/wiki/Obstruction%20theory | In mathematics, obstruction theory is a name given to two different mathematical theories, both of which yield cohomological invariants.
In the original work of Stiefel and Whitney, characteristic classes were defined as obstructions to the existence of certain fields of linearly independent vectors. Obstruction theory turns out to be an application of cohomology theory to the problem of constructing a cross-section of a bundle.
In homotopy theory
The older meaning for obstruction theory in homotopy theory relates to the procedure, inductive with respect to dimension, for extending a continuous mapping defined on a simplicial complex, or CW complex. It is traditionally called Eilenberg obstruction theory, after Samuel Eilenberg. It involves cohomology groups with coefficients in homotopy groups to define obstructions to extensions. For example, with a mapping from a simplicial complex X to another, Y, defined initially on the 0-skeleton of X (the vertices of X), an extension to the 1-skeleton will be possible whenever the image of the 0-skeleton belongs to the same path-connected component of Y. Extending from the 1-skeleton to the 2-skeleton means defining the mapping on each solid triangle from X, given the mapping already defined on its boundary edges. Likewise, then extending the mapping to the 3-skeleton involves extending the mapping to each solid 3-simplex of X, given the mapping already defined on its boundary.
At some point, say extending the mapping from the (n−1)-skeleton of X to the n-skeleton of X, this procedure might be impossible. In that case, one can assign to each n-simplex the homotopy class of the mapping already defined on its boundary, an element of π_{n−1}(Y) (at least one of which will be non-zero). These assignments define an n-cochain with coefficients in π_{n−1}(Y). Remarkably, this cochain turns out to be a cocycle and so defines a cohomology class in the nth cohomology group of X with coefficients in π_{n−1}(Y). When this cohomology class is equal to 0, it turns out that the mapping may be modified within its homotopy class on the (n−1)-skeleton of X so that the mapping may be extended to the n-skeleton of X. If the class is not equal to zero, it is called the obstruction to extending the mapping over the n-skeleton, given its homotopy class on the (n−1)-skeleton.
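In symbols, for a map f already defined on the (n−1)-skeleton (the symbol θ for the obstruction cochain is a notational choice, not fixed by the sources):

```latex
% Obstruction cochain: evaluate f on the boundary of each n-simplex \sigma
\theta(f)(\sigma) = \big[\, f|_{\partial \sigma} \,\big] \in \pi_{n-1}(Y),
\qquad \theta(f) \in C^{n}\big(X;\ \pi_{n-1}(Y)\big).
% It is a cocycle, and its class is the obstruction to extending f:
\delta \theta(f) = 0,
\qquad \big[\theta(f)\big] \in H^{n}\big(X;\ \pi_{n-1}(Y)\big).
```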
Obstruction to extending a section of a principal bundle
Construction
Suppose that B is a simply connected simplicial complex and that p : E → B is a fibration with fiber F. Furthermore, assume that we have a partially defined section σ on the n-skeleton of B.
For every (n+1)-simplex Δ in B, σ can be restricted to the boundary ∂Δ (which is a topological n-sphere). Because p sends each σ(∂Δ) back to Δ, σ defines a map from the n-sphere to p⁻¹(Δ). Because fibrations satisfy the homotopy lifting property, and Δ is contractible, p⁻¹(Δ) is homotopy equivalent to F. So this partially defined section assigns an element of π_n(F) to every (n+1)-simplex. This is precisely the data of a π_n(F)-valued simplicial cochain of degree n+1 on B, i.e. an element of C^{n+1}(B; π_n(F)). This cochain is called the obstruction cochain because it being zero means that all of these elements of π_n(F) are trivial, which means that our partially defined section can be extended to the (n+1)-skeleton by using the homotopy between (the partially defined section on the boundary of each Δ) and the constant map.
The fact that this cochain came from a partially defined section (as opposed to an arbitrary collection of maps from all the boundaries of all the (n+1)-simplices) can be used to prove that this cochain is a cocycle. If one started with a different partially defined section that agreed with the original on the (n−1)-skeleton, then one can also prove that the resulting cocycle would differ from the first by a coboundary. Therefore we have a well-defined element of the cohomology group H^{n+1}(B; π_n(F)) such that if a partially defined section on the (n+1)-skeleton exists that agrees with the given choice on the (n−1)-skeleton, then this cohomology class must be trivial.
The converse is also true if one allows such things as homotopy sections, i.e. a map σ : B → E such that p∘σ is homotopic (as opposed to equal) to the identity map on B. Thus it provides a complete invariant of the existence of sections up to homotopy on the (n+1)-skeleton.
Applications
By inducting over n, one can construct a first obstruction to a section as the first of the above cohomology classes that is non-zero.
This can be used to find obstructions to trivializations of principal bundles.
Because any map can be turned into a fibration, this construction can be used to see if there are obstructions to the existence of a lift (up to homotopy) of a map into B to a map into E, even if p : E → B is not a fibration.
It is crucial to the construction of Postnikov systems.
In geometric topology
In geometric topology, obstruction theory is concerned with when a topological manifold has a piecewise linear structure, and when a piecewise linear manifold has a differential structure.
In dimension at most 2 (Rado), and 3 (Moise), the notions of topological manifolds and piecewise linear manifolds coincide. In dimension 4 they are not the same.
In dimensions at most 6 the notions of piecewise linear manifolds and differentiable manifolds coincide.
In surgery theory
The two basic questions of surgery theory are whether a topological space with n-dimensional Poincaré duality is homotopy equivalent to an n-dimensional manifold, and also whether a homotopy equivalence of n-dimensional manifolds is homotopic to a diffeomorphism. In both cases there are two obstructions for n>9, a primary topological K-theory obstruction to the existence of a vector bundle: if this vanishes there exists a normal map, allowing the definition of the secondary surgery obstruction in algebraic L-theory to performing surgery on the normal map to obtain a homotopy equivalence.
See also
Kirby–Siebenmann class
Wall's finiteness obstruction
References
Homotopy theory
Differential topology
Surgery theory
Theories | Obstruction theory | [
"Mathematics"
] | 1,281 | [
"Topology",
"Differential topology"
] |
2,179,652 | https://en.wikipedia.org/wiki/Cluster%20impact%20fusion | Cluster impact fusion is a suggested method of producing practical fusion power using small clusters of heavy water molecules accelerated directly into a titanium-deuteride target. Calculations suggested that such a system enhanced the fusion cross section by many orders of magnitude. It is a particular implementation of the broader beam-target fusion concept.
The idea was first reported by researchers at Brookhaven in 1989. Intrigued by recent reports of cold fusion, they attempted to study potential causes for the effect by accelerating tiny droplets of heavy water, about 25 to 1300 D2O molecules each, into a target at about 220 eV. To their surprise they immediately saw fusion effects, at a rate that was many times what any of them could explain via conventional theory.
The experiment was fairly simple in concept but required an appropriate accelerator, so it was some time before other labs were able to repeat the experiments. One of the first was the University of Washington, who reported a null result in 1991. Further experiments and a review from MIT in 1992 solved the mystery: the fusion products were the results of contamination, which could be eliminated by filtering with a magnet. The Brookhaven experimenters tried this and the effect disappeared.
Published references to cluster impact fusion end abruptly at that point.
See also
Impact fusion, which fires macrons (macroscopic projectiles) or other projectiles into fuel to compress and heat it
References
Nuclear technology
Cold fusion | Cluster impact fusion | [
"Physics",
"Chemistry"
] | 273 | [
"Nuclear fusion",
"Nuclear technology",
"Cold fusion",
"Nuclear physics"
] |
2,179,719 | https://en.wikipedia.org/wiki/Fortress%20Europe | Fortress Europe (German: Festung Europa) was a military propaganda term used by both sides of World War II which referred to the areas of Continental Europe occupied by Nazi Germany, as opposed to the United Kingdom across the Channel.
World War II defences
In British phraseology, Fortress Europe meant the battle honour accorded to Royal Air Force and Allied squadrons during the war, but to qualify, operations had to be made by aircraft based in Britain against targets in Germany, Italy and other parts of German-occupied Europe, in the period from the fall of France to the Normandy invasion.
Simultaneously, the term Festung Europa was being used by Nazi propaganda, namely to refer to Hitler's and the Wehrmacht's plans to fortify the whole of occupied Europe, in order to prevent an invasion by Allied forces. These measures included the construction of the Atlantic wall, along with the reorganization of the Luftwaffe for air defence. This use of the term Fortress Europe was subsequently adopted by correspondents and historians in the English language to describe the military efforts of the Axis powers at defending the continent from the Allies.
Postwar usage
Currently, within Europe, the term is used either to describe the dumping effect of external borders in commercial matters, or as a pejorative description of the state of immigration into the European Union. This can refer to attitudes towards immigration, to border-fortification policies pursued for instance in the Spanish North African enclaves of Ceuta and Melilla, or to the increasing externalization of borders used to help prevent asylum seekers and other migrants from entering the European Union.
For right-wing and nationalist parties such as the Freedom Party of Austria, 'Fortress Europe' is a positive term. They mostly claim that such a fortress does not really exist yet, and that immigrants can enter Europe far too easily. They often charge the southern states with insufficient border control, claiming that the latter are acting on the knowledge that immigrants tend to be more attracted to western/northern states with more generous welfare systems such as Switzerland, Germany, Austria, and Sweden.
Controlled external borders
Ceuta and Melilla (Spain) (from Morocco)
Italian and Maltese coast (from Libya and Tunisia)
Canary Islands (Spain) (from Morocco, Western Sahara and Mauritania)
Maritsa (Turkey) (from Near East)
Eastern border of the European Union (from Ukraine, Belarus, Moldova, Russia)
South-Eastern border of the European Union (from Bosnia and Herzegovina, Serbia, Montenegro, Albania, North Macedonia)
Strait of Gibraltar (from Morocco)
South Aegean and North Aegean (from Near East)
See also
Hindenburg Line, German defences on the Western Front of World War I
Siegfried Line, German defences against France in World War II
Maginot Line, French defenses against Germany constructed for World War II
Salpa Line, The last fortified defence line of Finland against the Soviet Union in World War II
Iron Curtain, dividing line through Europe during the Cold War
References
Operation Overlord
World War II defensive lines
Historic defensive lines
World War II propaganda
Illegal immigration to Europe
Axis powers | Fortress Europe | [
"Engineering"
] | 607 | [
"World War II defensive lines",
"Fortification lines",
"Historic defensive lines"
] |
2,180,015 | https://en.wikipedia.org/wiki/Phases%20of%20Venus | The phases of Venus are the variations of lighting seen on the planet's surface, similar to lunar phases. The first recorded observations of them are thought to have been telescopic observations by Galileo Galilei in 1610. Although the extreme crescent phase of Venus has since been observed with the naked eye, there are no indisputable historical pre-telescopic records of it being described or known.
Observation
Venus orbits the Sun in 224.7 Earth days (about 7.4 average Earth months of 30.4 days). The phases of Venus result from the planet's orbit around the Sun inside the Earth's orbit, giving the telescopic observer a sequence of progressive lighting similar in appearance to the Moon's phases. The planet presents a full image when it is on the opposite side of the Sun, and a gibbous phase as it approaches or leaves that position. It shows a quarter phase at its maximum elongation from the Sun. Venus presents a thin crescent in telescopic views as it comes around to the near side between the Earth and the Sun, and its new phase when it is directly between the Earth and the Sun. Since the planet has an atmosphere, it can be seen at new in a telescope by the halo of sunlight refracted around the planet. The full cycle from new to full to new again takes 584 days (the time it takes Venus to overtake the Earth in its orbit), so Venus (like the Moon) has four primary phases, each lasting about 146 days.
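The 584-day figure is the synodic period of Venus, which follows from the two orbital periods quoted here; a minimal check in Python (variable names are illustrative):

```python
# Synodic period as seen from Earth: 1/S = 1/P_venus - 1/P_earth
P_VENUS = 224.7   # sidereal orbital period of Venus, Earth days
P_EARTH = 365.25  # orbital period of Earth, days

synodic = 1.0 / (1.0 / P_VENUS - 1.0 / P_EARTH)
print(round(synodic))      # ~584 days from new to new
print(round(synodic / 4))  # ~146 days per primary phase
```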
The planet also changes in apparent size, from 9.9 arc seconds at full (superior conjunction) up to a maximum of 68 arc seconds at new (inferior conjunction). Venus reaches its greatest magnitude, about −4.5, as an intermediate crescent, at the point in its orbit where it is 68 million km from the Earth and the illuminated part of its disk has its greatest angular area as seen from the Earth (a combination of its closeness and the fact that it is 28% illuminated).
Contrary to other planets its apparent magnitude around inferior conjunction does not decrease consistently but rather spikes before dimming further. This is caused by sulfuric acid droplets in Venus' atmosphere reflecting more light at a certain angle and thus phase, an effect similar to a glory on Earth.
History
The first observations of the full planetary phases of Venus were by Galileo at the end of 1610 (though not published until 1613 in the Letters on Sunspots). Using a telescope, Galileo was able to observe Venus going through a full set of phases, something prohibited by the Ptolemaic system that assumed Venus to be a perfect celestial body. In the Ptolemaic system, the Sun and Venus circle the earth, with Venus orbiting around a point on the Earth-Sun axis, so that Venus is never on the far side of the sun. One could never expect an alignment Sun-Earth-Venus or Venus-Sun-Earth to occur, so that a full Venus could never be observed. Galileo's observations of the phases of Venus essentially ruled out the Ptolemaic system, and was compatible only with the Copernican system and the Tychonic system and other models such as the Capellan and Riccioli's extended Capellan model.
There is some controversy about Galileo's claim to having first observed the phases of Venus: in December 1610, Galileo received a letter from fellow scientist Benedetto Castelli asking whether the phases of Venus were observable through Galileo's new telescope. Days later, Galileo wrote to Johannes Kepler saying that he had observed Venus going through phases, taking complete credit for himself. It is unclear, lacking copies of any earlier correspondence, whether Castelli was telling Galileo of it for the first time, or responding to Galileo having previously informed him of it.
Curiously, Galileo's letter to Kepler was encrypted so that Kepler could not scoop Galileo before he had made more exhaustive observations: Galileo took a sentence stating that Venus went through phases:
Cynthiae figuras aemulatur mater amorum (The mother of love imitates the shape of Cynthia)
And scrambled the letters into a strange anagram:
Haec immatura a me iam frustra leguntur o.y. (These immature things are read by me now in vain, o.y.)
Cynthia was a popular epithet for the Moon, the mother of love of course being Venus. He sent the anagram to Kepler, then a few months later sent the decoded version. This way he had proof of having made the observation, without Kepler being able to publish it earlier. This technique of hiding encoded announcements in letters was not uncommon at the time.
Naked eye observations
The extreme crescent phase of Venus can be seen without a telescope by those with exceptionally acute eyesight, at the limit of human perception. The angular resolution of the naked eye is about 1 minute of arc (60 seconds). The apparent disk of Venus' extreme crescent measures between 60.2 and 66 seconds of arc, depending on the distance from Earth.
Mesopotamian priest-astronomers described Ishtar (Venus) in cuneiform text as having horns which has been interpreted as indicating observation of a crescent. However, other Mesopotamian deities were depicted with horns, so the phrase could have been simply a symbol of divinity.
See also
Aspects of Venus
Ashen light
Transit of Venus
Notes
References
External links
Observations and Theories of Planetary Motion
The crescent Venus seen with the naked eye
Owen Gingerich - Empirical proof and/or persuasion — lecture on Galileo's observation of the phases of Venus from a renowned historian of science
YouTube animation of the phases of Venus predicted by the pure geocentric Ptolemaic model
YouTube animation of the phases of Venus predicted by the heliocentric model (and implicitly also by the geo-heliocentric models such as the Tychonic)
Observational astronomy
Venus
Discoveries by Galileo Galilei | Phases of Venus | [
"Astronomy"
] | 1,216 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
2,180,085 | https://en.wikipedia.org/wiki/Otic%20ganglion | The otic ganglion is a small parasympathetic ganglion located immediately below the foramen ovale in the infratemporal fossa and on the medial surface of the mandibular nerve. It is functionally associated with the glossopharyngeal nerve and innervates the parotid gland for salivation.
It is one of four parasympathetic ganglia of the head and neck. The others are the ciliary ganglion, the submandibular ganglion and the pterygopalatine ganglion.
Structure and relations
The otic ganglion is a small (2–3 mm), oval shaped, flattened parasympathetic ganglion of a reddish-grey color, located immediately below the foramen ovale in the infratemporal fossa and on the medial surface of the mandibular nerve.
It is in relation, laterally, with the trunk of the mandibular nerve at the point where the motor and sensory roots join; medially, with the cartilaginous part of the auditory tube, and the origin of the tensor veli palatini; posteriorly, with the middle meningeal artery. It surrounds the origin of the nerve to the medial pterygoid.
Connections
The preganglionic parasympathetic fibres originate in the inferior salivatory nucleus of the glossopharyngeal nerve. They leave the glossopharyngeal nerve by its tympanic branch and then pass via the tympanic plexus and the lesser petrosal nerve to the otic ganglion. Here, the fibers synapse and the postganglionic fibers pass by communicating branches to the auriculotemporal nerve, which conveys them to the parotid gland. They produce vasodilator and secretomotor effects.
Its sympathetic root is derived from the plexus on the middle meningeal artery. It contains post-ganglionic fibers arising in the superior cervical ganglion. The fibers pass through the ganglion without relay and reach the parotid gland via the auriculotemporal nerve. They are vasomotor in function.
The sensory root comes from the auriculotemporal nerve and is sensory to the parotid gland.
The motor fibers supplying the medial pterygoid and the tensor veli palatini and the tensor tympani pass through the ganglion without relay.
Clinical significance
Frey's syndrome is caused by re-routing of parasympathetic and sympathetic fibres of the auriculotemporal nerve (V3) within the otic ganglion. It is a complication of surgery involving the parotid gland whereby injury to these branches, which innervate the parotid gland and sweat glands of the face respectively, form abnormal connections. Salivation leads to perspiration and flushing of the pre-auricular region and is called 'gustatory sweating'.
Additional images
References
External links
(, )
Autonomic ganglia of the head and neck
Parasympathetic ganglia
Glossopharyngeal nerve
Otorhinolaryngology
Nerves of the head and neck
Neurology
Nervous system | Otic ganglion | [
"Biology"
] | 671 | [
"Organ systems",
"Nervous system"
] |
2,180,123 | https://en.wikipedia.org/wiki/Bargain%20bin | A bargain bin refers to an assorted selection of merchandise, particularly clothing, tools and optical discs, which have been discounted in price. Reasons for the discount can range from the closure of a production company to a steep decline in an item's popularity in the aftermath of a fad or scandal. Another reason for the discount can be the particular product line being discontinued. The origin of the term comes from the fact such items would be found in an isolated bin rather than on store shelves.
The phrase "bargain basement" is now a synonym. "Bargain basement" used to be a literal basement in downtown department stores. Clearance merchandise would be placed there regardless of which section of the store it originally came from. Chicago's Marshall Field's had begun selling discounted stock from its cellar level before 1910, and many retailers followed suit. The country's first bargain counter was operated by Hutzler's at its Baltimore location. "Hutzler's Downstairs", an outlet for discounted merchandise opened in the basement of Hutzler's Baltimore department store in September 1929.
"Bargain Basement" was also used figuratively to describe the purchase of professional footballers from clubs in lower divisions by richer clubs for a low fee. In more recent years, both terms have taken on a figurative meaning – most notably as a synonym for 'low-quality' or 'unimpressive'. The most common implication of an item's presence in a bargain bin is its low quality. "Knock-offs", for example, are associated with the term.
The term "bargain bin" is used in New Zealand, Australia, and some other countries to refer to a retailer whose primary function is to sell cheap goods (i.e., a variety store).
See also
Inferior good
Variety store
Filene's Basement
Shovelware
Mockbuster
References
Retail store elements
Merchandising
Sales and clearances | Bargain bin | [
"Technology"
] | 388 | [
"Components",
"Retail store elements"
] |
2,180,259 | https://en.wikipedia.org/wiki/World%20Wide%20Port%20Name | In computing, a World Wide Port Name, WWPN, or WWpN, is a World Wide Name assigned to a port in a Fibre Channel fabric. Used on storage area networks (SANs), it performs a function equivalent to the MAC address in the Ethernet protocol, as it is supposed to be a unique identifier in the network. Each port on a storage device has a unique and persistent WWPN.
A World Wide Node Name, WWNN, or WWnN, is a World Wide Name assigned to a node (an endpoint, a device) in a Fibre Channel fabric. It is valid for the same WWNN to be seen on many different ports (different addresses) on the network, identifying the ports as multiple network interfaces of a single network node.
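Like MAC addresses, WWPNs are conventionally written as colon-separated hexadecimal bytes; a standard World Wide Name is 64 bits, i.e. 16 hex digits. A minimal normalization sketch (Python; illustrative, not taken from any particular storage toolkit):

```python
import re

def normalize_wwpn(wwpn: str) -> str:
    """Return a WWPN in canonical aa:bb:cc:dd:ee:ff:gg:hh form.

    A standard World Wide Name is 64 bits = 16 hex digits; separators
    (colons, dashes) and letter case vary between vendors and tools.
    """
    digits = re.sub(r"[:\-\s]", "", wwpn).lower()
    if not re.fullmatch(r"[0-9a-f]{16}", digits):
        raise ValueError(f"not a 64-bit WWPN: {wwpn!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwpn("50-06-01-60-90-20-1E-85"))  # 50:06:01:60:90:20:1e:85
```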
External links
Locating the WWPN for a Linux host
Fibre Channel
Identifiers | World Wide Port Name | [
"Technology"
] | 211 | [
"Computing stubs",
"Computer network stubs"
] |
2,180,274 | https://en.wikipedia.org/wiki/Chaining | Chaining is a type of intervention that aims to create associations between behaviors in a behavior chain. A behavior chain is a sequence of behaviors that happen in a particular order where the outcome of the previous step in the chain serves as a signal to begin the next step in the chain. In terms of behavior analysis, a behavior chain is begun with a discriminative stimulus (SD) which sets the occasion for a behavior, the outcome of that behavior serves as a reinforcer for completing the previous step and as another SD to complete the next step. This sequence repeats itself until the last step in the chain is completed and a terminal reinforcer (the outcome of a behavior chain, i.e. with brushing one's teeth the terminal reinforcer is having clean teeth) is achieved. For example, the chain in brushing one's teeth starts with seeing the toothbrush, this sets the occasion to get toothpaste, which then leads to putting it on one's brush, brushing the sides and front of mouth, spitting out the toothpaste, rinsing one's mouth, and finally putting away one's toothbrush. To outline behavior chains, as done in the example, a task analysis is used.
Chaining is used to teach complex behaviors made of behavior chains that the current learner does not have in their repertoire. Various steps of the chain can be in the learner’s repertoire, but the steps the learner doesn’t know how to do have to be in the category of can’t do instead of won’t do (issue with knowing the skill not an issue of compliance). There are three different types of chaining that can be used and they are forward chaining, backward chaining, and total task chaining (not to be confused with a task analysis).
Forward chaining
Forward chaining is a procedure where a behavior chain is learned and completed by teaching the steps in chronological order using prompting and fading. The teacher teaches the first step by presenting a distinctive stimulus to the learner. Once they complete the first step in the chain, the teacher then prompts them through the remaining steps in the chain. Once the learner is consistently completing the first step without prompting, the teacher has them complete the first and second step then prompts the learner through the remaining steps and so on until the learner is able to complete the entire chain independently. Reinforcement is delivered for completion of the step, although they do not attain the terminal reinforcer (outcome of the behavior chain) until they are prompted through the remaining steps.
Backward chaining
Backward chaining is the same process as forward chaining but starts with the last step. Backward chaining is the procedure that is typically used for people with limited abilities. This process uses prompting and fading techniques to teach the last step first. The biggest benefit of using a backward chain is that the learner receives the terminal reinforcer (the outcome of the behavior chain) naturally. Backward chaining is the preferred method when teaching skills to individuals with severe delays because they complete the last step and see the direct outcome of the chain immediately rather than having to be prompted through the remaining steps to receive that reinforcement.
The teacher begins by prompting the learner through the entire chain, starting with the last behavior. The teacher repeats this until the learner can perform the last step without prompting upon the distinctive stimulus being presented. Once the learner can complete the last step consistently, the second to last step is taught while continuing the prompts for the other steps. The teacher repeats this procedure of teaching the next step while prompting the remaining ones until the learner can perform (or achieve) all the steps without prompting.
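The difference between the two procedures is simply the order in which the steps of the task analysis are brought under the learner's independent control. A minimal sketch (Python; illustrative only, with mastery criteria and prompting greatly simplified):

```python
def chaining_sessions(task_analysis, method="forward", trials_to_mastery=3):
    """Illustrative schedule of which step is taught (vs. prompted) per trial.

    Forward chaining teaches the first step first; backward chaining teaches
    the last step first, so the learner always produces the terminal
    reinforcer. All not-yet-taught steps are prompted by the teacher.
    """
    if method == "forward":
        order = list(task_analysis)
    elif method == "backward":
        order = list(reversed(task_analysis))
    else:
        raise ValueError(method)
    # Repeat each target step for a fixed number of trials before moving on.
    return [step for step in order for _ in range(trials_to_mastery)]

steps = ["get toothbrush", "apply toothpaste", "brush", "rinse", "put away"]
print(chaining_sessions(steps, "forward")[:3])   # teach 'get toothbrush' first
print(chaining_sessions(steps, "backward")[:3])  # teach 'put away' first
```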
References
Bancroft, S. L., Weiss, J. S., Libby, M. E., & Ahearn, W. H. (2011). A comparison of procedural variations in teaching behavior chains: manual guidance, trainer completion, and no completion of untrained steps. Journal of applied behavior analysis, 44(3), 559–569.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2014). Applied behavior analysis. (434-452). Harlow: Pearson Education.
Slocum, S. K., & Tiger, J. H. (2011). An assessment of the efficiency of and child preference for forward and backward chaining. Journal of Applied Behavior Analysis, 44(4), 793–805.
Behavioral concepts
Behaviorism | Chaining | [
"Biology"
] | 924 | [
"Behavior",
"Behavioral concepts",
"Behaviorism"
] |
2,180,364 | https://en.wikipedia.org/wiki/Finding%20%28jewelcrafting%29 | Jewellery findings are the parts used to join jewellery components together to form a completed article.
List of findings
Clasps to complete necklaces and bracelets
Earwires to link an earring to the wearer's ear
Ring blanks for making finger rings
Bails, metal loops, and jump rings, for completing jewellery. Jump rings can be used by themselves for chains
Pin stems and brooch assemblies
Tuxedo stud findings, letters of the alphabet, cluster settings, metal beads and balls
Plastic, fabric or metal stringing material for threading beads
Findings are available in all the jewellery metals—sterling silver, plated silver, gold, niobium, titanium, aluminium, and copper.
References
See also
Finding Glossary
Jewellery components | Finding (jewelcrafting) | [
"Technology",
"Engineering"
] | 149 | [
"Design stubs",
"Design",
"Jewellery components",
"Components"
] |
2,180,385 | https://en.wikipedia.org/wiki/Oechsle%20scale | The Oechsle scale is a hydrometer scale measuring the density of grape must, which is an indication of grape ripeness and sugar content used in wine-making. It is named for Ferdinand Oechsle (1774–1852) and it is widely used in the German, Swiss and Luxembourgish wine-making industries. On the Oechsle scale, one degree Oechsle (°Oe) corresponds to one gram of the difference between the mass of one litre of must at 20 °C and 1 kg (the mass of 1 litre of water). For example, must with a specific mass of 1084 grams per litre has 84 °Oe.
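Equivalently, the Oechsle degree is simply the must density in grams per litre minus 1000. A minimal sketch (Python; the function name is illustrative, and the °KMW line uses the approximate 1 °KMW ≈ 5 °Oe relation discussed under "Other scales" below):

```python
def oechsle_from_density(density_g_per_l: float) -> float:
    """Degrees Oechsle: mass of one litre of must (g, at 20 C) minus 1000 g."""
    return density_g_per_l - 1000.0

print(oechsle_from_density(1084))      # 84.0 degrees Oechsle
print(oechsle_from_density(1084) / 5)  # ~16.8 degrees KMW (1 KMW ~ 5 Oe)
```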
Overview
The mass difference between equivalent volumes of must and water is almost entirely due to the dissolved sugar in the must. Since the alcohol in wine is produced by fermentation of the sugar, the Oechsle scale is used to predict the maximal possible alcohol content of the finished wine. This measure is commonly used to select when to harvest grapes. In the vineyard, the must density is usually measured by using a refractometer by crushing a few grapes between the fingers and letting the must drip onto the glass prism of the refractometer. In countries using the Oechsle scale, the refractometer will be calibrated in Oechsle degrees, but this is an indirect reading, as the refractometer actually measures the refractive index of the grape must, and translates it into Oechsle or different wine must scales, based on their relationship with refractive index.
Wine classification
The Oechsle scale forms the basis of most of the German wine classification. In the highest quality category, Prädikatswein (formerly known as Qualitätswein mit Prädikat, QmP), the wine is assigned a Prädikat based on the Oechsle reading of the must. The regulations set out minimum Oechsle readings for each Prädikat, which depend on wine-growing regions and grape variety:
Kabinett – 70–85 °Oe
Spätlese – 76–95 °Oe
Auslese – 83–105 °Oe
Beerenauslese and Eiswein – 110–128 °Oe (Eiswein is made by late harvesting grapes after they have frozen on the vine and not necessarily affected by noble rot, botrytis, which is the case with Beerenauslese)
Trockenbeerenauslese – 150–154 °Oe (affected by botrytis)
The sugar content indicated by the Oechsle scale only refers to the unfermented grape must, never to the finished wine.
Other scales
In Austria the Klosterneuburger Mostwaage (KMW) scale is used. The scale is divided into Klosterneuburger Zuckergrade (°KMW) and is very similar to the Oechsle scale (1 °KMW ≈ 5 °Oe). However, the KMW measures the exact sugar content of the must.
The Baumé scale is occasionally used in France and by U.S. brewers, and in the New World the Brix scale is used to describe the readings of a refractometer when measuring the sugar content of a given sample.
Since a refractometer actually measures the refractive index of the grape must, it can be translated to many different scales (both related and unrelated to wine) based on their correlation to refractive index. Thus, all of these methods are similar and the differences are more cultural than significant, but all are equally valid ways to measure the density of grape must and other sugar-based liquids.
The Normalizovaný Moštomer (°NM) measures kilograms of sugar in 100 L of must and is used in the Czech Republic and Slovakia.
See also
Sweetness of wine
Ripeness in viticulture
References
Oenology
German wine
Units of density | Oechsle scale | [
"Physics",
"Mathematics"
] | 791 | [
"Physical quantities",
"Units of density",
"Quantity",
"Density",
"Units of measurement"
] |
9,322,698 | https://en.wikipedia.org/wiki/Agaricus%20subrutilescens | Agaricus subrutilescens, also known as the wine-colored agaricus, is a mushroom of the genus Agaricus. It was first described scientifically in 1925 as Psalliota subrutilescens, and later transferred to Agaricus in 1938.
Description
Agaricus subrutilescens has a cap that is across, dry, and has many wine to brown colored fibrils, especially near the center. The gills are close and white at first, turning pinkish and then dark brown in age. The stalk has a skirt-like ring and is long, thick, white, and covered with soft woolly scales below the ring. The flesh is white and does not stain, and the odor and taste are mild.
The purplish fibrous cap and shaggy white stem differentiate this mushroom from others which resemble it. Similar species include Agaricus hondensis and Agaricus moelleri.
This mushroom is variously described as edible, inedible, or responsible for causing gastric upset.
Habitat and distribution
The mushroom fruits in undisturbed mixed woods in Western North America and Japan. It grows by itself or scattered in small clusters, often under redwood, pine, or alder. Recently this mushroom has been identified in New Zealand and Australia.
See also
List of Agaricus species
References
subrutilescens
Fungi described in 1925
Fungi of Asia
Fungi of North America
Fungus species | Agaricus subrutilescens | [
"Biology"
] | 294 | [
"Fungi",
"Fungus species"
] |
9,322,713 | https://en.wikipedia.org/wiki/Indicators%20of%20spatial%20association | Indicators of spatial association are statistics that evaluate the existence of clusters in the spatial arrangement of a given variable. For instance, if we are studying cancer rates among census tracts in a given city, local clusters in the rates mean that there are areas with higher or lower rates than would be expected by chance alone; that is, the values occurring are above or below those of a random distribution in space.
Global indicators
Notable global indicators of spatial association include:
Global Moran's I: The most commonly used measure of global spatial autocorrelation, or the overall clustering of the spatial data, developed by Patrick Alfred Pierce Moran; a minimal computation sketch follows this list.
Geary's C (Geary's Contiguity Ratio): A measure of global spatial autocorrelation developed by Roy C. Geary in 1954. It is inversely related to Moran's I, but more sensitive to local autocorrelation than Moran's I.
Getis–Ord G (Getis–Ord global G, General G-Statistic): Introduced by Arthur Getis and J. Keith Ord in 1992 to supplement Moran's I.
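Global Moran's I for observations x₁,…,xₙ with spatial weights wᵢⱼ is I = (n / ΣᵢΣⱼ wᵢⱼ) · ΣᵢΣⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)². The NumPy sketch below is illustrative (production analyses typically use dedicated packages such as PySAL's esda):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I.

    x : 1-D array of n observations
    w : n-by-n spatial weights matrix (w[i, j] > 0 if i and j are neighbours)
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()
    return len(x) / w.sum() * num / (z ** 2).sum()

# Four areas on a line with rook-style neighbours; high values cluster.
x = [10, 9, 2, 1]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(morans_i(x, w))  # positive: similar values are neighbours
```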
Local indicators
Notable local indicators of spatial association (LISA) include:
Local Moran's I: Derived from Global Moran's I, it was introduced by Luc Anselin in 1995 and can be computed using GeoDa.
Getis–Ord Gi (local Gi): Developed by Getis and Ord based on their global G.
INDICATE's IN: Originally developed to assess the spatial distribution of stars; it can be computed for any discrete 2+D dataset using the Python-based INDICATE tool available from GitHub.
See also
Spatial analysis
Tobler's first law of geography
References
Further reading
Spatial analysis | Indicators of spatial association | [
"Physics"
] | 355 | [
"Spacetime",
"Space",
"Spatial analysis"
] |
9,324,744 | https://en.wikipedia.org/wiki/Domus%20Academy | Domus Academy is a private school of design in Milan, Italy. It offers undergraduate and postgraduate degree courses in fashion, industrial design, design management, business and user experience design, product and interior design, design innovation, fashion and luxury brand management.
Domus Academy was founded in 1982 by the Mazzocchi family, owners of Editoriale Domus, which publishes Domus and Quattroruote magazines. Maria Grazia Mazzocchi was president of the school. Gianfranco Ferré was on the staff from 1983 to 1989, and Andrea Branzi was cultural director for the first ten years. In 2009 the school was bought by Laureate Education of Baltimore, Maryland for an estimated ten million euros. In 2018 Laureate sold it to the French group Galileo Global Education.
Domus Academy's academic offering includes bachelor of arts courses (180 ECTS credits), 1-year master's courses (60 or 90 ECTS credits) and 2-year master of arts courses (120 ECTS credits). By successfully completing one of the 1-year Master's courses, students can obtain the title of Academic Master (60 ECTS credits), recognised in Europe and worldwide and accredited by the Italian Ministry of Education, University and Research (MIUR), or the title of Dual Award Master (90 ECTS credits). The Academic Master's degree is issued by NABA, Nuova Accademia di Belle Arti, which is on the list of institutions authorised by the MIUR to issue Higher Education in Art, Music and Dance. The Dual Award Master allows students to obtain an Academic Master recognised by the MIUR and, in addition, a Master of Arts officially recognised by the British system (Privy Council) and issued by Regent's University London.
References
Art schools in Italy
Fashion schools
Design schools in Italy
Communication design
Education in Milan
Higher education in Italy
Educational institutions established in 1982
1982 establishments in Italy
Fashion in Milan | Domus Academy | [
"Engineering"
] | 389 | [
"Design",
"Communication design"
] |
9,324,893 | https://en.wikipedia.org/wiki/Alphenal | Alphenal, also known as 5-allyl-5-phenylbarbituric acid, is a barbiturate derivative developed in the 1920s. It has primarily anticonvulsant properties and was used occasionally for the treatment of epilepsy or convulsions, although not as commonly as better known barbiturates such as phenobarbital.
LD50: Mouse (Oral): 280 mg/kg
References
Barbiturates
Hypnotics
Allyl compounds
Phenyl compounds | Alphenal | [
"Biology"
] | 109 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
9,326,241 | https://en.wikipedia.org/wiki/Wanzlick%20equilibrium | The Wanzlick equilibrium is a chemical equilibrium between a relatively stable carbene compound and its dimer. The equilibrium was proposed to apply to certain electron-rich alkenes, such as tetraaminoethylenes, which have been called "carbene dimers." Such equilibria do occur, but the mechanism is not a simple dissociation; it requires catalysts.
Original conjecture
In 1960, Hans-Werner Wanzlick and E. Schikora proposed that carbenes derived from dihydroimidazol-2-ylidene were generated by vacuum pyrolysis of 2-trichloromethyl dihydroimidazole derivatives, with the loss of chloroform.
Wanzlick and Schikora believed that once prepared these carbenes existed in an unfavourable equilibrium with their corresponding dimers. This assertion was based on reactivity studies which they believed showed that the free carbene reacted with electrophiles (E-X). The dimer (a substituted tetraaminoethylene) was believed to be inactive to the electrophiles (E-X), and thought to merely act as a stable carbene reservoir.
Conjecture challenged
Wanzlick's hypothesis of a carbene-dimer equilibrium was tested by David M. Lemal and others. Heating mixtures of tetraaminoethylene derivatives did not produce mixed dimers.
This result indicates that a 'carbene-dimer equilibrium' does not occur for these dihydroimidazol-2-ylidene derivatives.
Lemal proposed that Wanzlick's observations could be explained by acid-catalyzed reactions.
Lemal proposed that the electrophile converts the tetraaminoethylenes into cationic species. He proposed that this cation then dissociated into the free carbene plus the resultant salt. The free carbene could then either re-dimerise, regenerating the tetraaminoethylene starting material, or react with E-X (as Wanzlick originally predicted), with either route eventually giving the same reaction product, the dihydroimidazolium salt.
Conjecture confirmed
In 1999 Michael K. Denk reinvestigated the cross-over experiments that supported the Wanzlick equilibrium. This report prompted Lemal to repeat his 1964 experiments. Denk's findings were confirmed only with deuterated tetrahydrofuran (THF) as a solvent. With toluene and added KH as an electrophile quencher, however, the cross-over product was again not observed.
In 1999 Lemal and others investigated an equilibrium between a dibenzotetraazafulvalene derivative and its carbene. These studies led Böhm & Herrmann to conclude in 2000 that "the Wanzlick equilibrium between a tetraaminoethylene and its corresponding carbene did exist". This notion was confirmed in 2010 by Kirmse.
Others subsequently showed that unhindered diaminocarbenes form dimers by acid-catalysed dimerisation, as in the mechanism proposed by Lemal.
Sublimation experiments with carbene dimers and their protonated derivatives quantify acid catalysis and corroborate that tetraaminoolefins may dissociate without adventitious protons. Acid catalysis is however required for the dissociation of triaminoolefins (NHC-CAAC dimers).
References
Carbenes | Wanzlick equilibrium | [
"Chemistry"
] | 742 | [
"Organic compounds",
"Carbenes",
"Inorganic compounds"
] |
9,326,244 | https://en.wikipedia.org/wiki/Kantar | A kantar is the official Egyptian weight unit for measuring cotton. It corresponds to the US hundredweight, and is roughly equal to 99.05 pounds, or 44.93 kilograms. It is equal to either 157 kilograms of seed cotton or 50 kilograms of lint cotton.
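As a quick illustration of the arithmetic, the sketch below converts a quantity of cotton in kantars to pounds and kilograms using the equivalences quoted above; the constant and function names are invented for the example:

```python
KANTAR_LB = 99.05  # pounds per kantar (approximate)
KANTAR_KG = 44.93  # kilograms per kantar (approximate)

def kantar_to_mass(kantars):
    """Convert kantars of cotton to (pounds, kilograms)."""
    return kantars * KANTAR_LB, kantars * KANTAR_KG

lb, kg = kantar_to_mass(10)
print(f"10 kantars is about {lb:.1f} lb, or {kg:.1f} kg")
```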
References
Units of mass | Kantar | [
"Physics",
"Mathematics"
] | 61 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
9,326,517 | https://en.wikipedia.org/wiki/List%20of%20bombings%20during%20the%20Iraq%20War | Bombings were a regular occurrence during the Iraq War. They resulted in tens of thousands of casualties throughout the country, killing and wounding civilians and combatants alike. Many Iraqi insurgents favoured the tactic of suicide bombing, which was used on an unprecedented scale against the American-led Multi-National Force – Iraq (MNF–I). Additionally, during the 2003 invasion of Iraq, the United States and the United Kingdom collectively dropped 29,199 bombs on the country. This article does not list these aerial attacks, and instead concentrates on the smaller number of direct insurgent bombings during the sectarian conflict, when Shia Muslims and Sunni Muslims fought both each other and the MNF–I.
Most of the organized bombings were carried out by Sunni insurgents affiliated with Jama'at al-Tawhid wal Jihad, Al-Qaeda in Iraq, Jama'at Ansar al-Sunna, and the Islamic State of Iraq, among others. The main targets of these bombings were MNF–I troops and private military contractors, as well as local Iraqi collaborators. Upon the outbreak of the Iraqi civil war in 2006, the various Sunni and Shia militant groups fighting in the country had effectively shifted their focus away from the MNF–I and began increasingly targeting Iraqi civilians on the basis of their sectarian affiliation.
Perpetrators
A 2005 report by Human Rights Watch analyzed the Iraqi insurgency and highlighted: "The groups that are most responsible for the abuse, namely al-Qaeda in Iraq and its allies, Ansar al-Sunna and the Islamic State of Iraq, have all targeted civilians for abductions and executions. The first two groups have repeatedly boasted about massive car bombs and suicide bombs in mosques, markets, bus stations and other civilian areas. Such acts are war crimes and in some cases may constitute crimes against humanity, which are defined as serious crimes committed as part of a widespread or systematic attack against a civilian population."
Analysis
A 2008 research brief by the RAND Corporation on the subject of counter-insurgency tactics in Iraq between 2003 and 2006 depicts a chart that shows that in June and July 2004, Iraqi insurgents began to shift their focus away from attacking coalition forces with roadside bombs and instead began targeting the Iraqi population with suicide bombers and vehicle-borne IEDs. By increasing the number of suicide bombings against civilians and accepting their targeting in retribution, the insurgents sought to expose the weakness of the coalition's and Iraqi government's security and reconstruction apparatus, threaten those who collaborated with the government, generate funds and propaganda, and increasingly enact sectarian revenge. The U.S. failure to adapt to this shift had dramatic consequences. By June 2004, U.S. deaths represented less than 10% of overall deaths on the battlefield and Iraqi deaths represented more than 90%—a figure that remained constant for the next 18 months of the war.
An analysis by Iraq Body Count and co-authors published in 2011 concluded that at least 12,284 civilians were killed in at least 1,003 suicide bombings in Iraq between 2003 and 2010. The study reveals that suicide bombings killed 60 times as many civilians as they did soldiers.
Bombings
This article lists all major bombings of the Iraq War, which took place between 2003 and 2011. For bombings that occurred following the withdrawal of U.S. troops from the country, see: List of bombings during the Iraqi insurgency (2011–2013).
See also
Terrorist incidents in Iraq in 2021
Terrorist incidents in Iraq in 2020
Terrorist incidents in Iraq in 2017
Terrorist incidents in Iraq in 2016
Terrorist incidents in Iraq in 2015
Terrorist incidents in Iraq in 2014
Terrorist incidents in Iraq in 2013
Terrorist incidents in Iraq in 2012
References
External links
List of some attacks 2003-2005
Baghdad: Mapping the violence (BBC)
Project Iraq Body Count
bombings
Bombings
Lists of explosions | List of bombings during the Iraq War | [
"Chemistry"
] | 761 | [
"Lists of explosions",
"Explosions"
] |
9,328,198 | https://en.wikipedia.org/wiki/Opus%20spicatum | Opus spicatum, literally "spiked work," is a type of masonry construction used in Roman and medieval times. It consists of bricks, tiles or cut stone laid in a herringbone pattern.
Uses
Its usage was generally decorative and most commonly it served as a pavement, though it was also used as an infill pattern in walls, as in the striking base of the causeway leading up to the gate tower at Tamworth Castle. Unless the elements run horizontally and vertically, it is inherently weak, since the oblique angles of the elements tend to spread the pattern horizontally under compression.
The type of construction was constantly employed in Roman, Byzantine and Romanesque work, and in the latter was regarded as a test of very early date. It is frequently found in the Byzantine walls in Asia Minor, and in Byzantine churches was employed decoratively to give variety to the wall surface. Sometimes the diagonal courses are reversed one above the other.
The herringbone pattern produces opposing shear plane faces, increasing the relative surface area and therefore rendering it a sounder design for mortar and brick.
Firebacks
Herringbone work, particularly in stone, is also used to make firebacks in stone hearths. Acidic flue gases tend to corrode lime mortar, so a finely set herringbone could remain intact with a minimum of mortar used. Usk Castle has several fine examples.
Examples
The herringbone method was used by Filippo Brunelleschi in constructing the dome of the Cathedral of Florence (Santa Maria del Fiore).
Examples in France exist in the churches at Querqueville in Normandy and St Christophe at Suèvres, both dating from the 10th century, and in England herring-bone masonry is found in the walls of castles, such as at Guildford, Colchester and Tamworth, as well as Usk Castle in Wales.
Herringbone brickwork was also a feature of Gothic Revival architecture.
See also
References
Opus caementicium roman walls
Bricks
Building stone
Pavements
Architectural elements
Ancient Roman construction techniques | Opus spicatum | [
"Technology",
"Engineering"
] | 399 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
9,328,373 | https://en.wikipedia.org/wiki/Drop%20out%20ink | Drop out ink is ink specifically colored to avoid reading in high-speed OCR scanners. It is often a pastel yellow, red or orange.
The purpose for dropping out specific colors is to allow the OCR scanner to ignore those colors and operate only on the foreground information.
Drop out ink is often used in the finance industry for automated paper invoice processing.
Drop out ink is not the same as inks that have been screened down.
References
Inks | Drop out ink | [
"Technology"
] | 97 | [
"Computing stubs"
] |
9,328,562 | https://en.wikipedia.org/wiki/Boltzmann%27s%20entropy%20formula | In statistical mechanics, Boltzmann's equation (also known as the Boltzmann–Planck equation) is a probability equation relating the entropy $S$, also written as $S_\mathrm{B}$, of an ideal gas to the multiplicity (commonly denoted as $\Omega$ or $W$), the number of real microstates corresponding to the gas's macrostate:
$$S = k_\mathrm{B} \ln W$$
where $k_\mathrm{B}$ is the Boltzmann constant (also written as simply $k$) and equal to 1.380649 × 10^−23 J/K, and $\ln$ is the natural logarithm function (log base $e$).
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged.
History
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such are scarcely observable. The present account concerns instantaneous microstates.
The value of was originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules.
There are many instantaneous microstates that apply to a given macrostate. Boltzmann considered collections of such microstates. For a given macrostate, he called the collection of all possible instantaneous microstates of a certain kind by the name monode, for which Gibbs' term ensemble is used nowadays. For single particle instantaneous microstates, Boltzmann called the collection an ergode. Subsequently, Gibbs called it a microcanonical ensemble, and this name is widely used today, perhaps partly because Bohr was more interested in the writings of Gibbs than of Boltzmann.
Interpreted in this way, Boltzmann's formula is the most basic formula for the thermodynamic entropy. Boltzmann's paradigm was an ideal gas of $N$ identical particles, of which $N_i$ are in the $i$-th microscopic condition (range) of position and momentum. For this case, the probability of each microstate of the system is equal, so it was equivalent for Boltzmann to calculate the number of microstates associated with a macrostate. $W$ was historically misinterpreted as literally meaning the number of microstates, and that is what it usually means today. $W$ can be counted using the formula for permutations
$$W = \frac{N!}{\prod_i N_i!}$$
where $i$ ranges over all possible molecular conditions and $!$ denotes factorial. The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. $W$ is sometimes called the "thermodynamic probability" since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.
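As a worked illustration (a toy calculation, not drawn from the article's sources), the sketch below evaluates the permutation formula for a small distribution of particles over conditions and the corresponding Boltzmann entropy:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def multiplicity(occupations):
    """W = N! / prod_i(N_i!) for particles distributed over conditions."""
    n = sum(occupations)
    w = math.factorial(n)
    for n_i in occupations:
        w //= math.factorial(n_i)  # stays an exact integer at each step
    return w

def boltzmann_entropy(occupations):
    """S = k_B ln W for the given occupation numbers."""
    return K_B * math.log(multiplicity(occupations))

# Ten particles: 5 in condition 0, 3 in condition 1, 2 in condition 2.
print(multiplicity([5, 3, 2]))       # 2520 microstates
print(boltzmann_entropy([5, 3, 2]))  # about 1.08e-22 J/K
```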
Introduction of the natural logarithm
In Boltzmann’s 1877 paper, he clarifies how molecular states are counted to determine the state distribution number, introducing the logarithm to simplify the equation.
Boltzmann writes:
“The first task is to determine the permutation number, previously designated by 𝒫, for any state distribution. Denoting by J the sum of the permutations 𝒫 for all possible state distributions, the quotient 𝒫/J is the state distribution’s probability, henceforth denoted by W. We would first like to calculate the permutations 𝒫 for the state distribution characterized by w0 molecules with kinetic energy 0, w1 molecules with kinetic energy ϵ, etc. …
“The most likely state distribution will be for those w0, w1 … values for which 𝒫 is a maximum or since the numerator is a constant, for which the denominator is a minimum. The values w0, w1 must simultaneously satisfy the two constraints (1) and (2). Since the denominator of 𝒫 is a product, it is easiest to determine the minimum of its logarithm, …”
Therefore, by making the denominator small, he maximizes the number of states. So to simplify the product of the factorials, he uses their natural logarithm to add them. This is the reason for the natural logarithm in Boltzmann’s entropy formula.
Generalization
Boltzmann's formula applies to microstates of a system, each possible microstate of which is presumed to be equally probable.
But in thermodynamics, the universe is divided into a system of interest, plus its surroundings; then the entropy of Boltzmann's microscopically specified system can be identified with the system entropy in classical thermodynamics. The microstates of such a thermodynamic system are not equally probable—for example, high energy microstates are less probable than low energy microstates for a thermodynamic system kept at a fixed temperature by allowing contact with a heat bath.
For thermodynamic systems where microstates of the system may not have equal probabilities, the appropriate generalization, called the Gibbs entropy, is:
$$S = -k_\mathrm{B} \sum_i p_i \ln p_i$$
This reduces to the Boltzmann formula $S = k_\mathrm{B} \ln W$ if the probabilities $p_i$ are all equal to $1/W$.
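A short numerical check of this reduction (a sketch with made-up probabilities): when every $p_i$ equals $1/W$, the Gibbs formula returns exactly $k_\mathrm{B} \ln W$, and any unequal distribution gives a smaller value.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B sum_i p_i ln p_i (zero-probability states contribute nothing)."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

W = 8
print(gibbs_entropy([1.0 / W] * W))  # equal probabilities...
print(K_B * math.log(W))             # ...match the Boltzmann formula

skewed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01]
print(gibbs_entropy(skewed))         # smaller than the equal-probability case
```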
Boltzmann used a $\rho \ln \rho$ formula as early as 1866. He interpreted $\rho$ as a density in phase space—without mentioning probability—but since this satisfies the axiomatic definition of a probability measure we can retrospectively interpret it as a probability anyway. Gibbs gave an explicitly probabilistic interpretation in 1878.
Boltzmann himself used an expression equivalent to the Gibbs entropy in his later work and recognized it as more general than his own formula. That is, the Boltzmann formula is a corollary of the Gibbs entropy—and not vice versa. In every situation where the Boltzmann formula is valid, the Gibbs entropy formula is valid also—and not vice versa.
Boltzmann entropy excludes statistical dependencies
The term Boltzmann entropy is also sometimes used to indicate entropies calculated based on the approximation that the overall probability can be factored into an identical separate term for each particle—i.e., assuming each particle has an identical independent probability distribution, and ignoring interactions and correlations between the particles. This is exact for an ideal gas of identical particles that move independently apart from instantaneous collisions, and is an approximation, possibly a poor one, for other systems.
The Boltzmann entropy is obtained if one assumes one can treat all the component particles of a thermodynamic system as statistically independent. The probability distribution of the system as a whole then factorises into the product of N separate identical terms, one term for each particle; and when the summation is taken over each possible state in the 6-dimensional phase space of a single particle (rather than the 6N-dimensional phase space of the system as a whole), the Gibbs entropy simplifies to the Boltzmann entropy $S_\mathrm{B} = -N k_\mathrm{B} \sum_i p_i \ln p_i$.
This reflects the original statistical entropy function introduced by Ludwig Boltzmann in 1872. For the special case of an ideal gas it exactly corresponds to the proper thermodynamic entropy.
For anything but the most dilute of real gases, $S_\mathrm{B}$ leads to increasingly wrong predictions of entropies and physical behaviours, by ignoring the interactions and correlations between different molecules. Instead one must consider the ensemble of states of the system as a whole, called by Boltzmann a holode, rather than single particle states. Gibbs considered several such kinds of ensembles; relevant here is the canonical one.
See also
History of entropy
Gibbs entropy
nat (unit)
Shannon entropy
von Neumann entropy
References
External links
Introduction to Boltzmann's Equation
Vorlesungen über Gastheorie, Ludwig Boltzmann (1896) vol. I, J.A. Barth, Leipzig
Vorlesungen über Gastheorie, Ludwig Boltzmann (1898) vol. II. J.A. Barth, Leipzig.
Eponymous equations of physics
Thermodynamic entropy
Thermodynamic equations
Ludwig Boltzmann | Boltzmann's entropy formula | [
"Physics",
"Chemistry"
] | 1,807 | [
"Thermodynamic equations",
"Equations of physics",
"Physical quantities",
"Eponymous equations of physics",
"Thermodynamic entropy",
"Entropy",
"Thermodynamics",
"Statistical mechanics"
] |
9,328,589 | https://en.wikipedia.org/wiki/Cryogenic%20grinding | Cryogenic grinding, also known as freezer milling, freezer grinding, and cryomilling, is the act of cooling or chilling a material and then reducing it into a small particle size. For example, thermoplastics are difficult to grind to small particle sizes at ambient temperatures because they soften, adhere in lumpy masses and clog screens. When chilled by dry ice, liquid carbon dioxide or liquid nitrogen, the thermoplastics can be finely ground to powders suitable for electrostatic spraying and other powder processes. Cryogenic grinding of plant and animal tissue is a technique used by microbiologists. Samples that require extraction of nucleic acids must be kept at −80 °C or lower during the entire extraction process. For samples that are soft or flexible at room temperature, cryogenic grinding may be the only viable technique for processing samples. A number of recent studies report on the processing and behavior of nanostructured materials via cryomilling.
Freezer milling
Freezer milling is a type of cryogenic milling that uses a solenoid to mill samples. The solenoid moves the grinding media back and forth inside the vial, grinding the sample down to analytical fineness. This type of milling is especially useful for temperature-sensitive samples, as samples are milled at liquid nitrogen temperatures. The idea behind using a solenoid is that the only "moving part" in the system is the grinding media inside the vial; at liquid nitrogen temperature (−196 °C), any other moving part would come under huge stress, leading to potentially poor reliability. Cryogenic milling using a solenoid has been used for over 50 years and has proven to be a very reliable method of processing temperature-sensitive samples in the laboratory.
Cryomilling
Cryomilling is a variation of mechanical milling, in which metallic powders or other samples (e.g. temperature-sensitive samples and samples with volatile components) are milled in a cryogen (usually liquid nitrogen or liquid argon) slurry or at cryogenic temperature under controlled processing parameters, so that a nanostructured microstructure is attained. Cryomilling takes advantage of both the cryogenic temperatures and conventional mechanical milling. The extremely low milling temperature suppresses recovery and recrystallization and leads to finer grain structures and more rapid grain refinement. The embrittlement of the sample makes even elastic and soft samples grindable. Tolerances less than 5 μm can be achieved. The ground material can be analyzed by a laboratory analyzer.
Applications in biology
Cryogenic grinding (or "cryogrinding") is a method of cell disruption employed by molecular life scientists to obtain broken cell material with favorable properties for protein extraction and affinity capture. Once ground, the fine powder consisting of broken cells (or "grindate") can be stored for long periods at –80°C without obvious changes to biochemical properties – making it a very convenient source material in e.g. proteomic studies including affinity capture / mass spectrometry.
References
Cryogenics
Microbiology techniques
Grinding and lapping
Plastics industry | Cryogenic grinding | [
"Physics",
"Chemistry",
"Biology"
] | 632 | [
"Microbiology techniques",
"Applied and interdisciplinary physics",
"Cryogenics"
] |
9,328,646 | https://en.wikipedia.org/wiki/Grab%20bar | Grab bars are safety devices designed to enable a person to maintain balance, lessen fatigue while standing, hold some of their weight while maneuvering, or have something to grab onto in case of a slip or fall. A caregiver may use a grab bar to assist with transferring a patient from one place to another. A worker may use a grab bar to hold on to as he or she climbs, or in case of a fall.
Construction
Grab bars must bear high loads and sudden impacts, and most jurisdictions have building regulations specifying what loads they must bear. They are generally mounted to masonry walls or to the studs of stud walls (which may need to be specially strengthened). They can be mounted through drywall into a strong wooden wall stud or other structural member, but not mounted only on the drywall, as it will not bear the users' weight.
Grab bars are made of metal, plastic, fiberglass, and composites. For wet areas such as bathrooms, the material must be waterproof. Common choices include stainless steel, nylon-coated mild steel, epoxy-coated aluminum, ABS plastic, and even vinyl-coated metal and plastic.
Accessibility
Grab bars increase accessibility and safety for people with a variety of disabilities or mobility difficulties. Although they are most commonly seen in public handicapped toilet stalls, grab bars are also used in private homes, assisted living facilities, hospitals, and nursing homes. Grab bars are most commonly installed next to a toilet or in a shower or bath enclosure.
Some grab bars also include a light and double as a night light, offering a little more safety when using the bathroom at night.
Locations
Many jurisdictions have regulations on grab bar placement and floorplans for public bathrooms (American ADA, British Doc M regs).
Grab bars next to a toilet help people using a wheelchair transfer to the toilet seat and back to the wheelchair. They also assist people who have difficulty sitting down, have balance problems while seated or need help rising from a seated position.
Used in a shower or bathtub, grab bars help to maintain balance while standing or maneuvering, assist in transferring into and out of the enclosure, and generally help to mitigate slips and falls.
Floor to ceiling grab bars, or security poles, can be used in the bedroom to help one get out of bed or get up from a chair, or to help caregivers by assisting in transfers.
Grab bars are often used in conjunction with other medical devices to increase safety. For example, a grab bar added to a shower is frequently used with a shower chair and hand held shower head. Grab bars installed by a doorway are usually added near a railing. In addition, grab bars can be placed on any wall where extra support is needed even if it is not the "usual place" they are used.
Positions
Grab bars can be installed in different positions:
Vertical grab bars may help with balance while standing.
Horizontal grab bars provide assistance when sitting or rising, or to grab onto in case of a slip or fall.
Some grab bars can be installed at an angle, depending on the needs of the user and the positioning. Grab bars installed horizontally offer the greatest safety, and care should be taken when installing them at an angle, as this is contrary to the ADA Guidelines. Often this angled installation is easier for people pulling themselves up from a seated position.
There are many considerations when deciding which grab bar to use and how best to install it. Properly securing a grab bar is important so that it doesn't pull out of the wall when pressure is applied to it. Each installation should be properly secured into wall blocking or studs to provide the best support. If no studs are available, specialized mollies can be used to spread out grab forces across a wider area of the wall.
ADA guidelines
The Americans with Disabilities Act of 1990 Accessibility Guidelines for Buildings and Facilities (ADAAG) defines requirements for installing grab bars in public bathing and toileting facilities. The guidelines are supported by substantial research regarding the best placement of grab bars.
The following is a subset of ADA grab bar guidelines (a compliance-check sketch follows the list):
The diameter of grab bars should be 1¼ to 1½ inches (or the shape shall provide an equivalent gripping surface)
There shall be a clearance of 1½ inches between the grab bar and the wall.
Grab bars should not rotate in their fittings.
The required mounting height is universally 33–36 inches from the top of the gripping surface of the grab bar to the finish floor. DOJ 2010 ADA standards 609.4.
ADA-style grab bars and their mounting devices should withstand more than 250 pounds (1112 N) of force.
In public toilet stalls, side grab bars must be a minimum of 42 inches long and mounted 12 inches from the rear wall, and rear grab bars must be a minimum of 36 inches long and mounted a maximum of 6 inches from the side wall.
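As an illustration only, the sketch below encodes the public-stall figures just listed as a simple compliance check. The 33–36 inch mounting-height range follows the 2010 standards cited above, and the function name and the treatment of the offsets as maxima are assumptions of this example, not ADA text:

```python
def check_stall_grab_bars(side_len, side_from_rear, rear_len,
                          rear_from_side, mount_height):
    """Check a proposed toilet-stall layout (all inches) against the figures above."""
    issues = []
    if side_len < 42:
        issues.append("side bar shorter than 42 in")
    if side_from_rear > 12:
        issues.append("side bar more than 12 in from rear wall")
    if rear_len < 36:
        issues.append("rear bar shorter than 36 in")
    if rear_from_side > 6:
        issues.append("rear bar more than 6 in from side wall")
    if not 33 <= mount_height <= 36:  # assumed range per 2010 standards 609.4
        issues.append("mounting height outside 33-36 in")
    return issues or ["layout consistent with the cited figures"]

print(check_stall_grab_bars(42, 12, 36, 6, 34))
```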
Styles
While the ADA guidelines provide specifics on the placement of grab bars in public locations, they do not require a specific style. The British Doc M regulations specify a minimum contrast between bars and background. Many public facilities opt for the cheapest grab bars, which usually have an institutional look. However, grab bars are actually available in many styles, finishes and colors. Manufacturers have begun to understand the need to blend in with home decor, offering grab bars that have style and pizazz. For the home, grab bars do not need to be ADA compliant, but those guidelines should be considered. In addition to straight grab bars, there are fold-out bars, those that clamp onto the side of the bathtub, L-shaped, U-shaped and corner grab bars. Grab bars are also made with built in LED lighting and can come in many different colours.
In industry and construction
Grab bars in industry and construction are found on equipment or above fixed ladders where footholds exist but other handholds are lacking. They may be positioned horizontally, vertically, or at an angle.
When using grab bars as safety devices to prevent falls, a horizontal bar is the best choice: research has found that gripping strength is far greater on a horizontal bar than on a vertical bar in a fall situation, making horizontal grab bars the safest option.
Grab bars were required on U.S. railroad cars by the Railroad Safety Appliance Act of 1893. Occupational Safety and Health Administration (OSHA) guidelines describe the requirements for grab bar clearance, diameter and spacing on fixed ladders. These regulations state that the clearance in the back of grab bars must be at least 4 inches, the diameter similar to the ladder rungs and, when horizontal, grab bars must be spaced by a continuation of the rung spacing. In 2008-2009 alone, the USDOL Bureau of Labor Statistics reported 241 casualties from ladder falls.
Siderail extensions and horizontal grab bars may be bolted or welded to fixed ladders. Grab bars may be mounted to the curb for access to rooftops and rooftop hatches.
See also
Fall prevention
References
Accessible building
Safety equipment | Grab bar | [
"Engineering"
] | 1,393 | [
"Accessible building",
"Architecture"
] |
9,328,883 | https://en.wikipedia.org/wiki/Data%20format%20management | Data format management (DFM) is the application of a systematic approach to the selection and use of the data formats used to encode information for storage on a computer.
In practical terms, data format management is the analysis of data formats and their associated technical, legal or economic attributes which can either enhance or detract from the ability of a digital asset or a given information systems to meet specified objectives.
Data format management is necessary as the amount of information and number of people creating it grows. This is especially the case as the information with which users are working is difficult to generate, store, costly to acquire, or to be shared.
Data format management as an analytic tool or approach is data format neutral.
Historically, individuals, organizations and businesses have been categorized by their type of computer or their operating system. Today, however, it is primarily productivity software, such as spreadsheet or word processor programs, and the way these programs store information, that defines an entity. For instance, when browsing the web it is not important which kind of computer is responsible for hosting a site, only that the information it publishes is in a format that is readable by the viewing browser. In this instance the data format of the published information has more to do with defining compatibilities than the underlying hardware or operating system.
Several initiatives have been established to record those data formats commonly used and the software available to read them, for example the Pronom project at the UK National Archives.
See also
Data curation
Data preservation
Digital preservation
File format
Information technology governance
National Digital Library Program (NDLP)
National Digital Information Infrastructure and Preservation Program (NDIIPP)
External links
The Library of Congress, Sustainability of Digital Formats
Computer data | Data format management | [
"Technology"
] | 347 | [
"Computer data",
"Data"
] |
9,329,258 | https://en.wikipedia.org/wiki/Aminomethanol | Aminomethanol or methanolamine is the amino alcohol with the chemical formula H2NCH2OH. With an amino group and an alcohol group on the same carbon atom, the compound is also a hemiaminal.
In aqueous solution, methanolamine exists in equilibrium with formaldehyde and ammonia. It is an intermediate en route to hexamethylenetetramine. The reaction can be conducted in gas phase and in solution.
References
Primary alcohols
Amines | Aminomethanol | [
"Chemistry"
] | 103 | [
"Amines",
"Bases (chemistry)",
"Functional groups",
"Organic chemistry stubs"
] |
9,329,395 | https://en.wikipedia.org/wiki/CBS%20Laboratories | CBS Laboratories or CBS Labs (later known as the CBS Technology Center or CTC) was the technology research and development organization of the CBS television network. Innovations developed at the labs included many groundbreaking broadcast, industrial, military, and consumer technologies.
History and significant technological achievements
CBS Laboratories was established in 1936 in New York City to conduct technological research for CBS and outside clients. In October 1957, CBS President Dr. Frank Stanton, speaking during the ground-breaking ceremonies for a new CBS Laboratories building in Stamford, Connecticut said: "Our objective in establishing the Laboratories is to continue CBS leadership in communications and electronics and provide broader research and development services."
One year later, a group of 60 engineers and scientists, led by Dr. Peter Goldmark, left New York City and moved into the new 30,000 square-foot facility. The results of their efforts over the next 20 years resulted in a steady growth in facilities, personnel, sales, product development and technological leadership.
Laboratory facilities grew to include five well-equipped buildings totaling more than 200,000 square feet. Six major departments were engaged in a wide range of research and development programs for government, industry, education, medicine and the broadcasting field.
The total staff grew to more than 600 people, one-third of whom were professionals. Many of these professionals were internationally renowned in their respective fields and helped establish CBS Laboratories as a leader in electronics and communications research and development.
Dr. Peter Goldmark joined CBS Laboratories in 1936. On September 4, 1940, while working at the lab, he demonstrated the Field-Sequential Color TV system. It utilized a mechanical color wheel on both the camera and on the television home receiver, but was not compatible with the existing post-war NTSC, 525-line, 60-field/second black and white TV sets as it was a 405-line, 144-field scanning system. It was the first color broadcasting system that received FCC approval in 1950, and the CBS Television Network began broadcasting in color on November 20, 1950. However, no other TV set manufacturers made the sets, and CBS stopped broadcasting in field-sequential color on October 21, 1951.
Nevertheless, the Field Sequential Color System was selected to televise the real-time broadcasts from the Moon during the Apollo 14 Moon landing, since it uses far less bandwidth than the NTSC system.
Goldmark’s interest in recorded music led to the development of the long-playing (LP) 33-1/3 rpm vinyl record, which became the standard for incorporating multiple or lengthy recorded works on a single audio disc for two generations. The LP was introduced to the market place by Columbia Records in 1948.
In 1959 the CBS Audimax I Audio Gain Controller was introduced. It was the first of its kind in the broadcasting industry, and updated versions (Audimax 4440) continued to be manufactured by Thomson-CSF, which acquired the technology after the Labs were closed. In the 1960s the CBS VoluMax Audio FM Peak Limiter was introduced, also the first of its kind in the broadcasting industry. Both the Audimax and VoluMax were considered the "gold standard" for audio processing used in the AM/FM and Television Broadcasting industry.
At the same time, CBS Laboratories developed a solid-state character generator, a crucial component of the VIDIAC (Visual Information Display and Control) system built for the Air Force by a collaboration of several companies. Known as the "magnetic memory character generator," this component was responsible for storing and retrieving high quality alphanumeric characters, which was essential for the high-speed data display.
Electronic Video Recording was announced in 1967.
In 1966, the CBS Vidifont was invented. It was the first electronic graphics generator used in television production. Brought to the marketplace at the NAB in 1970, it revolutionized television production.
The minicam was developed for use in national political conventions in 1968.
In 1971, a backwards-compatible 4-channel encoding technique was developed for vinyl records, called SQ Quadraphonic, based on work by musician Peter Scheiber and Labs engineer Benjamin B. Bauer.
That same year, CBS Labs Staff Scientist Dennis Gabor received the Nobel Prize in Physics for earlier work on holography. Upon Peter Goldmark's retirement, also in 1971, Senior Vice President Renville H. McMann assumed the role of Labs President.
At the same time that CBS Laboratories developed technologies for the CBS Television Network, it also took on similar work for the Government. CBS Laboratories was selected by NASA Manned Spacecraft Center to provide the voice recorder for the Gemini space program (1964 - 1966). The Labs designed and built a very small (2.5 in square x .415 in thick) and reliable onboard voice recorder.
An aerospace qualified film scanning system, consisting of a CBS Laboratories Line Scan Tube was developed for the Lunar Orbiter program to read out the processed film images taken by the Orbiter for transmission back to Earth.
The CBS Laboratories Reconotron all-electrostatic image dissector tube was developed for the 1964 Mariner IV Mars mission as an azimuth star tracker, then was modified for the 1967 Mariner V Venus mission in order to withstand the intense planetary illumination. The sensor was further modified for the 1969 Mariner mission to Mars to survive the more severe launch environment and to provide greater capability for automatic search, identification, and tracking.
In 1964 the Mergenthaler Co. and CBS Laboratories won a GPO contract to build a machine called the Linotron. The Linotron took a computer magnetic tape from the publishing agency that had been programmed through GPO’s computers, and composed the data in 6-point type at the rate of a page every 10 to 12 seconds, up to 1,000 characters per second, justified including upper and lower case letters, resulting in a page negative made up and ready to be plated and printed. This was accomplished using a highly-specialized Cathode Ray Tube developed by CBS Laboratories which had unequaled geometric fidelity and resolution. The introduction of the Linotron was characterized as “the most important development in composition since the introduction of the Linotype machine at the turn of the century.”
The first Linotron went into operation in October 1967 and the second a year later. The dean of the Senate and Chair of the JCP, Senator Carl Hayden of Arizona, pressed the key starting the Linotron 1010 on its first job, the Federal Supply Catalog. The Linotrons cost $2.3 million to develop and install, but in the first 13 months of operation the savings were estimated at $900,000. With it, “it can truly be said that in 1968 the Government Printing Office entered the electronic printing age.”
CBS Laboratories was a leader in the development of Electron Beam Recorders, (EBR), which use a finely focused beam of electrons to record information onto film. Because the electron beam has no inertia, it can be electromagnetically scanned over the film at a very high speed. Also, because it is focused using a magnetic field, instead of glass lenses, the electron beam can be focused to a much smaller spot than laser or other optical methods, on the order of a half-millionth of an inch.
One of the applications of the Electron Beam Recorder was in the ERTS-Landsat system, whose mission was to capture images of the Earth's surface in different spectral bands to provide data for Earth resource management and environmental monitoring. The ERTS satellites generated an immense amount of data, which was transmitted to dedicated ground stations to be recorded and processed for analysis. The ERTS EBR was a crucial part of the ground-station-based image data recording system, capable of producing a thousand 70mm archival quality film images per day, from which all the other ERTS photographic products were produced.
During the Vietnam War, CBS Laboratories developed and produced the scanning and recording equipment for the Compass Link system, which provided one-way, near-real-time secure transmission of photographic and other battlefield imagery via satellite relays from Vietnam to Hawai'i and Washington, DC. Using available equipment, in many cases at the breadboard stage, it was developed, deployed and operational in the field and on shipboard 73 days after approval to proceed. Philco-Ford provided the satellite communications systems.
In 1969, CBS Laboratories developed an advanced, state-of-the-art, MIL-Spec In-Flight Photo-Processor Scanner (IPPS) for JIFDATS (the Joint Services In-Flight Data Transmission System). Mounted in an external pod on a Mach-2, RF-4C reconnaissance aircraft, the target images from a KS-87 airborne film camera were processed, scanned and transmitted within 12 minutes of acquisition to a ground-based Image Interpretation Facility.
CBS Laboratories technical publications
In addition to designing and building commercial and government products and systems, the technical staff was also contracted to write reports and analyses for government clients. Although most of the reports remain classified, a few have been unclassified and are available in the public domain.
Sale of CBS Laboratories and subsequent history
In 1974, CBS Corp., under then-President Arthur R. Taylor, made the decision to focus on its primary media and broadcasting operations, away from the Government R&D and commercial product development, and divest these non-core assets. As part of this reorganization, the CBS Laboratories Professional Products Department, which manufactured the products developed by the Labs for sale to the broadcast industry, was sold to Thomson-CSF.
The remainder of CBS Laboratories, including all of its Government research and development activities, was acquired in 1975 by EPSCO Corp., based in Buffalo, NY, for the purpose of enhancing its technological capabilities and facilitating the entrance into new Government markets. EPSCO renamed the business as Epsco Labs, and after an unsuccessful attempt to convince the CBS Laboratories personnel to relocate to Buffalo, NY, EPSCO moved the complete operations and staff to a facility in Wilton, CT. The two original CBS Laboratories buildings on High Ridge Road in Stamford, CT were razed and the property sold.
Although EPSCO Corp. immediately began the process of novating the CBS Laboratories government R&D contracts to EPSCO, the process turned out to be much more time-consuming than EPSCO anticipated, due to the legal and regulatory implications involved in obtaining Government and Contracting Agency approvals of the many classified programs underway at CBS Laboratories. This year-long time delay greatly increased EPSCO's ongoing costs of funding the acquisition, to the point where EPSCO made the decision to liquidate the entire Epsco Labs facilities, staff and operations in 1976. As a result, all of the assets of the Laboratories, including all machinery, optical equipment, vacuum equipment, electronics, test facilities and equipment, as well as the office equipment, photo lab, machine shop and printing department were sold at auction over a four-day period in late May, 1976.
Patents
CBS Laboratories' staff registered approximately 100 patents in the fields of television, quadraphonic sound, scanning devices, laser scanning and recording, film handling systems, image and character generation, noise monitoring, hydrophones, forming electrophoretic and photoemissive surfaces, diffraction optics, photo-electronic imaging, electron guns, and more.
Emmy awards for CBS Laboratories
1956: Development of Video Tape by Ampex and Further Development and Practical Applications by CBS - dual entry
1958-1959: Industry-wide improvement of editing of Video Tape as exemplified by ABC - CBS – NBC
1965-1966: Stop Action Playback - MVR Corporation and CBS
1967: CBS Minicam - The CBS Minicam, a portable, battery-powered camera system, received an Engineering Emmy Award for its impact on news gathering and live broadcasting.
1968: Electronic Video Recording (EVR) system - CBS Laboratories developed the EVR system, which allowed for the recording and playback of high-quality video using a cassette tape format.
1970-1971: The Columbia Broadcasting System – For the development of the Color Corrector, which can provide color uniformity between television picture segments and scenes shot and recorded under different conditions at different times and locations.
1970-1971: CBS Laboratories received the Engineering Emmy Award for their development of the "Miniature Rapid Deployment Earth Terminal" (MRDET), which was recognized for its significant contributions to the field of television technology.
1974-1975: Emmy Award for CBS Laboratories' Electronic News Gathering System
1977-1978 CBS, INC. For the development of the Digital Noise Reducer
1978: CBS Fieldtronics - The CBS Fieldtronics system, a portable, electronic news gathering system, received an Engineering Emmy Award for its contributions to live broadcasting.
1978 Engineering Emmy Award for "Improved SMPTE Color Bars Standard ECR 1-1978" awarded to CBS Technology Center and the Society of Motion Picture and Television Engineers.
Awards and industry recognition for CBS Laboratories staff
Nobel Prize awarded to CBS Laboratories Staff Scientist
The 1971 Nobel Prize in Physics was awarded to Dr. Dennis Gabor, Staff Scientist at CBS Laboratories, who was also affiliated with the Imperial Colleges of Science and Technology, London, United Kingdom, “for his invention and development of the holographic method.”
David Sarnoff Medal Recipients for CBS Laboratories Technical Staff
1969: Peter C. Goldmark
1976: Adrian B. Ettlinger
1977: Renville H. McMann
1989: William E. Glenn
References
External links
The quest for home video: EVR
Paramount Global subsidiaries
Companies based in Stamford, Connecticut
Technology companies established in 1936
Technology companies disestablished in 1986
American companies established in 1936
American companies disestablished in 1986 | CBS Laboratories | [
"Technology",
"Engineering"
] | 2,790 | [
"Information and communications technology",
"Telecommunications engineering",
"Television technology",
"Radio technology"
] |
9,330,399 | https://en.wikipedia.org/wiki/Condensate%20polisher | A condensate polisher is a device used to filter water condensed from steam as part of the steam cycle, for example in a conventional or nuclear power plant (powdered resin or deep bed system). It is frequently filled with tiny polymer resin beads which are used to remove or exchange ions so that the purity of the condensate is maintained at or near that of distilled water.
Description
Condensate polishers are important in systems using the boiling and condensing of water to transport or transform thermal energy. Using technology similar to a water softener, trace amounts of minerals or other contamination are removed from the system before such contamination becomes concentrated enough to cause problems by depositing minerals inside pipes, or within precision-engineered devices such as boilers, steam generators, heat exchangers, steam turbines, cooling towers, and condensers. The removal of minerals has the secondary effect of maintaining the pH balance of the water at or near neutral (a pH of 7.0) by removing ions that would tend to make the water more acidic. This reduces the rate of corrosion from water against metal.
Condensate polishing typically involves ion exchange technology for the removal of trace dissolved minerals and suspended matter. Commonly used as part of a power plant's condensate system, it prevents premature chemical failure and deposition within the power cycle which would have resulted in loss of unit efficiency and possible mechanical damage to key generating equipment.
During the process of steam generation in power plants, the steam cools and condensate forms. The condensate is collected and then recycled as boiler feedwater. Prior to re-use, the condensate must be purified or "polished", to remove impurities (predominantly silicon oxides and sodium hydroxide) which have the potential to cause damage to the boilers, steam generators, reactors and turbines. Both dissolved matter (e.g. silica) and suspended matter (e.g. iron oxide particles from corrosion, also called 'crud'), as well as other contaminants which can cause corrosion and maintenance issues, are effectively removed by condensate polishing treatment.
References
Tools
Industrial water treatment | Condensate polisher | [
"Chemistry"
] | 436 | [
"Water treatment",
"Industrial water treatment"
] |
9,330,792 | https://en.wikipedia.org/wiki/Pentabromodiphenyl%20ether | Pentabromodiphenyl ether (also known as pentabromodiphenyl oxide) is a brominated flame retardant which belongs to the group of polybrominated diphenyl ethers (PBDEs). Because of their toxicity and persistence, their industrial production is to be eliminated under the Stockholm Convention, a treaty to control and phase out major persistent organic pollutants (POP).
Composition, uses, and production
Commercial pentaBDE is a technical mixture of different PBDE congeners, with BDE-47 (2,2',4,4'- tetrabromodiphenyl ether) and BDE-99 (2,2',4,4',5-pentabromodiphenyl ether) as the most abundant. The term pentaBDE alone refers to isomers of pentabromodiphenyl ether (PBDE congener numbers 82-127).
Commercial pentaBDE is most commonly used as a flame retardant in flexible polyurethane foam; it was also used in printed circuit boards in Asia, and in other applications. The annual demand worldwide was estimated as 7,500 tonnes in 2001, of which the Americas accounted for 7,100 tonnes, Europe 150 tonnes, and Asia 150 tonnes. The global industrial demand increased from 4,000 tonnes annually in 1991 to 8,500 tonnes annually in 1999. As of 2007, "there should be no current production of C-PentaBDE [commercial pentaBDE] in Europe, Japan, Canada, Australia and the US"; however, it is possible that production continues elsewhere in the world.
Environmental chemistry
PentaBDE is released by different processes into the environment, such as emissions from manufacture of pentaBDE-containing products and from the products themselves. Elevated concentrations can be found in air, water, soil, food, sediment, sludge, and dust.
Exposures and health effects
PentaBDE may enter the body by ingestion or inhalation. It is "stored mainly in body fat" and may stay in the body for years. A 2007 study found that PBDE 47 (a tetraBDE) and PBDE 99 (a pentaBDE) had biomagnification factors in terrestrial carnivores and humans of 98, higher than any other industrial chemicals studied. In an investigation carried out by the WWF, "the brominated flame retardant chemical (PBDE 153), which is a component of the penta- and octa- brominated diphenyl ether flame retardant products" was found in all blood samples of 14 ministers of health and environment of 13 European Union countries.
The chemical has no proven health effects in humans; however, based on animal experiments, pentaBDE may have effects on "the liver, thyroid, and neurobehavioral development."
Voluntary and governmental actions
In Germany, industrial users of pentaBDE "agreed to a voluntary phaseout in 1986." In Sweden, the government "phase[d] out the production and use of the [pentaBDE] compounds by 1999 and a total ban on imports came into effect within just a few years." The European Union (EU) has carried out a comprehensive risk assessment under the Existing Substances Regulation 793/93/EEC; as a consequence, the EU has banned the use of pentaBDE since 2004.
In the United States, as of 2005, "no new manufacture or import of" pentaBDE and octaBDE "can occur... without first being subject to EPA [i.e., United States Environmental Protection Agency] evaluation." As of mid-2007, a total of eleven states in the U.S. had banned pentaBDE.
In May 2009, pentaBDE was added to the Stockholm Convention as it meets the criteria for the so-called persistent organic pollutants of persistence, bioaccumulation and toxicity.
Alternatives
The EPA organized a Furniture Flame Retardancy Partnership beginning in 2003 "to better understand fire safety options for the furniture industry" after pentaBDE "was voluntarily phased out of production by the sole U.S. manufacturer on December 31, 2004." In 2005 the Partnership published evaluations of alternatives to pentaBDE, including triphenyl phosphate, tribromoneopentyl alcohol, tris(1,3-dichloro-2-propyl)phosphate, and 12 proprietary chemicals.
References
Flame retardants
Bromoarenes
Diphenyl ethers
Persistent organic pollutants under the Stockholm Convention | Pentabromodiphenyl ether | [
"Chemistry"
] | 979 | [
"Persistent organic pollutants under the Stockholm Convention"
] |
9,330,860 | https://en.wikipedia.org/wiki/Accepted%20and%20experimental%20value | In science, and most specifically chemistry, the accepted value denotes a value of a substance's property accepted by almost all scientists, and the experimental value denotes the value of that property as measured in a particular laboratory experiment.
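The two values are typically compared through the percent error; here is a minimal sketch (the example numbers are invented):

```python
def percent_error(experimental, accepted):
    """Percent error of a measured value relative to the accepted value."""
    return abs(experimental - accepted) / abs(accepted) * 100

# Example: a lab measures the density of aluminum as 2.62 g/cm^3
# against the accepted 2.70 g/cm^3.
print(percent_error(2.62, 2.70))  # about 2.96 %
```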
See also
Accuracy and precision
Error
Approximation error
References
Analytical chemistry | Accepted and experimental value | [
"Chemistry"
] | 54 | [
"Physical chemistry stubs",
"nan"
] |
9,331,066 | https://en.wikipedia.org/wiki/Induction%20generator | An induction generator or asynchronous generator is a type of alternating current (AC) electrical generator that uses the principles of induction motors to produce electric power. Induction generators operate by mechanically turning their rotors faster than synchronous speed. A regular AC induction motor usually can be used as a generator, without any internal modifications. Because they can recover energy with relatively simple controls, induction generators are useful in applications such as mini hydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure.
An induction generator draws reactive excitation current from an external source. Induction generators have an AC rotor and cannot bootstrap using residual magnetization to black start a de-energized distribution system as synchronous machines do. Power factor correcting capacitors can be added externally to neutralize a constant amount of the variable reactive excitation current. After starting, an induction generator can use a capacitor bank to produce reactive excitation current, but the isolated power system's voltage and frequency are not self-regulating and destabilize readily.
Principle of Operation
An induction generator produces electrical power when its rotor is turned faster than the synchronous speed. For a four-pole machine (two pole pairs on the stator), the synchronous speed is 1800 revolutions per minute (rpm) on a 60 Hz supply and 1500 rpm on a 50 Hz supply. In motor operation the machine always turns slightly slower than the synchronous speed. The difference between synchronous and operating speed is called "slip" and is usually expressed as a percentage of the synchronous speed. For example, a motor operating at 1450 rpm with a synchronous speed of 1500 rpm is running at a slip of +3.3%.
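The relationship between pole count, supply frequency, synchronous speed and slip can be made concrete with a short sketch. This is a generic illustration of the standard formula Ns = 120·f/p, not code from any particular source:

```python
def synchronous_speed_rpm(freq_hz: float, poles: int) -> float:
    """Synchronous speed Ns = 120 * f / p (p = total number of poles)."""
    return 120 * freq_hz / poles

def slip(ns_rpm: float, n_rpm: float) -> float:
    """Slip as a fraction of synchronous speed.
    Positive when motoring (rotor slower), negative when generating."""
    return (ns_rpm - n_rpm) / ns_rpm

# The four-pole examples from the text:
print(synchronous_speed_rpm(60, 4))  # 1800.0 rpm
print(synchronous_speed_rpm(50, 4))  # 1500.0 rpm
print(slip(1500, 1450))              # +0.033... -> +3.3% (motoring)
print(slip(1800, 1860))              # -0.033... (generating)
```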
In operation as a motor, the stator flux rotates at the synchronous speed, which is faster than the rotor speed. This causes the stator flux, as seen from the rotor, to cycle at the slip frequency, inducing rotor current through the mutual inductance between the stator and rotor. The induced current creates a rotor flux with magnetic polarity opposite to that of the stator. In this way, the rotor is dragged along behind the stator flux, with the currents in the rotor induced at the slip frequency. The motor runs at the speed where the induced rotor current gives rise to a torque equal to the shaft load.
In generator operation, a prime mover (turbine or engine) drives the rotor above the synchronous speed (negative slip). The stator flux still induces current in the rotor, but the opposing rotor flux now cuts the stator coils, inducing a stator current 270° behind the magnetizing current, i.e. in phase with the magnetizing voltage. The machine then delivers real (in-phase) power to the power system.
Excitation
An induction motor requires an externally supplied current to the stator windings in order to induce a current in the rotor. Because the current in an inductor is the integral of the voltage with respect to time, for a sinusoidal voltage waveform the current lags the voltage by 90°, and the induction motor always consumes reactive power, regardless of whether it is consuming electrical power and delivering mechanical power as a motor or consuming mechanical power and delivering electrical power to the system.
A source of excitation current for magnetizing flux (reactive power) for the stator is still required to induce rotor current. This can be supplied from the electrical grid or, once the machine starts producing power, from a capacitor bank. Generator operation is complicated by the need to excite the rotor: because the rotor field is induced by alternating current, the machine is demagnetized at shutdown, with no residual magnetization to bootstrap a cold start, so an external source of magnetizing current must be connected to initiate production. The power frequency and voltage are not self-regulating, and the generator supplies current out of phase with the voltage, so additional external equipment is required to build a functional isolated power system. The operation is similar to that of an induction motor running in parallel with a synchronous machine serving as a power factor compensator. A feature of generator mode in parallel with the grid is that the rotor speed is higher than in motoring mode, and active power is then delivered to the grid. A further disadvantage of the induction generator is that it draws a significant magnetizing current, I0 = 20–35% of rated current.
Active Power
Active power delivered to the line is proportional to the slip above synchronous speed. Full rated power of the generator is reached at very small slip values (motor dependent, typically 3%). At the synchronous speed of 1800 rpm, the generator produces no power. When the driving speed is increased to 1860 rpm (a typical example), full output power is produced. If the prime mover is unable to produce enough power to fully drive the generator, the speed settles somewhere between 1800 and 1860 rpm.
Required Capacitance
A capacitor bank must supply reactive power to the machine when it is used in stand-alone mode. The reactive power supplied should be equal to or greater than the reactive power that the machine normally draws when operating as a motor.
Torque vs. Slip
The fundamental principle of induction generators is the conversion of mechanical energy to electrical energy. This requires an external torque applied to the rotor to turn it faster than the synchronous speed. However, indefinitely increasing torque does not lead to an indefinite increase in power generation. The torque of the rotating magnetic field excited from the armature opposes the motion of the rotor and prevents overspeed, because the induced currents act against the relative motion. As the rotor speed increases, this counter torque grows until it reaches a maximum value (the breakdown torque), beyond which operation becomes unstable. Ideally, induction generators work best in the stable region between the no-load condition and the maximum-torque point.
Rating Current
The maximum power that can be produced by an induction motor operated as a generator is limited by the rated current of the generator's windings.
Grid and stand-alone connections
In induction generators, the reactive power required to establish the air-gap magnetic flux is provided by a capacitor bank connected to the machine in the case of a stand-alone system; in the case of a grid connection, the machine draws reactive power from the grid to maintain its air-gap flux. For a grid-connected system, the frequency and voltage at the machine are dictated by the electric grid, since the machine is very small compared to the whole system. For stand-alone systems, the frequency and voltage are complex functions of the machine parameters, the capacitance used for excitation, and the load value and type.
Uses
Induction generators are often used in wind turbines and some micro hydro installations due to their ability to produce useful power at varying rotor speeds. Induction generators are mechanically and electrically simpler than other generator types. They are also more rugged, requiring no brushes or commutators.
Limitations
An induction generator connected to a capacitor system can generate sufficient reactive power to operate independently. However, when the load current exceeds the capability of the generator to supply both magnetization reactive power and load power, the generator immediately ceases to produce power. The load must then be removed and the induction generator restarted, either with an external DC source or, if present, the residual magnetism in the core.
Induction generators are particularly suitable for wind generating stations, where rotor speed is inherently variable. Unlike synchronous machines, induction generators are load-dependent and cannot be used alone for grid frequency control.
Example application
As an example, consider the use of a 10 hp, 1760 r/min, 440 V, three-phase induction motor (a.k.a. an induction electrical machine operating in an asynchronous generator regime) as an asynchronous generator. The full-load current of the motor is 10 A and the full-load power factor is 0.8.
Required capacitance per phase if capacitors are connected in delta:
Apparent power: S = √3 × E × I = 1.73 × 440 × 10 = 7612 VA
Active power: P = S × cos θ = 7612 × 0.8 = 6090 W
Reactive power: Q = √(S² − P²) = √(7612² − 6090²) = 4567 VAR
For the machine to run as an asynchronous generator, the capacitor bank must supply a minimum of 4567 / 3 = 1523 VAR per phase. The voltage per capacitor is 440 V because the capacitors are connected in delta.
Capacitive current Ic = Q/E = 1523/440 = 3.46 A
Capacitive reactance per phase Xc = E/Ic = 127 Ω
Minimum capacitance per phase:
C = 1 / (2*π*f*Xc) = 1 / (2 * 3.141 * 60 * 127) = 21 μF.
If the load also absorbs reactive power, capacitor bank must be increased in size to compensate.
The prime mover speed should be chosen so that the machine generates at a frequency of 60 Hz:
Typically, the slip should be similar in magnitude to its full-load value when the machine runs as a motor, but negative (generator operation):
if Ns = 1800, one can choose N=Ns+40 rpm
Required prime mover speed N = 1800 + 40 = 1840 rpm.
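The whole worked example can be reproduced in a few lines. This is a minimal sketch using the values given above; the only assumption is √3 ≈ 1.73, as used in the text:

```python
import math

E = 440.0   # line voltage, V (delta-connected capacitors see full line voltage)
I = 10.0    # full-load current, A
pf = 0.8    # full-load power factor
f = 60.0    # target frequency, Hz

S = 1.73 * E * I                # apparent power, VA  (7612)
P = S * pf                      # active power, W     (6090)
Q = math.sqrt(S**2 - P**2)      # reactive power, VAR (~4567)

Q_phase = Q / 3                 # per-phase reactive power, VAR (~1523)
Ic = Q_phase / E                # capacitive current per phase, A (~3.46)
Xc = E / Ic                     # capacitive reactance per phase, ohms (~127)
C = 1 / (2 * math.pi * f * Xc)  # minimum capacitance per phase, F (~21e-6)

print(f"minimum capacitance per phase = {C * 1e6:.0f} uF")  # 21 uF
```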
See also
Electric generator
Induction motor
Notes
References
Electrical Machines, Drives, and Power Systems, 4th edition, Theodore Wildi, Prentice Hall, , pages 311–314.
External links
Testing of stand-alone and grid connected asynchronous generator
Electrical generators
Induction motors | Induction generator | [
"Physics",
"Technology"
] | 1,911 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
9,331,271 | https://en.wikipedia.org/wiki/Molecular%20Systems%20Biology | Molecular Systems Biology is a peer-reviewed open-access scientific journal covering systems biology at the molecular level (examples include: genomics, proteomics, metabolomics, microbial systems, the integration of cell signaling and regulatory networks), synthetic biology, and systems medicine. It was established in 2005 and published by the Nature Publishing Group on behalf of the European Molecular Biology Organization. As of December 2013, it is published by EMBO Press.
References
External links
Molecular and cellular biology journals
Systems biology
Academic journals established in 2005
English-language journals
Monthly journals
European Molecular Biology Organization academic journals | Molecular Systems Biology | [
"Chemistry",
"Biology"
] | 119 | [
"Systems biology",
"Molecular and cellular biology journals",
"Molecular biology"
] |
9,331,665 | https://en.wikipedia.org/wiki/Visibility%20%28geometry%29 | In geometry, visibility is a mathematical abstraction of the real-life notion of visibility.
Given a set of obstacles in the Euclidean space, two points in the space are said to be visible to each other, if the line segment that joins them does not intersect any obstacles. (In the Earth's atmosphere light follows a slightly curved path that is not perfectly predictable, complicating the calculation of actual visibility.)
Computation of visibility is among the basic problems in computational geometry and has applications in computer graphics, motion planning, and other areas.
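The point-to-point visibility test can be stated compactly in code. The sketch below is a generic illustration, assuming polygonal obstacles given as lists of edges and ignoring degenerate cases such as a sight line grazing an obstacle vertex:

```python
def ccw(a, b, c):
    """Twice the signed area of triangle abc; positive if counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1p2 properly crosses segment q1q2."""
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

def visible(p, q, obstacle_edges):
    """p and q see each other iff segment pq crosses no obstacle edge."""
    return not any(segments_intersect(p, q, a, b) for a, b in obstacle_edges)

# One square obstacle with corners (1,1) and (2,2), given as four edges
square = [((1, 1), (2, 1)), ((2, 1), (2, 2)), ((2, 2), (1, 2)), ((1, 2), (1, 1))]
print(visible((0, 0), (3, 2), square))  # False: the sight line cuts the square
print(visible((0, 0), (0, 3), square))  # True: the square is not in the way
```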
Concepts and problems
Point visibility
Edge visibility
Visibility polygon
Weak visibility
Art gallery problem or museum problem
Visibility graph
Visibility graph of vertical line segments
Watchman route problem
Computer graphics applications:
Hidden surface determination
Hidden line removal
z-buffering
portal engine
Star-shaped polygon
Kernel of a polygon
Isovist
Viewshed
Zone of Visual Influence
Painter's algorithm
References
Chapter 15: "Visibility graphs"
External links
Software
VisiLibity: A free open source C++ library of floating-point visibility algorithms and supporting data types
Geometry
Geometric algorithms | Visibility (geometry) | [
"Mathematics"
] | 217 | [
"Geometry",
"Geometry stubs"
] |
9,332,179 | https://en.wikipedia.org/wiki/A/B%20testing | A/B testing (also known as bucket testing, split-run testing, or split testing) is a user experience research method. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can be also extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and determining which of the variants is more effective.
Multivariate testing or multinomial testing is similar to A/B testing, but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations—commonplace with survey data, offline data, and other, more complex phenomena.
Definition
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared.
A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows.
The following example illustrates an A/B test with a single variable:
Suppose a company has a customer database of 2,000 people and decides to create an email campaign with a discount code in order to generate sales through its website. The company creates two versions of the email with different call to action (the part of the copy which encourages customers to do something — in the case of a sales campaign, make a purchase) and identifying promotional code.
To 1,000 people it sends the email with the call to action stating, "Offer ends this Saturday! Use code A1",
To the remaining 1,000 people, it sends the email with the call to action stating, "Offer ends soon! Use code B1".
All other elements of the emails' copy and layout are identical.
The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using the code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using the code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first Call To Action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine if the differences in response rates between A1 and B1 were statistically significant (that is, highly likely that the differences are real, repeatable, and not due to random chance).
In the example above, the purpose of the test is to determine which is the more effective way to encourage customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click rate (that is, the number of people who actually click onto the website after receiving the email), then the results might have been different.
For example, even though more of the customers receiving the code B1 accessed the website, because the Call To Action didn't state the end-date of the promotion many of them may feel no urgency to make an immediate purchase. Consequently, if the purpose of the test had been simply to see which email would bring more traffic to the website, then the email containing code B1 might well have been more successful. An A/B test should have a defined outcome that is measurable such as number of sales made, click-rate conversion, or number of people signing up/registering.
Common test statistics
Two-sample hypothesis tests are appropriate for comparing the two samples where the samples are divided by the two control cases in the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions when less is assumed. Welch's t test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used.
For a comparison of two binomial distributions, such as click-through rates, one would use Fisher's exact test.
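As an illustration, the email example above (50 of 1,000 conversions for A1 versus 30 of 1,000 for B1) can be tested with SciPy. This sketch is not part of the original article:

```python
from scipy.stats import fisher_exact

# 2x2 contingency table: rows are variants, columns are (converted, not converted)
table = [[50, 950],   # A1: 50 of 1,000 recipients used the code
         [30, 970]]   # B1: 30 of 1,000 recipients used the code
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")  # p < 0.05 here
```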
Segmentation and targeting
A/B tests most commonly apply the same variant (e.g., user interface element) with equal probability to all users. However, in some circumstances, responses to variants may be heterogeneous. That is, while a variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base.
For instance, in the above example, the breakdown of the response rates by gender could have been:
In this case, we can see that while variant A had a higher response rate overall, variant B actually had a higher response rate with men.
As a result, the company might adopt a segmented strategy on the basis of the A/B test, sending variant B to men and variant A to women in the future. In this example, the segmented strategy would yield a 30% increase in expected response rates.
If segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. That is, the test should both (a) contain a representative sample of men vs. women, and (b) assign men and women randomly to each “variant” (variant A vs. variant B). Failure to do so could lead to experiment bias and inaccurate conclusions to be drawn from the test.
This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attribute (for example, customers' age and gender) to identify more nuanced patterns that may exist in the test results.
Tradeoffs
Positives
The results of A/B tests are simple to interpret and use to get a clear idea of what users prefer, since it is directly testing one option over another. It is based on real user behavior, so the data can be very helpful especially when determining what works better between two options.
A/B tests can also provide answers to highly specific design questions. One example of this is Google's A/B testing with hyperlink colors. In order to optimize revenue, they tested dozens of different hyperlink hues to see which color the users tend to click more on.
Negatives
A/B tests are sensitive to variance; they require a large sample size in order to reduce standard error and produce a statistically significant result. In applications where active users are abundant, such as popular online social media platforms, obtaining a large sample size is trivial. In other cases, large sample sizes are obtained by increasing the experiment enrollment period. However, using a technique coined by Microsoft as Controlled-experiment Using Pre-Experiment Data (CUPED), variance from before the experiment start can be taken into account so that fewer samples are required to produce a statistically significant result.
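The CUPED adjustment itself is short once the covariate is available. The sketch below uses synthetic data and the standard formulation θ = cov(X, Y)/var(X); it illustrates the idea and is not Microsoft's implementation:

```python
import numpy as np

def cuped_adjust(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Remove the variance in metric y that is explained by a
    pre-experiment covariate x (often the same metric, measured
    before the experiment started)."""
    theta = np.cov(x, y)[0, 1] / np.var(x)
    return y - theta * (x - x.mean())

rng = np.random.default_rng(0)
pre = rng.normal(10.0, 2.0, size=10_000)        # pre-experiment metric
post = pre + rng.normal(0.1, 1.0, size=10_000)  # in-experiment metric
adjusted = cuped_adjust(post, pre)
print(post.var(), adjusted.var())  # adjusted variance is far smaller
```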
Due to its nature as an experiment, running an A/B test introduces the risk of wasted time and resources if the test produces unwanted results, such as negative or no impact to business metrics.
In December 2018, representatives with experience in large-scale A/B testing from thirteen different organizations (Airbnb, Amazon, Booking.com, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, and Stanford University) summarized the top challenges in a SIGKDD Explorations paper.
The challenges can be grouped into four areas: Analysis, Engineering and Culture, Deviations from Traditional A/B tests, and Data quality.
History
It is difficult to definitively establish when A/B testing was first used. The first randomized double-blind trial, to assess the effectiveness of a homeopathic drug, occurred in 1835. Experimentation with advertising campaigns, which has been compared to modern A/B testing, began in the early twentieth century. The advertising pioneer Claude Hopkins used promotional coupons to test the effectiveness of his campaigns. However, this process, which Hopkins described in his Scientific Advertising, did not incorporate concepts such as statistical significance and the null hypothesis, which are used in statistical hypothesis testing. Modern statistical methods for assessing the significance of sample data were developed separately in the same period. This work was done in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test.
With the growth of the internet, new ways to sample populations have become available. Google engineers ran their first A/B test in the year 2000 in an attempt to determine what the optimum number of results to display on its search engine results page would be. The first test was unsuccessful due to glitches that resulted from slow loading times. Later A/B testing research would be more advanced, but the foundation and underlying principles generally remain the same, and in 2011, 11 years after Google's first test, Google ran over 7,000 different A/B tests.
In 2012, a Microsoft employee working on the search engine Microsoft Bing created an experiment to test different ways of displaying advertising headlines. Within hours, the alternative format produced a revenue increase of 12% with no impact on user-experience metrics. Today, major software companies such as Microsoft and Google each conduct over 10,000 A/B tests annually.
A/B testing has been claimed by some to be a change in philosophy and business-strategy in certain niches, though the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.
Many companies now use the "designed experiment" approach to making marketing decisions, with the expectation that relevant sample results can improve positive conversion results. It is an increasingly common practice as the tools and expertise grow in this area.
Applications
A/B testing in online social media
A/B tests have been used by large social media sites like LinkedIn, Facebook, and Instagram to understand user engagement and satisfaction of online features, such as a new feature or product. A/B tests have also been used to conduct complex experiments on subjects such as network effects when users are offline, how online services affect user actions, and how users influence one another.
A/B testing for e-commerce
On an e-commerce website, the purchase funnel is typically a good candidate for A/B testing, since even marginal decreases in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements like copy text, layouts, images and colors, but not always. In these tests, users only see one of two versions, since the goal is to discover which of the two versions is preferable.
A/B testing for product pricing
A/B testing can be used to determine the right price for the product, as this is perhaps one of the most difficult tasks when a new product or service is launched. A/B testing (especially valid for digital goods) is an excellent way to find out which price-point and offering maximize the total revenue.
Political A/B testing
A/B tests have also been used by political campaigns. In 2007, Barack Obama's presidential campaign used A/B testing as a way to garner online attraction and understand what voters wanted to see from the presidential candidate. For example, Obama's team tested four distinct buttons on their website that led users to sign up for newsletters. Additionally, the team used six different accompanying images to draw in users. Through A/B testing, staffers were able to determine how to effectively draw in voters and garner additional interest.
HTTP Routing and API feature testing
A/B testing is very common when deploying a newer version of an API. For real-time user experience testing, an HTTP Layer-7 reverse proxy is configured so that N% of the HTTP traffic goes to the newer version of the backend instance, while the remaining 100−N% of the HTTP traffic hits the (stable) older version of the backend HTTP application service. This is usually done to limit the exposure of customers to the newer backend instance, so that if there is a bug in the newer version, only N% of the total user agents or clients are affected while the others are routed to the stable backend; this is a common ingress control mechanism.
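A layer-7 proxy typically implements this split with a deterministic hash of some request attribute, so each client consistently reaches the same backend. The sketch below is a generic illustration of that bucketing idea; the backend names are hypothetical:

```python
import hashlib

def route(user_id: str, new_version_pct: int) -> str:
    """Deterministically map a user to a backend: the same user always
    gets the same answer, and about new_version_pct% of users get v2."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "backend-v2" if bucket < new_version_pct else "backend-v1"

print(route("user-42", 10))  # roughly 10% of users reach the new version
```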
See also
Adaptive control
Between-group design experiment
Choice modelling
Multi-armed bandit
Multivariate testing
Randomized controlled trial
Scientific control
Stochastic dominance
Test statistic
Two-proportion Z-test
References
Market research
Experiments
Software testing | A/B testing | [
"Engineering"
] | 2,670 | [
"Software engineering",
"Software testing"
] |
9,332,320 | https://en.wikipedia.org/wiki/Intrusion%20detection%20system%20evasion%20techniques | Intrusion detection system evasion techniques are modifications made to attacks in order to prevent detection by an intrusion detection system (IDS). Almost all published evasion techniques modify network attacks. The 1998 paper Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection popularized IDS evasion, and discussed both evasion techniques and areas where the correct interpretation was ambiguous depending on the targeted computer system. The 'fragroute' and 'fragrouter' programs implement evasion techniques discussed in the paper. Many web vulnerability scanners, such as 'Nikto', 'whisker' and 'Sandcat', also incorporate IDS evasion techniques.
Most IDSs have been modified to detect or even reverse basic evasion techniques, but IDS evasion (and countering IDS evasion) are still active fields.
Obfuscation
An IDS can be evaded by obfuscating or encoding the attack payload in a way that the target computer will reverse but the IDS will not. In this way, an attacker can exploit the end host without alerting the IDS.
Encoding
Application layer protocols like HTTP allow for multiple encodings of data which are interpreted as the same value. For example, the string "cgi-bin" in a URL can be encoded as "%63%67%69%2d%62%69%6e" (i.e., in hexadecimal). A web server will view these as the same string and act on them accordingly. An IDS must be aware of all of the possible encodings that its end hosts accept in order to match network traffic to known-malicious signatures.
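The decoding step is easy to demonstrate: both spellings reach the web server as the same path, so an IDS matching raw bytes against "cgi-bin" misses the encoded form unless it normalizes first. A minimal sketch:

```python
from urllib.parse import unquote

raw = "/%63%67%69%2d%62%69%6e/some-script"   # percent-encoded "/cgi-bin/..."
print(unquote(raw))                          # -> /cgi-bin/some-script
```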
Attacks on encrypted protocols such as HTTPS cannot be read by an IDS unless the IDS has a copy of the private key used by the server to encrypt the communication. The IDS won't be able to match the encrypted traffic to signatures if it doesn't account for this.
Polymorphism
Signature-based IDS often look for common attack patterns to match malicious traffic to signatures. To detect buffer overflow attacks, an IDS might look for the evidence of NOP slides which are used to weaken the protection of address space layout randomization.
To obfuscate their attacks, attackers can use polymorphic shellcode to create unique attack patterns. This technique typically involves encoding the payload in some fashion (e.g., XOR-ing each byte with 0x95), then placing a decoder in front of the payload before sending it. When the target executes the code, it runs the decoder which rewrites the payload into its original form which the target then executes.
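The XOR scheme mentioned above is symmetric: applying the same key twice restores the original bytes, which is exactly what the prepended decoder stub does on the target. A minimal sketch of the encoding step only:

```python
KEY = 0x95

def xor_bytes(data: bytes, key: int = KEY) -> bytes:
    """XOR every byte with a one-byte key; the operation is its own inverse."""
    return bytes(b ^ key for b in data)

payload = b"example payload"
encoded = xor_bytes(payload)           # what travels across the network
assert xor_bytes(encoded) == payload   # the decoder reverses it at the target
```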
Polymorphic attacks don't have a single detectable signature, making them very difficult for signature-based IDS, and even some anomaly-based IDS, to detect. Shikata ga nai ("it cannot be helped") is a popular polymorphic encoder in the Metasploit framework used to convert malicious shellcode into difficult-to-detect polymorphic shellcode using XOR additive feedback.
Evasion
Attackers can evade IDS by crafting packets in such a way that the end host interprets the attack payload correctly while the IDS either interprets the attack incorrectly or determines that the traffic is benign too quickly.
Fragmentation and small packets
One basic technique is to split the attack payload into multiple small packets, so that the IDS must reassemble the packet stream to detect the attack. A simple way of splitting packets is by fragmenting them, but an adversary can also simply craft packets with small payloads. The 'whisker' evasion tool calls crafting packets with small payloads 'session splicing'.
By itself, small packets will not evade any IDS that reassembles packet streams. However, small packets can be further modified in order to complicate reassembly and detection. One evasion technique is to pause between sending parts of the attack, hoping that the IDS will time out before the target computer does. A second evasion technique is to send the packets out of order, so that the IDS must correctly buffer and reorder the stream while the target computer reassembles it as intended.
Overlapping fragments and TCP segments
Another evasion technique is to craft a series of packets with TCP sequence numbers configured to overlap. For example, the first packet will include 80 bytes of payload but the second packet's sequence number will be 76 bytes after the start of the first packet. When the target computer reassembles the TCP stream, they must decide how to handle the four overlapping bytes. Some operating systems will take the older data, and some will take the newer data. If the IDS doesn't reassemble the TCP in the same way as the target, it can be manipulated into either missing a portion of the attack payload or seeing benign data inserted into the malicious payload, breaking the attack signature. This technique can also be used with IP fragmentation in a similar manner.
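The ambiguity comes entirely from the overlap policy. The sketch below reassembles the example from this paragraph (an 80-byte segment followed by one whose sequence number starts 76 bytes in) under both policies, showing that the two reconstructions differ in the four overlapping bytes:

```python
def reassemble(segments, favor_new: bool) -> bytes:
    """Naive byte-wise reassembly of (seq, payload) pairs.
    favor_new=True lets later segments overwrite overlapping bytes."""
    buf = {}
    for seq, data in segments:
        for i, byte in enumerate(data):
            if favor_new or (seq + i) not in buf:
                buf[seq + i] = byte
    return bytes(buf[k] for k in sorted(buf))

segs = [(0, b"A" * 80), (76, b"B" * 20)]   # 4 overlapping bytes
old = reassemble(segs, favor_new=False)    # keeps the older "A" bytes
new = reassemble(segs, favor_new=True)     # keeps the newer "B" bytes
print(old[74:82], new[74:82])              # b'AAAAAABB' vs b'AABBBBBB'
```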
Ambiguities
Some IDS evasion techniques involve deliberately manipulating TCP or IP protocols in a way the target computer will handle differently from the IDS. For example, the TCP urgent pointer is handled differently on different operating systems. If the IDS doesn't handle these protocol violations in a manner consistent with its end hosts, it is vulnerable to insertion and evasion techniques similar to those mentioned earlier.
Low-bandwidth attacks
Attacks which are spread out across a long period of time or a large number of source IPs, such as nmap's slow scan, can be difficult to pick out of the background of benign traffic. An online password cracker which tests one password for each user every day will look nearly identical to a normal user who mistyped their password.
Denial of service
Due to the fact that passive IDS are inherently fail-open (as opposed to fail-closed), launching a denial-of-service attack against the IDS on a network is a feasible method of circumventing its protection. An adversary can accomplish this by exploiting a bug in the IDS, consuming all of the computational resources on the IDS, or deliberately triggering a large number of alerts to disguise the actual attack.
CPU exhaustion
Packets captured by an IDS are stored in a kernel buffer until the CPU is ready to process them. If the CPU is under high load, it can't process the packets quickly enough and this buffer fills up. New (and possibly malicious) packets are then dropped because the buffer is full.
An attacker can exhaust the IDS's CPU resources in a number of ways. For example, signature-based intrusion detection systems use pattern matching algorithms to match incoming packets against signatures of known attacks. Naturally, some signatures are more computationally expensive to match against than others. Exploiting this fact, an attacker can send specially crafted network traffic that forces the IDS to spend as much CPU time as possible running its pattern matching algorithm. This algorithmic complexity attack can overwhelm the IDS with a relatively small amount of bandwidth.
An IDS that also monitors encrypted traffic can spend a large portion of its CPU resources on decrypting incoming data.
Memory exhaustion
In order to match certain signatures, an IDS is required to keep state related to the connections it is monitoring. For example, an IDS must maintain "TCP control blocks" (TCBs), chunks of memory which track information such as sequence numbers, window sizes, and connection states (ESTABLISHED, RELATED, CLOSED, etc.), for each TCP connection monitored by the IDS. Once all of the IDS's random-access memory (RAM) is consumed, it is forced to utilize virtual memory on the hard disk which is much slower than RAM, leading to performance problems and dropped packets similar to the effects of CPU exhaustion.
If the IDS doesn't garbage-collect TCBs correctly and efficiently, an attacker can exhaust the IDS's memory by starting a large number of TCP connections very quickly. Similar attacks can be made by fragmenting a large number of packets into a larger number of smaller packets, or by sending a large number of out-of-order TCP segments.
Operator fatigue
Alerts generated by an IDS have to be acted upon in order for them to have any value. An attacker can reduce the "availability" of an IDS by overwhelming the human operator with an inordinate number of alerts, sending large amounts of "malicious" traffic intended to generate alerts on the IDS. The attacker can then perform the actual attack using the alert noise as cover. The tools 'stick' and 'snot' were designed for this purpose: they generate a large number of IDS alerts by sending attack signatures across the network, but will not trigger alerts in IDS that maintain application protocol context.
References
External links
Evasions in IDS/IPS, Abhishek Singh, Virus Bulletin, April 2010.
Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection Thomas Ptacek, Timothy Newsham. Technical Report, Secure Networks, Inc., January 1998.
IDS evasion with Unicode Eric Packer. last updated January 3, 2001.
Fragroute home page
Fragrouter source code
Nikto home page (outdated; see https://cirt.net/nikto2)
Phrack 57 phile 0x03 mentioning the TCP Urgent pointer
Whisker home page
Sandcat home page
Snort's stream4 preprocessor for stateful packet reassembly
Evasions in the wild blog on evasions found in the Shadow Brokers leak
Computer security exploits | Intrusion detection system evasion techniques | [
"Technology"
] | 1,978 | [
"Computer security exploits"
] |
9,332,507 | https://en.wikipedia.org/wiki/Leibniz%E2%80%93Clarke%20correspondence | The Leibniz–Clarke correspondence was a scientific, theological and philosophical debate conducted in an exchange of letters between the German thinker Gottfried Wilhelm Leibniz and Samuel Clarke, an English supporter of Isaac Newton during the years 1715 and 1716. The exchange began because of a letter Leibniz wrote to Caroline of Ansbach, in which he remarked that Newtonian physics was detrimental to natural theology. Eager to defend the Newtonian view, Clarke responded, and the correspondence continued until the death of Leibniz in 1716.
Although a variety of subjects are touched on in the letters, the main interest for modern readers is in the dispute between the absolute theory of space favoured by Newton and Clarke, and Leibniz's relational approach. Also important is the conflict between Clarke's and Leibniz's opinions on free will and whether God must create the best of all possible worlds.
Leibniz had published only one book on moral matters, the Théodicée (1710), and his more metaphysical views had never been exposed to a sufficient extent, so the collected letters were met with interest by their contemporaries. The primary dispute between Leibniz and Newton about calculus was still fresh in the public's mind and it was taken as a matter of course that it was Newton himself who stood behind Clarke's replies.
Editions
The Leibniz-Clarke letters were first published under Clarke's name in the year following Leibniz's death. Clarke wrote a preface, took care of the translation from French, and added notes and some of his own writing. In 1720 Pierre Desmaizeaux published a similar volume in a French translation, including quotes from Newton's work. It is quite certain that for both editions the opinion of Newton himself was sought, leaving Leibniz at a disadvantage. However, the German translation of the correspondence published by Kohler, also in 1720, contained a reply to Clarke's last letter which Leibniz had not been able to answer because of his death. The letters have been reprinted in most collections of Leibniz's works and regularly published in stand-alone editions.
See also
Philosophy of space and time
Principle of sufficient reason
Notes
References
G.V. Leroy, Die philosophische Probleme in dem Briefwechsel Leibniz und Clarke, Giessen, 1893.
Rowe, William L., "Can God Be Free?", Oxford UP, 2004. .
External links
Complete transcription of the 1717 edition at The Newton Project
Stanford encyclopedia of philosophy - Divine Freedom
1715 documents
1716 documents
Philosophical debates
Historical physics publications
Space
Correspondences
Works by Gottfried Wilhelm Leibniz
Caroline of Ansbach | Leibniz–Clarke correspondence | [
"Physics",
"Mathematics"
] | 539 | [
"Spacetime",
"Space",
"Geometry"
] |
9,332,907 | https://en.wikipedia.org/wiki/OGNL | Object-Graph Navigation Language (OGNL) is an open-source Expression Language (EL) for Java, which, while using simpler expressions than the full range of those supported by the Java language, allows getting and setting properties (through defined setProperty and getProperty methods, found in JavaBeans), and execution of methods of Java classes. It also allows for simpler array manipulation.
It is aimed to be used in Java EE applications with taglibs as expression language.
OGNL was created by Luke Blanshard and Drew Davidson of OGNL Technology. OGNL development was continued by OpenSymphony, which closed in 2011. OGNL is developed now as a part of the Apache Commons.
OGNL Technology
OGNL began as a way to map associations between front-end components and back-end objects using property names. As these associations gathered more features, Drew Davidson created Key-Value Coding language (KVCL). Luke Blanshard then reimplemented KVCL using ANTLR and started using the name OGNL. The technology was again reimplemented using the Java Compiler Compiler (JavaCC).
OGNL uses Java reflection and introspection to address the Object Graph of the runtime application. This allows the program to change behavior based on the state of the object graph instead of relying on compile time settings. It also allows changes to the object graph.
Projects using OGNL
WebWork and its successor Struts2
Tapestry (4 and earlier)
Spring Web Flow
Apache Click
MyBatis - SQL mapper framework
Thymeleaf - A Java XML/XHTML/HTML5 template engine
FreeMarker - A Java template engine
OGNL security issues
Due to its ability to create or change executable code, OGNL is capable of introducing critical security flaws to any framework that uses it. Multiple Apache Struts 2 versions have been vulnerable to OGNL security flaws. As of October 2017, the recommended version of Struts 2 is 2.5.13. Users are urged to upgrade to the latest version, as older revisions have documented security vulnerabilities — for example, Struts 2 versions 2.3.5 through 2.3.31, and 2.5 through 2.5.10, allow remote attackers to execute arbitrary code. Atlassian Confluence has repeatedly been affected by OGNL security issues that allowed arbitrary remote code execution, and required all users to update.
See also
MVEL
External links
OGNL 3.x maintenance branch
OGNL 4.x Homepage (Apache)
Apache Struts CVE-2013-2134 OGNL Expression Injection Vulnerability
References
Scripting languages
Free software programmed in Java (programming language)
Java platform
Software using the BSD license
OGNL | OGNL | [
"Technology"
] | 564 | [
"Computing platforms",
"Java platform"
] |
9,333,381 | https://en.wikipedia.org/wiki/Fryingpan%E2%80%93Arkansas%20Project | The Fryingpan–Arkansas Project, or "Fry-Ark," is a water diversion, storage and delivery project serving southeastern Colorado. The multi-purpose project was authorized in 1962 by President Kennedy to serve municipal, industrial, and hydroelectric power generation, and to enhance recreation, fish and wildlife interests. Construction began in 1964 and was completed in 1981. The project includes five dams and reservoirs, one federal hydroelectric power plant (two private, FERC regulated plants), and 22 tunnels and conduits totaling in length. The Bureau of Reclamation, under the Department of the Interior built and manages the project.
Like its sister–project, the Colorado–Big Thompson Project, the Fry-Ark brings available water from Colorado's West Slope to the more arid, and more heavily populated, East Slope, providing supplemental water to over 720,000 people and of irrigable land in Colorado Springs, Pueblo, La Junta, Lamar, and other southeastern Colorado municipalities each year.
Operation
The project diverts and delivers an average of of water a year. However, the water right on the Fry-Ark allows for a diversion of over the course of 34 consecutive years, but not to exceed a diversion of in any one single year. In 2011, when Colorado had an abundance of snow, the Fry-Ark imported about from the West Slope, the second highest diversion amount in the project's 50-year operating history. The following year, 2012, snowpack was scarce and drought returned to the state. As a result, the project was only able to import roughly of water.
Before the Fry-Ark Project could be built in its entirety, a compromise had to be struck between East and West Slope water politics. The result was the construction of Ruedi Reservoir, upstream on the Fryingpan River from Basalt, Colorado. Ruedi provides water to Colorado's West Slope, in part to compensate for what is diverted further upstream.
Water is diverted from the West Slope's Fryingpan River basin. A series of interconnected tunnels carrying water from 16 small diversion dams, all at an elevation of above , collect snowmelt and run it, via gravity, to the Charles H. Boustead Tunnel. The Boustead conveys water underneath the Continental Divide before discharging it into Turquoise Lake just west of Leadville. Water then leaves Turquoise Lake reservoir via the Mt. Elbert Conduit, which runs nearly to the Mt. Elbert Forebay. Water is stored in the forebay to build up head (energy) before being dropped down over in elevation to the hydroelectric Mt. Elbert Powerplant.
The power plant takes its name from Mount Elbert, Colorado's tallest peak, and sits at its base. The two-unit facility is the largest hydroelectric power plant in Colorado. It has a nameplate capacity of 200 megawatts and a maximum generating head of . During night time hours, when power rates are less expensive, the reversible pump-back units return water from Twin Lakes—water that was already used at least once by the units to generate electricity—back to the forebay so it can flow down again for more power generation. The Western Area Power Administration markets the power generated at the plant.
Water exiting the Mt. Elbert Power Plant helps fill Twin Lakes Reservoir, a natural lake bed that was enlarged and impounded by the Twin Lakes Dam during 1978–1980. The reservoir sits on Lake Creek which runs down from Independence Pass. Water from the reservoir continues down Lake Creek to the Arkansas River, which is the main delivery vehicle for the Fry-Ark project.
Pueblo Reservoir, the centerpiece of Lake Pueblo State Park, is the last reservoir in the project and sits on the Arkansas just west of Pueblo. The majority of municipal and agricultural deliveries for the project are made out of Pueblo Reservoir before the water continues east to Kansas via the Arkansas.
References
External links
Fryingpan-Arkansas Project 50th Anniversary Film, Bureau of Reclamation
The Great Plains Region
The Eastern Colorado Area Office
Lake Pueblo Water Levels
Twin Lakes Water Levels
Turquoise Reservoir Water Levels
Ruedi Reservoir Water Levels
Energy infrastructure completed in 1981
Energy infrastructure in Colorado
Buildings and structures in Colorado
United States Bureau of Reclamation
Interbasin transfer
Hydroelectric power plants in Colorado | Fryingpan–Arkansas Project | [
"Environmental_science"
] | 856 | [
"Hydrology",
"Interbasin transfer"
] |
9,333,545 | https://en.wikipedia.org/wiki/Stomp%20rocket | A stomp rocket is a flying toy rocket that is powered by the release of compressed air.
The rocket has a hollow body that fits over a launch tube. The launch tube is a hollow, rigid pipe, with an opening into the rocket body, and a connection to a pipe that is connected to an air bladder. Typical bladders are an air pump, or a flexible bottle (e.g. plastic drink bottle).
When pressure is applied to the bladder (compressing the container) the air contained within is expelled from the bladder. In many home made versions, the bladder is a recycled drinks bottle, from which air is released rapidly by the user jumping, or 'stomping' on the bottle - hence "stomp rocket". The expelled air rushes through the connecting pipe and into the body of the rocket, causing a pressurisation of the air in the rocket's hollow body. The air within the rocket body has to escape, and is expelled to the rear of the rocket, causing the rocket to accelerate upwards along the launch tube in the opposite direction, and then lift into the sky.
The launch tube may be mounted on an adjustable clamp to enable the direction of the launch to be set by the user.
The thrust of a stomp rocket is completely expended in the first instant of flight. For the majority of the flight, only gravity and aerodynamic forces act on the rocket, unlike a water rocket or a 'real' rocket with motors and fuel on board, which apply thrust force to the rocket for a large portion of the flight.
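Because all thrust is delivered at launch, the subsequent flight is well approximated by simple projectile motion. The sketch below ignores drag and assumes a hypothetical launch speed, so it illustrates the idea rather than any measured rocket:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def apex_and_range(v0: float, angle_deg: float) -> tuple[float, float]:
    """Max height and horizontal range of drag-free flight after launch."""
    a = math.radians(angle_deg)
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    t_flight = 2 * vy / G               # time until return to launch height
    return vy**2 / (2 * G), vx * t_flight

print(apex_and_range(20.0, 45.0))  # ~ (10.2 m high, 40.8 m downrange)
```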
Commercial varieties
The Astroblast was a late 1970s-era toy rocket consisting of a plastic chamber with a plunger on top. An adjustable plastic pipe allowed a foam rubber rocket to be launched at any angle when the user stomped on the plunger. A later adaptation of the Astroblast substituted an air pump and release mechanism for the stomp chamber and plunger.
Stomp Rocket is a trademarked name, the owner being Fred Ramirez, President of D&L Company. D&L manufactured their first stomp rocket ever in the early 1990s. There are five versions of D&L Stomp rockets available: the Super High-Performance Stomp Rocket, which travels about 400 feet, the Ultra Stomp Rocket, which travels about 200 feet, the Ultra LED Stomp Rocket, which travels about 100 feet, the Ultra Dueling Stomp Rocket, which travels about 200 feet, and the Junior Stomp Rocket, which travels about 100 feet.
The highest stomp rocket flight reached about a quarter of a mile in the air. The rocket's operation is based mainly on the concepts of air pressure and sounding rockets.
External links
NASA.gov
Making stomp rockets
http://www.sciencetoymaker.org/airRocket/index.html
Manufacturers of stomp rockets
http://www.stomprocket.com
Model rockets
1990s toys
1970s toys | Stomp rocket | [
"Astronomy"
] | 594 | [
"Rocketry stubs",
"Astronomy stubs"
] |
9,333,574 | https://en.wikipedia.org/wiki/Individually%20ventilated%20cage | An individually ventilated cage (IVC) is used to keep an animal separated from other animals and possible exposures, including exposure by air.
Use
In laboratory animal husbandry, there is a strong demand for animals that have been kept in disease-free conditions and housed in barrier units such as individually ventilated cages. This is very important because when animals are used for scientific research, particularly drug-related research, they must provide accurate and valid results. Using an animal that is ill may cause the severity limit to be exceeded: if the animal already has a disease and then undergoes testing of a substance that also affects its health, the effects of the agent being tested could be worsened, causing the animal to experience more suffering than necessary. Ill animals may also produce false results, which may prove critical at a later stage, e.g., in drug trials on humans. In addition, the experiment would have to be performed again, and the previous animals would have been killed for nothing. Special caging systems are often used alongside many other barriers to keep unwanted materials away from the animals.
IVC system
General
The IVC systems in which the animals are kept protect them from micro-organisms by means of HEPA (high-efficiency particulate air) filters. All items to be passed into the barrier unit, including bedding material and food, must first be sterilised.
The cages are usually made out of high tech special synthetic polycarbonates. Although this material allows various methods of sterilising and disinfecting to be carried out, repeated sterilisation can cause discolouration and brittleness.
Design
The cages are constructed and designed in a specific way to ensure a microparticle-free inner environment. This generally includes a cage bottom, a cage top (with a food hopper and water bottle holder incorporated) and a filter lid. The design also allows maximum comfort of the animal and provides a secure, chew-proof environment. An external ventilation unit supplies the cages with fresh HEPA-filtered air, which passes through the filter lids. The ventilation system usually consists of two tubes, for ingoing and outgoing air.
Criticism
Individual cages with no environmental enrichment make it impossible for an animal to carry out species-specific behaviour and are a serious drawback in terms of animal welfare. In natural conditions many animals live in groups, which individual cages prevent. That said, IVCs are available in multiple sizes, holding either 1–5 animals or, in larger cages, 12–15 per cage.
References
Animal testing | Individually ventilated cage | [
"Chemistry"
] | 540 | [
"Animal testing"
] |
9,333,674 | https://en.wikipedia.org/wiki/Tonmeister | Tonmeister is a job description in the music and recording industries that describes a so-called "sound master" (a literal translation of the German Tonmeister): a person who creates recordings or broadcasts of music who is also both musically trained (in classical and non-classical genres) and has theoretical and practical knowledge.
The word tonmeister was trademarked in 1996 by the University of Surrey, United Kingdom. Also within the UK, the SAE Institute registered the term SAE Tonmeister. The title has been abbreviated to tonmeister in their registrations in several other countries, not including Germany, Switzerland or Austria. Members of the VDT may call themselves Tonmeister VDT.
Origins
The concept of a tonmeister dates back to 1946, when Arnold Schoenberg wrote a letter to the Chancellor of the University of Chicago suggesting a course to train "soundmen". Schoenberg wrote, "soundmen will be trained in music, acoustics, physics, mechanics and related fields to a degree enabling them to control and improve the sonority of recordings, radio broadcasts and sound films". It was also in this year that the University of Music Detmold in Germany started the first Tonmeister course.
References
Music production
Audio engineering | Tonmeister | [
"Engineering"
] | 251 | [
"Electrical engineering",
"Audio engineering"
] |
9,334,079 | https://en.wikipedia.org/wiki/Fauces%20%28throat%29 | The fauces, isthmus of fauces, or the oropharyngeal isthmus is the opening at the back of the mouth into the throat. It is a narrow passage between the velum and the base of the tongue.
The fauces is a part of the oropharynx directly behind the oral cavity as a subdivision, bounded superiorly by the soft palate, laterally by the palatoglossal and palatopharyngeal arches, and inferiorly by the tongue. The arches form the pillars of the fauces. The anterior pillar is the palatoglossal arch formed of the palatoglossus muscle. The posterior pillar is the palatopharyngeal arch formed of the palatopharyngeus muscle. Between these two arches on the lateral walls of the oropharynx is the tonsillar fossa which is the location of the palatine tonsil. The arches are also known together as the palatine arches.
Each arch runs downwards, laterally and forwards, from the soft palate to the side of the tongue. The approximation of the arches due to the contraction of the palatoglossal muscles constricts the fauces, and is essential to swallowing.
Faucitis
Inflammation of the fauces, known as faucitis, is seen in animals. In cats, faucitis is usually secondary to gingivitis but can be a primary disease. In this species it is usually caused by bacterial and viral infections, although food allergies need to be excluded in any diagnosis. Treatment is symptomatic and includes broad-spectrum antibiotics and, in severe cases where cats are inappetent, corticosteroids (often given in depot form, e.g. Depo-Medrol) or chemotherapy (e.g. chlorambucil).
See also
List of anatomical isthmi
References
Human throat
Digestive system
Pharynx
Animal anatomy
Animal diseases | Fauces (throat) | [
"Biology"
] | 411 | [
"Digestive system",
"Organ systems"
] |
9,334,285 | https://en.wikipedia.org/wiki/Interferon%20gamma%20release%20assay | Interferon-γ release assays (IGRA) are medical tests used in the diagnosis of some infectious diseases, especially tuberculosis. Interferon-γ (IFN-γ) release assays rely on the fact that T-lymphocytes will release IFN-γ when exposed to specific antigens. These tests are mostly developed for the field of tuberculosis diagnosis, but in theory, may be used in the diagnosis of other diseases that rely on cell-mediated immunity, e.g. cytomegalovirus and leishmaniasis and COVID-19. For example, in patients with cutaneous adverse drug reactions, the challenge of peripheral blood lymphocytes with the drug causing the reaction produced a positive test result for half of the drugs tested.
There are currently two IFN-γ release assays available for the diagnosis of tuberculosis:
QuantiFERON-TB Gold (licensed in US, Europe and Japan); and
T-SPOT.TB, a form of ELISpot, a variant of ELISA (licensed in Europe, US, Japan and China).
The former test quantitates the amount of IFN-γ produced in response to the ESAT-6 and CFP-10 antigens from Mycobacterium tuberculosis, which are distinguishable from those present in BCG and most other non-tuberculous mycobacteria. The latter test determines the total number of individual effector T cells expressing IFN-γ.
The indications for the test are still disputed. It has been evaluated for the diagnosis of latent tuberculosis in HIV patients (who frequently have a negative Mantoux test).
IFN-γ release assays for the diagnosis of SARS-CoV-2 (COVID-19):
The blood samples were collected in a set of lithium heparin tubes. The first tube was left unstimulated as a control; the second tube was stimulated with a single SARS-CoV-2 peptide pool for CD4+ T cells; the third tube was stimulated with a SARS-CoV-2 peptide pool for CD8+ T cells; and the fourth tube was stimulated with mitogen as a positive control. The single CD4+ T-cell mega pool (CD4+ pool) consisted of 221 predicted HLA class II CD4+ T-cell epitope peptides covering the entire viral proteome except for the spike protein, which was covered by 253 15-mer peptides overlapping by 10 residues; the 2 CD8+ T-cell mega pools (CD8+ pools A and B) together consisted of 628 predicted HLA class I CD8+ T-cell epitopes from the entire SARS-CoV-2 proteome. After overnight stimulation of the T cells, the IFN-γ concentration in the plasma fraction was measured by enzyme-linked immunosorbent assay (ELISA) in international units per milliliter (IU/mL) (Murugesan et al., 2021).
References
Immunologic tests
Tuberculosis | Interferon gamma release assay | [
"Biology"
] | 631 | [
"Immunologic tests"
] |
9,334,591 | https://en.wikipedia.org/wiki/Koinophilia | Koinophilia is an evolutionary hypothesis proposing that during sexual selection, animals preferentially seek mates with a minimum of unusual or mutant features, including functionality, appearance and behavior. Koinophilia intends to explain the clustering of sexual organisms into species and other issues described by Darwin's dilemma. The term derives from the Greek word koinos meaning "common" or "that which is shared", and philia, meaning "fondness".
Natural selection causes beneficial inherited features to become more common at the expense of their disadvantageous counterparts. The koinophilia hypothesis proposes that a sexually-reproducing animal would therefore be expected to avoid individuals with rare or unusual features, and to prefer to mate with individuals displaying a predominance of common or average features. Mutants with peculiar features would be avoided because most mutations that manifest themselves as changes in appearance, functionality or behavior are disadvantageous. Because it is impossible to judge whether a new mutation is beneficial (or might be advantageous in the unforeseeable future) or not, koinophilic animals avoid them all, at the cost of avoiding the very occasional potentially beneficial mutation. Thus, koinophilia, although not infallible in its ability to distinguish fit from unfit mates, is a good strategy when choosing a mate. A koinophilic choice ensures that offspring are likely to inherit a suite of features and attributes that have served all the members of the species well in the past.
Koinophilia differs from the "like prefers like" mating pattern of assortative mating. If like preferred like, leucistic animals (such as white peacocks) would be sexually attracted to one another, and a leucistic subspecies would come into being. Koinophilia predicts that this is unlikely because leucistic animals are attracted to the average in the same way as are all the other members of their species. Since non-leucistic animals are not attracted by leucism, few leucistic individuals find mates, and leucistic lineages will rarely form.
Koinophilia provides simple explanations for the almost universal canalization of sexual creatures into species, the rarity of transitional forms between species (between both extant and fossil species), evolutionary stasis, punctuated equilibria, and the evolution of cooperation. Koinophilia might also contribute to the maintenance of sexual reproduction, preventing its reversion to the much simpler asexual form of reproduction.
The koinophilia hypothesis is supported by the findings of Judith Langlois and her co-workers. They found that the average of two human faces was more attractive than either of the faces from which that average was derived. The more faces (of the same gender and age) that were used in the averaging process the more attractive and appealing the average face became. This work into averageness supports koinophilia as an explanation of what constitutes a beautiful face.
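A sketch of the averaging operation itself, assuming faces are represented as aligned landmark coordinate arrays (a common simplification; Langlois's stimuli were averaged images, not landmark sets):

```python
import numpy as np

def average_face(faces: np.ndarray) -> np.ndarray:
    """Average N aligned faces, each given as a (landmarks, 2) array.

    faces has shape (N, landmarks, 2); the composite is the
    per-landmark mean, which smooths out each face's idiosyncrasies.
    """
    return faces.mean(axis=0)

# Three toy "faces", each with 4 landmark points (x, y).
rng = np.random.default_rng(0)
base = np.array([[0, 0], [2, 0], [1, 1], [1, 2]], dtype=float)
faces = base + rng.normal(0, 0.2, size=(3, 4, 2))  # individual variation
composite = average_face(faces)
print(composite)  # closer to `base` than any single face, on average
```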
Speciation and punctuated equilibria
Biologists from Darwin onwards have puzzled over how evolution produces species whose adult members look extraordinarily alike, and distinctively different from the members of other species. Lions and leopards are, for instance, both large carnivores that inhabit the same general environment, and hunt much the same prey, but look quite different. The question is why intermediates do not exist.
This is the "horizontal" dimension of a two-dimensional problem, referring to the almost complete absence of transitional or intermediate forms between present-day species (e.g. between lions, leopards, and cheetahs).
The "vertical" dimension concerns the fossil record. Fossil species are frequently remarkably stable over extremely long periods of geological time, despite continental drift, major climate changes, and mass extinctions. When a change in form occurs, it tends to be abrupt in geological terms, again producing phenotypic gaps (i.e. an absence of intermediate forms), but now between successive species, which then often co-exist for long periods of time. Thus the fossil record suggests that evolution occurs in bursts, interspersed by long periods of evolutionary stagnation in so-called punctuated equilibria. Why this is so has been an evolutionary enigma ever since Darwin first recognized the problem.
Koinophilia could explain both the horizontal and vertical manifestations of speciation, and why it, as a general rule, involves the entire external appearance of the animals concerned. Since koinophilia affects the entire external appearance, the members of an interbreeding group are driven to look alike in every detail. Each interbreeding group will rapidly develop its own characteristic appearance. An individual from one group which wanders into another group will consequently be recognized as different, and will be discriminated against during the mating season. Reproductive isolation induced by koinophilia might thus be the first crucial step in the development of, ultimately, physiological, anatomical and behavioral barriers to hybridization, and thus, ultimately, full specieshood. Koinophilia will thereafter defend that species' appearance and behavior against invasion by unusual or unfamiliar forms (which might arise by immigration or mutation), and thus be a paradigm of punctuated equilibria (or the "vertical" aspect of the speciation problem).
Evolution under koinophilic conditions
Background
Evolution can be extremely rapid, as shown by the creation of domesticated animals and plants in a very short period of geological time, spanning only a few tens of thousands of years, by humans with little or no knowledge of genetics. Maize, Zea mays, for instance, was created in Mexico in only a few thousand years, starting about 7,000 to 12,000 years ago. This raises the question of why the long-term rate of evolution is far slower than is theoretically possible.
Evolution is imposed on species or groups. It is not planned or striven for in some Lamarckist way. The mutations on which the process depends are random events, and, except for the "silent mutations" which do not affect the functionality or appearance of the carrier, are thus usually disadvantageous, and their chance of proving to be useful in the future is vanishingly small. Therefore, while a species or group might benefit by being able to adapt to a new environment through the accumulation of a wide range of genetic variation, this is to the detriment of the individuals who have to carry these mutations until a small, unpredictable minority of them ultimately contributes to such an adaptation. Thus, the capability to evolve would be a group adaptation, a concept discredited by, among others, George C. Williams, John Maynard Smith and Richard Dawkins, because it is not to the benefit of the individual.
Consequently, sexual individuals would be expected to avoid transmitting mutations to their progeny by avoiding mates with strange or unusual characteristics. Therefore, mutations that affect the external appearance and habits of their carriers will seldom be passed on to the next and subsequent generations. They will therefore seldom be tested by natural selection. Evolutionary change in a large population with a wide choice of mates will, therefore, come to a virtual standstill. The only mutations that can accumulate in a population are ones that have no noticeable effect on the outward appearance and functionality of their bearers (they are thus termed "silent" or "neutral mutations").
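A toy simulation of this argument, under simplifying assumptions of my own (a single quantitative trait, Gaussian mutation, and a mate-choice rule that rejects partners lying far from the population mean):

```python
import random

def simulate(generations=200, pop_size=500, mut_rate=0.05,
             mut_size=1.0, tolerance=0.5, koinophilia=True):
    """Track a quantitative trait under koinophilic mate choice.

    With koinophilia on, individuals whose trait deviates from the
    population mean by more than `tolerance` are rejected as mates,
    so mutant phenotypes rarely reproduce and the mean stays put.
    """
    pop = [0.0] * pop_size
    for _ in range(generations):
        mean = sum(pop) / len(pop)
        if koinophilia:
            parents = [x for x in pop if abs(x - mean) <= tolerance]
            parents = parents or pop  # guard against an empty mate pool
        else:
            parents = pop
        offspring = []
        for _ in range(pop_size):
            child = random.choice(parents)
            if random.random() < mut_rate:
                child += random.gauss(0, mut_size)
            offspring.append(child)
        pop = offspring
    return sum(pop) / len(pop), max(pop) - min(pop)

random.seed(1)
print("koinophilic:", simulate(koinophilia=True))     # mean near 0, narrow spread
print("random mating:", simulate(koinophilia=False))  # mean drifts, wide spread
```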
Evolutionary process
The restraint koinophilia exerts on phenotypic change suggests that evolution can only occur if mutant mates cannot be avoided as a result of a severe scarcity of potential mates. This is most likely to occur in small restricted communities, such as on small islands, in remote valleys, lakes, river systems, caves, or during periods of glaciation, or following mass extinctions, when sudden bursts of evolution can be expected. Under these circumstances, not only is the choice of mates severely restricted, but population bottlenecks, founder effects, genetic drift and inbreeding cause rapid, random changes in the isolated population's genetic composition. Furthermore, hybridization with a related species trapped in the same isolate might introduce additional genetic changes. If an isolated population such as this survives its genetic upheavals, and subsequently expands into an unoccupied niche, or into a niche in which it has an advantage over its competitors, a new species, or subspecies, will have come into being. In geological terms this will be an abrupt event. A resumption of avoiding mutant mates will, thereafter, result, once again, in evolutionary stagnation.
Thus the fossil record of an evolutionary progression typically consists of species that suddenly appear, and ultimately disappear hundreds of thousands or millions of years later, without any change in external appearance. Graphically, these fossil species are represented by horizontal lines, whose lengths depict how long each of them existed. The horizontality of the lines illustrates the unchanging appearance of each of the fossil species depicted on the graph. During each species' existence new species appear at random intervals, each also lasting many hundreds of thousands of years before disappearing without a change in appearance. The degree of relatedness and the lines of descent of these concurrent species are generally impossible to determine. This is illustrated in the following diagram depicting the evolution of modern humans from the time that the hominins separated from the line that led to the evolution of our closest living primate relatives, the chimpanzees.
Phenotypic implications
This proposal, that population bottlenecks are possibly the primary generators of the variation that fuels evolution, predicts that evolution will usually occur in intermittent, relatively large scale morphological steps, interspersed with prolonged periods of evolutionary stagnation, instead of in a continuous series of finely graded changes. However, it makes a further prediction. Darwin emphasized that the shared biologically useless oddities and incongruities that characterize a species are signs of an evolutionary history – something that would not be expected if a bird's wing, for instance, was engineered de novo, as argued by his detractors. The present model predicts that, in addition to vestiges which reflect an organism's evolutionary heritage, all the members of a given species will also bear the stamp of their isolationary past – arbitrary, random features, accumulated through founder effects, genetic drift and the other genetic consequences of sexual reproduction in small, isolated communities. Thus all lions, African and Asian, have a highly characteristic black tuft of fur at the end of their tails, which is difficult to explain in terms of an adaptation, or as a vestige from an early feline, or more ancient ancestor. The unique, often color- and pattern-rich plumage of each of today's wide variety of bird species presents a similar evolutionary enigma. This richly varied array of phenotypes is more easily explained as the products of isolates, subsequently defended by koinophilia, than as assemblies of very diverse evolutionary relics, or as sets of uniquely evolved adaptations.
Evolution of co-operation
Co-operation is any group behavior that benefits the individuals more than if they were to act as independent agents.
However selfish individuals can exploit the co-operativeness of others by not taking part in the group activity, but still enjoying its benefits. For instance, a selfish individual which does not join the hunting pack and share in its risks, but nevertheless shares in the spoils, has a fitness advantage over the other members of the pack. Thus, although a group of co-operative individuals is fitter than an equivalent group of selfish individuals, selfish individuals interspersed among a community of co-operators are always fitter than their hosts. They will raise, on average, more offspring than their hosts, and will ultimately replace them.
If, however, the selfish individuals are ostracized, and rejected as mates, because of their deviant and unusual behavior, then their evolutionary advantage becomes an evolutionary liability. Co-operation then becomes evolutionarily stable.
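A sketch of why ostracism flips the payoff, using an illustrative haploid model in which defectors gain a free-riding bonus but, under koinophilic ostracism, are rarely accepted as mates. The function name and all parameter values are assumptions for illustration:

```python
import random

def cooperation_sim(generations=100, pop_size=400, defector_bonus=1.5,
                    ostracism=True, mate_penalty=0.1):
    """Return the final fraction of cooperators.

    Each individual's weight is its expected reproductive success.
    Defectors free-ride (weight * defector_bonus); with ostracism,
    their chance of being accepted as a mate is cut to mate_penalty.
    """
    pop = ["C"] * (pop_size // 2) + ["D"] * (pop_size // 2)
    for _ in range(generations):
        weights = []
        for ind in pop:
            w = defector_bonus if ind == "D" else 1.0
            if ostracism and ind == "D":
                w *= mate_penalty  # rejected by most potential mates
            weights.append(w)
        pop = random.choices(pop, weights=weights, k=pop_size)
    return pop.count("C") / pop_size

random.seed(2)
print("with ostracism:", cooperation_sim(ostracism=True))      # ~1.0
print("without ostracism:", cooperation_sim(ostracism=False))  # ~0.0
```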
Effects of diets and environmental conditions
The best-documented creations of new species in the laboratory were performed in the late 1980s. William Rice and G.W. Salt bred fruit flies, Drosophila melanogaster, using a maze with three different choices of habitat, such as light/dark and wet/dry. Each generation was placed into the maze, and the groups of flies that came out of two of the eight exits were set apart to breed with each other in their respective groups. After thirty-five generations, the two groups and their offspring were isolated reproductively because of their strong habitat preferences: they mated only within the areas they preferred, and so did not mate with flies that preferred the other areas. The history of such attempts is described in Rice and Hostert (1993).
Diane Dodd used a laboratory experiment to show how reproductive isolation can evolve in Drosophila pseudoobscura fruit flies after several generations by placing them in different media, starch- or maltose-based media.
Dodd's experiment has been easy for many others to replicate, including with other kinds of fruit flies and foods.
The carrion crow (Corvus corone) and hooded crow (Corvus cornix) are two closely related species whose geographical distribution across Europe is illustrated in the accompanying diagram. It is believed that this distribution might have resulted from the glaciation cycles during the Pleistocene, which caused the parent population to split into isolates that subsequently re-expanded their ranges when the climate warmed, causing secondary contact. Jelmer W. Poelstra and coworkers sequenced almost the entire genomes of both species in populations at varying distances from the contact zone and found that the two species were genetically identical, both in their DNA and in its expression (in the form of RNA), except for the lack of expression of a small portion (<0.28%) of the genome (situated on avian chromosome 18) in the hooded crow, which imparts the lighter plumage coloration on its torso. Thus the two species can viably hybridize, and occasionally do so at the contact zone, but the all-black carrion crows on one side of the contact zone mate almost exclusively with other all-black carrion crows, while the same occurs among the hooded crows on the other side. It is therefore clear that it is only the outward appearance of the two species that inhibits hybridization. The authors attribute this to assortative mating, the advantage of which is not clear: it would lead to the rapid appearance of streams of new lineages, and possibly even species, through mutual attraction between mutants. Unnikrishnan and Akhila propose, instead, that koinophilia is a more precise explanation for the resistance to hybridization across the contact zone, despite the absence of physiological, anatomical or genetic barriers to such hybridization.
Reception
William B. Miller, in an extensive 2013 review of koinophilia theory, notes that while it provides precise explanations for the grouping of sexual animals into species, their unchanging persistence in the fossil record over long periods of time, and the phenotypic gaps between species, both fossil and extant, it represents a major departure from the widely accepted view that beneficial mutations spread, ultimately, to the whole, or some portion, of the population (causing it to evolve gene by gene). Darwin recognized that this process had no inherent or inevitable propensity to produce species. Instead, populations would be in a perpetual state of transition both in time and space. They would, at any given moment, consist of individuals with varying numbers of beneficial characteristics that may or may not have reached them from their various points of origin in the population, while neutral features would have a distribution determined by random mechanisms such as genetic drift.
He also notes that koinophilia provides no explanation of how the physiological, anatomical and genetic causes of reproductive isolation come about; only behavioral reproductive isolation is addressed by koinophilia. It is furthermore difficult to see how koinophilia might apply to plants and to certain marine creatures that discharge their gametes into the environment, where they meet and fuse, it seems, entirely at random (within conspecific confines). However, when pollen from several compatible donors is used to pollinate stigmata, the donors typically do not sire equal numbers of seeds. Marshall and Diggle state that the existence of some kind of non-random seed paternity is, in fact, not in question in flowering plants. How this occurs remains unknown. Pollen choice is one of the possibilities, taking into account that 50% of the pollen grain's haploid genome is expressed during its tube's growth towards the ovule.
The apparent preference of the females of certain, particularly bird, species for exaggerated male ornaments, such as the peacock's tail, is not easily reconciled with the concept of koinophilia.
References
Evolutionary biology
Speciation
Reproduction in animals
Selection
Population genetics
Sexual selection
Articles which contain graphical timelines | Koinophilia | [
"Biology"
] | 3,436 | [
"Evolutionary biology",
"Reproduction in animals",
"Behavior",
"Evolutionary processes",
"Selection",
"Speciation",
"Reproduction",
"Sexual selection",
"Mating"
] |
9,334,710 | https://en.wikipedia.org/wiki/Nearby%20Stars%20Database | The Nearby Stars Database (NStars) began as a NASA project in 1998 and was later based at Northern Arizona University; it is now defunct. The stated mission of NStars was "to be a complete and accurate source of scientific data about all stellar systems within 25 parsecs." The website (see below) included search tools and links to an interactive forum.
Status
As of 1 January 2002, there were 2,633 stars in 2,029 systems in the database. As of 29 January 2008, the site is closed, displaying the message "This site is currently undergoing a major redesign and will be returned to service at a later date."
References
External links
Nearby Stars Database on the Wayback Machine
The Extrasolar Planets Encyclopaedia
Near Star Catalogue (an unofficial update of the NSTARS database)
Old Table from Wayback Machine Web Archive
List as of 1 January 2009
Astronomical catalogues of stars
Databases in the United States | Nearby Stars Database | [
"Astronomy"
] | 196 | [
"Stellar astronomy stubs",
"Astronomy stubs"
] |
9,334,855 | https://en.wikipedia.org/wiki/Nuclear%20matter | Nuclear matter is an idealized system of interacting nucleons (protons and neutrons) that exists in several phases of exotic matter that are, as yet, not fully established. It is not matter in an atomic nucleus, but a hypothetical substance consisting of a huge number of protons and neutrons held together by only nuclear forces and no Coulomb forces. Volume and the number of particles are infinite, but their ratio (the density) is finite. Infinite volume implies no surface effects and translational invariance (only differences in position matter, not absolute positions).
A common idealization is symmetric nuclear matter, which consists of equal numbers of protons and neutrons, with no electrons.
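As a worked illustration (a textbook free-Fermi-gas relation, not taken from any source cited here): with spin and isospin giving a degeneracy of 4, the nucleon number density and Fermi momentum of symmetric nuclear matter are related by

$$\rho = \frac{4}{6\pi^2}\,k_F^3 = \frac{2k_F^3}{3\pi^2}, \qquad k_F = \left(\frac{3\pi^2\rho}{2}\right)^{1/3},$$

so the empirical saturation density $\rho_0 \approx 0.16\ \mathrm{fm}^{-3}$ corresponds to $k_F \approx 1.33\ \mathrm{fm}^{-1}$, a standard benchmark in nuclear-matter calculations.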
When nuclear matter is compressed to sufficiently high density, it is expected, on the basis of the asymptotic freedom of quantum chromodynamics, that it will become quark matter, which is a degenerate Fermi gas of quarks.
Some authors use "nuclear matter" in a broader sense, refer to the model described above as "infinite nuclear matter", and consider it a "toy model", a testing ground for analytical techniques. However, the matter composing a neutron star, which requires more than neutrons and protons, is not necessarily locally charge neutral, and does not exhibit translational invariance; it is often referred to differently, for example as neutron star matter or stellar matter, and is considered distinct from nuclear matter. In a neutron star, pressure rises from zero (at the surface) to an unknown large value in the center.
Methods capable of treating finite regions have been applied to stars and to atomic nuclei. One such model for finite nuclei is the liquid drop model, which includes surface effects and Coulomb interactions.
See also
QCD vacuum
Quark–gluon plasma
Degenerate matter
Neutron-degenerate matter
Strange matter
Nuclear structure
Neutronium
Nuclear physics
Nuclear spectroscopy
References
Nuclear physics
Phases of matter | Nuclear matter | [
"Physics",
"Chemistry"
] | 396 | [
"Phases of matter",
"Matter",
"Nuclear physics"
] |
9,335,064 | https://en.wikipedia.org/wiki/Ledinegg%20instability | In fluid dynamics, the Ledinegg instability occurs in two-phase flow, especially in a boiler tube, when the boiling boundary is within the tube. For a given mass flux J through the tube, the pressure drop per unit length (which typically varies as the square of the mass flux and inversely as the density, i.e., as $J^2/\rho$) is much less when the flow is wholly of liquid than when the flow is wholly of steam. Thus, as the boiling boundary moves up the tube, the total pressure drop falls, potentially increasing the flow in an unstable manner. Boiler tubes normally overcome this (which is effectively a 'negative resistance' regime) by incorporating a narrow orifice at the entry, to give a stabilising pressure drop on entry.
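In the usual textbook statement (a sketch; the notation here is assumed, with $\Delta p_{\mathrm{int}}$ the internal or "demand" pressure-drop characteristic of the heated channel and $\Delta p_{\mathrm{ext}}$ the external or "supply" characteristic of the pump), a steady operating point becomes unstable when

$$\frac{\partial(\Delta p_{\mathrm{int}})}{\partial J} < \frac{\partial(\Delta p_{\mathrm{ext}})}{\partial J},$$

which can only happen on the negative-slope branch of the channel's N-shaped $\Delta p$–$J$ curve. The inlet orifice adds a stabilising term proportional to $J^2$ that steepens the internal characteristic until its slope is positive everywhere.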
References
Ruspini, Two-phase flow instabilities: A review, IJHMT, 71, 2013
System Instabilities https://web.archive.org/web/20060721232210/http://caltechbook.library.caltech.edu/51/01/chap15.pdf
http://authors.library.caltech.edu/25021/1/chap15.pdf
Fluid dynamics | Ledinegg instability | [
"Chemistry",
"Engineering"
] | 255 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
9,335,254 | https://en.wikipedia.org/wiki/Polyclonal%20B%20cell%20response | Polyclonal B cell response is a natural mode of immune response exhibited by the adaptive immune system of mammals. It ensures that a single antigen is recognized and attacked through its overlapping parts, called epitopes, by multiple clones of B cells.
In the course of normal immune response, parts of pathogens (e.g. bacteria) are recognized by the immune system as foreign (non-self), and eliminated or effectively neutralized to reduce their potential damage. Such a recognizable substance is called an antigen. The immune system may respond in multiple ways to an antigen; a key feature of this response is the production of antibodies by B cells (or B lymphocytes) involving an arm of the immune system known as humoral immunity. The antibodies are soluble and do not require direct cell-to-cell contact between the pathogen and the B-cell to function.
Antigens can be large and complex substances, and any single antibody can only bind to a small, specific area on the antigen. Consequently, an effective immune response often involves the production of many different antibodies by many different B cells against the same antigen. Hence the term "polyclonal", which derives from the words poly, meaning many, and clones from Greek klōn, meaning sprout or twig; a clone is a group of cells arising from a common "mother" cell. The antibodies thus produced in a polyclonal response are known as polyclonal antibodies. The heterogeneous polyclonal antibodies are distinct from monoclonal antibody molecules, which are identical and react against a single epitope only, i.e., are more specific.
Although the polyclonal response confers advantages on the immune system, in particular, greater probability of reacting against pathogens, it also increases chances of developing certain autoimmune diseases resulting from the reaction of the immune system against native molecules produced within the host.
Humoral response to infection
Diseases which can be transmitted from one organism to another are known as infectious diseases, and the causative biological agent involved is known as a pathogen. The process by which the pathogen is introduced into the body is known as inoculation, and the organism it affects is known as a biological host. When the pathogen establishes itself in a step known as colonization, it can result in an infection, consequently harming the host directly or through the harmful substances called toxins it can produce. This results in the various symptoms and signs characteristic of an infectious disease like pneumonia or diphtheria.
Countering the various infectious diseases is very important for the survival of the susceptible organism, in particular, and the species, in general. This is achieved by the host by eliminating the pathogen and its toxins or rendering them nonfunctional. The collection of various cells, tissues and organs that specializes in protecting the body against infections is known as the immune system. The immune system accomplishes this through direct contact of certain white blood cells with the invading pathogen, involving an arm of the immune system known as cell-mediated immunity, or by producing substances that move to sites distant from where they are produced, "seek" the disease-causing cells and toxins by specifically binding with them, and neutralize them in the process–known as the humoral arm of the immune system. Such substances are known as soluble antibodies and perform important functions in countering infections.
B cell response
Antibodies serve various functions in protecting the host against the pathogen. Their soluble forms which carry out these functions are produced by plasma B cells, a type of white blood cell. This production is tightly regulated and requires the activation of B cells by activated T cells (another type of white blood cell), which is a sequential procedure. The major steps involved are:
Specific or nonspecific recognition of the pathogen (because of its antigens) with its subsequent engulfing by B cells or macrophages. This activates the B cell only partially.
Antigen processing.
Antigen presentation.
Activation of the T helper cells by antigen-presenting cells.
Co-stimulation of the B cell by activated T cell resulting in its complete activation.
Proliferation of B cells with resultant production of soluble antibodies.
Recognition of pathogens
Pathogens synthesize proteins that can serve as "recognizable" antigens; they may express the molecules on their surface or release them into the surroundings (body fluids). What makes these substances recognizable is that they bind very specifically and somewhat strongly to certain host proteins called antibodies. The same antibodies can be anchored to the surface of cells of the immune system, in which case they serve as receptors, or they can be secreted in the blood, known as soluble antibodies. On a molecular scale, the proteins are relatively large, so they cannot be recognized as a whole; instead, their segments, called epitopes, can be recognized. An epitope comes in contact with a very small region (of 15–22 amino acids) of the antibody molecule; this region is known as the paratope. In the immune system, membrane-bound antibodies are the B-cell receptor (BCR). Also, while the T-cell receptor is not biochemically classified as an antibody, it serves a similar function in that it specifically binds to epitopes complexed with major histocompatibility complex (MHC) molecules. The binding between a paratope and its corresponding antigen is very specific, owing to its structure, and is guided by various noncovalent bonds, not unlike the pairing of other types of ligands (any atom, ion or molecule that binds with any receptor with at least some degree of specificity and strength). The specificity of binding does not arise out of a rigid lock and key type of interaction, but rather requires both the paratope and the epitope to undergo slight conformational changes in each other's presence.
Specific recognition of epitope by B cells
Epitopes are often depicted as continuously collinear, meaning that they are shown as sequential; however, for the situation being discussed here (i.e., antigen recognition by the B cell), this picture is too simplistic. Epitopes whose amino acids all lie in the same sequence (line) are known as sequential or linear epitopes. This mode of recognition is possible only when the peptide is small (about six to eight amino acids long), and is employed by T cells (T lymphocytes).
However, the B memory/naive cells recognize intact proteins present on the pathogen surface. In this situation, the protein in its tertiary structure is so greatly folded that some loops of amino acids come to lie in the interior of the protein, and the segments that flank them may lie on the surface. The paratope on the B cell receptor comes in contact only with those amino acids that lie on the surface of the protein. The surface amino acids may actually be discontinuous in the protein's primary structure, but get juxtaposed owing to the complex protein folding patterns (as in the adjoining figure). Such epitopes are known as conformational epitopes and tend to be longer (15–22 amino acid residues) than the linear epitopes. Likewise, the antibodies produced by the plasma cells belonging to the same clone would bind to the same conformational epitopes on the pathogen proteins.
The binding of a specific antigen with corresponding BCR molecules results in increased production of the MHC-II molecules. This is significant because the same does not happen when the same antigen is internalized by a relatively nonspecific process called pinocytosis, in which the antigen with the surrounding fluid is "drunk" as a small vesicle by the B cell. Hence, such an antigen is known as a nonspecific antigen and does not lead to activation of the B cell or subsequent production of antibodies against it.
Nonspecific recognition by macrophages
Macrophages and related cells employ a different mechanism to recognize the pathogen. Their receptors recognize certain motifs present on the invading pathogen that are very unlikely to be present on a host cell. Such repeating motifs are recognized by pattern recognition receptors (PRRs) like the toll-like receptors (TLRs) expressed by the macrophages. Since the same receptor could bind to a given motif present on surfaces of widely disparate microorganisms, this mode of recognition is relatively nonspecific, and constitutes an innate immune response.
Antigen processing
After recognizing an antigen, an antigen-presenting cell such as the macrophage or B lymphocyte engulfs it completely by a process called phagocytosis. The engulfed particle, along with some material surrounding it, forms the endocytic vesicle (the phagosome), which fuses with lysosomes. Within the lysosome, the antigen is broken down into smaller pieces called peptides by proteases (enzymes that degrade larger proteins). The individual peptides are then complexed with major histocompatibility complex class II (MHC class II) molecules located in the lysosome – this method of "handling" the antigen is known as the exogenous or endocytic pathway of antigen processing in contrast to the endogenous or cytosolic pathway, which complexes the abnormal proteins produced within the cell (e.g. under the influence of a viral infection or in a tumor cell) with MHC class I molecules.
An alternate pathway of endocytic processing had also been demonstrated wherein certain proteins like fibrinogen and myoglobin can bind as a whole to MHC-II molecules after they are denatured and their disulfide bonds are reduced (breaking the bond by adding hydrogen atoms across it). The proteases then degrade the exposed regions of the protein-MHC II-complex.
Antigen presentation
After the processed antigen (peptide) is complexed to the MHC molecule, they both migrate together to the cell membrane, where they are exhibited (elaborated) as a complex that can be recognized by the CD4+ T helper cell – a type of white blood cell. This is known as antigen presentation. However, the epitopes (conformational epitopes) that are recognized by the B cell prior to their digestion may not be the same as that presented to the T helper cell. Additionally, a B cell may present different peptides complexed to different MHC-II molecules.
T helper cell stimulation
The CD4+ cells, through their T cell receptor-CD3 complex, recognize the epitope-bound MHC II molecules on the surface of the antigen presenting cells, and get 'activated'. Upon this activation, these T cells proliferate and differentiate into Th1 or Th2 cells. This makes them produce soluble chemical signals that promote their own survival. However, another important function that they carry out is the stimulation of B cells by establishing direct physical contact with them.
Co-stimulation of B cell by activated T helper cell
Complete stimulation of T helper cells requires the B7 molecule present on the antigen presenting cell to bind with CD28 molecule present on the T cell surface (in close proximity with the T cell receptor). Likewise, a second interaction between the CD40 ligand or CD154 (CD40L) present on T cell surface and CD40 present on B cell surface, is also necessary. The same interactions that stimulate the T helper cell also stimulate the B cell, hence the term costimulation. The entire mechanism ensures that an activated T cell only stimulates a B cell that recognizes the antigen containing the same epitope as recognized by the T cell receptor of the "costimulating" T helper cell. The B cell gets stimulated, apart from the direct costimulation, by certain growth factors, viz., interleukins 2, 4, 5, and 6 in a paracrine fashion. These factors are usually produced by the newly activated T helper cell. However, this activation occurs only after the B cell receptor present on a memory or a naive B cell itself would have bound to the corresponding epitope, without which the initiating steps of phagocytosis and antigen processing would not have occurred.
Proliferation and differentiation of B cell
A naive (or inexperienced) B cell is one which belongs to a clone which has never encountered the epitope to which it is specific. In contrast, a memory B cell is one which derives from an activated naive or memory B cell. The activation of a naive or a memory B cell is followed by a manifold proliferation of that particular B cell, most of the progeny of which terminally differentiate into plasma B cells; the rest survive as memory B cells. So, when the naive cells belonging to a particular clone encounter their specific antigen to give rise to the plasma cells, and also leave a few memory cells, this is known as the primary immune response. In the course of proliferation of this clone, the B cell receptor genes can undergo frequent (one in every two cell divisions) mutations in the genes coding for paratopes of antibodies. These frequent mutations are termed somatic hypermutation. Each such mutation alters the epitope-binding ability of the paratope slightly, creating new clones of B cells in the process. Some of the newly created paratopes bind more strongly to the same epitope (leading to the selection of the clones possessing them), which is known as affinity maturation. Other paratopes bind better to epitopes that are slightly different from the original epitope that had stimulated proliferation. Variations in the epitope structure are also usually produced by mutations in the genes of pathogen coding for their antigen. Somatic hypermutation, thus, makes the B cell receptors and the soluble antibodies in subsequent encounters with antigens, more inclusive in their antigen recognition potential of altered epitopes, apart from bestowing greater specificity for the antigen that induced proliferation in the first place. When the memory cells get stimulated by the antigen to produce plasma cells (just like in the clone's primary response), and leave even more memory cells in the process, this is known as a secondary immune response, which translates into greater numbers of plasma cells and faster rate of antibody production lasting for longer periods. The memory B cells produced as a part of secondary response recognize the corresponding antigen faster and bind more strongly with it (i.e., greater affinity of binding) owing to affinity maturation. The soluble antibodies produced by the clone show a similar enhancement in antigen binding.
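A compact sketch of somatic hypermutation plus clonal selection, under toy assumptions of my own (affinity modeled as a single number, mutation as Gaussian noise, and germinal-center selection keeping only the top binders each round):

```python
import random

def affinity_maturation(rounds=10, clone_size=200, keep=0.2,
                        mut_sd=0.1, seed_affinity=1.0):
    """Return mean affinity after repeated mutate-and-select rounds.

    Each round: every B cell divides, daughters mutate their
    'paratope' (here just an affinity score), and only the top
    `keep` fraction survives selection in the germinal center.
    """
    pool = [seed_affinity] * clone_size
    for _ in range(rounds):
        # Two daughters per cell, each with a slightly mutated paratope.
        daughters = [a + random.gauss(0, mut_sd) for a in pool for _ in (0, 1)]
        daughters.sort(reverse=True)
        pool = daughters[: max(1, int(len(daughters) * keep))]
        # Re-expand the selected clones to keep the pool size constant.
        pool = random.choices(pool, k=clone_size)
    return sum(pool) / len(pool)

random.seed(3)
print(affinity_maturation())  # mean affinity rises steadily above the seed value
```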
Basis of polyclonality
Responses are polyclonal in nature as each clone somewhat specializes in producing antibodies against a given epitope, and because, each antigen contains multiple epitopes, each of which in turn can be recognized by more than one clone of B cells. To be able to react to innumerable antigens, as well as multiple constituent epitopes, the immune system requires the ability to recognize a very great number of epitopes in all, i.e., there should be a great diversity of B cell clones.
Clonality of B cells
Memory and naïve B cells normally exist in relatively small numbers. As the body needs to be able to respond to a large number of potential pathogens, it maintains a pool of B cells with a wide range of specificities. Consequently, while there is almost always at least one B (naive or memory) cell capable of responding to any given epitope (of all that the immune system can react against), there are very few exact duplicates. However, when a single B cell encounters an antigen to which it can bind, it can proliferate very rapidly. Such a group of cells with identical specificity towards the epitope is known as a clone, and is derived from a common "mother" cell. All the "daughter" B cells match the original "mother" cell in their epitope specificity, and they secrete antibodies with identical paratopes. These antibodies are monoclonal antibodies, since they derive from clones of the same parent cell. A polyclonal response is one in which clones of multiple B cells react to the same antigen.
Single antigen contains multiple overlapping epitopes
A single antigen can be thought of as a sequence of multiple overlapping epitopes. Many unique B cell clones may be able to bind to the individual epitopes. This imparts even greater multiplicity to the overall response. All of these B cells can become activated and produce large colonies of plasma cell clones, each of which can secrete up to 1000 antibody molecules against each epitope per second.
Multiple clones recognize single epitope
In addition to different B cells reacting to different epitopes on the same antigen, B cells belonging to different clones may also be able to react to the same epitope. An epitope that can be attacked by many different B cells is said to be highly immunogenic. In these cases, the binding affinities for respective epitope-paratope pairs vary, with some B cell clones producing antibodies that bind strongly to the epitope, and others producing antibodies that bind weakly.
Clonal selection
The clones that bind to a particular epitope with greater strength are more likely to be selected for further proliferation in the germinal centers of the follicles in various lymphoid tissues like the lymph nodes. This is not unlike natural selection: clones are selected for their fitness to attack the epitopes (strength of binding) on the encountered pathogen.
What makes the analogy even stronger is that the B lymphocytes have to compete with each other for signals that promote their survival in the germinal centers.
Diversity of B cell clones
Although there are many diverse pathogens, many of which are constantly mutating, it is remarkable that the majority of individuals remain free of infections. Thus, maintenance of health requires the body to recognize all pathogens (antigens they present or produce) likely to exist. This is achieved by maintaining an immensely large pool (about 10^9) of B cell clones, each of which reacts against a specific epitope by recognizing and producing antibodies against it. However, at any given time very few clones actually remain receptive to their specific epitope. Thus, approximately 10^7 different epitopes can be recognized by all the B cell clones combined. Moreover, in a lifetime, an individual usually requires the generation of antibodies against very few antigens in comparison with the number that the body can recognize and respond against.
Significance of the phenomenon
Increased probability of recognizing any antigen
If an antigen can be recognized by more than one component of its structure, it is less likely to be "missed" by the immune system. Mutation of pathogenic organisms can result in modification of antigen—and, hence, epitope—structure. If the immune system "remembers" what the other epitopes look like, the antigen, and the organism, will still be recognized and subjected to the body's immune response. Thus, the polyclonal response widens the range of pathogens that can be recognized.
Limitation of immune system against rapidly mutating viruses
Many viruses undergo frequent mutations that result in changes in the amino acid composition of their important proteins. Epitopes located on the protein may also undergo alterations in the process. Such an altered epitope binds less strongly with the antibodies specific to the unaltered epitope that would have stimulated the immune system. This is unfortunate because somatic hypermutation does give rise to clones capable of producing soluble antibodies that would have bound the altered epitope avidly enough to neutralize it. But these clones consist of naive cells, which are not allowed to proliferate by the weakly binding antibodies produced by the previously stimulated clone. This doctrine is known as original antigenic sin. The phenomenon comes into play particularly in immune responses against influenza, dengue and HIV viruses. This limitation, however, is not imposed by the phenomenon of polyclonal response itself, but rather by an immune response that is biased in favor of experienced memory cells over "novice" naive cells.
Increased chances of autoimmune reactions
In autoimmunity the immune system wrongly recognizes certain native molecules in the body as foreign (self-antigen), and mounts an immune response against them. Since these native molecules, as normal parts of the body, will naturally always exist in the body, the attacks against them can get stronger over time (akin to the secondary immune response). Moreover, many organisms exhibit molecular mimicry, which involves showing those antigens on their surface that are antigenically similar to the host proteins. This has two possible consequences: first, the organism may be spared because it resembles a self antigen; or second, the antibodies produced against it may also bind to the mimicked native proteins. The antibodies will attack the self-antigens and the tissues harboring them by activating various mechanisms like complement activation and antibody-dependent cell-mediated cytotoxicity. Hence, the wider the range of antibody specificities, the greater the chance that one or the other will react against self-antigens (native molecules of the body).
Difficulty in producing monoclonal antibodies
Monoclonal antibodies are structurally identical immunoglobulin molecules with identical epitope-specificity (all of them bind with the same epitope with same affinity) as against their polyclonal counterparts which have varying affinities for the same epitope.
They are usually not produced in a natural immune response, but only in diseased states like multiple myeloma, or through specialized laboratory techniques. Because of their specificity, monoclonal antibodies are used in certain applications to quantify or detect the presence of substances (which act as antigen for the monoclonal antibodies), and for targeting individual cells (e.g. cancer cells). Monoclonal antibodies find use in various diagnostic modalities (see: western blot and immunofluorescence) and therapies—particularly of cancer and diseases with autoimmune component. But, since virtually all responses in nature are polyclonal, it makes production of immensely useful monoclonal antibodies less straightforward.
History
The first evidence of the presence of a neutralizing substance in the blood that could counter infections came when Emil von Behring, along with Kitasato Shibasaburō, developed an effective serum against diphtheria in 1890. They did this by transferring serum produced from animals immunized against diphtheria to animals suffering from the disease, which could cure the infected animals. Behring was awarded the Nobel Prize for this work in 1901.
At this time though the chemical nature of what exactly in the blood conferred this protection was not known. In a few decades to follow, it was shown that the protective serum could neutralize and precipitate toxins, and clump bacteria. All these functions were attributed to different substances in the serum, and named accordingly as antitoxin, precipitin and agglutinin. That all the three substances were one entity (gamma globulins) was demonstrated by Elvin A. Kabat in 1939. In the preceding year Kabat had demonstrated the heterogeneity of antibodies through ultracentrifugation studies of horses' sera.
Until this time, cell-mediated immunity and humoral immunity were considered to be contending theories to explain effective immune response, but the former lagged behind owing to a lack of advanced techniques. Cell-mediated immunity got an impetus in its recognition and study when, in 1942, Merrill Chase successfully transferred immunity against tuberculosis between guinea pigs by transferring white blood cells.
It was later shown in 1948 by Astrid Fagraeus in her doctoral thesis that the plasma B cells are specifically involved in antibody production. The role of lymphocytes in mediating both cell-mediated and humoral responses was demonstrated by James Gowans in 1959.
In order to account for the wide range of antigens the immune system can recognize, Paul Ehrlich in 1900 had hypothesized that preexisting "side chain receptors" bind a given pathogen, and that this interaction induces the cell exhibiting the receptor to multiply and produce more copies of the same receptor. This theory, called the selective theory, was not proven for the next five decades, and had been challenged by several instructional theories which were based on the notion that an antibody would assume its effective structure by folding around the antigen. In the late 1950s, however, the works of three scientists—Jerne, Talmage and Burnet (who largely modified the theory)—gave rise to the clonal selection theory, which proved all the elements of Ehrlich's hypothesis except that the specific receptors that could neutralize the agent were soluble and not membrane-bound.
The clonal selection theory was proved correct when Sir Gustav Nossal showed that each B cell always produces only one antibody.
In 1974, the role of MHC in antigen presentation was demonstrated by Rolf Zinkernagel and Peter C. Doherty.
See also
Polyclonal antibodies
Antigen processing
Antiserum, a polyclonal antibody preparation used to treat envenomation
Notes
References
Further reading
External links
An Introduction to the Immune system
Immune system
Immunology
Cell biology | Polyclonal B cell response | [
"Chemistry",
"Biology"
] | 5,194 | [
"Cell biology",
"Immune system",
"Organ systems",
"Immunology",
"Molecular biology",
"Biochemistry"
] |
9,335,466 | https://en.wikipedia.org/wiki/Virtual%20packet | A virtual packet is a tool used to overcome the problem of sending data between two heterogeneous networks. If a router connecting networks A and B receives a frame constructed for network A, which uses protocol PA for data exchange, the frame's addressing will mean nothing on network B, which we will assume uses PB as its data exchange protocol. To fix this, hardware-independent packet formats (or virtual packets) were created to overcome the heterogeneity.
Virtual packets include packets at any layer or sublayer (as those terms are used in, for example, the OSI model) above the most basic packets or frames used in a network. These "virtual packets" allow heterogeneous networks to talk to each other using a common protocol.
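A minimal sketch of the idea, with invented frame and packet layouts (real networks would use, e.g., IP datagrams inside Ethernet or other link-layer frames):

```python
from dataclasses import dataclass

@dataclass
class VirtualPacket:
    """Hardware-independent packet: addresses mean the same everywhere."""
    src: str
    dst: str
    payload: bytes

def encap_network_a(pkt: VirtualPacket) -> bytes:
    # Network A's (invented) frame layout: 'A|' header plus fields.
    return b"A|" + f"{pkt.src}>{pkt.dst}|".encode() + pkt.payload

def encap_network_b(pkt: VirtualPacket) -> bytes:
    # Network B's (invented) frame uses a different layout entirely.
    return b"B#" + pkt.dst.encode() + b"#" + pkt.src.encode() + b"#" + pkt.payload

def route(frame_a: bytes) -> bytes:
    """Router: strip network A's frame, re-encapsulate for network B.

    Only the virtual packet crosses the boundary; the link-layer
    frames themselves never do.
    """
    body = frame_a[2:]                      # drop network A's header
    header, payload = body.split(b"|", 1)
    src, dst = header.decode().split(">")
    return encap_network_b(VirtualPacket(src, dst, payload))

frame = encap_network_a(VirtualPacket("host1", "host9", b"hello"))
print(route(frame))  # b'B#host9#host1#hello'
```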
References
Packets (information technology) | Virtual packet | [
"Technology"
] | 168 | [
"Computing stubs",
"Computer network stubs"
] |
9,335,594 | https://en.wikipedia.org/wiki/Pisolithus%20arhizus | Pisolithus arhizus, commonly known as the dead man's foot, dyeball, pardebal, or Bohemian truffle, is a widespread earthball-like fungus, which may in fact be several closely related species. This puffball's black viscous gel is used as a natural dye for clothes. Pisolithus arhizus is a major component in mycorrhizal fungus mixtures that are used in gardening as powerful root stimulators. It is inedible.
In South Africa, it is known as the pardebal, and in Europe, it is known as the Bohemian truffle.
The fruiting body is 5–30 cm tall and 4–20 cm wide, with a thin yellow-brown to brown exterior layer. The spores are brown.
Dictyocephalos attenuatus is similar.
References
External links
Pisolithus tinctorius
Mushroomexpert.com
Fungi described in 1786
Inedible fungi
Fungus species
Fungi used for fiber dyes
Pisolithus | Pisolithus arhizus | [
"Biology"
] | 220 | [
"Fungi",
"Fungus species"
] |
11,812,032 | https://en.wikipedia.org/wiki/Richard%20R.%20Arnold | Richard Robert "Ricky" Arnold II (born November 26, 1963, in Cheverly, Maryland) is an American educator and a NASA astronaut. He flew on Space Shuttle mission STS-119, which launched March 15, 2009, and delivered the final set of solar arrays to the International Space Station. He launched again in 2018 to the ISS, onboard Soyuz MS-08.
Arnold was raised in Bowie, Maryland and is married to Eloise Miller Arnold of Bowie. They have two daughters.
Education
Bowie High School, Bowie, Maryland, 1981.
B.S., Accounting, Frostburg State University, Maryland, 1985.
Completed teacher certification program at Frostburg State University, Maryland, 1988.
M.S., Marine, Estuarine, & Environmental Sciences, University of Maryland, 1992.
Organizations
National Science Teachers Association
International Technology Education Association
National Council of Teachers of Mathematics
Career
Arnold began working at the United States Naval Academy in 1987 as an Oceanographic Technician. Upon completing his teacher certification program, he accepted a position as a science teacher at John Hanson Middle School in Waldorf, Maryland. During his tenure, he completed a master's program while conducting research in biostratigraphy at the Horn Point Environmental Laboratory in Cambridge, Maryland. Upon completing his degree, Arnold spent another year working in the marine sciences, including time at the Cape Cod National Seashore and aboard a sail training/oceanographic vessel headquartered in Woods Hole, Massachusetts. In 1993, Arnold joined the faculty at the Casablanca American School in Casablanca, Morocco, teaching college preparatory Biology and Marine Environmental Science. During that time, he began presenting workshops at various international education conferences focusing on science teaching methodologies. In 1996, he and his family moved to Riyadh, Saudi Arabia, where he was employed as a middle and high school science teacher at the American International School. In 2001, Arnold was hired by International Schools Services to teach middle school mathematics and science at the International School of Kuala Kencana operated by PT Freeport Indonesia in West Papua, Indonesia. In 2003, he accepted a similar teaching position at the American International School of Bucharest in Bucharest, Romania. Arnold was the guest of honor at the Bowie High School Class of 2009 graduation, delivering the commencement speech.
NASA career
Arnold was selected as a Mission Specialist-Educator by NASA in May 2004. In February 2006 he completed Astronaut Candidate Training that included scientific and technical briefings, intensive instruction in Shuttle and International Space Station systems, physiological training, T-38 flight training, and water and wilderness survival training. Upon completion of his training, Arnold was assigned to the Hardware Integration Team in the Space Station Branch working on technical issues with JAXA hardware. He worked on various technical assignments until he was assigned to the STS-119 spaceflight.
In August 2007, Arnold served as an aquanaut during the NEEMO 13 project, an exploration research mission held in Aquarius, the world's only undersea research laboratory. On September 19, 2011, NASA announced that Arnold would participate in the NEEMO 15 mission in October 2011 from the DeepWorker submersible. The DeepWorker is a small submarine used as an underwater stand-in for the Space Exploration Vehicle, which might someday be used to explore the surface of an asteroid. In 2016 Arnold participated in ESA CAVES training in Sardinia (Italy) spending six nights underground simulating a mission exploring another planet.
STS-119
Arnold participated in two spacewalks during the STS-119 mission.
The first EVA, by Steven Swanson and Arnold, occurred on March 19, 2009, with a duration of 6 hours 7 minutes. After Canadarm2 released the S6 truss segment and moved away, the spacewalkers plugged in power and data cables to connect the new hardware. The two spacewalkers also removed launch locks, stowed a keel pin, removed and jettisoned four thermal covers, and deployed the blanket boxes that hold the solar arrays in place during launch. The first spacewalk was devoted to installing the station's new truss segment and preparing the segment's solar arrays for unfolding on the eighth day of the mission.
Arnold performed his second EVA on March 23, 2009, with Joseph Acaba, with a duration of 6 hours 27 minutes, to move one of the station's two Crew and Equipment Translation Aid - or CETA - carts. The carts were moved to the port side of the station's truss during the previous mission to give the robotic arm's mobile transporter the best possible access to the starboard truss for the installation of the new truss segment and solar arrays. With that work done, one of the carts was moved back to the station's port side, leaving a cart for use on either side of the truss. The CETA cart also got a new coupler. The EVA also included lubricating the end effector capture snares on the station's robot arm - similar to what was done to the other end on an STS-126 spacewalk in late 2008 - which prevents the snares from snagging or failing to return snugly into their grooves inside the latching mechanism.
Expedition 55/56
In March 2017, Arnold was assigned to ISS Expeditions 55/56. He launched on Soyuz MS-08 on March 21, 2018.
On March 29, 2018, Arnold performed his first EVA of the mission, with crewmate Drew Feustel. They installed wireless communications equipment on the station's Tranquility module to enhance payload data processing for the experiment ECOSTRESS (Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station). They also swapped out high-definition video cameras on the port truss of the station's backbone and removed aging hoses from a cooling component on the station's truss. The spacewalk had a duration of 6 hours and 10 minutes, bringing Arnold's cumulative EVA time to over 18 hours.
Arnold returned to Earth on October 4, 2018.
References
External links
Spacefacts biography of Richard Arnold
1963 births
Living people
Educator astronauts
Schoolteachers from Maryland
Aquanauts
Frostburg State University alumni
University of Maryland, College Park alumni
NASA civilian astronauts
Space Shuttle program astronauts
Spacewalkers | Richard R. Arnold | [
"Astronomy"
] | 1,240 | [
"Educator astronauts",
"Astronomy education"
] |
11,813,773 | https://en.wikipedia.org/wiki/Gliese%20832 | Gliese 832 (Gl 832 or GJ 832) is a red dwarf of spectral type M2V in the southern constellation Grus. The apparent visual magnitude of 8.66 means that it is too faint to be seen with the naked eye. It is located relatively close to the Sun, at a distance of 16.2 light years and has a high proper motion of 818.16 milliarcseconds per year. Gliese 832 has just under half the mass and radius of the Sun. Its estimated rotation period is a relatively leisurely 46 days. The star is roughly 6 billion years old.
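As a worked check on these figures (standard formulas; the arithmetic is mine): with $d = 16.2\ \mathrm{ly} \approx 4.97\ \mathrm{pc}$ and $m = 8.66$,

$$M = m - 5\log_{10}\frac{d}{10\,\mathrm{pc}} = 8.66 - 5\log_{10}(0.497) \approx 10.2,$$

$$v_t = 4.74\,\mu\,d = 4.74 \times 0.818 \times 4.97 \approx 19\ \mathrm{km/s},$$

where $\mu$ is the proper motion in arcseconds per year and $v_t$ is the implied tangential velocity.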
This star made its closest approach to the Sun, reaching perihelion, some 52,920 years ago.
Gliese 832 emits X-rays. Despite its strong flare activity, Gliese 832 produces on average less ionizing radiation than the Sun. Only at extremely short radiation wavelengths (<50 nm) does its radiation intensity rise above the level of the quiet Sun, but it does not reach levels typical of the active Sun.
Planetary system
Gliese 832 hosts one known planet, with a second planet having been refuted in 2022. No additional planets were found as of 2024.
Gliese 832 b
In September 2008, it was announced that a Jupiter-like planet, designated Gliese 832 b, had been detected in a long-period, near-circular orbit around this star, with a negligible false alarm probability of 0.05%. It would induce an astrometric perturbation on its star of at least 0.95 milliarcseconds and is thus a good candidate for detection by astrometric observations. Despite its relatively large angular distance, direct imaging is problematic due to the star–planet contrast. The orbital solution of the planet was refined in 2011. In 2023, an astrometric detection of the planet was announced, determining its inclination and revealing a true mass of about 80% that of Jupiter.
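The size of that perturbation follows from the standard astrometric-signature formula. Plugging in illustrative values (a planet mass of about $0.7\,M_J$, a semi-major axis of about 3.4 AU, and a stellar mass of about $0.45\,M_\odot$; these are assumptions for this sketch, not values taken from the papers cited) gives roughly the right magnitude:

$$\alpha = \frac{M_p}{M_*}\,\frac{a}{d} \approx \frac{0.7 \times 9.5\times 10^{-4}}{0.45}\times\frac{3.4}{4.97}\ \mathrm{arcsec} \approx 1\ \mathrm{mas},$$

consistent with the quoted perturbation of at least 0.95 mas.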
Gliese 832 c
Gliese 832 c was believed to be of super-Earth mass. It was announced to orbit in the optimistic habitable zone but outside the conservative habitable zone of its parent star. The planet Gliese 832 c was believed to be at, or very close to, the right distance from its sun to allow liquid water to exist on its surface. However, doubts were raised about the existence of planet c by a 2015 study, which found that its orbital period is close to the stellar rotation period. The existence of the planet was refuted in 2022, when a study found that the radial velocity signal shows characteristics of a signal originating from stellar activity, and not from a planet.
The region between Gliese 832 b and where Gliese 832 c would be is a zone where additional planets are possible.
Search for cometary disc
If this system has a comet disc, it is not "brighter than the fractional dust luminosity 10^−5" according to a 2012 Herschel study.
See also
List of nearest stars
List of extrasolar planets
Notes
References
Local Bubble
M-type main-sequence stars
204961
106440
Grus (constellation)
CD-49 13515
J21333397-4900323
Planetary systems with one confirmed planet
139754153
Astronomical X-ray sources | Gliese 832 | [
"Astronomy"
] | 693 | [
"Astronomical X-ray sources",
"Grus (constellation)",
"Astronomical objects",
"Constellations"
] |
11,813,890 | https://en.wikipedia.org/wiki/Supersymmetry%20algebras%20in%201%20%2B%201%20dimensions | A two-dimensional Minkowski space, i.e. a flat space with one time and one spatial dimension, has the two-dimensional Poincaré group IO(1,1) as its symmetry group. The respective Lie algebra is called the Poincaré algebra. It is possible to extend this algebra to a supersymmetry algebra, which is a Z_2-graded Lie superalgebra. The most common ways to do this are discussed below.
The N = (2, 2) algebra
Let the Lie algebra of IO(1,1) be generated by the following generators:
H is the generator of the time translation,
P is the generator of the space translation,
M is the generator of Lorentz boosts.
For the commutators between these generators, see Poincaré algebra.
The supersymmetry algebra over this space is a supersymmetric extension of this Lie algebra with the four additional generators (supercharges) Q_+, Q_-, \bar{Q}_+, \bar{Q}_-, which are odd elements of the Lie superalgebra. Under Lorentz transformations the generators Q_+ and \bar{Q}_+ transform as left-handed Weyl spinors, while Q_- and \bar{Q}_- transform as right-handed Weyl spinors. The algebra is given by the Poincaré algebra plus

\{Q_\pm, \bar{Q}_\pm\} = H \pm P, \qquad Q_\pm^2 = \bar{Q}_\pm^2 = 0,
\{Q_+, Q_-\} = Z, \qquad \{\bar{Q}_+, \bar{Q}_-\} = Z^*,
\{Q_+, \bar{Q}_-\} = \tilde{Z}, \qquad \{\bar{Q}_+, Q_-\} = \tilde{Z}^*,
[iM, Q_\pm] = \mp Q_\pm, \qquad [iM, \bar{Q}_\pm] = \mp \bar{Q}_\pm,

where all remaining commutators vanish, and Z and \tilde{Z} are complex central charges. The supercharges are related via Q_\pm^\dagger = \bar{Q}_\pm. H, P, and M are Hermitian.
Subalgebras of the N = (2, 2) algebra
The N = (2, 0) and N = (0, 2) subalgebras
The N = (2, 0) subalgebra is obtained from the N = (2, 2) algebra by removing the generators Q_- and \bar{Q}_-. Thus its anti-commutation relations are given by

\{Q_+, \bar{Q}_+\} = H + P, \qquad Q_+^2 = \bar{Q}_+^2 = 0,

plus the commutation relations above that do not involve Q_- or \bar{Q}_-. Both generators are left-handed Weyl spinors.
Similarly, the N = (0, 2) subalgebra is obtained by removing Q_+ and \bar{Q}_+ and fulfills

\{Q_-, \bar{Q}_-\} = H - P, \qquad Q_-^2 = \bar{Q}_-^2 = 0.

Both supercharge generators are right-handed.
The N = (1, 1) subalgebra
The N = (1, 1) subalgebra is generated by two supercharges Q_+^1 and Q_-^1, given by

Q_\pm^1 = e^{i\nu_\pm} Q_\pm + e^{-i\nu_\pm} \bar{Q}_\pm

for two real numbers \nu_+ and \nu_-.
By definition, both supercharges are real, i.e. (Q_\pm^1)^\dagger = Q_\pm^1. They transform as Majorana–Weyl spinors under Lorentz transformations. Their anti-commutation relations are given by

\{Q_\pm^1, Q_\pm^1\} = 2(H \pm P), \qquad \{Q_+^1, Q_-^1\} = Z^1,

where Z^1 is a real central charge.
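That these relations close follows directly from the N = (2, 2) algebra. A short check, in one common phase convention (the parametrization of Q_±^1 used here is an assumption of this sketch, not fixed by the text above):

```latex
% With Q_\pm^1 = e^{i\nu_\pm} Q_\pm + e^{-i\nu_\pm} \bar{Q}_\pm,
% nilpotency Q_+^2 = \bar{Q}_+^2 = 0 and \{Q_+,\bar{Q}_+\} = H + P give
\begin{align*}
\{Q_+^1, Q_+^1\}
  &= e^{2i\nu_+}\{Q_+, Q_+\}
   + e^{-2i\nu_+}\{\bar{Q}_+, \bar{Q}_+\}
   + 2\,\{Q_+, \bar{Q}_+\} \\
  &= 0 + 0 + 2\,(H + P).
\end{align*}
```

The mixed bracket works out analogously: the complex central charges of the N = (2, 2) algebra combine into the real quantity Z^1 = 2 Re(e^{i(\nu_+ + \nu_-)} Z) + 2 Re(e^{i(\nu_+ - \nu_-)} \tilde{Z}).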
The N = (1, 0) and N = (0, 1) subalgebras
These algebras can be obtained from the N = (1, 1) subalgebra by removing Q_-^1 resp. Q_+^1 from the generators.
See also
Supersymmetry
Super-Poincaré algebra (in 1+3 dimensions)
References
K. Schoutens, Supersymmetry and factorized scattering, Nucl.Phys. B344, 665–695, 1990
T.J. Hollowood, E. Mavrikis, The N = 1 supersymmetric bootstrap and Lie algebras, Nucl. Phys. B484, 631–652, 1997, arXiv:hep-th/9606116
Supersymmetry
Mathematical physics
Lie algebras | Supersymmetry algebras in 1 + 1 dimensions | [
"Physics",
"Mathematics"
] | 581 | [
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Mathematical physics",
"Supersymmetry",
"Symmetry"
] |
11,814,285 | https://en.wikipedia.org/wiki/Stefan%27s%20formula | In thermodynamics, Stefan's formula says that the specific surface energy σ at a given interface is determined by the respective enthalpy difference ΔH:

σ = β ΔH / (N_A^{1/3} V_m^{2/3}),

where σ is the specific surface energy, ΔH is the enthalpy difference between the two phases, N_A is the Avogadro constant, β is a steric dimensionless coefficient, and V_m is the molar volume.
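As an order-of-magnitude illustration, one can estimate the surface energy of water from its enthalpy of vaporization; a minimal sketch in which the value of the steric coefficient β is an assumption chosen for illustration, not a tabulated constant:

```python
# Stefan's formula: sigma = beta * dH / (N_A**(1/3) * V_m**(2/3)).
N_A = 6.022e23      # Avogadro constant, 1/mol
dH = 40.7e3         # enthalpy of vaporization of water, J/mol
V_m = 18.0e-6       # molar volume of liquid water, m^3/mol
beta = 0.1          # steric coefficient (assumed, of order 0.1)

sigma = beta * dH / (N_A ** (1 / 3) * V_m ** (2 / 3))
print(f"sigma ≈ {sigma:.3f} J/m^2")  # ~0.07 J/m^2, near water's measured 0.072 J/m^2
```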
References
Thermodynamic equations
Chemical thermodynamics | Stefan's formula | [
"Physics",
"Chemistry"
] | 79 | [
"Chemical thermodynamics",
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics"
] |
11,814,346 | https://en.wikipedia.org/wiki/Automotive%20paint | Automotive paint is paint used on automobiles for both protective and decorative purposes. Water-based acrylic polyurethane enamel paint is currently the most widely used paint for reasons including reducing paint's environmental impact.
Modern automobile paint is applied in several layers, with a total thickness of around 100 μm (0.1 mm). Paint application requires preparation and primer steps to ensure proper application. A basecoat is applied after the primer paint is applied. Following this, a clearcoat of paint may be applied that forms a glossy and transparent coating. The clearcoat layer must be able to withstand UV light.
History
In the early days of the automobile industry, paint was applied manually and dried for weeks at room temperature because it was a single component paint that dried by solvent evaporation. As mass production of cars made the process untenable, paint began to be dried in ovens. Nowadays, two-component (catalyzed) paint is usually applied by robotic arms and cures in just a few hours either at room temperature or in heated booths.
Until several decades ago, lead, chromium, and other heavy metals were used in automotive paint. Environmental laws have since prohibited this, which has resulted in a move to water-based paints. Up to 85% of lacquer paint can evaporate into the air, polluting the atmosphere. Enamel paint is better for the environment and replaced lacquer paint in the late 20th century. Water-based acrylic polyurethane enamels are now almost universally used as the basecoat with a clearcoat.
Processes and coatings
Preparation
High-pressure water spray jets are directed to the body. Without proper pretreatment, premature failure of the finish system can almost be guaranteed. A phosphate coat is necessary to protect the body against corrosion effects and prepares the surface for the E-Coat.
The body is dipped into the Electro-Coat Paint Operation (ELPO/E-Coat), then a high voltage is applied. The body works as a cathode and the paint as an anode, sticking to the body surface. It is an eco-friendly painting process. In E-Coat, also called CED paint, paint utilization is approximately 99.9%, and the process provides superior salt spray resistance compared to other painting processes.
Primer
The primer is the first coat to be applied. The primer serves several purposes.
It serves as a leveler, which is important since the cab often has marks and other forms of surface defect after being manufactured in the body shop. A smoother surface is created by leveling out these defects and therefore a better final product.
It protects the vehicle from corrosion, heat differences, bumps, stone-chips, UV-light, etc.
It improves ease of application by making it easier for paints to stick to the surface. Using a primer, a more varied range of paints can be used.
Base coat
The base coat is applied after the primer coat. This coat contains the visual properties of color and effects, and is usually the one referred to as the paint. Base coat used in automotive applications is commonly divided into three categories: solid, metallic, and pearlescent pigments.
Solid paints have no sparkle effects except the color. This is the easiest type of paint to apply, and the most common type of paint for heavy transportation vehicles, construction equipment and aircraft. It is also widely used on cars, trucks, and motorcycles. Clear coat was not used on solid colors until the early 1990s.
Metallic paints contain aluminium flakes to create a sparkling and grainy effect, generally referred to as a metallic look. This paint is harder to manage than solid paints because of the extra dimensions to consider. Metallic and pearlescent paints must be applied evenly to ensure a consistent looking finish without light and dark spots which are often called "mottling". Metallic basecoats are formulated so that the aluminium flake is parallel to the substrate. This maximises the "flop". This is the difference in the brightness between looking perpendicularly at the paint and that at an acute angle. The "flop" is maximised if the basecoat increases in viscosity shortly after application so that the aluminium flake which is in a random orientation after spraying is locked into this position while there is still much solvent (or water) in the coating. Subsequent evaporation of the solvent (or water), leads to a reduction in the film thickness of the drying coating, causing the aluminium flake to be dragged into an orientation parallel to the substrate. This orientation then needs to be unaffected by the application of the clear coat solvents. The formulation of the clear coat needs to be carefully chosen so that it will not "re-dissolve" the basecoat and thus affect the orientation of the metallic flake but will still exhibit enough adhesion between the coatings so as to avoid delamination of the clear coat. A similar mode of action occurs with pearlescent pigmented basecoats.
Pearlescent paints contain special iridescent pigments commonly referred to as "pearls". Pearl pigments impart a colored sparkle to the finish which works to create depth of color. Pearlescent paints can be two stage in nature (pearl base color + clear) or 3 stage in nature (basecoat + pearl mid-coat + clear-coat).
Clearcoat
Usually sprayed on top of a colored basecoat, clearcoat is a glossy and transparent coating that forms the final interface with the environment. For this reason, clearcoat must be durable enough to resist abrasion and chemically stable enough to withstand UV light. Clearcoat can be either solvent or water-borne.
One part and two part formulations are often referred to as "1K" and "2K" respectively. Car manufacturer (OEM) clear coats applied to the metal bodies of cars are normally 1K systems since they can be heated to around 140 °C to effect cure. The clear coats applied to the plastic components like the bumpers and wing mirrors however are 2K systems since they can normally only accept temperatures up to about 90 °C. These 2K systems are normally applied "off line" with the coated plastic parts fixed to the painted metallic body. Owing to the difference in formulation of the 1K and 2K systems and the fact they are coated in different locations they have a different effect on the "redissolving" of the metallic base coat. This is most easily seen in the light metallic paints like the silver and light blue or green shades where the "flop" difference is most marked.
Terminology
The terminology for automotive paints has been driven by the progression of technologies and by the desire to both distinguish new technologies and relate to previous technologies for the same purpose. Modern car paints are nearly always an acrylic polyurethane "enamel" with a pigmented basecoat and a clear topcoat. It may be described as "acrylic", "acrylic enamel", "urethane", etc. and the clearcoat in particular may be described as a lacquer. True lacquers and acrylic lacquers are obsolete, and plain acrylic enamels have largely been superseded by better-performing paints. True enamel is not an automotive paint. The term is common for any tough glossy paint but its use in the automotive industry is often restricted to older paints before the introduction of polyurethane hardeners.
Chemistry
Modern car paint is typically made from acrylic-polyurethane hybrid dispersions, which are a combination of two different plastics. They were developed during the 1970s and 1980s as a water-soluble replacement for enamel paints, following health concerns over their high VOC content. Acrylic is less expensive and can hold more pigment, but has poor scratch resistance, whereas polyurethanes are harder but more costly. Combining both types gives a material which can contain a lot of color and be hard-wearing. Simply mixing the materials is not sufficient, as this gives a heterogeneous coating with separate acrylic and polyurethane domains. Instead, the starting chemicals for each plastic (monomers) are combined and partially polymerized to give an interpenetrating polymer network. Within this, the polymer chains are not chemically bonded to one another, but instead become entangled and interwoven as they form. This is possible because they polymerize in different ways, which are incompatible with each other. Polyurethane is formed by step growth polymerization involving polycondensation, whereas acrylic is formed by chain growth polymerization featuring free radicals. The resulting product is homogeneous and tough, with superior properties to the individual plastics.
Types and form
Innovations are taking place in the paint industry as well. These days, automotive paints come in liquid, spray, and powder forms:
Liquid: Usually polyurethane paints. A compressor is needed to apply them.
Spray: Supplied in a spray can, much like perfume in a spray bottle. Made for DIYers.
Powder or additive: Paints in powder form, applied after mixing into paint thinner.
Types of automotive paints
Removable: These kinds of paints are made for giving a custom appearance to a vehicle.
Non-removable: Made for touch-ups and for painting the vehicle.
See also
Fordite, automotive paint which has been layered and dried over time
References
Automotives Paints and Coatings, Streitberger & Dössel, 2008
Paint Materials and Processes from an Automotive OEM Perspective
Automotive technologies
Paints | Automotive paint | [
"Chemistry"
] | 1,928 | [
"Paints",
"Coatings"
] |
11,814,370 | https://en.wikipedia.org/wiki/Center%20%28category%20theory%29 | In category theory, a branch of mathematics, the center (or Drinfeld center, after Soviet-American mathematician Vladimir Drinfeld) is a variant of the notion of the center of a monoid, group, or ring to a category.
Definition
The center of a monoidal category C, denoted Z(C), is the category whose objects are pairs (A,u) consisting of an object A of C and an isomorphism u_X : A ⊗ X → X ⊗ A which is natural in X, satisfying

u_{X ⊗ Y} = (1_X ⊗ u_Y)(u_X ⊗ 1_Y)

and

u_I = 1_A

(this is actually a consequence of the first axiom).
An arrow from (A,u) to (B,v) in Z(C) consists of an arrow f : A → B in C such that

v_X (f ⊗ 1_X) = (1_X ⊗ f) u_X for all X.

This definition of the center appears in . Equivalently, the center may be defined as

Z(C) = End_{C ⊗ C^{op}}(C),

i.e., the endofunctors of C which are compatible with the left and right action of C on itself given by the tensor product.
Braiding
The category Z(C) becomes a braided monoidal category with the tensor product on objects defined as

(A,u) ⊗ (B,v) = (A ⊗ B, w), where w_X = (u_X ⊗ 1_B)(1_A ⊗ v_X),

and the obvious braiding.
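Spelled out, the "obvious" braiding uses the half-braiding of the first factor evaluated at the underlying object of the second; this is the standard convention and is assumed here:

```latex
c_{(A,u),(B,v)} \;:=\; u_B \colon (A,u) \otimes (B,v) \longrightarrow (B,v) \otimes (A,u)
```

Naturality of u together with the axiom u_{X ⊗ Y} = (1_X ⊗ u_Y)(u_X ⊗ 1_Y) is exactly what makes these morphisms satisfy the hexagon identities.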
Higher categorical version
The categorical center is particularly useful in the context of higher categories. This is illustrated by the following example: the center of the (abelian) category of R-modules, for a commutative ring R, is the category of R-modules again. The center of a monoidal ∞-category C can be defined, analogously to the above, as

Z(C) = End_{C ⊗ C^{op}}(C).

Now, in contrast to the above, the center of the derived category of R-modules (regarded as an ∞-category) is given by the derived category of modules over the cochain complex encoding the Hochschild cohomology, a complex whose degree 0 term is R (as in the abelian situation above), but which includes higher terms coming from the derived Hom.
The notion of a center in this generality is developed by . Extending the above-mentioned braiding on the center of an ordinary monoidal category, the center of a monoidal ∞-category becomes an E_2-monoidal category. More generally, the center of an E_k-monoidal category is an algebra object in E_k-monoidal categories and therefore, by Dunn additivity, an E_{k+1}-monoidal category.
Examples
Hinich has shown that the Drinfeld center of the category of sheaves on an orbifold X is the category of sheaves on the inertia orbifold of X. For X being the classifying space of a finite group G, the inertia orbifold is the stack quotient G/G, where G acts on itself by conjugation. For this special case, Hinich's result specializes to the assertion that the center of the category of G-representations (with respect to some ground field k) is equivalent to the category consisting of G-graded k-vector spaces, i.e., objects of the form

⊕_{g ∈ G} V_g

for some k-vector spaces V_g, together with G-equivariant morphisms, where G acts on itself by conjugation.
In the same vein, have shown that Drinfeld center of the derived category of quasi-coherent sheaves on a perfect stack X is the derived category of sheaves on the loop stack of X.
Related notions
Centers of monoid objects
The center of a monoid and the Drinfeld center of a monoidal category are both instances of the following more general concept. Given a monoidal category C and a monoid object A in C, the center of A is defined as
For C being the category of sets (with the usual cartesian product), a monoid object is simply a monoid, and Z(A) is the center of the monoid. Similarly, if C is the category of abelian groups, monoid objects are rings, and the above recovers the center of a ring. Finally, if C is the category of categories, with the product as the monoidal operation, monoid objects in C are monoidal categories, and the above recovers the Drinfeld center.
Categorical trace
The categorical trace of a monoidal category (or monoidal ∞-category) is defined as
The concept is being widely applied, for example in .
References
External links
Category theory
Monoidal categories | Center (category theory) | [
"Mathematics"
] | 849 | [
"Functions and mappings",
"Mathematical structures",
"Monoidal categories",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
11,815,157 | https://en.wikipedia.org/wiki/Particle-size%20distribution | In granulometry, the particle-size distribution (PSD) of a powder, or granular material, or particles dispersed in fluid, is a list of values or a mathematical function that defines the relative amount, typically by mass, of particles present according to size. Significant energy is usually required to disintegrate soil and similar particles into the PSD, which is then called a grain size distribution.
Significance
The PSD of a material can be important in understanding its physical and chemical properties. It affects the strength and load-bearing properties of rocks and soils. It affects the reactivity of solids participating in chemical reactions, and needs to be tightly controlled in many industrial products such as the manufacture of printer toner, cosmetics, and pharmaceutical products.
Significance in the collection of particulate matter
Particle size distribution can greatly affect the efficiency of any collection device.
Settling chambers will normally only collect very large particles, those that can be separated using sieve trays.
Centrifugal collectors will normally collect particles down to about 20 μm. Higher efficiency models can collect particles down to 10 μm.
Fabric filters are one of the most efficient and cost effective types of dust collectors available and can achieve a collection efficiency of more than 99% for very fine particles.
Wet scrubbers bring a scrubbing liquid (usually water) into contact with a gas stream containing dust particles. The greater the contact of the gas and liquid streams, the higher the dust removal efficiency.
Electrostatic precipitators use electrostatic forces to separate dust particles from exhaust gases. They can be very efficient at the collection of very fine particles.
Filter presses are used for filtering liquids by the cake filtration mechanism. The PSD plays an important part in cake formation, cake resistance, and cake characteristics. The filterability of the liquid is determined largely by the size of the particles.
Nomenclature
ρp: Actual particle density (g/cm3)
ρg: Gas or sample matrix density (g/cm3)
r2: Least-squares coefficient of determination. The closer this value is to 1.0, the better the data fit to a hyperplane representing the relationship between the response variable and a set of covariate variables. A value equal to 1.0 indicates all data fit perfectly within the hyperplane.
λ: Gas mean free path (cm)
D50: Mass-median-diameter (MMD). The log-normal distribution mass median diameter. The MMD is considered to be the average particle diameter by mass.
σg: Geometric standard deviation. This value is determined mathematically by the equation:
σg = D84.13/D50 = D50/D15.87
The value of σg determines the slope of the least-squares regression curve.
α: Relative standard deviation or degree of polydispersity. This value is also determined mathematically (a worked computation of D50, σg and α follows this list). For values less than 0.1, the particulate sample can be considered to be monodisperse.
α = σg/D50
Re(P) : Particle Reynolds Number.
In contrast to the large numerical values noted for flow Reynolds number, particle Reynolds number for fine particles in gaseous mediums is typically less than 0.1.
Ref : Flow Reynolds number.
Kn: Particle Knudsen number.
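The percentile-based quantities above (D50, σg, α) can be read off a measured cumulative distribution by interpolation; a minimal sketch with invented sieve data (sizes and mass fractions are illustrative only):

```python
import numpy as np

# Cumulative mass fraction finer than each size (illustrative data).
size_um = np.array([10, 20, 45, 63, 90, 125, 180])
cum_frac = np.array([0.05, 0.16, 0.50, 0.70, 0.84, 0.93, 0.98])

def percentile_diameter(p):
    """Interpolate the diameter at cumulative mass fraction p (log-size axis)."""
    return np.exp(np.interp(p, cum_frac, np.log(size_um)))

D50 = percentile_diameter(0.50)               # mass-median diameter (MMD)
sigma_g = percentile_diameter(0.8413) / D50   # geometric standard deviation
alpha = sigma_g / D50                         # degree of polydispersity

print(f"D50 = {D50:.1f} um, sigma_g = {sigma_g:.2f}, alpha = {alpha:.3f}")
```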
Types
PSD is usually defined by the method by which it is determined. The most easily understood method of determination is sieve analysis, where powder is separated on sieves of different sizes. Thus, the PSD is defined in terms of discrete size ranges: e.g. "% of sample between 45 μm and 53 μm", when sieves of these sizes are used. The PSD is usually determined over a list of size ranges that covers nearly all the sizes present in the sample. Some methods of determination allow much narrower size ranges to be defined than can be obtained by use of sieves, and are applicable to particle sizes outside the range available in sieves. However, the idea of the notional "sieve", that "retains" particles above a certain size, and "passes" particles below that size, is universally used in presenting PSD data of all kinds.
The PSD may be expressed as a "range" analysis, in which the amount in each size range is listed in order. It may also be presented in "cumulative" form, in which the total of all sizes "retained" or "passed" by a single notional "sieve" is given for a range of sizes. Range analysis is suitable when a particular ideal mid-range particle size is being sought, while cumulative analysis is used where the amount of "under-size" or "over-size" must be controlled.
The way in which "size" is expressed is open to a wide range of interpretations. A simple treatment assumes the particles are spheres that will just pass through a square hole in a "sieve". In practice, particles are irregular – often extremely so, for example in the case of fibrous materials – and the way in which such particles are characterized during analysis is very dependent on the method of measurement used.
Sampling
Before a PSD can be determined, it is vital that a representative sample is obtained. In the case where the material to be analysed is flowing, the sample must be withdrawn from the stream in such a way that the sample has the same proportions of particle sizes as the stream. The best way to do this is to take many samples of the whole stream over a period, instead of taking a portion of the stream for the whole time. In the case where the material is in a heap, scoop or thief sampling needs to be done, which is inaccurate: the sample should ideally have been taken while the powder was flowing towards the heap. After sampling, the sample volume typically needs to be reduced. The material to be analysed must be carefully blended, and the sample withdrawn using techniques that avoid size segregation, for example using a rotary divider. Particular attention must be paid to avoidance of loss of fines during manipulation of the sample.
Measurement techniques
Sieve analysis
Sieve analysis is often used because of its simplicity, cheapness, and ease of interpretation. Methods may be simple shaking of the sample in sieves until the amount retained becomes more or less constant. Alternatively, the sample may be washed through with a non-reacting liquid (usually water) or blown through with an air current.
Advantages: this technique is well-adapted for bulk materials. A large amount of materials can be readily loaded into sieve trays. Two common uses in the powder industry are wet-sieving of milled limestone and dry-sieving of milled coal.
Disadvantages: many PSDs are concerned with particles too small for separation by sieving to be practical. A very fine sieve, such as a 37 μm sieve, is exceedingly fragile, and it is very difficult to get material to pass through it. Another disadvantage is that the amount of energy used to sieve the sample is arbitrarily determined. Over-energetic sieving causes attrition of the particles and thus changes the PSD, while insufficient energy fails to break down loose agglomerates. Although manual sieving procedures can be ineffective, automated sieving technologies using image fragmentation analysis software are available. These technologies can sieve material by capturing and analyzing a photo of the material.
Air elutriation analysis
Material may be separated by means of air elutriation, which employs an apparatus with a vertical tube through which fluid is passed at a controlled velocity. When the particles are introduced, often through a side tube, the smaller particles are carried over in the fluid stream while the large particles settle against the upward current. Starting at low flow rates, small, less dense particles attain terminal velocity and flow with the stream; the particles in the stream are collected in the overflow and are hence separated from the feed. Flow rates can be increased to separate higher size ranges. Further size fractions may be collected if the overflow from the first tube is passed vertically upwards through a second tube of greater cross-section, and any number of such tubes can be arranged in series.
Advantages: a bulk sample is analyzed using centrifugal classification and the technique is non-destructive. Each cut-point can be recovered for future size-respective chemical analyses. This technique has been used for decades in the air pollution control industry (data used for design of control devices). This technique determines particle size as a function of settling velocity in an air stream (as opposed to water, or some other liquid).
Disadvantages: a bulk sample (about ten grams) must be obtained. It is a fairly time-consuming analytical technique. The actual test method has been withdrawn by ASME due to obsolescence. Instrument calibration materials are therefore no longer available.
Photoanalysis
Materials can now be analysed through photoanalysis procedures. Unlike sieve analyses which can be time-consuming and inaccurate, taking a photo of a sample of the materials to be measured and using software to analyze the photo can result in rapid, accurate measurements. Another advantage is that the material can be analyzed without being handled. This is beneficial in the agricultural industry, as handling of food products can lead to contamination. Photoanalysis equipment and software is currently being used in mining, forestry and agricultural industries worldwide.
Optical counting methods
PSDs can be measured microscopically by sizing against a graticule and counting, but for a statistically valid analysis, millions of particles must be measured. This is impossibly arduous when done manually, but automated analysis of electron micrographs is now commercially available. It is used to determine the particle size within the range of 0.2 to 100 micrometers.
Electroresistance counting methods
An example of this is the Coulter counter, which measures the momentary changes in the conductivity of a liquid passing through an orifice that take place when individual non-conducting particles pass through. The particle count is obtained by counting pulses. This pulse is proportional to the volume of the sensed particle.
Advantages: very small sample aliquots can be examined.
Disadvantages: the sample must be dispersed in a liquid medium, and some particles may (partially or fully) dissolve in the medium, altering the size distribution. The results are only related to the projected cross-sectional area that a particle displaces as it passes through an orifice. This is a physical diameter, not really related to mathematical descriptions of particles (e.g. terminal settling velocity).
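Because the pulse height is proportional to the displaced volume, converting a pulse into a volume-equivalent sphere diameter is a one-line calculation; a minimal sketch in which the calibration constant is an assumed, instrument-specific value:

```python
import math

K = 2.0e-3                       # um^3 per pulse-height unit (assumed calibration)
pulse_heights = [5.2e5, 1.1e6, 8.4e6]

for h in pulse_heights:
    volume = K * h                                 # particle volume, um^3
    diameter = (6.0 * volume / math.pi) ** (1 / 3)  # volume-equivalent sphere, um
    print(f"pulse {h:.1e} -> d ≈ {diameter:.1f} um")
```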
Sedimentation techniques
These are based upon study of the terminal velocity acquired by particles suspended in a viscous liquid. Sedimentation time is longest for the finest particles, so this technique is useful for sizes below 10 μm, but sub-micrometer particles cannot be reliably measured due to the effects of Brownian motion. Typical apparatus disperses the sample in liquid, then measures the density of the column at timed intervals. Other techniques determine the optical density of successive layers using visible light or x-rays.
Advantages: this technique determines particle size as a function of settling velocity.
Disadvantages: the sample must be dispersed in a liquid medium, and some particles may (partially or fully) dissolve in the medium, altering the size distribution; this requires careful selection of the dispersion media.
Density is highly dependent upon fluid temperature remaining constant.
X-Rays will not count carbon (organic) particles.
Many of these instruments can require a bulk sample (e.g. two to five grams).
Laser diffraction methods
These depend upon analysis of the "halo" of diffracted light produced when a laser beam passes through a dispersion of particles in air or in a liquid. The angle of diffraction increases as particle size decreases, so that this method is particularly good for measuring sizes between 0.1 and 3,000 μm. Advances in sophisticated data processing and automation have allowed this to become the dominant method used in industrial PSD determination. This technique is relatively fast and can be performed on very small samples. A particular advantage is that the technique can generate a continuous measurement for analyzing process streams.
Laser diffraction measures particle size distributions by measuring the angular variation in intensity of light scattered as a laser beam passes through a dispersed particulate sample. Large particles scatter light at small angles relative to the laser beam and small particles scatter light at large angles. The angular scattering intensity data is then analyzed to calculate the size of the particles responsible for creating the scattering pattern, using the Mie theory or Fraunhofer approximation of light scattering. The particle size is reported as a volume equivalent sphere diameter.
"Laser Obscuration Time" (LOT) or "Time Of Transition" (TOT)
A focused laser beam rotates in a constant frequency and interacts with particles within the sample medium.
Each randomly scanned particle obscures the laser beam to its dedicated photo diode, which measures the time of obscuration.
The time of obscuration directly relates to the particle's diameter by a simple calculation principle: multiplying the known beam rotation velocity by the directly measured time of obscuration (D = V × t).
Acoustic spectroscopy or ultrasound attenuation spectroscopy
Instead of light, this method employs ultrasound for collecting information on the particles that are dispersed in fluid. Dispersed particles absorb and scatter ultrasound similarly to light. This has been known since Lord Rayleigh developed the first theory of ultrasound scattering and published a book "The Theory of Sound" in 1878. There have been hundreds of papers studying ultrasound propagation through fluid particulates in the 20th century. It turns out that instead of measuring scattered energy versus angle, as with light, in the case of ultrasound, measuring the transmitted energy versus frequency is a better choice. The resulting ultrasound attenuation frequency spectra are the raw data for calculating particle size distribution. It can be measured for any fluid system with no dilution or other sample preparation. This is a big advantage of this method. Calculation of particle size distribution is based on theoretical models that are well verified for up to 50% by volume of dispersed particles on micron and nanometer scales. However, as concentration increases and the particle sizes approach the nanoscale, conventional modelling gives way to the necessity to include shear-wave re-conversion effects in order for the models to accurately reflect the real attenuation spectra.
Air pollution emissions measurements
Cascade impactors – particulate matter is withdrawn isokinetically from a source and segregated by size in a cascade impactor at the sampling point exhaust conditions of temperature, pressure, etc. Cascade impactors use the principle of inertial separation to size segregate particle samples from a particle laden gas stream. The mass of each size fraction is determined gravimetrically. The California Air Resources Board Method 501 is currently the most widely accepted test method for particle size distribution emissions measurements.
Mathematical models
Probability distributions
The log-normal distribution is often used to approximate the particle size distribution of aerosols, aquatic particles and pulverized material.
The Weibull distribution or Rosin–Rammler distribution is a useful distribution for representing particle size distributions generated by grinding, milling and crushing operations.
The log-hyperbolic distribution was proposed by Bagnold and Barndorff-Nielsen to model the particle-size distribution of naturally occurring sediments. This model suffers from having non-unique solutions for a range of probability coefficients.
The skew log-Laplace model was proposed by Fieller, Gilbertson and Olbricht as a simpler alternative to the log-hyperbolic distribution.
Rosin–Rammler distribution
The Weibull distribution, now named for Waloddi Weibull, was first identified by Fréchet (1927) and first applied by Rosin & Rammler (1933) to describe particle size distributions. It is still widely used in mineral processing to describe particle size distributions in comminution processes.

f(x; P_{80}, m) = 1 - e^{\ln(0.2)\,(x / P_{80})^m} for x ≥ 0, and 0 for x < 0,

where
x: Particle size
P_{80}: 80th percentile of the particle size distribution
m: Parameter describing the spread of the distribution
The inverse distribution is given by:

x = P_{80} \left[ \frac{\ln(1 - F)}{\ln(0.2)} \right]^{1/m},

where
F: Mass fraction
Parameter estimation
The parameters of the Rosin–Rammler distribution can be determined by refactoring the distribution function to the form

\ln(-\ln(1 - F)) = m \ln(x) - m \ln(P_{80}) + \ln(-\ln(0.2)).

Hence the slope of the line in a plot of

\ln(-\ln(1 - F))

versus

\ln(x)

yields the parameter m, and P_{80} is determined by substitution into

P_{80} = x \left[ \frac{\ln(0.2)}{\ln(1 - F)} \right]^{1/m}.
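The linearization above translates directly into a least-squares fit; a minimal sketch with invented sieve data (both the sizes and the mass fractions are illustrative):

```python
import numpy as np

# Fit Rosin–Rammler parameters (m, P80) from the linearized form
#   ln(-ln(1 - F)) = m*ln(x) - m*ln(P80) + ln(-ln(0.2)).
x = np.array([75.0, 150.0, 300.0, 600.0, 1180.0])   # particle size, um
F = np.array([0.12, 0.32, 0.57, 0.80, 0.94])        # cumulative mass fraction finer

X = np.log(x)
Y = np.log(-np.log(1.0 - F))
m, intercept = np.polyfit(X, Y, 1)    # slope is the spread parameter m

# Substitute back: intercept = -m*ln(P80) + ln(-ln(0.2)).
P80 = np.exp((np.log(-np.log(0.2)) - intercept) / m)
print(f"m ≈ {m:.2f}, P80 ≈ {P80:.0f} um")
```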
See also
Particle size
Sauter mean diameter — mathematical description of average particle size, based on an idealized sphere
De Brouckere mean diameter —determination of average particle size from measurements
Granulometry (morphology)
Optical granulometry
Raindrop size distribution
References
Further reading
O. Ahmad, J. Debayle, and J. C. Pinoli. "A geometric-based method for recognizing overlapping polygonalshaped and semi-transparent particles in gray tone images", Pattern Recognition Letters 32(15), 2068–2079,2011.
O. Ahmad, J. Debayle, N. Gherras, B. Presles, G. Févotte, and J. C. Pinoli. "Recognizing overlapped particles during a crystallization process from in situ video images for measuring their size distributions.",In 10th SPIE International Conference on Quality Control by Artificial Vision (QCAV), Saint-Etienne, France,June 2011.
O. Ahmad, J. Debayle, N. Gherras, B. Presles, G. Févotte, and J. C. Pinoli. "Quantification of overlapping polygonal-shaped particles based on a new segmentation method of in situ images during crystallization.",Journal of Electronic Imaging, 21(2), 021115, 2012.
.
.
External links
Free expert system for size analysis technique selection
Matlab toolbox for integrating and calibrating particle-size data from multiple sources
Aerosols
Chemical mixtures
Colloidal chemistry
Particle technology | Particle-size distribution | [
"Chemistry",
"Engineering"
] | 3,633 | [
"Colloidal chemistry",
"Chemical engineering",
"Surface science",
"Colloids",
"Aerosols",
"Chemical mixtures",
"nan",
"Environmental engineering",
"Particle technology"
] |
11,815,346 | https://en.wikipedia.org/wiki/Remote%20diagnostics | Remote diagnostics is the act of diagnosing a given symptom, issue or problem from a distance. Instead of the subject being co-located with the person or system performing the diagnostics, with remote diagnostics the subjects can be separated by physical distance (e.g., Earth–Moon). Important information is exchanged either through wired or wireless links.
When limiting to systems, a general accepted definition is:
"To improve reliability of vital or capital-intensive installations and reduce the maintenance costs by avoiding unplanned maintenance, by monitoring the condition of the system remotely."
Process elements for remote diagnostics
Remotely monitor selected vital system parameters
Analyze the data to detect trends
Compare with known or expected behavior data
After detecting performance degradation, predict the failure moment by extrapolation (see the sketch after this list)
Order parts and/or plan maintenance, to be executed only when really necessary, but in time to prevent a failure or stoppage
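Steps 2–5 can be prototyped in a few lines; a minimal sketch in which the monitored readings and the alarm limit are invented for illustration:

```python
import numpy as np

days = np.array([0, 7, 14, 21, 28, 35])
bearing_temp = np.array([61.0, 61.8, 62.9, 64.1, 65.0, 66.2])  # deg C, monitored remotely
alarm_limit = 75.0                                             # known safe-operation limit

slope, offset = np.polyfit(days, bearing_temp, 1)   # detect the trend (step 2)
if slope > 0:                                        # compare with expected behavior (step 3)
    days_to_limit = (alarm_limit - offset) / slope   # extrapolate to the failure moment (step 4)
    print(f"trend +{slope:.2f} C/day; limit reached around day {days_to_limit:.0f}")
    # ...which leaves time to order parts and plan maintenance (step 5).
else:
    print("no degradation trend detected")
```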
Typical uses
Medical use (see Remote guidance)
Formula One racecars
Space (Apollo project and others)
Telephone systems like a PABX
Connected Cars
Reasons for use
Limit local personnel to a minimum (Gemini, Apollo capsules: too tight to fit all technicians)
Limit workload of local personnel
Limit risks (exposure to dangerous environments)
Central expertise (solve small problems locally; have experts solve complex problems remotely/centrally)
Efficiency: reduce travel time to get expert and system or subject together
Remote diagnostics and maintenance
Remote diagnostics and maintenance refers both to diagnosis of the fault or faults and to taking corrective (maintenance) actions, like changing settings to improve performance or to prevent problems such as breakdown and wear and tear. RDM can replace personnel on site with experts at a central location, in order to save manpower or prevent hazardous situations (in space, for instance). Increasing globalisation and ever more complicated machinery and software also create a demand for remote engineering, so that travel over growing distances by experienced and expensive engineering personnel is limited.
See also
Real-time computing
Real-time technology
References
Motor vehicle maintenance | Remote diagnostics | [
"Technology"
] | 401 | [
"nan"
] |
11,815,360 | https://en.wikipedia.org/wiki/GSD%20microscopy | Ground state depletion microscopy (GSD microscopy) is an implementation of the RESOLFT concept. The method was proposed in 1995 and experimentally demonstrated in 2007. It is the second concept to overcome the diffraction barrier in far-field optical microscopy published by Stefan Hell. Using nitrogen-vacancy centers in diamonds a resolution of up to 7.8 nm was achieved in 2009. This is far below the diffraction limit (~200 nm).
Principle
In GSD microscopy, fluorescent markers are used. In one condition, the marker can be freely excited from the ground state and returns spontaneously via emission of a fluorescence photon. However, if light of an appropriate wavelength is additionally applied, the dye can be excited to a long-lived dark state, i.e. a state where no fluorescence occurs. As long as the molecule is in the long-lived dark state (e.g. a triplet state), it cannot be excited from the ground state. Switching between these two states (bright and dark) by applying light fulfills all preconditions for the RESOLFT concept and subwavelength-scale imaging, and therefore images with very high resolution can be obtained. For successful implementation, GSD microscopy requires either special fluorophores with high triplet yield, or removal of oxygen by use of various mounting media such as Mowiol or Vectashield.
The implementation in a microscope is very similar to stimulated emission depletion microscopy, however it can operate with only one wavelength for excitation and depletion. Using an appropriate ring-like focal spot for the light that switches the molecules into the dark state, the fluorescence can be quenched at the outer part of the focal spot. Therefore, fluorescence only still takes place at the center of the microscope's focal spot and the spatial resolution is increased.
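The resolution scaling behind such numbers is often summarized by a saturated-depletion extension of Abbe's formula, d ≈ λ / (2 NA √(1 + I/I_s)); a quick numerical illustration in which the wavelength and numerical aperture are assumed values, not parameters from the cited experiments:

```python
import math

wavelength_nm = 532.0   # switching-light wavelength (assumed)
NA = 1.4                # numerical aperture of the objective (assumed)

for saturation in (0, 10, 100, 1000):   # I / I_s of the dark-state transition
    d = wavelength_nm / (2 * NA * math.sqrt(1 + saturation))
    print(f"I/I_s = {saturation:>4}: resolution ≈ {d:5.1f} nm")
# At I/I_s = 0 this reduces to the ~190 nm diffraction limit; strong
# saturation pushes it toward the few-nanometre scale reported above.
```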
References
Microscopy | GSD microscopy | [
"Chemistry"
] | 381 | [
"Microscopy"
] |
11,815,462 | https://en.wikipedia.org/wiki/Slicing%20%28interface%20design%29 | In fields employing interface design skills, slicing is the process of dividing a single 2D user interface composition layout (comp) into multiple image files (digital assets) of the graphical user interface (GUI) for one or more electronic pages. It is typically part of the client side development process of creating a web page and/or web site, but is also used in the user interface design process of software development and game development.
The process involves partitioning a comp in either a single-layer image file format or the multi-layer native file format of the graphic art software used for partitioning. Once partitioned, the slices are saved as separate image files, typically in GIF, JPEG or PNG format, either in a batch process or one at a time. Multi-layered image files may include multiple versions or states of the same image, often used for animations or widgets.
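The partition-and-save step can be scripted; a minimal sketch using the Pillow imaging library, where the file names and the 2 × 2 grid are arbitrary choices for illustration:

```python
from PIL import Image

# Slice a comp into a grid of separate image assets (2 x 2 here).
comp = Image.open("comp.png")   # hypothetical layout file
cols, rows = 2, 2
tile_w, tile_h = comp.width // cols, comp.height // rows

for row in range(rows):
    for col in range(cols):
        box = (col * tile_w, row * tile_h,
               (col + 1) * tile_w, (row + 1) * tile_h)
        comp.crop(box).save(f"slice_r{row}_c{col}.png")  # one asset per slice
```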
Practical usage
Slicing is used in many cases where a graphic design layout must be implemented as interactive media content. Therefore, this is a very important skill set typically possessed by "front end" developers; that is interactive media developers who specialize in user interface development.
Slices may be produced and used in several different ways. Before tableless web design, sliced images were held together precisely with HTML tables. Modern interactive page layout includes extensive use of Cascading Style Sheets (CSS) and semantic markup. Tables may be used for compatibility with rarer older web browsers that are incapable of processing modern tableless coding accurately.
Slicing is exclusively used for bitmap images. Vector images are usually processed by media-playing plugin applications and contained within native multimedia file formats such as X3D, SWF, SVG or PDF files.
Benefits
Slicing reduces workloads and computer data storage space requirements by needing only the part of a dynamic image that changes, instead of the whole image. If the slice is on a transparent multi-layered image, it can be reused in multiple parts of an image without changing the background.
On the web, slicing breaks up one large image into many smaller ones, which reduces "page weight" or load time considerably. Advanced methods of slicing can be used to further compress the amount of data needed to download to the user's computer in order for the web page to display correctly. Techniques such as repeating background images mean that one small image can be downloaded from the web server only once and then be instructed (via a CSS) to repeat by the markup language, shifting the work load from the web server onto the client's computer. Certain performance issues can be raised, however they are typically negligible compared with today's technology and trends of web design shifting towards rich media websites that typically require high bandwidth connectivity and recent computing hardware.
In offline electronic media, individually sliced sections of a 2D image can be used to decrease the local computer processing requirements to change a section of that image.
Tools
Quite a few industry standard programs offer the ability to automatically slice a layout directly into tables using built-in functions. These are outlined below:
Adobe Photoshop
Sketch
Adobe Fireworks (Previously published by Macromedia)
Adobe ImageReady (Discontinued after CS2, functionality from ImageReady ported into Photoshop since CS3)
GIMP
Jasc Paint Shop Pro
Recent versions of these programs have improved abilities to convert artwork directly into CSS, albeit via an unorthodox method, since the algorithms rely heavily on absolute positioning, for example, which can render (display) inconsistently across modern web browsers.
Alternatives
Slicing is mainly used for 2D computer graphics with single-layered interfaces. Multi-layered interfaces may use slices, but may also use vector graphics (including 3D models) with the drawback of added (most often unnoticeable) rendering time and with the advantage of more options and flexibility in altering the appearance of the individual image. These alternate individual images are commonly referred to as sprites.
See also
Web development
Software development
Separation of presentation and content
Image editing
Graphic image development
External links
alistapart.com article on CSS Sprites compared to slices
Things To Remember, While Coding A Website, To Make It Search Engine Friendly
Slicing tool for HTML + CSS email
Tool for putting sliced images together to HTML and CSS
Web design
User interfaces
Graphic design | Slicing (interface design) | [
"Technology",
"Engineering"
] | 865 | [
"User interfaces",
"Interfaces",
"Web design",
"Design"
] |
11,816,319 | https://en.wikipedia.org/wiki/Siegel%27s%20lemma | In mathematics, specifically in transcendental number theory and Diophantine approximation, Siegel's lemma refers to bounds on the solutions of linear equations obtained by the construction of auxiliary functions. The existence of these polynomials was proven by Axel Thue; Thue's proof used Dirichlet's box principle. Carl Ludwig Siegel published his lemma in 1929. It is a pure existence theorem for a system of linear equations.
Siegel's lemma has been refined in recent years to produce sharper bounds on the estimates given by the lemma.
Statement
Suppose we are given a system of M linear equations in N unknowns such that N > M, say

a_{11} X_1 + ... + a_{1N} X_N = 0
...
a_{M1} X_1 + ... + a_{MN} X_N = 0

where the coefficients a_{ij} are integers, not all 0, and bounded by B. The system then has a solution

(X_1, ..., X_N)

with the X_j all integers, not all 0, and bounded by

|X_j| ≤ (NB)^{M/(N-M)}.

Bombieri & Vaaler (1983) gave the following sharper bound for the X's:

max_j |X_j| ≤ ( D^{-1} √(det(A A^T)) )^{1/(N-M)},

where D is the greatest common divisor of the M × M minors of the M × N coefficient matrix A, and A^T is its transpose. Their proof involved replacing the pigeonhole principle by techniques from the geometry of numbers.
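The classical bound is easy to check exhaustively on a small instance; a minimal sketch (the coefficient matrix is an arbitrary example):

```python
import itertools
import numpy as np

# Verify Siegel's lemma for M = 1 equation in N = 3 unknowns:
# a nonzero integer solution exists with |X_j| <= (N*B)**(M/(N-M)).
A = np.array([[3, -7, 2]])               # example coefficients, so B = 7
M, N = A.shape
B = int(np.max(np.abs(A)))
bound = int((N * B) ** (M / (N - M)))    # here floor(21**0.5) = 4

for X in itertools.product(range(-bound, bound + 1), repeat=N):
    if any(X) and not np.any(A @ np.array(X)):
        print(f"nonzero solution with |X_j| <= {bound}: {X}")
        break
```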
See also
Diophantine approximation
References
Wolfgang M. Schmidt. Diophantine approximation. Lecture Notes in Mathematics 785. Springer. (1980 [1996 with minor corrections]) (Pages 125–128 and 283–285)
Wolfgang M. Schmidt. "Chapter I: Siegel's Lemma and Heights" (pages 1–33). Diophantine approximations and Diophantine equations, Lecture Notes in Mathematics, Springer Verlag 2000.
Lemmas
Diophantine approximation
Diophantine geometry | Siegel's lemma | [
"Mathematics"
] | 337 | [
"Mathematical theorems",
"Diophantine approximation",
"Mathematical relations",
"Mathematical problems",
"Lemmas",
"Approximations",
"Number theory"
] |
11,816,342 | https://en.wikipedia.org/wiki/Cercospora%20solani-tuberosi | Cercospora solani-tuberosi is a fungal plant pathogen.
References
solani-tuberosi
Fungal plant pathogens and diseases
Fungus species | Cercospora solani-tuberosi | [
"Biology"
] | 33 | [
"Fungi",
"Fungus species"
] |
11,816,368 | https://en.wikipedia.org/wiki/Puccinia%20pittieriana | Puccinia pittieriana is a species of rust fungus. It is a plant pathogen which infects agricultural crops such as potato and tomato. Its common names include common potato rust and common potato and tomato rust.
This fungus was first made known to science in 1904 when it was collected from potatoes (Solanum tuberosum) in cultivation on the slopes of Irazú Volcano in Costa Rica. It was later found on wild potato species such as Solanum demissum and on tomato crops. It is now known from Central and South America and from Mexico. Its known hosts now include many wild and cultivated species of Solanum. This is the only rust fungus that infects tomatoes.
Fungal infection manifests as greenish white spots a few millimeters wide on the undersides of a plant's leaves. The upper surfaces of the leaves may be dimpled as the lesions penetrate the blades. The lesions turn whitish with red to brown centers. The lesions spread across the leaf, which then dies and falls. Lesions can also form on the petioles, stems, flowers, and fruits.
The fungal spores are borne on the wind and infect plants when the plant surface is moist and the average temperature is around 10 °C. Symptoms appear in about two weeks and the lesions are largest by 20 to 25 days after infection by spores.
Several potato cultivars are resistant to the rust.
See also
List of Puccinia species
References
External links
Puccinia pittieriana. MycoBank.
Fungal plant pathogens and diseases
Potato diseases
pittieriana
Fungi described in 1904
Fungus species | Puccinia pittieriana | [
"Biology"
] | 320 | [
"Fungi",
"Fungus species"
] |
11,816,425 | https://en.wikipedia.org/wiki/Fusarium%20crookwellense | Fusarium crookwellense (syn. Fusarium cerealis) is a species of fungus in the family Nectriaceae. It is known as a plant pathogen that infects agricultural crops.
The fungus was first described in 1982 after it was found infecting potatoes in Australia. It causes plant diseases such as corn ear rot and wheat head blight. It has also been found on hops causing a necrotic blight on the cones.
Like other species in genus Fusarium, this fungus produces mycotoxins. It is a source of nivalenol, 4-acetylnivalenol, and zearalenone.
See also
List of potato diseases
References
External links
Fusarium crookwellense. Index Fungorum.
crookwellense
Fungal plant pathogens and diseases
Potato diseases
Fungi described in 1982
Fungus species | Fusarium crookwellense | [
"Biology"
] | 175 | [
"Fungi",
"Fungus species"
] |
11,816,456 | https://en.wikipedia.org/wiki/Calcium%20inosinate | Calcium inosinate is a calcium salt of the nucleoside inosine. Under the E number E633, it is a food additive used as a flavor enhancer.
Calcium compounds
Flavor enhancers
Food additives
Nucleotides
Purines
E-number additives | Calcium inosinate | [
"Chemistry"
] | 60 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
11,816,566 | https://en.wikipedia.org/wiki/Septoria%20malagutii | Septoria malagutii is a fungal plant pathogen infecting potatoes. The causal fungal pathogen is a deuteromycete and therefore has no true sexual stage. As a result, Septoria produces pycnidia, asexual flask-shaped fruiting bodies, on the leaves of potato and other tuber-bearing spp., causing small black to brown necrotic lesions ranging in size from 1–5 mm. The necrotic lesions can fuse together forming large necrotic areas susceptible to leaf drop, early senescence, dieback, and dwarfing. Septoria malagutii has been found only in the Andean countries of Bolivia, Ecuador, Peru, and Venezuela at altitudes of near 3000 meters. Consequently, the fungus grows and disperses best under relatively low temperatures with high humidity, with optimal growth occurring at 20 °C (68 °F). The disease has caused devastation of potato yields in South America, and in areas where this disease is common, potato yields have been seen to drop by 60%.
Hosts and symptoms
Hosts of Septoria malagutii include potatoes and other tuber-bearing Solanum spp.; however, tomatoes can be infected by artificial inoculation. The disease is found in a series of ‘tuberosa’ in the Andes mountains of Ecuador, Peru, and Venezuela at elevations reaching above 3000 m. Septoria malagutii is a deuteromycete; therefore, the fungus does not have a true sexual stage, or the sexual stage is extremely uncommon. Consequently, the fungus reproduces asexually via its conidiomata, the pycnidia. Pycnidia are asexual flask-shaped fruiting bodies that produce conidia via mitosis. Above-ground parts of the potato become infected by the conidia (pycnidiospores) in a variety of natural ways, such as rainy or windy conditions. Signs and symptoms of the disease are easily observable on the upper side of infected leaves, where small, dark brown, round necrotic lesions ranging in size from 1–5 mm are formed. The lesions exhibit pronounced concentric ridges with 1 to 3 erumpent black pycnidia within the central ring. Other symptoms on the leaves include the lesions often fusing together to create large necrotic areas, making the leaves brittle and susceptible to leaf drop or wind damage. Additionally, the stems can become discolored and resemble bark, and the whole plant can exhibit dwarfing, early senescence, and dieback. Stem lesions are more elongated than leaf lesions, reaching up to 15 mm in length and 2 mm in diameter. Furthermore, there have been no reports of the underground plant parts, such as roots and tubers, having been affected by the pathogen. Potato is known to be the only naturally cultivated host, and infection is limited to the leaves of potato plants under natural circumstances; fertile formation of pycnidia on living leaves has not been observed. Lastly, the prevalence of resistance in varying cultivars is low, ranging from moderately resistant to very resistant, but it is uncertain how long the pathogen can survive in infected host debris in soil.
Environment
Septoria malagutii is specific to the Andean countries of South America: Bolivia, Ecuador, Peru, and Venezuela. In Ecuador, it is reported to occur mostly at temperatures of around 8 °C with a high relative humidity. The pathogen has a growth preference for lower temperatures, explaining why it is found only at the high altitudes of the Andes mountain range. The minimum temperature for mycelial growth is 3 °C, while optimal growth occurs at 20–21 °C, with a maximum of 27 °C. A moist period with wet leaves of up to 2 days is required for the pathogen to infect plants at 16 to 22 °C. Moreover, the disease has been reported in cold and humid conditions in the Andes at altitudes above 2000 m. These conditions favor the spread of the fungus to neighboring plants via splashing rainwater and insect vectors such as beetles.
Management
Management of Septoria leaf spot of potato is important because, owing to its soil-borne and long-lasting nature, it is impossible to eradicate once introduced into a new area. Today, Septoria malagutii and other Septoria diseases are controlled with a number of different methods, including the use of fungicides and cultural controls. Fungicides such as fluazinam, used for controlling late blight of potato (Phytophthora infestans), have proven to be effective against S. malagutii. Although the systemic anti-oomycete compounds have failed, fungicides should still be used during early stages of infection in order to prevent secondary spread, since lesions are a source of inoculum. Likewise, the fungicide fentin has proven effective in controlling Septoria. In addition, biological controls such as copper sulfate can be integrated with other fungicides to further control the spread of disease by preventing germination of the spores. Culturally, in places where potatoes can be grown all year round, a Septoria infection can be avoided by planting during seasons with low humidity or higher temperatures. This is beneficial in controlling the disease because Septoria malagutii's mycelial growth is optimal at low temperatures and its spores disperse more efficiently under humid conditions. Likewise, the pathogen can spread from inoculum sources such as plant debris on soil and wild hosts via farmers' clothing, footwear, tubers coated in soil, and cultivation equipment, so it is important to sterilize equipment between uses in order to limit its spread.
Importance
The severity of Septoria leaf spot of Potato in the Northwestern countries of South America is notable. The disease destroys up to 60% of potato yield in South America causing significant crop loss in Andean countries. Moreover, countries within the European Union and elsewhere around the world are susceptible to establishment of the disease. Potatoes are widely grown in the EU and Septoria malagutii would have host availability and thrive in the cool humid climate of European countries, so it is essential to keep the disease contained to the Andes.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Potato diseases
malagutii
Fungus species | Septoria malagutii | [
"Biology"
] | 1,275 | [
"Fungi",
"Fungus species"
] |
11,816,591 | https://en.wikipedia.org/wiki/Helminthosporium%20solani | Helminthosporium solani is a fungal plant pathogen responsible for the plant disease known as silver scurf. Silver scurf is a blemish disease, meaning the effect it has on tubers is mostly cosmetic, and it affects "fresh market, processing and seed tuber potatoes." There are some reports of it affecting development, meaning growth and tuber yield. This is caused by light brown lesions, which change the permeability of the tuber skin, causing tuber shrinkage and water loss, and finally weight loss. The disease has become economically important because silver scurf-affected potatoes for processing and direct consumption have been rejected by the industry. The disease cycle can be divided into two stages: field and storage. It is mainly a seed-borne disease, and the primary source of inoculum is mainly infected potato seed tubers. Symptoms develop and worsen in storage because the conditions are conducive to sporulation. The ideal conditions for the spread of this disease are high temperatures and high humidity. There are also many cultural practices that favor spread and development. There are multiple ways to help control the disease.
Signs and symptoms
Silver scurf is a plant disease of potato caused by the anamorphic ascomycete fungus Helminthosporium solani. Potato tubers are the only known host of Helminthosporium solani; it is a highly specific pathogen with no secondary or alternate host. A common symptom of this disease is blemishing on the surface of the potato tubers. These blemishes are tan and/or gray due to a loss of pigment, and they are usually irregularly shaped. A post-harvest symptom can be shrinkage and shriveling of the outer tissue of the potato due to water loss. Black spots, a sign of the disease, can also be found on the surface of infected tubers. These are made up of conidia and conidiophores of the pathogenic fungus. The conidia are characterized by being very darkly melanized and having multiple pseudosepta. Another characteristic of this fungus is the absence of motile spores.
As with many other fungal plant diseases, a diagnosis can be made by looking for the characteristic spore-bearing structures of the fungus and observing them for features specific to silver scurf. Silver scurf can also be diagnosed through molecular techniques, such as PCR and sequencing, to identify the presence of the pathogen. The primer pair HSF19-HSR447 has been designed to specifically amplify a section of Helminthosporium solani DNA.
Currently, no host factors have been identified that are linked to increased susceptibility to, or development of, the disease. Environmental conditions appear to play the major role in the severity of the disease.
Disease cycle
The disease cycle of silver scurf can be divided into two phases (field and storage). The primary source of inoculum is infected potato seed tubers. This inoculum is transferred to the daughter tubers through an unknown mechanism, although indirect evidence suggests it happens when seed tubers come into direct contact with, or close proximity to, daughter tubers. Conidia produced on the surface of seed tubers are dispersed by rain or irrigation to uninfected tubers. These conidia germinate and infect tubers, with the pathogen entering through the periderm or lenticels and then colonizing the periderm cells of the tuber. Infection may begin when tubers are formed and can continue through the season. At harvest (mostly in summer), silver scurf symptoms are not very apparent. However, the symptoms develop and worsen in storage due to the relatively humid and warm conditions there, which are conducive to sporulation. Secondary inoculum consists of conidia, which can be spread in storage by air movement from ventilation. When a seed tuber from this storage is planted, it can carry inoculum back to the field. It was once believed that overwintering soil-borne inoculum was unimportant in the disease cycle, but recent studies suggest H. solani may survive in the soil for a short period of time, which can cause additional infection.
This is an imperfect fungus whose teleomorph has not been described. Disease symptoms appear on tubers, but not on the haulm (vine) or roots, and are limited to the periderm, which is composed of the phellem, phelloderm and cortical layers that replace the epidermis of the tuber. See the next section (Environment) for the conditions that govern the occurrence and severity of the different stages of the life cycle described here.
Environment
A number of conditions favor the spread and development of H. solani. Temperatures of 15–32 °C combined with high humidity increase conidial germination. In addition, many cultural practices affect the conditions that favor disease spread and development. These practices include the level of H. solani present on the seed, planting and harvesting dates, crop rotations, and warehouse management. It has been demonstrated that later harvest dates increase the development of the disease, and that the disease is more severe at higher planting densities. All of these factors combined have an effect on disease spread and development.
Pathogenesis
The spores can still infect and cause disease in daughter tubers in the soil for about two years. It is also possible for the pathogen to spread by growing through the roots of a potato plant to the developing tubers and causing infection. H. solani conidia are found on the outside of potato tubers, and the hyphae enter the tuber to cause disease. The pathogen can enter the tissue through wounds or natural openings, and it can also directly penetrate the periderm using an appressorium and penetration peg. The fungus is confined to the outer layers of the potato and cannot infect deep into the tuber. The discoloration on the periderm of the potato results from the loss of pigmentation caused by extreme dryness of the cells and suberin deposition. Little is currently known about the molecular mechanisms of spread and infection, but ongoing research on this pathogen aims to gain a better understanding.
Disease control
Chemical control
Fungicides control many plant diseases efficiently, but only a limited range of fungicides is effective against the silver scurf pathogen. Fungicides are usually applied to soil or seed tubers before planting.
Thiabendazole (TBZ) fungicide
TBZ, a systemic broad-spectrum fungicide, has been widely used as a post-harvest treatment on potatoes since the early 1970s and can reduce silver scurf on tubers. TBZ has low toxicity and is used to prevent or control silver scurf for a short period, e.g. several months, with no effect on quality and no retention of residues.
TBZ-resistant H. solani isolates
TBZ used as a post-harvest treatment was very effective until 1977, when TBZ-resistant H. solani isolates were found in potato stores. TBZ resistance in H. solani resulted from a point mutation of a single base at codon 198 of the β-tubulin gene, changing glutamic acid to glutamine or alanine. This mutation prevents TBZ and other benzimidazole fungicides from binding to the H. solani β-tubulin protein, resulting in TBZ-resistant phenotypes.
Fungicides besides TBZ
As the frequency of TBZ-resistant isolates increased, other fungicides were tested for control of silver scurf, such as imazalil, prochloraz and propiconazole, all of which are conazoles classified as demethylation inhibitors (DMIs). Imazalil and prochloraz are commonly used for seed treatment, while propiconazole is usually used for foliar treatment.
Host resistance
One of the major reasons for the increasing economic importance of silver scurf is the lack of high levels of resistance in potato cultivars.
Interspecific crosses with wild Solanum species have been used to increase disease resistance in cultivars of S. tuberosum. Genes from the wild tuber-bearing species Solanum demissum, Solanum chacoense and Solanum acaule, on which H. solani sporulates poorly, have been incorporated into the background of some Canadian potato cultivars. These interspecific crosses and advanced selections are being screened for resistance to different diseases, including silver scurf. However, no silver scurf-resistant cultivars of Solanum tuberosum have so far been identified.
Suppressive soils
Soil type influences the development of silver scurf to a great extent, both at harvest and during the following three-month storage period, and some soils display different levels of suppressiveness. Results from experimental trials revealed a significant negative correlation between silver scurf disease severity and soil NO3 content and Fe availability; NO3 had previously been negatively correlated with silver scurf disease, suggesting a possible suppressive effect of these two soil components. NO3 is an efficient nitrogen source used by H. solani, so a direct adverse effect of NO3 on the pathogen is unlikely. A probable explanation for this observation is that NO3 affects other soil microorganisms that may act as antagonists of H. solani. These results indicate that microbial antagonists may be the key components contributing to soil suppressiveness and may enable efficient biological control of silver scurf.
Biological control
Biological control is considered an attractive alternative to chemicals for the efficient, reliable, and environmentally safe control of plant pathogens.
A fungus of the genus Cephalosporium (now renamed Acremonium strictum) was able to decrease the dissemination of silver scurf in storage. Cephalosporium has shown the ability to significantly diminish sporulation, spore germination and mycelial growth of H. solani. However, Cephalosporium does not reduce silver scurf on previously infected potatoes.
In laboratory experiments using isolates from potato-growing soil and the rhizosphere of potato plants during sprouting, Trichoderma hamatum, Trichoderma koningii, Trichoderma polysporum, Trichoderma harzianum and Trichoderma viride were the microorganisms most inhibitory to H. solani growth in vitro.
Achromobacter piechaudii, Bacillus cereus, Cellulomonas fimi, Pseudomonas chlororaphis, Pseudomonas fluorescens, Pseudomonas putida, and Streptomyces griseus were able to inhibit mycelial growth and/or conidial germination through the production of diffusible metabolites; antibiosis was likely responsible, fully or in part, for their antagonism of H. solani.
Biopesticides
Serenade ASO (a formulation of Bacillus subtilis) has been shown to suppress silver scurf, reducing both the incidence and severity of the disease under low disease pressure and delaying the onset of silver scurf in storage by five months.
Relevance
When silver scurf was first reported in Moscow in 1871, it was considered a minor plant disease. After increases in silver scurf incidence reported from the Americas, Europe, the Middle East, Africa, China, and New Zealand since 1968, the disease came to be considered one of major importance. Although the disease does not cause potato yield losses and only affects the cosmetic appearance of the tuber, it has had a huge impact on the potato market. With growing consumer demand for attractive appearance in fresh-market cultivars, potatoes with silver scurf blemishes and discoloration have been rejected by the industry. Furthermore, the water loss caused by silver scurf makes it difficult to peel the tubers, and the excess tuber shrinkage causes weight loss. Through the cosmetic damage, dehydration, and weight loss of the tubers, the fresh market faces major economic losses from the disease even today. For example, Idaho's potato industry lost an estimated 7 to 8.5 million dollars to silver scurf. The cost comes not only from rejecting diseased potatoes, but also from the increased time needed for sorting and inspecting every potato.
References
Fungal plant pathogens and diseases
Potato diseases
Pleosporaceae
Fungi described in 1849
Taxa named by Michel Charles Durieu de Maisonneuve
Taxa named by Camille Montagne
Fungus species | Helminthosporium solani | [
"Biology"
] | 2,727 | [
"Fungi",
"Fungus species"
] |
11,816,617 | https://en.wikipedia.org/wiki/Polyscytalum%20pustulans | Polyscytalum pustulans is an ascomycete fungus that is a plant pathogen causing skin spot of potatoes.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Potato diseases
Enigmatic Ascomycota taxa
Fungus species | Polyscytalum pustulans | [
"Biology"
] | 59 | [
"Fungi",
"Fungus species"
] |
11,816,650 | https://en.wikipedia.org/wiki/Ulocladium%20atrum | Ulocladium atrum is a fungal saprophyte.
U. atrum is used to control Botrytis cinerea, a fungal pathogen (gray mold) of grapes and other fruit.
The species has also been found as a cause of keratitis, inflammation of the cornea.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal pest control agents
Pleosporaceae
Fungus species | Ulocladium atrum | [
"Biology"
] | 87 | [
"Fungi",
"Fungus species",
"Fungal pest control agents"
] |
11,816,713 | https://en.wikipedia.org/wiki/Ascochyta%20fabae | Ascochyta fabae is a plant pathogen.
See also
List of Ascochyta species
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporales
Fungi described in 1885
Fungus species | Ascochyta fabae | [
"Biology"
] | 48 | [
"Fungi",
"Fungus species"
] |
11,816,826 | https://en.wikipedia.org/wiki/Apiospora%20montagnei | Apiospora montagnei is a plant pathogen that causes kernel blight on barley but is more often seen as a saprophyte or secondary invader of many other plant species.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Barley diseases
Enigmatic Sordariomycetes taxa
Fungi described in 1875
Fungus species | Apiospora montagnei | [
"Biology"
] | 75 | [
"Fungi",
"Fungus species"
] |
11,816,852 | https://en.wikipedia.org/wiki/Ascochyta%20hordei | Ascochyta hordei is a plant pathogen that causes Ascochyta leaf spot on barley and wheat. Ascochyta leaf spot of barley can also be caused by other Ascochyta species, including A. graminea, A. sorghi, and A. tritici. It is considered a minor disease.
Distribution
Russia, Ukraine, southern Belarus, North Ossetia, Albania, Japan, Mid Canterbury (New Zealand), and the mountains of North America.
References
Fungal plant pathogens and diseases
Barley diseases
Wheat diseases
hordei
Fungi described in 1930
Fungus species | Ascochyta hordei | [
"Biology"
] | 118 | [
"Fungi",
"Fungus species"
] |
11,816,879 | https://en.wikipedia.org/wiki/Ascochyta%20graminea | Ascochyta graminea is a plant pathogen that causes Ascochyta leaf spot on barley which can also be caused by the related fungi Ascochyta hordei, Ascochyta sorghi and Ascochyta tritici. It is considered a minor disease of barley.
See also
List of Ascochyta species
References
Fungal plant pathogens and diseases
Barley diseases
graminea
Fungi described in 1950
Fungus species | Ascochyta graminea | [
"Biology"
] | 91 | [
"Fungi",
"Fungus species"
] |
11,816,893 | https://en.wikipedia.org/wiki/Project%20Vamp | Project Vamp was a U.S. Navy project for the U.S. Hydrographic Office, which consisted of a special coastal survey along the Virginia and Massachusetts shores during 1954 and 1955.
Oceanography | Project Vamp | [
"Physics",
"Environmental_science"
] | 42 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
11,816,901 | https://en.wikipedia.org/wiki/Drechslera%20andersenii | Drechslera andersenii is a fungus that is a plant pathogen. It was originally found on the leaves of Lolium perenne (perennial ryegrass) in Great Britain. It was also found on Italian ryegrass.
It was found in China in 2018.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporaceae
Fungi described in 1985
Fungus species | Drechslera andersenii | [
"Biology"
] | 80 | [
"Fungi",
"Fungus species"
] |
11,816,962 | https://en.wikipedia.org/wiki/Septoria%20passerinii | Septoria passerinii is a fungal plant pathogen that infects barley, causing Septoria speckled leaf blotch.
References
Fungal plant pathogens and diseases
Barley diseases
passerinii
Fungus species | Septoria passerinii | [
"Biology"
] | 41 | [
"Fungi",
"Fungus species"
] |
11,817,026 | https://en.wikipedia.org/wiki/Colletotrichum%20musae | Colletotrichum musae is a plant pathogen primarily affecting the genus Musa, which includes bananas and plantains. It is best known as a cause of anthracnose (the black and brown spots) indicating ripeness on bananas.
Symptoms
Symptoms appear as dark brown/black lesions on green fruit. On yellow fruit these lesions increase in size, and orange fungal growth can be found in the centre of the lesions. Symptoms can also appear on the tips of fruit and include premature ripening.
Management
The CABI-led programme Plantwise recommends several methods to prevent the spread of the disease. These include covering emerging fruit in plastic, avoiding damage during harvest, and removing decaying plant parts and non-crop weed species to reduce the humid conditions that favour fungal infection.
Plantwise also recommends sufficient irrigation and drainage of plantations to avoid conditions which favour fungi.
Sources
References
musae
Fungal plant pathogens and diseases
Fungi described in 1957
Fungus species | Colletotrichum musae | [
"Biology"
] | 197 | [
"Fungi",
"Fungus species"
] |
11,817,041 | https://en.wikipedia.org/wiki/Phyllachora%20musicola | Phyllachora musicola is a plant pathogen infecting bananas.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Banana diseases
Phyllachorales
Fungus species | Phyllachora musicola | [
"Biology"
] | 46 | [
"Fungi",
"Fungus species"
] |
11,817,057 | https://en.wikipedia.org/wiki/Pestalotiopsis%20leprogena | Pestalotiopsis leprogena is a fungal plant pathogen infecting bananas.
References
Fungal plant pathogens and diseases
Banana diseases
leprogena
Fungus species | Pestalotiopsis leprogena | [
"Biology"
] | 35 | [
"Fungi",
"Fungus species"
] |
11,817,082 | https://en.wikipedia.org/wiki/Cercospora%20hayi | Cercospora hayi is a fungal plant pathogen. It can cause the brown spot disease in bananas.
References
hayi
Fungal plant pathogens and diseases
Fungus species | Cercospora hayi | [
"Biology"
] | 35 | [
"Fungi",
"Fungus species"
] |
11,817,097 | https://en.wikipedia.org/wiki/Musicillium%20theobromae | Musicillium theobromae (synonym Verticillium theobromae) is a plant pathogen infecting banana and plantain.
See also
List of banana and plantain diseases
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Banana diseases
Fungi described in 1920
Enigmatic Hypocreales taxa
Fungus species | Musicillium theobromae | [
"Biology"
] | 66 | [
"Fungi",
"Fungus species"
] |
11,817,121 | https://en.wikipedia.org/wiki/Cladosporium%20musae | Cladosporium musae is a fungal plant pathogen that causes Cladosporium speckle on banana and which occurs in most countries in which the fruit is cultivated. Unsuccessful attempts to transfer the Cladosporium pathogen in vitro to healthy banana plants seem to confirm reports that the infection remains latent in otherwise healthy plants.
References
musae
Fungal plant pathogens and diseases
Banana diseases
Fungi described in 1945
Fungus species | Cladosporium musae | [
"Biology"
] | 84 | [
"Fungi",
"Fungus species"
] |