id: int64 (39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (length 0 to 30)
962,174
https://en.wikipedia.org/wiki/Node%20of%20Ranvier
Nodes of Ranvier, also known as myelin-sheath gaps, occur along a myelinated axon where the axolemma is exposed to the extracellular space. Nodes of Ranvier are uninsulated axonal domains that are highly enriched in sodium and potassium ion channels complexed with cell adhesion molecules, allowing them to participate in the exchange of ions required to regenerate the action potential. Nerve conduction in myelinated axons is referred to as saltatory conduction due to the manner in which the action potential seems to "jump" from one node to the next along the axon. This results in faster conduction of the action potential. The nodes of Ranvier are present in both the peripheral and central nervous systems. Overview The nodes are primarily composed of voltage-gated sodium and potassium ion channels; cell adhesion molecules (CAMs) such as neurofascin-186 and NrCAM; and cytoskeletal adaptor proteins such as ankyrin-G and βIV spectrin. Many vertebrate axons are surrounded by a myelin sheath, allowing rapid and efficient saltatory ("jumping") propagation of action potentials. The contacts between neurons and glial cells display a very high level of spatial and temporal organization in myelinated fibers. The myelinating glial cells - oligodendrocytes in the central nervous system (CNS), and Schwann cells in the peripheral nervous system (PNS) - are wrapped around the axon, leaving the axolemma relatively uncovered at the regularly spaced nodes of Ranvier. The internodal glial membranes are fused to form compact myelin, whereas the cytoplasm-filled paranodal loops of myelinating cells are spirally wrapped around the axon on both sides of the nodes. This organization demands tight developmental control and the formation of a variety of specialized zones of contact between different areas of the myelinating cell membrane. Each node of Ranvier is flanked by paranodal regions where helicoidally wrapped glial loops are attached to the axonal membrane by a septate-like junction. The segment between nodes of Ranvier is termed the internode, and its outermost part, in contact with the paranodes, is referred to as the juxtaparanodal region. The nodes are encapsulated by microvilli stemming from the outer aspect of the Schwann cell membrane in the PNS, or by perinodal extensions from astrocytes in the CNS. Structure The internodes are the myelinated segments and the gaps between them are referred to as nodes. The size and the spacing of the internodes vary with the fiber diameter in a curvilinear relationship that is optimized for maximal conduction velocity. The nodes span 1–2 μm, whereas the internodes can be up to (and occasionally even greater than) 1.5 millimetres long, depending on the axon diameter and fiber type. The structure of the node and the flanking paranodal regions is distinct from that of the internodes under the compact myelin sheath, but is very similar in the CNS and PNS. The axon is exposed to the extracellular environment at the node and is constricted in its diameter. The decreased axon size reflects a higher packing density of neurofilaments in this region, which are less heavily phosphorylated and are transported more slowly. Vesicles and other organelles are also increased at the nodes, which suggests that there is a bottleneck of axonal transport in both directions as well as local axonal-glial signaling. 
When a longitudinal section is made through a myelinating Schwann cell at the node, three distinctive segments are represented: the stereotypic internode, the paranodal region, and the node itself. In the internodal region, the Schwann cell has an outer collar of cytoplasm, a compact myelin sheath, an inner collar of cytoplasm, and the axolemma. At the paranodal regions, the paranodal cytoplasmic loops contact thickenings of the axolemma to form septate-like junctions. In the node alone, the axolemma is contacted by several Schwann cell microvilli and contains a dense cytoskeletal undercoating. Differences between the central and peripheral nervous systems Although freeze-fracture studies have revealed that the nodal axolemma in both the CNS and PNS is enriched in intramembranous particles (IMPs) compared to the internode, there are some structural differences reflecting their cellular constituents. In the PNS, specialized microvilli project from the outer collar of Schwann cells and come very close to the nodal axolemma of large fibers. The projections of the Schwann cells are perpendicular to the node and radiate from the central axon. However, in the CNS, one or more astrocytic processes come into close vicinity of the nodes. Researchers have suggested that these processes stem from multi-functional astrocytes, rather than from a population of astrocytes dedicated to contacting the node. On the other hand, in the PNS, the basal lamina that surrounds the Schwann cells is continuous across the node. A study suggests that in the CNS, nerve cells individually alter the size of the nodes to tune conduction speeds, leading node length to vary much more across different axons than within one. Composition The nodes of Ranvier contain Na+/Ca2+ exchangers and a high density of voltage-gated Na+ channels that generate action potentials. A sodium channel consists of a pore-forming α subunit and two accessory β subunits, which anchor the channel to extracellular and intracellular components. The sodium channels at nodes of Ranvier in the central and peripheral nervous systems mostly consist of the NaV1.6 α subunit and β1 subunits. The extracellular region of the β subunits can associate with itself and with other proteins, such as tenascin R and the cell-adhesion molecules neurofascin and contactin. Contactin is also present at nodes in the CNS, and interaction with this molecule enhances the surface expression of Na+ channels. Ankyrin has been found to be bound to βIV spectrin, a spectrin isoform enriched at nodes of Ranvier and axon initial segments. The PNS nodes are surrounded by Schwann cell microvilli, which contain ERMs and EBP50 that may provide a connection to actin microfilaments. Several extracellular matrix proteins are enriched at nodes of Ranvier, including tenascin-R, Bral-1, and the proteoglycan NG2, as well as phosphacan and versican V2. At CNS nodes, the axonal proteins also include contactin; however, unlike in the PNS, Schwann cell microvilli are replaced by astrocyte perinodal extensions. Molecular organization The molecular organization of the nodes corresponds to their specialized function in impulse propagation. The level of sodium channels in the node versus the internode suggests that the number of IMPs corresponds to the number of sodium channels. Potassium channels are essentially absent in the nodal axolemma, whereas they are highly concentrated in the paranodal axolemma and Schwann cell membranes at the node. 
The exact function of the potassium channels has not been fully established, but they may contribute to the rapid repolarization of action potentials or play a vital role in buffering potassium ions at the nodes. This highly asymmetric distribution of voltage-gated sodium and potassium channels is in striking contrast to their diffuse distribution in unmyelinated fibers. The filamentous network subjacent to the nodal membrane contains cytoskeletal proteins called spectrin and ankyrin. The high density of ankyrin at the nodes may be functionally significant because several of the proteins concentrated at the nodes share the ability to bind ankyrin with extremely high affinity. All of these proteins, including ankyrin, are enriched in the initial segment of axons, which suggests a functional relationship. The relationship of these molecular components to the clustering of sodium channels at the nodes, however, is still not known. Some cell-adhesion molecules have been reported to be present at the nodes only inconsistently; however, a variety of other molecules are known to be highly concentrated at the glial membranes of the paranodal regions, where they contribute to their organization and structural integrity. Development Myelination of nerve fibers The complex changes that the Schwann cell undergoes during the process of myelination of peripheral nerve fibers have been observed and studied by many. The initial envelopment of the axon occurs without interruption along the entire extent of the Schwann cell. This process is followed by the in-folding of the Schwann cell surface, so that a double membrane is formed from the opposing faces of the in-folded Schwann cell surface. This membrane stretches and spirally wraps itself over and over as the in-folding of the Schwann cell surface continues. As a result, the increase in the thickness of the myelin sheath in its cross-sectional diameter is easily ascertained. It is also evident that each of the consecutive turns of the spiral increases in size along the length of the axon as the number of turns increases. However, it is not clear whether or not the increase in length of the myelin sheath can be accounted for solely by the increase in the length of axon covered by each successive turn of the spiral, as previously explained. At the junction of two Schwann cells along an axon, the directions of the lamellar overhang of the myelin endings are of opposite sense. This junction, between adjacent Schwann cells, constitutes the region designated as the node of Ranvier. Early stages Researchers have shown that in the developing CNS, Nav1.2 is initially expressed at all forming nodes of Ranvier. Upon maturation, nodal Nav1.2 is down-regulated and replaced by Nav1.6. Nav1.2 is also expressed during PNS node formation, which suggests that the switching of Nav-channel subtypes is a general phenomenon in the CNS and PNS. In this same investigation, it was shown that Nav1.6 and Nav1.2 colocalize at many nodes of Ranvier during early myelination. This also led to the suggestion that early clusters of Nav1.2 and Nav1.6 channels are destined to later become nodes of Ranvier. Neurofascin is also reported to be one of the first proteins to accumulate at newly forming nodes of Ranvier. It is also found to provide the nucleation site for the attachment of ankyrin G, Nav channels, and other proteins. 
The recent identification of the Schwann cell microvillar protein gliomedin as the likely binding partner of axonal neurofascin provides substantial evidence for the importance of this protein in recruiting Nav channels to the nodes of Ranvier. Furthermore, Lambert et al. and Eshed et al. also indicate that neurofascin accumulates before Nav channels and is likely to have crucial roles in the earliest events associated with node of Ranvier formation. Thus, multiple mechanisms may exist and work synergistically to facilitate clustering of Nav channels at nodes of Ranvier. Nodal formation The first event appears to be the accumulation of cell adhesion molecules such as NF186 or NrCAM. The intracellular regions of these cell-adhesion molecules interact with ankyrin G, which serves as an anchor for sodium channels. In the PNS, this interaction has been elucidated. The Ig superfamily membrane protein NrCAM acts as a pioneer molecule in the formation of the nodes by recruiting ankyrin-G, a mediator protein in the connection of the actin-spectrin cytoskeleton to the gated ion channels present at the node. At the same time, the periaxonal extension of the glial cell wraps around the axon, giving rise to the paranodal regions. This movement along the axon contributes significantly to the overall formation of the nodes of Ranvier by permitting heminodes formed at the edges of neighboring glial cells to fuse into complete nodes. Septate-like junctions form at the paranodes with the enrichment of NF155 in glial paranodal loops. Immediately following the early differentiation of the nodal and paranodal regions, potassium channels, Caspr2, and TAG1 accumulate in the juxtaparanodal regions. This accumulation coincides directly with the formation of compact myelin. In mature nodes, interactions with intracellular proteins appear vital for the stability of all nodal regions. In the CNS, oligodendrocytes do not possess microvilli, but appear capable of initiating the clustering of some axonal proteins through secreted factors. The combined effects of such factors and the subsequent movements generated by the wrapping of the oligodendrocyte periaxonal extension could account for the organization of CNS nodes of Ranvier. Function Action potential An action potential is a spike of both positive and negative ionic discharge that travels along the membrane of a cell. The creation and conduction of action potentials represent a fundamental means of communication in the nervous system. Action potentials represent rapid reversals in voltage across the plasma membrane of axons. These rapid reversals are mediated by voltage-gated ion channels found in the plasma membrane. The action potential travels from one location in the cell to another, but ion flow across the membrane occurs only at the nodes of Ranvier. As a result, the action potential signal jumps along the axon, from node to node, rather than propagating smoothly, as it does in axons that lack a myelin sheath. The clustering of voltage-gated sodium and potassium ion channels at the nodes permits this behavior. Saltatory conduction Since an axon can be unmyelinated or myelinated, the action potential has two methods to travel down the axon. These methods are referred to as continuous conduction for unmyelinated axons, and saltatory conduction for myelinated axons. Saltatory conduction is defined as an action potential moving in discrete jumps down a myelinated axon. 
In this process, charge passively spreads to the next node of Ranvier and depolarizes it to threshold, which triggers an action potential in that region; the charge then passively spreads to the next node, and so on. Saltatory conduction provides an advantage over conduction along an axon without a myelin sheath: the increased speed afforded by this mode of conduction ensures faster interaction between neurons (a rough timing sketch illustrating this speed advantage follows at the end of this article's text). On the other hand, depending on the average firing rate of the neuron, calculations show that the energetic cost of maintaining the resting potential of oligodendrocytes can outweigh the energy savings of action potentials. So, axon myelination does not necessarily save energy. Formation regulation Paranode regulation via mitochondria accumulation Mitochondria and other membranous organelles are normally enriched in the PNP region of peripheral myelinated axons, especially in large-caliber axons. The actual physiological role of this accumulation and the factors that regulate it are not understood; however, mitochondria are usually present in areas of the cell that have a high energy demand. Such areas include growth cones, synaptic terminals, and sites of action potential initiation and regeneration, such as the nodes of Ranvier. In the synaptic terminals, mitochondria produce the ATP needed to mobilize vesicles for neurotransmission. In the nodes of Ranvier, mitochondria serve an important role in impulse conduction by producing the ATP that is essential to maintain the activity of energy-demanding ion pumps. Supporting this, about five times more mitochondria are present in the PNP axoplasm of large peripheral axons than in the corresponding internodal regions of these fibers. Nodal regulation via αII-spectrin Saltatory conduction in myelinated axons requires organization of the nodes of Ranvier, where voltage-gated sodium channels are densely clustered. Studies show that αII-spectrin, a component of the cytoskeleton, is enriched at the nodes and paranodes at early stages, and that as the nodes mature, expression of this molecule disappears. αII-Spectrin in the axonal cytoskeleton has also been shown to be vital for stabilizing sodium channel clusters and organizing the mature node of Ranvier. Possible regulation via the recognition molecule OMgp It has been shown previously that OMgp (oligodendrocyte myelin glycoprotein) clusters at nodes of Ranvier and may regulate paranodal architecture, node length and axonal sprouting at nodes. However, a follow-up study showed that the antibody used previously to identify OMgp at nodes cross-reacts with another node-enriched component, versican V2, and that OMgp is not required for the integrity of nodes and paranodes, arguing against the previously reported localization and proposed functions of OMgp at nodes. Clinical significance Injury to the proteins in these excitable domains of the neuron may result in cognitive disorders and various neuropathic ailments. History The myelin sheath of long nerves was discovered and named by German pathological anatomist Rudolf Virchow in 1854. French pathologist and anatomist Louis-Antoine Ranvier later discovered the nodes, or gaps, in the myelin sheath that now bear his name. Born in Lyon, Ranvier was one of the most prominent histologists of the late 19th century. Ranvier abandoned pathological studies in 1867 and became an assistant to physiologist Claude Bernard. 
He became chair of General Anatomy at the Collège de France in 1875. Ranvier discovered the nodes in 1878: using staining techniques developed by Ludwig Mauthner, he noticed that myelinated axons were stained only at regular intervals, revealing the gaps in the sheaths of nerve fibers that were later called the nodes of Ranvier. Reportedly, he dismissed the idea of nodes in the CNS, although their existence was later proven. His refined histological techniques and his work on both injured and normal nerve fibers became world-renowned. His observations on fiber nodes and the degeneration and regeneration of cut fibers had a great influence on Parisian neurology at the Salpêtrière. The discovery of the nodes also led Ranvier to a careful histological examination of myelin sheaths and Schwann cells. See also Internodal segment Schwann cell Oligodendrocyte Myelin References External links Cell Centered Database – Node of Ranvier – "PNS, nerve (LM, Medium)" Membrane biology Neurohistology Signal transduction
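To make the speed advantage of saltatory conduction described in the Function section above concrete, here is a rough, purely illustrative timing sketch. Every figure in it (per-node delay, internode length, continuous-conduction speed) is an assumption chosen only to show the shape of the comparison, not a measured value from this article.

```python
def conduction_times_ms(axon_length_mm, internode_mm=1.5, node_delay_ms=0.02,
                        continuous_speed_mm_per_ms=1.0):
    """Compare an idealized saltatory conduction time (one fixed delay per
    node of Ranvier) with continuous conduction at a constant speed.
    All parameter values are illustrative assumptions."""
    nodes = axon_length_mm / internode_mm          # roughly one node per internode
    saltatory_ms = nodes * node_delay_ms           # time spent regenerating the spike at nodes
    continuous_ms = axon_length_mm / continuous_speed_mm_per_ms
    return saltatory_ms, continuous_ms

# A 30 cm axon: ~200 nodes at an assumed 0.02 ms each vs. smooth conduction at 1 mm/ms.
print(conduction_times_ms(300))   # (4.0, 300.0) -> milliseconds
```

Under these made-up numbers the myelinated fiber is roughly two orders of magnitude faster, which is the qualitative point of saltatory conduction; real conduction velocities depend on axon diameter and myelin thickness.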
Node of Ranvier
[ "Chemistry", "Biology" ]
3,992
[ "Membrane biology", "Signal transduction", "Molecular biology", "Biochemistry", "Neurochemistry" ]
962,292
https://en.wikipedia.org/wiki/Outfall
An outfall is the discharge point of a waste stream into a body of water; alternatively it may be the outlet of a river, drain or a sewer where it discharges into the sea, a lake or ocean. United States of America In the United States, industrial facilities that discharge stormwater which has been exposed to industrial activities at the site are required to have a multi-sector general permit. Issuing stormwater permits is delegated to the individual states that are authorized by the Environmental Protection Agency (EPA). Facilities that apply for a permit must specify the number of outfalls at the site. According to the EPA's Multi-Sector General Permit For Stormwater Discharges Associated With Industrial Activity, outfalls are locations where the stormwater exits the facility, including pipes, ditches, swales, and other structures that transport stormwater. If more than one outfall is present, measurements are taken at the primary outfall (i.e., the outfall with the largest volume of stormwater discharge associated with industrial activity). Outfalls from sewage plants can be up to in diameter and release treated human waste miles from the shore. A wastewater treatment system discharges treated effluent to a water body from an outfall. An ocean outfall may be conveyed several miles offshore, to discharge by nozzles at the end of a spreader or T-shaped structure. Outfalls may also be constructed as an outfall tunnel or subsea tunnel and discharge effluent to the ocean via one or more marine risers with nozzles. See also Combined sewer Greywater Marine outfall Night soil River mouth References Sewerage infrastructure
Outfall
[ "Chemistry" ]
336
[ "Water treatment", "Sewerage infrastructure" ]
962,389
https://en.wikipedia.org/wiki/Bulk%20mail
Bulk mail broadly refers to mail that is mailed and processed in bulk at reduced rates. The term is sometimes used as a synonym for advertising mail. The United States Postal Service (USPS) defines bulk mail broadly as "quantities of mail prepared for mailing at reduced postage rates." The preparation includes presorting and placing into containers by ZIP code. The containers, along with a manifest, are taken to an area in a post office called a bulk-mail-entry unit. The presorting and the use of containers allow highly automated mail processing, both in bulk and piecewise, in processing facilities called bulk mail centers (BMCs). In 2009, the USPS announced plans to streamline sorting and delivery. BMCs were renamed Network Distribution Centers. Junk mail Although bulk mail, junk mail, and admail are, strictly speaking, not synonymous, the terms are used in common parlance to refer to unsolicited invitations delivered by mail (typically, but not invariably, at bulk rates) to homes and businesses. References External links "Business Mail 101", from the United States Postal Service "Bulk Mail", at Australia Post Postal systems Philatelic terminology
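The presort step described above is, at its core, a grouping operation: pieces are bucketed by ZIP code and packed into containers that travel with a manifest. A minimal, illustrative sketch of that idea follows; the container capacity, the 5-digit grouping key, and the data layout are assumptions for illustration, not USPS specifications.

```python
from collections import defaultdict

def presort(pieces, container_capacity=500):
    """Group mail pieces by ZIP code and pack each group into fixed-size
    containers, returning the containers and a simple manifest.
    Capacity and grouping key are illustrative assumptions only."""
    by_zip = defaultdict(list)
    for piece_id, zip_code in pieces:
        by_zip[zip_code].append(piece_id)

    containers, manifest = [], []
    for zip_code in sorted(by_zip):
        ids = by_zip[zip_code]
        for start in range(0, len(ids), container_capacity):
            batch = ids[start:start + container_capacity]
            containers.append({"zip": zip_code, "pieces": batch})
            manifest.append((zip_code, len(batch)))
    return containers, manifest

# Three pieces destined for two ZIP codes (made-up example data).
containers, manifest = presort([(1, "10001"), (2, "10001"), (3, "60614")])
print(manifest)   # [('10001', 2), ('60614', 1)]
```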
Bulk mail
[ "Technology" ]
243
[ "Transport systems", "Postal systems" ]
962,428
https://en.wikipedia.org/wiki/Bryggenet
Bryggenet is a community network in the Islands Brygge quarter of Copenhagen, Denmark. Bryggenet serves an area of about 4000 residences with fast Internet access, cable TV and radio, and telephone services at cost prices. Bryggenet was started in 2001 by a group of volunteers, initially with the intent of providing fast and cheap internet access to the residents of a number of co-ops in the quarter, but the project quickly expanded into TV and telephone. All three services went live in early 2003 and have been in operation for over seven years, generally working quite well. Since the start, more co-ops and housing estates have signed up; as of mid-2005, more than 3500 apartments take part. Internet access is provided in two sizes: 'basic internet' with 70 Mbit/s per 1000 subscribers, and 'fast internet' with 280 Mbit/s per 1000 subscribers. Prices are currently approx. $10/month and $25/month flat fee. Cable TV is also available in two sizes: 'small' with 8 must-carry or inexpensive channels (Danish, local and Scandinavian); and 'large' with 36 channels (Danish, local, Scandinavian, English, US, German, French, Spanish, Arabic and various thematic channels). The exact composition of each package is determined by subscriber votes every few years. Prices are approx. $7/month and $25/month. Telephone lines are provided by normal analog technology. A subscription is approx. $7/month, and call rates are approx. 20% lower than those of most commercial operators. Local calls are free of charge. Bryggenet owns the infrastructure. The backbone is a network of fibre-optic cables throughout the area, connecting each member building. The internal wiring of each building consists of standard Cat5 PDS cables for internet and telephone, together with cabling for television. Internet access is provided by a fibre-optic cable to Teliasonera, one of the major Scandinavian telecom players; TV and radio signals are delivered via a set of satellite dishes locally; and telephone lines by pools of ISDN lines. All in all, Bryggenet represents an investment of about $2 million, plus countless working hours from the volunteers. External links website Community networks
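The "per 1000 subscribers" figures above describe shared capacity, so the bandwidth an individual household actually sees depends on how many neighbours are online at once. A back-of-the-envelope sketch of that arithmetic; the concurrency fractions are illustrative assumptions, not Bryggenet measurements:

```python
def per_active_subscriber_mbit(pool_mbit_per_1000, active_fraction):
    """Rough effective bandwidth per active subscriber, assuming a pool of
    `pool_mbit_per_1000` Mbit/s shared only by the fraction of the 1000
    subscribers who are active at the same moment (assumed value)."""
    active_subscribers = 1000 * active_fraction
    return pool_mbit_per_1000 / active_subscribers

# 'basic internet' (70 Mbit/s per 1000 subscribers) at assumed 5% and 20% concurrency
print(per_active_subscriber_mbit(70, 0.05))   # 1.4 Mbit/s
print(per_active_subscriber_mbit(70, 0.20))   # 0.35 Mbit/s
# 'fast internet' (280 Mbit/s per 1000 subscribers) at assumed 5% concurrency
print(per_active_subscriber_mbit(280, 0.05))  # 5.6 Mbit/s
```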
Bryggenet
[ "Technology" ]
464
[ "Computing stubs", "Computer network stubs" ]
962,481
https://en.wikipedia.org/wiki/Twin%20Quasar
The Twin Quasar (also known as Twin QSO, Double Quasar, SBS 0957+561, TXS 0957+561, Q0957+561 or QSO 0957+561 A/B) was discovered in 1979 and was the first identified gravitationally lensed double quasar; it is not to be confused with the first detection of light deflection in 1919. It is a quasar that appears as two images, as a result of gravitational lensing. Quasar The Twin Quasar is a single quasar whose appearance is distorted by the gravity of another galaxy much closer to Earth along the same line of sight. This gravitational lensing effect is a result of the warping of space-time by the nearby galaxy, as described by general relativity. The single quasar thus appears as two separate images, separated by 6 arcseconds. Both images have an apparent magnitude of about 17: 16.7 for the A component and 16.5 for the B component. There is a 417 ± 3-day time lag between the two images. The Twin Quasar lies at redshift z = 1.41 (8.7 billion ly), while the lensing galaxy lies at redshift z = 0.355 (3.7 billion ly). The lensing galaxy, with apparent dimensions of 0.42×0.22 arcminutes, lies almost in line with the B image, only 1 arcsecond off. The quasar lies 10 arcminutes north of NGC 3079, in the constellation Ursa Major. The astronomical data services SIMBAD and NASA/IPAC Extragalactic Database (NED) list several other names for this system. Lens The lensing galaxy, YGKOW G1 (sometimes called G1 or Q0957+561 G1), is a giant elliptical (type cD) lying within a cluster of galaxies that also contributed to the lensing. History The quasars QSO 0957+561A/B were discovered in early 1979 by an Anglo-American team led by Dennis Walsh, Robert Carswell and Ray Weymann, with the aid of the 2.1 m telescope at Kitt Peak National Observatory in Arizona, United States. The team noticed that the two quasars were unusually close to each other, and that their redshifts and visible-light spectra were very similar to each other. They published their suggestion of "the possibility that they are two images of the same object formed by a gravitational lens". The Twin Quasar was one of the first directly observable effects of gravitational lensing, which was described in 1936 by Albert Einstein as a consequence of his 1916 general theory of relativity, though in that 1936 paper he also predicted "Of course, there is no hope of observing this phenomenon directly." Critics identified a difference in appearance between the two quasars in radio-frequency images. In mid-1979, a team led by David Roberts at the Very Large Array (VLA) near Socorro, New Mexico, discovered a relativistic jet emerging from quasar A with no corresponding equivalent in quasar B. Furthermore, the distance between the two images, 6 arcseconds, was too great to have been produced by the gravitational effect of the galaxy G1, a galaxy identified near quasar B. In 1980, Peter J. Young and collaborators discovered that galaxy G1 is part of a galaxy cluster, which increases the gravitational deflection and can explain the observed distance between the images. Finally, a team led by Marc V. Gorenstein observed essentially identical relativistic jets on very small scales from both A and B in 1983 using Very Long Baseline Interferometry (VLBI). Subsequent, more detailed VLBI observations demonstrated the expected (parity reversed) magnification of the image B jet with respect to the image A jet. 
The difference between the large-scale radio images is attributed to the special geometry needed for gravitational lensing, which is satisfied by the quasar but not by all of the extended jet emission seen by the VLA near image A. Slight spectral differences between quasar A and quasar B can be explained by different densities of the intergalactic medium in the light paths, resulting in differing extinction. 30 years of observation made it clear that image A of the quasar reaches earth about 14 months earlier than the corresponding image B, resulting in a difference of path length of 1.1 ly. Possible planet In 1996, a team at Harvard-Smithsonian Center for Astrophysics led by Rudy E. Schild discovered an anomalous fluctuation in one image's light curve, which they speculated was caused by a planet approximately three Earth masses in size within the lensing galaxy. This conjecture cannot be proven because the chance alignment that led to its discovery will never happen again. If it could be confirmed, however, it would make it the most distant known planet, 4 billion ly away. Candidate magnetospheric eternally collapsing object In 2006, R. E. Schild suggested that the accreting object at the heart of Q0957+561 is not a supermassive black hole, as is generally believed for all quasars, but a magnetospheric eternally collapsing object. Schild's team at the Harvard-Smithsonian Center for Astrophysics asserted that "this quasar appears to be dynamically dominated by a magnetic field internally anchored to its central, rotating supermassive compact object" (R. E. Schild). See also Cloverleaf quasar Cosmic string Gravitational lens Hypothetical astronomical object References External links Q0957+561: Die historisch erste Linse mit QuasarThe University of Cologne. Q0957+561CCD image based on 45-min total exposureMarch 2007. Q0957+561 A,B. Simbad Ursa Major Gravitationally lensed quasars Gravitational lensing Astronomical objects discovered in 1979
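Because both images are light from the same source arriving along slightly different paths, the measured time delay converts directly into a path-length difference. A quick sketch of that unit conversion using the 417-day delay quoted above (assuming one light-year of path per Julian year of 365.25 days):

```python
def delay_to_path_difference_ly(delay_days):
    """Convert a gravitational-lens time delay (in days) into the extra
    light-path length in light-years: light covers one light-year per
    Julian year of 365.25 days."""
    return delay_days / 365.25

print(delay_to_path_difference_ly(417))  # ~1.14 ly, consistent with the ~1.1 ly (about 14 months) cited above
```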
Twin Quasar
[ "Astronomy" ]
1,255
[ "Ursa Major", "Constellations" ]
962,501
https://en.wikipedia.org/wiki/Floating%20car%20data
Floating car data (FCD) in traffic engineering and management is typically timestamped geo-localization and speed data directly collected by moving vehicles, in contrast to traditional traffic data collected at a fixed location by a stationary device or observer. In a physical interpretation context, FCD provides a Lagrangian description of the vehicle movements, whereas stationary devices provide an Eulerian description. Each participating vehicle consequently acts as a moving sensor, using an onboard GPS receiver or cellular phone. The most common and widespread use of FCD is to determine the traffic speed on the road network. Based on these data, traffic congestion can be identified, travel times can be calculated, and traffic reports can be rapidly generated. In contrast to stationary devices such as traffic cameras, number plate recognition systems, and induction loops embedded in the roadway, no additional hardware on the road network is necessary. Floating cellular data Floating cellular data is one of the methods to collect floating car data. This method uses cellular network data (CDMA, GSM, UMTS, GPRS). No special devices or hardware are necessary: every switched-on mobile phone becomes a traffic probe and, as such, an anonymous source of information. The location of the mobile phone is determined using (1) triangulation or (2) the hand-over data stored by the network operator. As GSM localisation is less accurate than GPS-based systems, many phones must be tracked and complex algorithms used to extract high-quality data. For example, care must be taken not to misinterpret cellular phones on a high-speed railway track near the road as incredibly fast journeys along the road. However, the more congestion, the more cars, the more phones, and thus more probes. In metropolitan areas, where traffic data are most needed, the distance between cell sites is smaller and thus precision increases. Advantages over GPS-based or conventional methods such as cameras or street-embedded sensors include the absence of any infrastructure or hardware in cars or along the road: the approach is much less expensive, offers coverage of more streets, is faster to set up (no work zones) and needs less maintenance. In 2007, GDOT demonstrated in Atlanta that such a system can emulate road-sensor section-speed data very well. A 2007 study by GMU investigated the relationship between vehicle free-flow speed and geometric variables on urban street segments using FCD. Vehicle re-identification Vehicle re-identification methods require sets of detectors mounted along the road. In this technique, a unique serial number for a device in the vehicle is detected at one location and then detected again (re-identified) further down the road. Travel times and speed are calculated by comparing the time at which a specific device is detected by pairs of sensors (a minimal sketch of this computation follows at the end of this article's text). This can be done using the MAC addresses from Bluetooth devices, or using the radio-frequency identification (RFID) serial numbers from Electronic Toll Collection (ETC) transponders (also called "toll tags"). The ETC transponders, which are uniquely identifiable, may be read not only at toll collection points (e.g. toll bridges) but also at many non-toll locations. This is used as a method to collect traffic flow data (which is anonymized) for the San Francisco Bay Area's 5-1-1 service. In New York City's Midtown in Motion program, the adaptive traffic control system also uses RFID readers to track the movement of E-ZPass tags as a means of monitoring traffic flow. 
The data are fed through the government-dedicated broadband wireless infrastructure to the traffic management center, to be used in adaptive traffic control of the traffic lights. Global Positioning System A small number of cars (typically fleet vehicles such as courier and taxi vehicles) are equipped with a box that contains a GPS receiver. The data are then communicated to the service provider using the regular on-board radio unit or via cellular network data (more expensive). It is possible that FCD could be used as a surveillance method, although the companies deploying FCD systems give assurances that all data are anonymized in their systems, or kept sufficiently secure to prevent abuses. See also Traffic count References Advanced driver assistance systems Intelligent transportation systems Speed sensors Surveillance Transportation engineering
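As referenced in the Vehicle re-identification section above, the core computation is simple: match the same device identifier at two detector locations and divide the known detector spacing by the time difference. A minimal sketch of that pairing (the device IDs, timestamps, and the 2 km spacing are made-up example values; real deployments hash or anonymize the identifiers):

```python
def reidentification_speeds(upstream, downstream, spacing_km):
    """Match device IDs seen at two detectors and return, per re-identified
    vehicle, the travel time in seconds and the speed in km/h.
    `upstream` / `downstream` map device ID -> detection time in seconds."""
    results = {}
    for device_id, t_up in upstream.items():
        t_down = downstream.get(device_id)
        if t_down is not None and t_down > t_up:
            travel_time_s = t_down - t_up
            results[device_id] = (travel_time_s, spacing_km / travel_time_s * 3600)
    return results

# Two toll tags re-identified across detectors 2 km apart (example data only).
upstream = {"TAG123": 0.0, "TAG456": 10.0, "TAG789": 15.0}
downstream = {"TAG123": 90.0, "TAG456": 130.0}        # TAG789 is never re-identified
print(reidentification_speeds(upstream, downstream, 2.0))
# {'TAG123': (90.0, 80.0), 'TAG456': (120.0, 60.0)}
```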
Floating car data
[ "Technology", "Engineering" ]
854
[ "Transport systems", "Measuring instruments", "Industrial engineering", "Information systems", "Transportation engineering", "Civil engineering", "Warning systems", "Speed sensors", "Intelligent transportation systems" ]
962,509
https://en.wikipedia.org/wiki/Solitude
Solitude, also known as social withdrawal, is a state of seclusion or isolation, meaning lack of socialisation. Its effects can be either positive or negative, depending on the situation. Short-term solitude is often valued as a time when one may work, think, or rest without disturbance. It may be desired for the sake of privacy. Long-term solitude may stem from soured relationships, loss of loved ones, deliberate choice, infectious disease, mental disorders, neurological disorders such as circadian rhythm sleep disorder, or circumstances of employment or situation. A distinction has been made between solitude and loneliness. In this sense, these two words refer, respectively, to the joy and the pain of being alone. Health effects Symptoms from complete isolation, called sensory deprivation, may include anxiety, sensory illusions, or distortions of time and perception. However, this is the case when there is no stimulation of the sensory systems at all, not merely a lack of contact with people. Thus, this can be avoided by having other things to keep one's mind busy. Long-term solitude is often seen as undesirable, causing loneliness or reclusion resulting from an inability to establish relationships. Furthermore, it might lead to clinical depression, although some people do not react to it negatively. Buddhist monks regard long-term solitude as a means of enlightenment. Marooned people have been left in solitude for years without any report of psychological symptoms afterwards. Some psychological conditions (such as schizophrenia and schizoid personality disorder) are strongly linked to a tendency to seek solitude. Enforced loneliness (solitary confinement) has been a punishment method throughout history. It is often considered a form of torture. Emotional isolation is a state of isolation where one feels emotionally separated from others despite having a well-functioning social network. Researchers, including Robert J. Coplan and Julie C. Bowker, have rejected the notion that solitary practices and solitude are inherently dysfunctional and undesirable. In their 2013 book A Handbook of Solitude, the authors note how solitude can enhance self-esteem, generate clarity, and be highly therapeutic. In the edited work, Coplan and Bowker invite not only fellow psychologists but also a variety of faculty from other disciplines to address the issue. Fong's chapter offers an alternative view: solitude is more than just a personal trajectory for taking inventory of one's life; it also yields a variety of important sociological cues that allow a person to navigate through society, even highly politicized societies. In the process, political prisoners in solitary confinement were examined to see how they formed their views on society. Thus Fong, Coplan, and Bowker conclude that a person's experienced solitude generates immanent and personal content as well as collective and sociological content, depending on context. Psychological effects There are both positive and negative psychological effects of solitude. Much of the time, these effects and their longevity are determined by the amount of time a person spends in isolation. The positive effects can range anywhere from more freedom to increased spirituality, while the negative effects are socially depriving and may trigger the onset of mental illness. While positive solitude is often desired, negative solitude is often involuntary or undesired at the time it occurs. 
Positive effects Freedom is considered to be one of the benefits of solitude; the constraints of others will not have any effect on a person who is spending time in solitude, thereby giving the person more latitude in their actions. With increased freedom, a person's choices are less likely to be affected by exchanges with others. A person's creativity can be sparked when given freedom; solitude can increase that freedom, and freedom from distractions has the potential to spark creativity. In 1994, psychologist Mihaly Csikszentmihalyi found that adolescents who cannot bear to be alone often stop developing their creative talents. Another benefit of time spent in solitude is the development of the self. When a person spends time in solitude away from others, they may experience changes to their self-concept. This can also help a person to form or discover their identity without any outside distractions. Solitude also provides time for contemplation, growth in personal spirituality, and self-examination. In these situations, loneliness can be avoided as long as the person in solitude knows that they have meaningful relations with others. Negative effects Negative effects have been observed in prisoners. The behavior of prisoners who spend extensive time in solitude may worsen. Solitude can trigger physiological responses that increase health risks. Negative effects of solitude may also depend on age. Elementary-school-age children who experience frequent solitude may react negatively, largely because solitude at this age is often not the child's choice. Solitude in elementary-age children may occur when they are unsure of how to interact socially, so they prefer to be alone, causing shyness or social rejection. While teenagers are more likely to feel lonely or unhappy when not around others, they are also more likely to have a more enjoyable experience with others if they have had time alone first. However, teenagers who frequently spend time alone do not have as good a global adjustment as those who balance their time of solitude with their time of socialization. Other uses As pleasure Solitude does not necessarily entail feelings of loneliness, and it may in fact be a source of genuine pleasure for those who choose it with deliberate intent. Some individuals seek solitude in order to discover a more meaningful and vital existence. For example, in religious contexts, some saints preferred silence, finding immense pleasure in their unity with God. Solitude can be put to positive use in prayer, as a way to "be alone with ourselves and with God, to listen to His will and to what moves in our hearts, and to purify our relationships; solitude and silence thus become spaces inhabited by God, and an opportunity to recover ourselves and grow in humanity." In psychology, introverted persons may require spending time alone to recharge, whereas those who are simply socially apathetic might find it a pleasurable setting in which to occupy themselves with solitary tasks. The Buddha attained enlightenment through meditation, deprived of sensory input, bodily necessities, and external desires, including social interaction. The context of solitude is attainment of pleasure from within, but this does not necessitate complete detachment from the external world. 
This is well demonstrated in the writings of Edward Abbey with particular regard to Desert Solitaire where solitude focused only on isolation from other people allows for a more complete connection to the external world, as in the absence of human interaction the natural world itself takes on the role of the companion. In this context, the individual seeking solitude does so not strictly for personal gain or introspection, though this is often an unavoidable outcome, but instead in an attempt to gain an understanding of the natural world as entirely removed from the human perspective as possible, a state of mind much more readily attained in the complete absence of outside human presence. As punishment Isolation in the form of solitary confinement is a punishment or precaution used in many countries throughout the world for prisoners accused of serious crimes, those who may be at risk in the prison population, those who may commit suicide, or those unable to participate in the prison population due to sickness or injury. Research has found that solitary confinement does not deter inmates from committing further violence in prison. Psychiatric institutions may institute full or partial isolation for certain patients, particularly the violent or subversive, in order to address their particular needs and to protect the rest of the recovering population from their influence. See also Existential isolation Hermit Hikikomori Hitbodedut Boredom Loner Privacy regulation theory Solitude (painting) References External links Deresiewicz, William (March 2010). Solitude and Leadership. "If you want others to follow, learn to be alone with your thoughts." The American Scholar Living arrangements Emotions Behavior
Solitude
[ "Biology" ]
1,603
[ "Behavior" ]
962,547
https://en.wikipedia.org/wiki/List%20of%20ideological%20symbols
This is a partial list of symbols and labels used by political parties, groups or movements around the world. Some symbols are associated with one or more worldwide ideologies and used by many parties that support a particular ideology. Others are region or country-specific. Colors Worldwide Black – anarchism, fascism, pirate parties, black nationalism Blue – conservatism, men's rights movement, pro-Europeanism, Zionism, American liberalism, Japanese liberalism Brown – fascism, Nazism, far-right politics Gold – capitalism, classical liberalism, right-libertarianism Green – agrarianism, anarcho-egoism, anarcho-primitivism, capitalism, environmentalism, Islamism, green anarchism, green politics, black nationalism, Irish republicanism Gray – independent politicians Lavender – LGBT movements, transgender rights movement Magenta – centrism Orange – Christian democracy, populism, mutualist anarchism, classical liberalism, Ulster unionism Pink – feminism, LGBT movements, transgender rights movement Purple – monarchism, royalism Red – communism, democratic socialism, social democracy, socialism, American conservatism, Japanese conservatism Saffron – Hindu nationalism White – anti-communism, independent politicians, monarchism, pacifism, white nationalism, Zionism Yellow – liberalism, left-libertarianism Australia Blue – The Liberal Party Brown – National Socialist Party of Australia Red – The Labor Party Green – The Greens Green and yellow – The National Party Blue and orange - Pauline Hanson's One Nation Bangladesh Blue, red and green – Bangladesh Nationalist Party Fern Green – Bangladesh Jamaat Islami Green – Bangladesh Awami League Yellow – Jatiyo Party Canada Blue – Conservative Party of Canada Green – Green Party of Canada Light blue – Bloc Québécois Orange – New Democratic Party Purple – People's Party of Canada Red – Liberal Party of Canada France Red – La France Insoumise Red – Parti Communiste Français Pink – Parti Socialiste Green – Europe Ecologie Les Verts Green – Génération.s Orange – Mouvement Démocrate Orange – La République en Marche Yellow – Renaissance Blue – Les Républicains Dark blue – Rassemblement National Hungary Green and gold – Christian Democratic People's Party Orange – Fidesz Red, white and green – Arrow Cross Party India Blue – Bahujan Samaj Party Blue – Mizo National Front Blue, red, and green – Rashtiya Lok Janshakti Party and Lok Janshakti Party (Ram Vilas) Blue, white, and green – Yuvajana Sramika Rythu Congress Party Bright green – All India Trinamool Congress Deep green – Biju Janata Dal Blue – Aam Aadmi Party Green – All India Anna Dravida Munnetra Kazhagam Green – All India Majlis-e-Ittehadul Muslimeen, Janata Dal (Secular) Green – Jharkhand Mukti Morcha Green – National People's Party (India) Green and yellow – Rashtriya Loktantrik Party Maize and green – Indigenous People's Front of Tripura Navy blue and orange – Shiromani Akali Dal Pacific Blue – Nationalist Congress Party Pink – Janta Congress Chhattisgarh, Bharat Rashtra Samithi Red – All India Forward Bloc, Communist Party of India, Communist Party of India (Marxist), Left Front, Revolutionary Socialist Party (India) Red – Sikkim Krantikari Morcha Red and black – Dravida Munnetra Kazhagam Red and green – Samajwadi Party Red and white – Nationalist Democratic Progressive Party Saffron and green – Bharatiya Janata Party Sky blue – Indian National Congress White – other parties and independents Yellow – Right to Recall Party Yellow – Telugu Desam Party Yellow and green – Jannayak Janta Party Ireland Blue – 
Fine Gael Dark green – Sinn Féin Green – Fianna Fáil Green and gold – Green Party Maroon – Solidarity–People Before Profit Purple – Social Democrats Red – Labour Party Japan Blue – Constitutional Democratic Party Blue and pink – Komeito Red – Liberal Democratic Party, Communist Party Mexico Black and red – Camisas Rojas Green, gold and black – Nationalist Front of Mexico The Netherlands Azure and navy – Christian Union Green – Christian Democratic Appeal Green – Democrats 66 Green – Farmer–Citizen Movement Green – Party for the Animals Green and red – GreenLeft Maroon – Forum for Democracy Navy and red – JA21 Orange – Reformed Political Party Orange and blue – People's Party for Freedom and Democracy Purple – Volt Red – Labour Party Red – Socialist Party Red, white, and blue – Party for Freedom Turquoise – DENK New Zealand ACT New Zealand Green Party of Aotearoa New Zealand New Zealand Labour Party New Zealand National Party New Zealand First Te Pāti Māori Portugal Blue – People's Monarchist Party Dark blue – Chega Green – Ecologist Party "The Greens" Green – Earth Party Green – Together for the People Green – LIVRE Orange – Social Democratic Party Red – Portuguese Communist Party Red – Portuguese Workers' Communist Party Red (official) and maroon (customary) – Left Bloc Red (official) and pink (customary) – Socialist Party Sky blue – CDS – People's Party Sky blue – Liberal Initiative Teal – People Animals Nature Russia White, blue and red – United Russia Red – Communist Party Yellow and maroon – A Just Russia – For Truth Blue and yellow (official), light blue (customary) – LDPR Turquoise and black – New People Slovakia Dark blue and white – Slovak Togetherness Green – People's Party Our Slovakia White, blue and red – Slovak People's Party Sweden Blue – Moderate Party Blue and white – Christian Democrats Blue and white – Liberals Green – Centre Party Green – Green Party Orange and blue – Alliance Pink – Feminist Initiative Purple – Pirate Party Red – Left Party Red – Swedish Social Democratic Party Red and green – Red-Greens Yellow and light-blue – Sweden Democrats Syria Black, red and white – Syrian Social Nationalist Party Turkey Blue – Good Party Purple – Peoples' Democratic Party Red – Nationalist Movement Party Red – Republican People's Party Yellow – Justice and Development Party United Kingdom Blue – Conservative Party Green – Green Party Green and yellow – Plaid Cymru Orange – Liberal Democrats Purple and yellow – UKIP Red – Labour Party Red, white, and blue – DUP Turquoise and white – Reform UK Yellow and black – SNP United States Black, gold, white and maroon – American Indian Movement Blue – Democratic Party Blue and buff – Whig Party (United States) Gold with dark gray, sometimes with dark blue or purple – Libertarian Party Green – Green Party Orange – American Solidarity Party (Christian democracy) Purple – politically mixed or moderate regions; Constitution Party, Veterans Party of America Red – Republican Party Teal and white – Justice Party White or gray – senior citizens, women's voting rights, third parties (other than the Greens), independent candidates and voters Icons Worldwide a³ (lowercase a, cubed) – Agorism Ballot – democracy Beehive – co-operative movement Bird in flight – classical liberalism, right-libertarianism Black rose – anarchism Arrow Cross – Hungarism Black sail – pirate parties Black sun – esoteric Nazism, neo-Nazism, white nationalism Bear – Putinism, Russian conservatism Carnation – social democracy and democratic socialism Cat, wildcat – worker 
collectivism, symbol of Industrial Workers of the World; Georgism Celtic cross – white nationalism, neo-Nazism, white pride, Irish nationalism, Celtic neopaganism ✝ Christian cross – Christianity Cross and sickle – Christian communism Ⓐ Circumscribed A – anarchism ⚙ Cogwheel – Labour movement, working class, agriculturalism Constitution – democracy Cross of Burgundy – Spanish nationalism, Carlism, nostalgia for the Spanish Empire ☨ Cross of Lorraine – Gaullism Cross of Saint Peter – Satanism, Opposition to Christianity, Anti-Christian sentiment Cross potent – Roman Catholicism, Austrofascism ♕ Crown – monarchism 🕊 Dove – love and/or peace (often used by pacifist groups) Eagle – nationalism, patriotism, conservatism Easter lily (calla lily) – Irish republicanism, Irish nationalism Fasces – fascism, neo-fascism, Italian fascism, magisterial power, authority Fist and rose – socialism and social democracy Flash and circle – British fascism 🍀 Four-leaf clover – agrarianism, Hibernophila, Irish nationalism, good luck 🌐Globe – globalism, neoliberalism, Internationalism ☭ Hammer and sickle – communism, Marxism–Leninism Hammer, sickle and brush – Juche, Kimilsungism–Kimjongilism Hawk of Quraish – Arab nationalism, Pan-Arabism Heart ensigned with a crosslet (Sacred Heart) – Integralism Labrys – Lesbian feminism, Metaxism, Matriarchy, Third Positionism, Révolution nationale Lambda – Identitarianism, Nouvelle Droite Lion – Nobility, Judeo Christianity, Rastafari Machete and Gear – Angolanidade ♂ Mars symbol - masculinity 📰 Newspaper – democracy, press freedom Nordic cross – Nordic model social democracy Olive tree – peace, community, health Parthenon – democracy ☮ Peace sign – peace, pacifism, nuclear disarmament, democracy Plough – communism, agrarian socialism, peasant movement, peasants rights Poppy – remembrance, WW1 and WW2 Protest sign - democracy and resistance to tyranny Rainbow or rainbow flag – LGBT rights Raised fist – solidarity, syndicalism, unity, resistance, communism, radicalism in general Rebel Alliance - democracy and resistance to tyranny ⚑ Red flag – socialism, communism, anti-fascism Red Hand of Ulster – Ulster loyalism, Ulster unionism, Ulster nationalism ★ Red star – socialism, Marxism, communism, Neozapatismo Red whirlwind (Zawba'a) – Syrian Social Nationalism Ribbon of Saint George – Anti-Maidan, Ruscism, Pro-war nationalist opposition to Vladimir Putin, support for the Russian invasion of Ukraine 🌹 Rose – social democracy and democratic socialism Runic letters – various letters of the runic alphabet – particularly the Algiz, Eihwaz, Odal, Sowilō, and Tiwaz runes – have been used by various neo-Nazi and white supremacist groups post-WW2. However, these runes are also very commonly used by non-racialist Heathens and followers of Germanic Neopaganism in an apolitical context. Shahada – Wahhabism, Islamism Sigma – Brazilian integralism, Manosphere, Tateism Six-pointed Star and Fist – Kahanism Six Arrows – Kemalism Smiling Sun – anti-nuclear movement St. 
Michael's Cross (Archangel Michael Cross) – Legionarism, Neo-Legionarism ✡ Star of David – Zionism Starry Plough – Irish republican socialism 🗽 Statue of Liberty - liberty, democracy, American democracy 🌻 Sunflower – green politics Swastika – Nazism, fascism, neo-Nazism; Hindu, Jain, or Buddhist theology (original use) Three Arrows – mid 20th century European social democracy; the arrows represent anti-fascism, anti-communism, and anti-monarchism Three-finger salute (pro-democracy) - democracy and resistance to tyranny Throne, sword and altar – conservatism Torch – right-libertarianism, conservatism, patriotism, classical liberalism Triskelion – Polytheistic reconstructionism, Celtic neopaganism Upside-down crown – republicanism, anti-monarchism ♀ Venus symbol – feminism Venus symbol and raised fist combined – radical feminism ✌ V sign – voluntarism, peace, victory, veganism ⚖ Weighing scale – law, justice Wolf salute – Pan-Turkism Wolfsangel – Azov movement, Ukrainian nationalism, Third Positionism, neo-Nazism, rebellion Yoke and arrows – Falangism Z (military symbol) – Putinism, Ruscism, Russian irredentism Bangladesh Boat – Bangladesh Awami League Sheaf of Paddy – Bangladesh Nationalist Party Plough – Jatiya Party Belgium Circled upright triangle – Vlaamsch Nationaal Verbond Inverted Arrow Cross – French National-Collectivist Party Bold rooster (coq hardi) – Walloon Movement, Rassemblement Wallonie France, Wallonie Libre, Walloon Rally, Sword, cog and plough – Verdinaso Brazil Dove and olive branch – Brazilian Socialist Party Fist and rose – Democratic Labour Party (Brazil) Oak tree – Republicans (Brazil) Red star – Workers' Party (Brazil) Toucan – Brazilian Social Democracy Party Sigma – Brazilian Integralist Action, Brazilian Integralist Front Cambodia Devata – Cambodian People's Party Golden Angkor Wat – Khmer Rouge Canada Red maple leaf — Liberal Party of Canada Red maple leaf within the letter C — Conservative Party of Canada Orange maple leaf — New Democratic Party Orange torch — National Unity Party of Canada Colombia letter C – Colombian Conservative Party letter L – Colombian Liberal Party letter U – Social National Unity Party ("Party of the U") Costa Rica Golden torch – People's Vanguard Party Croatia letter U – Ustaša – Croatian Revolutionary Movement Denmark letter A – Social Democrats Rose – Social Democrats letter B – Social Liberal Party letter C – Conservative People's Party letter F – Socialist People's Party letter I – Liberal Alliance letter K – Christian Democrats letter N – People's Movement against the EU letter O – Danish People's Party letter Ø – Red-Green Alliance letter V – Venstre, Liberal Party of Denmark Tiwaz rune – Nordic Resistance Movement Finland Tiwaz rune – Nordic Resistance Movement Greece Green Sun - Panhellenic Socialist Movement Compass - Greek Solution Meandros - Golden Dawn Swastika - Golden Dawn (Greece) Hand holding a Torch - Nea Dimokratia Hong Kong Yellow Umbrella - pro-democracy symbol popularized in 2014 Hong Kong protests Hungary Arrow Cross - Arrow Cross Party Iceland Tiwaz rune – Nordic Resistance Movement India Ard – Bodoland People's Front (Jharkhand) Arrow – Janata Dal (United) (Bihar, Jharkhand, Karnataka, Nagaland) Banana – All Jharkhand Students Union (Jharkhand) Bicycle – Jammu and Kashmir National Panthers Party (Jammu and Kashmir), Samajwadi Party (Uttar Pradesh), Telugu Desam Party (Andhra Pradesh) Book – National People's Party Bow and arrow – Jharkhand Mukti Morcha (Jharkhand), Shiv Sena (Maharashtra) Broom – Aam Aadmi 
Party Bungalow – Lok Janshakti Party (Bihar) Candles – People's Democratic Front (Meghalaya) Car – Telangana Rashtra Samithi Ceiling fan – Rashtriya Lok Samta Party (Bihar), YSR Congress Party (Andhra Pradesh, Telangana) Clock – Nationalist Congress Party Coconut – Goa Forward Party (Goa) Conch – Biju Janata Dal (Odisha) Crown – People's Democratic Alliance (Meghalaya) Dao – Indigenous People's Front of Tripura (Tripura) Drum – United Democratic Party (Meghalaya) (Meghalaya) Ears of maize and sickle – Communist Party of India Elephant – Asom Gana Parishad (Assam), Bahujan Samaj Party (with the exception of the states of Assam and Sikkim where certain state parties use the elephant) Five-pointed star – Mizo National Front (Mizoram) Farmer ploughing (within square farm) – Janta Congress Chhattisgarh (Chhattisgarh) Flowers and grass – All India Trinamool Congress Glasses – Indian National Lok Dal (Haryana) Globe – Nationalist Democratic Progressive Party (Nagaland) Hammer, sickle and star – Communist Party of India (Marxist) Hand pump – Rashtriya Lok Dal (Uttar Pradesh) Hurricane lamp – Rashtriya Janata Dal (Bihar, Jharkhand) Ink pot and pen – People's Democratic Party (Jammu and Kashmir) Jug – All India N.R. Congress (Puducherry) Key – Jannayak Janta Party (Haryana) Lightbulb – Mizoram People's Conference (Mizoram) Locomotive – Maharashtra Navnirman Sena (Maharashtra) Maize – People's Party of Arunachal (Arunachal Pradesh) Mango – Pattali Makkal Katchi (Puducherry) Kite – People's Party of Punjab (Punjab), All India Majlis-e-Ittehadul Muslimeen (Telangana) Ladder – Muslim League (Kerala) Lady farmer carrying paddy on her head – Janata Dal (Secular) (Arunachal Pradesh, Karnataka, Kerala) Lion – All India Forward Bloc (West Bengal), Hill State People's Democratic Party (Meghalaya), Maharashtrawadi Gomantak Party (Goa) Lock and key – All India United Democratic Front (Assam) Lotus – Bharatiya Janata Party Nagara – Desiya Murpokku Dravida Kazhagam (Tamil Nadu) Palm (of hand) – Indian National Congress Plough – Jammu & Kashmir National Conference (Jammu and Kashmir) Rooster – Naga People's Front (Manipur, Nagaland) Spade and ashpan rake – Revolutionary Socialist Party (Kerala, West Bengal) Spectacles – Indian National Lok Dal (Haryana) Rising sun – Dravida Munnetra Kazhagam (Tamil Nadu) Sun without rays – Zoram Nationalist Party (Mizoram) Table lamp – Sikkim Krantikari Morcha (Sikkim) Telephone – Himachal Vikas Congress (Himachal Pradesh) Two leaves – All India Anna Dravida Munnetra Kazhagam (Tamil Nadu), Kerala Congress (M) (Kerala) Umbrella – Sikkim Democratic Front (Sikkim) Water bottle – Rashtriya Loktantrik Party (Rajasthan) Weighing scale – Shiromani Akali Dal (Punjab) Iran Slashed equal sign – Nation Party of Iran, Pan-Iranist Party Ireland Starry Plough – Irish Citizen Army, Irish Republican Socialist Party, Irish National Liberation Army, Irish People's Liberation Organisation, Labour Party Israel Hand with two raised fingers – Lehi (militant group) Six-pointed Star and Fist – Kach (political party), Jewish Defense League Italy Arrowed turtle - CasaPound Labrys - Ordine Nuovo, Movimento Politico Ordine Nuovo Tricolor Flame (Fiamma Tricolore) - Brothers of Italy, Social Movement Tricolour Flame, The Right – Tricolour Flame, Italian Social Movement Wolfsangel - Terza Posizione Japan Rising Sun Flag - Zaitokukai Lebanon Cedar tree split into three parts - Kataeb Party, Kataeb Regulatory Forces Cross of Resistance (Salib al-Muqawama) - Lebanese Forces (militia) Nepal Bus – Nepal Federal 
Socialist Party Khukuri – Sanghiya Loktantrik Rastriya Manch Madal – Nepal Majdoor Kisan Party Plough – Rastriya Prajatantra Party Smiley – Bibeksheel Nepali Dal Sun – Nepal Communist Party Tree – Nepali Congress Tumbler – Rastriya Janamorcha Umbrella – People's Socialist Party, Nepal Weighing scale – Sajha Party The Netherlands Ancient Greek temple – Forum for Democracy Seagull – Party for Freedom Tomato – Socialist Party Red rose – Labour Party Pakistan Arrow – Pakistan Peoples Party Book – Jamiat Ulema-e-Islam (JUI) Cricket bat – Pakistan Tehreek-e-Insaaf Kite – Muttahida Qaumi Movement (MQM) Lantern - Awami National Party (ANP) Sunflower – Green Party of Pakistan Tiger – Pakistan Muslim League (N) Weighing scale – Jamaat-e-Islami Pakistan (JI) Russia Bladed swastika – Russian National Unity ☧ Chi Rho – Great Russia Cross potent – People's National Party, Russian National Union Symbol of Chaos – Eurasianism Slovakia Christian cross – Christian Democratic Movement Blue cube – Slovak Democratic and Christian Union – Democratic Party Flying dove – Free Forum Eagle – Slovak National Party Red star – Communist Party of Slovakia letter S – People's Party – Movement for a Democratic Slovakia Stork – Christian Democratic Movement South Africa Triskele of three sevens – Afrikaner Weerstandsbeweging Sweden Black sail – Pirate Party Cornflower – Liberals (pre-2016) Dandelion – Green Party Four-leaf clover – Centre Party Hepatica – Sweden Democrats Jēran – National Youth letter L – Liberals letter M – Moderate Party Tiwaz rune – Nordic Resistance Movement Red carnation – Left Party Rose – Social Democrats Wolfsangel – Vitt Ariskt Motstånd Wood anemone – Christian Democrats (pre-2017) Switzerland Phrygian cap with a Swiss cross – Swiss Party of Labour Taiwan Blue Sky with a White Sun – Kuomintang Turkey Bee – Motherland Party Dolphin – Liberal Democratic Party Dove – Democratic Left Party Horse – Democrat Party Kayı tribe symbol – Good Party Lightbulb – Justice and Development Party Six arrows – Republican People's Party Three crescents – Nationalist Movement Party Wolf – Idealist Hearths Ukraine At sign – Internet Party of Ukraine Hand with three fingers raised – Svoboda (political party) Tryzub with sword – OUN-M, Right Sector, Tryzub (organization) Wolfsangel – Patriot of Ukraine, Azov Civil Corps, Social-National Assembly, Social-National Party of Ukraine, Karelian National Battalion United Kingdom Bee – Co-operative Party Earth with sunflower petals – Green Party of England and Wales Flash and circle – British Union of Fascists Griffin – Libertarian Party Liberty bird – Liberal Democrats Lion – Democratic Unionist Party, Britain First, UK Independence Party (2017-2018), Young Conservatives (UK) Pound sign – UK Independence Party (1993–2017) Red house – Aspire (political party) Red rose – Labour Party Thistle - Scottish Labour (since 2022) Saltire – the Scottish National Party and Scottish Conservative Party both use stylised saltires in their party logos Scribbled oak tree – Conservative Party Shovel – Labour Party (UK) until 1983 Stylised P-shaped Flag – Pirate Party UK Sun cross – British Movement Sunflower – Scottish Green Party Torch – former logo of the Labour Party (1920s to 1983) and the Conservative Party (1980s to 2006). 
Union Flag – used in the logos of the Ulster Unionist Party, Democratic Unionist Party, British National Party, Conservative Party (traditional), amongst others Welsh Dragon – former logo of Plaid Cymru; also appeared alongside the thistle, daffodil and clover leaf on the post-war Tory logo Welsh poppy – Plaid Cymru White Rose – logo of the Yorkshire Party, symbol of Yorkshire as a whole United States Abraham Lincoln – Republican Party, used on some paper ballots in the US; also used as a fundraising symbol (such as with the party's annual "Lincoln Dinner" in many states). Bear – California National Party Benjamin Franklin – Democratic Party, used on some paper ballots in the US Black and white cockade – Federalist Party Camel – Prohibition Party Donkey – Democratic Party Eagle – Republican Party (used on ballots in New York State); Constitution Party, American Party Elephant – Republican Party Lady Justice – Justice Party Letter L – Silver Legion of America Lion – National Party Minute Man and Embattled Farmer are the symbols of American Patriot Party (2003 to present) Moose – Vermont Progressive Party; also used in 1912 for the Bull Moose Party Panther – Black Panther Party Pelican – American Solidarity Party. Used for its association with Christian democracy. Penguin – used in some states as a symbol of the Libertarian Party Porcupine – Libertarian Party. Used as a symbol of the Free State Project in New Hampshire and libertarian ideas and movements in general. Raccoon – Whig Party Red rose – Democratic Socialists of America Red, white and blue cockade – Democratic-Republican Party Star – Democratic Party (used on ballots in New York State) Statue of Liberty – Libertarian Party. Also a national symbol Sunflower – Green Party; also, Republican presidential candidate Alfred Landon of Kansas in 1936 Thomas Jefferson and Andrew Jackson – Democratic Party – used as a fundraising symbol (such as with the party's annual "Jefferson-Jackson Dinner" in many states) Tiger – formerly, the New York City Democratic Party and the Tammany Hall political machine that controlled it for more than a century and a half. 
Torch – Conservative Party of New York; Libertarian Party Flags Black flag – Anarchism, Islamism, Jihadism, Rebellion Black Bauhinia flag – Pro-democracy camp (Hong Kong), Hong Kong nationalism, Hong Kong independence, opposition to Chinese state nationalism Black-yellow-white flag – Russian ultranationalism, Russian imperialism, Russian irredentism Canadian Duality Flag – Canadian federalism, Quebec autonomism Canadian Red Ensign – Canadian Anglophila, Support for the Commonwealth of Nations, Far-right politics in Canada, Canadian nationalism, White nationalism/supremacism in Canada, Alt right movement of Canada Calcutta flag – Indian independence movement Bisected red-and-black flag – Anarchist communism Confederate battle flag — Culture of the Southern United States, Historical commemoration of the Civil War, Neo-Confederates, Southern heritage, Lost Cause of the Confederacy, White supremacy, Rebellion Doug flag – Cascadia movement Estelada – Catalan independence movement, Catalan nationalism Flag of China – Chinese socialism, Chinese communism, Pro-Beijing camp (Hong Kong) Flag of Israel – Zionism Flag of Nazi Germany – Nazism, neo-Nazism, White supremacy, Aryanism, Nazi chic, Shock value Flag of North Korea – Kimilsungism–Kimjongilism, Pro-DPRK, Juche, Songun, Shock value Flag of Rhodesia – Rhodesian exile movement, Nostalgia for Rhodesia, White nationalism, White supremacy, Alt-right politics Flag of South Vietnam – Vietnamese diaspora, Anti-communism, Vietnamese democracy movement, Vietnamese heritage, Vietnamese ethnic unity, American nationalism Flag of the Arab Revolt – Pan-Arabism, Arab nationalism Flag of the Soviet Union – Communism, Soviet patriotism, Nostalgia for the Soviet Union, Marxism–Leninism, Communist chic, Neo-Sovietism, Support for the Russian invasion of Ukraine, Shock value Flag of the Ukrainian Insurgent Army – Banderism, Ukrainian nationalism, Opposition to the Russian invasion of Ukraine, Anti-Sovietism, Russophobia Flag of the United States – American conservatism, American libertarianism, American nationalism, American exceptionalism, Americanism, Trumpism, Radical right Gadsden flag – Right libertarianism, Classical liberalism, Liberty, Libertarian conservativism, Tea Party movement, Individualism, Americanism Green flag – Third International Theory, Gaddafi loyalism, Irish nationalism Jihadist flag – Islamism, Islamic fundamentalism, Jihadism, Islamic extremism, Shock value Kapok flag – Cantonese nationalism, Cantonia Independence Movement, Cantonese culture Morning Star flag – Free Papua Movement, Papuan nationalism Oranje, Blanje, Blou – Afrikaner ethnonationalism, Support for Apartheid, White supremacy, Anti-Black racism, White separatism Pan-African flag – Pan-Africanism, Black nationalism, Black power, Garveyism, pro-UNIA Pine Tree Flag – Christian nationalism, American Libertarianism, Christian Patriot movement, Culture of New England, Right-wing libertarianism, Americanism Prince's Flag – Dutch patriotism, Greater Netherlands movement, Nostalgia for the Dutch Republic, Pan-Netherlands politics, Far-right politics in Holland Rainbow flag (LGBT) – LGBT pride, LGBT rights, LGBT movements Red flag – Socialism, Communism, Marxism, Labour movement, Left-wing politics, Anarchism Senyera – Catalan identity, Catalan nationalism White-blue-white flag – Anti-Putinism, opposition to the Russian invasion of Ukraine, Irpin Declaration, Russian opposition White-red-white flag – Belarusian democracy movement, Belarusian opposition, opposition to Alexander 
Lukashenko, Belarusian nationalism, anti-Union State
List of ideological symbols
[ "Mathematics" ]
5,768
[ "Symbols", "Lists of symbols" ]
558,397
https://en.wikipedia.org/wiki/Rogue%20planet
A rogue planet, also termed a free-floating planet (FFP) or an isolated planetary-mass object (iPMO), is an interstellar object of planetary mass which is not gravitationally bound to any star or brown dwarf. Rogue planets may originate from planetary systems in which they are formed and later ejected, or they can also form on their own, outside a planetary system. The Milky Way alone may have billions to trillions of rogue planets, a range the upcoming Nancy Grace Roman Space Telescope is expected to refine. Some planetary-mass objects may have formed in a similar way to stars, and the International Astronomical Union has proposed that such objects be called sub-brown dwarfs. A possible example is Cha 110913−773444, which may either have been ejected and become a rogue planet or formed on its own to become a sub-brown dwarf. Terminology The first two discovery papers use the names isolated planetary-mass objects (iPMO) and free-floating planets (FFP). Most astronomical papers use one of these terms. The term rogue planet is more often used for microlensing studies, which also often use the term FFP. A press release intended for the public might use an alternative name. The discovery of at least 70 FFPs in 2021, for example, used the terms rogue planet, starless planet, wandering planet and free-floating planet in different press releases. Discovery Isolated planetary-mass objects (iPMO) were first discovered in 2000 by the UK team Lucas & Roche with UKIRT in the Orion Nebula. In the same year the Spanish team Zapatero Osorio et al. discovered iPMOs with Keck spectroscopy in the σ Orionis cluster. The spectroscopy of the objects in the Orion Nebula was published in 2001. Both European teams are now recognized for their quasi-simultaneous discoveries. In 1999 the Japanese team Oasa et al. discovered objects in Chamaeleon I that were spectroscopically confirmed years later in 2004 by the US team Luhman et al. Observation There are two techniques to discover free-floating planets: direct imaging and microlensing. Microlensing Astrophysicist Takahiro Sumi of Osaka University in Japan and colleagues, who form the Microlensing Observations in Astrophysics and the Optical Gravitational Lensing Experiment collaborations, published their study of microlensing in 2011. They observed 50 million stars in the Milky Way by using the MOA-II telescope at New Zealand's Mount John Observatory and the University of Warsaw telescope at Chile's Las Campanas Observatory. They found 474 incidents of microlensing, ten of which were brief enough to be planets of around Jupiter's size with no associated star in the immediate vicinity. The researchers estimated from their observations that there are nearly two Jupiter-mass rogue planets for every star in the Milky Way. One study suggested a much larger number, up to 100,000 times more rogue planets than stars in the Milky Way, though this study encompassed hypothetical objects much smaller than Jupiter. A 2017 study by Przemek Mróz of Warsaw University Observatory and colleagues, with a sample six times larger than that of the 2011 study, indicates an upper limit on Jupiter-mass free-floating or wide-orbit planets of 0.25 planets per main-sequence star in the Milky Way. In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet (named OGLE-2016-BLG-1928) unbound to any star and free floating in the Milky Way galaxy.
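The inference from event duration to lens mass rests on the Einstein-radius crossing time of a point lens, which scales with the square root of the lens mass. The following Python sketch is an illustrative order-of-magnitude calculation, not a reproduction of the surveys' analysis: the lens distance, source distance and relative transverse velocity are assumed typical Galactic values chosen only for illustration.

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
KPC = 3.086e19     # metres per kiloparsec
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg

def einstein_crossing_time(lens_mass, d_lens, d_source, v_perp):
    # Einstein radius R_E = sqrt((4*G*M/c^2) * D_L*(D_S - D_L)/D_S);
    # the event timescale is t_E = R_E / v_perp.
    r_e = math.sqrt((4 * G * lens_mass / C**2) * d_lens * (d_source - d_lens) / d_source)
    return r_e / v_perp

# Assumed geometry: lens at 4 kpc, source at 8 kpc, 200 km/s relative transverse velocity.
d_lens, d_source, v_perp = 4 * KPC, 8 * KPC, 200e3
for label, mass in [("0.3 solar-mass star", 0.3 * M_SUN), ("1 Jupiter-mass planet", M_JUP)]:
    days = einstein_crossing_time(mass, d_lens, d_source, v_perp) / 86400
    print(f"{label}: t_E ~ {days:.1f} days")
# Yields roughly 19 days for the star and about 1 day for the planet.

Under these assumed values a Jupiter-mass lens produces a day-scale event while a low-mass star produces a weeks-long one, which is the reasoning behind flagging the ten brief events above as planetary candidates.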
Direct imaging Microlensing planets can only be studied by the microlensing event, which makes the characterization of the planet difficult. Astronomers therefore turn to isolated planetary-mass objects (iPMOs) found via the direct imaging method. To determine the mass of a brown dwarf or iPMO, one needs, for example, the luminosity and the age of the object. Determining the age of a low-mass object has proven to be difficult. It is no surprise that the vast majority of iPMOs are found inside young, nearby star-forming regions whose ages astronomers know. These objects are younger than 200 Myr, are relatively massive (>5 Jupiter masses) and belong to the L- and T-dwarfs. There is however a small, growing sample of cold and old Y-dwarfs that have estimated masses of 8–20 Jupiter masses. Nearby rogue planet candidates of spectral type Y include WISE 0855−0714 at a distance of about 7.4 light-years. If this sample of Y-dwarfs can be characterized with more accurate measurements, or if a way to better characterize their ages can be found, the number of old and cold iPMOs will likely increase significantly. The first iPMOs were discovered in the early 2000s via direct imaging inside young star-forming regions. These iPMOs found via direct imaging probably formed like stars (and are sometimes called sub-brown dwarfs). There might also be iPMOs that form like planets and are then ejected. Such objects would, however, be kinematically different from their natal star-forming region, should not be surrounded by a circumstellar disk, and would have high metallicity. None of the iPMOs found inside young star-forming regions shows a high velocity compared to its star-forming region. Among old iPMOs, the cold WISE J0830+2837 shows a tangential velocity (Vtan) of about 100 km/s, which is high, but still consistent with formation in our galaxy. For WISE 1534–1043, one alternative scenario explains this object as an ejected exoplanet due to its high Vtan of about 200 km/s, but its color suggests it is an old, metal-poor brown dwarf. Most astronomers studying massive iPMOs believe that they represent the low-mass end of the star-formation process. Astronomers have used the Herschel Space Observatory and the Very Large Telescope to observe a very young free-floating planetary-mass object, OTS 44, and demonstrate that the processes characterizing the canonical star-like mode of formation apply to isolated objects down to a few Jupiter masses. Herschel far-infrared observations have shown that OTS 44 is surrounded by a disk of at least 10 Earth masses and thus could eventually form a mini planetary system. Spectroscopic observations of OTS 44 with the SINFONI spectrograph at the Very Large Telescope have revealed that the disk is actively accreting matter, similar to the disks of young stars. Binaries The first discovery of a resolved planetary-mass binary was 2MASS J1119–1137AB. There are, however, other binaries known, such as 2MASS J1553022+153236AB, WISE 1828+2650, WISE 0146+4234, WISE J0336−0143 (which could also be a brown dwarf and planetary-mass object (BD+PMO) binary), NIRISS-NGC1333-12 and several objects discovered by Zhang et al. In the Orion Nebula a population of 40 wide binaries and 2 triple systems was discovered. This was surprising for two reasons: the trend among brown-dwarf binaries predicted that the separation between low-mass objects should decrease with decreasing mass, and the binary fraction was also predicted to decrease with mass. These binaries were named Jupiter-mass binary objects (JuMBOs). They make up at least 9% of the iPMOs and have a separation smaller than 340 AU.
It is unclear how these JuMBOs formed, but an extensive study argued that they formed in situ, like stars. If they formed like stars, then there must be an unknown "extra ingredient" to allow them to form. If they formed like planets and were later ejected, then it has to be explained why these binaries did not break apart during the ejection process. Future measurements with JWST might resolve whether these objects formed as ejected planets or as stars. A study by Kevin Luhman reanalysed the NIRCam data and found that most JuMBOs did not appear in his sample of substellar objects. Moreover, the colors were consistent with reddened background sources or low signal-to-noise sources. Only JuMBO 29 is identified as a good candidate in this work. JuMBO 29 was also observed with NIRSpec and one component was identified as a young M8 source. This spectral type is consistent with a low mass for the age of the Orion Nebula. Total number of known iPMOs There are likely hundreds of known candidate iPMOs, over a hundred objects with spectra and a small but growing number of candidates discovered via microlensing. Some large surveys include: As of December 2021, the largest-ever group of rogue planets was discovered, numbering at least 70 and up to 170 depending on the assumed age. They are found in the OB association between Upper Scorpius and Ophiuchus with masses between 4 and 13 Jupiter masses and ages around 3 to 10 million years, and were most likely formed by either gravitational collapse of gas clouds, or formation in a protoplanetary disk followed by ejection due to dynamical instabilities. Follow-up observations with spectroscopy from the Subaru Telescope and Gran Telescopio Canarias showed that the contamination of this sample is quite low (≤6%). The 16 young objects had masses between 3 and 14 Jupiter masses, confirming that they are indeed planetary-mass objects. In October 2023 an even larger group of 540 planetary-mass object candidates was discovered in the Trapezium Cluster and inner Orion Nebula with JWST. The objects have masses between 0.6 and 13 Jupiter masses. A surprising number of these objects formed wide binaries, which was not predicted. Formation There are in general two scenarios that can lead to the formation of an isolated planetary-mass object (iPMO). It can form like a planet around a star and then be ejected, or it can form in isolation like a low-mass star or brown dwarf. This can influence its composition and motion. Formation like a star Models from 2001 suggested that objects with a mass of at least one Jupiter mass could form via the collapse and fragmentation of molecular clouds. Pre-JWST observations have shown that objects below 3–5 Jupiter masses are unlikely to form on their own. Observations in 2023 in the Trapezium Cluster with JWST have shown that objects with masses as low as 0.6 Jupiter masses might form on their own, without requiring a steep mass cut-off. A particular type of globule, called a globulette, is thought to be a birthplace for brown dwarfs and planetary-mass objects. Globulettes are found in the Rosette Nebula and IC 1805. Sometimes young iPMOs are still surrounded by a disk that could form exomoons. Due to the tight orbit of this type of exomoon around its host planet, such moons have a relatively high chance, 10–15%, of transiting. Disks Some very young star-forming regions, typically younger than 5 million years, sometimes contain isolated planetary-mass objects with infrared excess and signs of accretion. Best known is the iPMO OTS 44, which was discovered to have a disk and is located in Chamaeleon I.
Chamaeleon I and II have other candidate iPMOs with disks. Other star-forming regions with iPMOs with disks or accretion are Lupus I, the Rho Ophiuchi Cloud Complex, the Sigma Orionis cluster, the Orion Nebula, Taurus, NGC 1333 and IC 348. A large survey of disks around brown dwarfs and iPMOs with ALMA found that these disks are not massive enough to form Earth-mass planets. There is still the possibility that the disks have already formed planets. Studies of red dwarfs have shown that some have gas-rich disks at a relatively old age. These disks were dubbed Peter Pan disks, and this trend could continue into the planetary-mass regime. One Peter Pan disk surrounds the 45 Myr old brown dwarf 2MASS J02265658-5327032, with a mass of about 13.7 Jupiter masses, which is close to the planetary-mass regime. Recent studies of the nearby planetary-mass object 2MASS J11151597+1937266 found that this iPMO is surrounded by a disk. It shows signs of accretion from the disk and also infrared excess. Formation like a planet Ejected planets are predicted to be mostly low-mass (below about 30 Earth masses; see Figure 1 of Ma et al.) and their mean mass depends on the mass of their host star. Simulations by Ma et al. showed that 17.5% of stars of 1 solar mass eject a total of 16.8 Earth masses of planets per star, with a typical (median) mass of 0.8 Earth masses for an individual free-floating planet (FFP). For lower-mass red dwarfs with a mass of 0.3 solar masses, 12% of stars eject a total of 5.1 Earth masses per star, with a typical mass of 0.3 Earth masses for an individual FFP. Hong et al. predicted that exomoons can be scattered by planet-planet interactions and become ejected exomoons. Higher-mass (0.3–1 Jupiter masses) ejected FFPs are predicted to be possible, but they are also predicted to be rare. Ejection of a planet can occur via planet-planet scattering or due to a stellar flyby. Another possibility is the ejection of a fragment of a disk that then forms into a planetary-mass object. Another suggested scenario is the ejection of planets on a tilted circumbinary orbit: interactions with the central binary and among the planets themselves can lead to the ejection of the lower-mass planet in the system. Other scenarios If a stellar or brown dwarf embryo has its accretion halted, it could remain at a low enough mass to become a planetary-mass object. Such halted accretion could occur if the embryo is ejected or if its circumstellar disk experiences photoevaporation near O-stars. Objects that formed via the ejected-embryo scenario would have a smaller disk or none at all, and the fraction of binaries decreases for such objects. It could also be that free-floating planetary-mass objects form from a combination of scenarios. Fate Most isolated planetary-mass objects will float in interstellar space forever. Some iPMOs will have a close encounter with a planetary system. This rare encounter can have three outcomes: the iPMO will remain unbound, it could become weakly bound to the star, or it could "kick out" an existing exoplanet and replace it. Simulations have shown that the vast majority of these encounters result in a capture event, with the iPMO weakly bound at low gravitational binding energy on an elongated, highly eccentric orbit. These orbits are not stable, and 90% of these objects gain energy due to planet-planet encounters and are ejected back into interstellar space. Only 1% of all stars will experience this temporary capture.
He proposed that these atmospheres would be preserved by the pressure-induced far-infrared radiation opacity of a thick hydrogen-containing atmosphere. During planetary-system formation, several small protoplanetary bodies may be ejected from the system. An ejected body would receive less of the stellar-generated ultraviolet light that can strip away the lighter elements of its atmosphere. Even an Earth-sized body would have enough gravity to prevent the escape of the hydrogen and helium in its atmosphere. In an Earth-sized object the geothermal energy from residual core radioisotope decay could maintain a surface temperature above the melting point of water, allowing liquid-water oceans to exist. These planets are likely to remain geologically active for long periods. If they have geodynamo-created protective magnetospheres and sea floor volcanism, hydrothermal vents could provide energy for life. These bodies would be difficult to detect because of their weak thermal microwave radiation emissions, although reflected solar radiation and far-infrared thermal emissions may be detectable from an object that is less than 1,000 astronomical units from Earth. Around five percent of Earth-sized ejected planets with Moon-sized natural satellites would retain their satellites after ejection. A large satellite would be a source of significant geological tidal heating. List The table below lists rogue planets, confirmed or suspected, that have been discovered. It is yet unknown whether these planets were ejected from orbiting a star or else formed on their own as sub-brown dwarfs. Whether exceptionally low-mass rogue planets (such as OGLE-2012-BLG-1323 and KMT-2019-BLG-2073) are even capable of being formed on their own is currently unknown. Discovered via direct imaging These objects were discovered with the direct imaging method. Many were discovered in young star-clusters or stellar associations and a few old are known (such as WISE 0855−0714). List is sorted after discovery year. Discovered via microlensing These objects were discovered via microlensing. Rogue planets discovered via microlensing can only be studied by the lensing event. Some of them could also be exoplanets in a wide orbit around an unseen star. Discovered via transit See also Interstellar object – an astronomical object in interstellar space that is not gravitationally bound to a star ʻOumuamua – an interstellar object that passed through the Solar System in 2017 Rogue black hole – a gravitationally unbound black hole Rogue extragalactic planets – rogue planets outside the Milky Way galaxy Tidally detached exomoon – rogue planets that were originally moons In fiction A Pail of Air (1951) — a science fiction short story Fritz Leiber Space: 1999 (1975-77) — British science-fiction television programme Remina (2004–2005) – horror manga by Junji Ito Melancholia (2011) – science fiction film by Lars von Trier Dark Eden (2012) – a social science fiction novel by Chris Beckett The Wandering Earth (2019) – a science fiction film directed by Frant Gwo Gemini Home Entertainment (2019–present) – horror anthology web series by Remy Abode Carol & the End of the World (2023) – an animated adult comedy miniseries by Dan Guterman References Bibliography "Possibility of Life Sustaining Planets in Interstellar Space" Article by Stevenson similar to the Nature article but with more information. 
External links Definition of a "Planet" (Resolution B5 – IAU) Strange New Worlds Could Make Miniature Solar Systems Robert Roy Britt (SPACE.com) 5 June 2006 The IAU draft definition of "planet" and "plutons" press release (International Astronomical Union) 2006
Rogue planet
[ "Astronomy" ]
3,839
[ "Planetary-mass objects", "Astronomical objects" ]
558,812
https://en.wikipedia.org/wiki/Extraterrestrial%20UFO%20hypothesis
The extraterrestrial hypothesis (ETH) proposes that some unidentified flying objects (UFOs) are best explained as being physical spacecraft occupied by extraterrestrial intelligence or non-human aliens, or non-occupied alien probes from other planets visiting Earth. In spite of ardent believers that various UFO sightings are verifiable evidence for the hypothesis, no rigorous analysis has ever concluded as much. Origins of the term Use of the term extraterrestrial hypothesis in printed material on UFOs seems to date to at least the latter half of the 1960s. French ufologist Jacques Vallée used it in his 1966 book Challenge to science: the UFO enigma. It was used in a publication by French engineer Aimé Michel in 1967, by James E. McDonald in a symposium in March 1968 and again by McDonald and James Harder while testifying before the Congressional Committee on Science and Astronautics, in July 1968. Skeptic Philip J. Klass used it in his 1968 book UFOs--Identified. In 1969 physicist Edward Condon defined the "extraterrestrial hypothesis" or "ETH" as the "idea that some UFOs may be spacecraft sent to Earth from another civilization or space other than Earth, or on a planet associated with a more distant star," while presenting the findings of the much debated Condon Report. Some UFO historians credit Condon with popularizing the term and its abbreviation "ETH." Chronology Although the extraterrestrial hypothesis (ETH) as a phrase is a comparatively new concept, one which owes much to the flying saucer sightings of the 1940s–1960s, its origins can be traced back to a number of earlier events, such as the now-discredited Martian canals and ancient Martian civilization promoted by astronomer Percival Lowell, popular culture including the writings of H. G. Wells and fellow science fiction pioneers such as Edgar Rice Burroughs, who likewise wrote of Martian civilizations, and even to the works of figures such as the Swedish philosopher, mystic and scientist Emanuel Swedenborg, who promoted a variety of unconventional views that linked other worlds to the afterlife. In the early part of the twentieth century, Charles Fort collected accounts of anomalous physical phenomena from newspapers and scientific journals, including many reports of extraordinary aerial objects. These were published in 1919 in The Book of the Damned. In this and two subsequent books, New Lands (1923) and Lo! (1931), Fort theorized that visitors from other worlds were observing Earth. Fort's reports of aerial phenomena were frequently cited in American newspapers when the UFO phenomenon first attracted widespread media attention in June and July 1947. The modern ETH—specifically, the implicit linking of unidentified aircraft and lights in the sky to alien life—took root during the late 1940s and took its current form during the 1950s. It drew on pseudoscience, as well as popular culture. Unlike earlier speculation of extraterrestrial life, interest in the ETH was also bolstered by many unexplained sightings investigated by the U.S. government and governments of other countries, as well as private civilian groups, such as NICAP and APRO. 
Historical reports of extraterrestrial visits An early example of speculation over extraterrestrial visitors can be found in the French newspaper Le Pays, which on June 17, 1864, published a story about two American geologists who had allegedly discovered an alien-like creature, a mummified three-foot-tall hairless humanoid with a trunk-like appendage on its forehead, inside a hollow egg-shaped structure. H. G. Wells, in his 1898 science fiction classic The War of the Worlds, popularized the idea of Martian visitation and invasion. Even before Wells, there was a sudden upsurge in reports in "Mystery airships" in the United States. For example, The Washington Times in 1897 speculated that the airships were "a reconnoitering party from Mars", and the Saint Louis Post-Dispatch wrote: "these may be visitors from Mars, fearful, at the last, of invading the planet they have been seeking." Later, there was a more international airship wave from 1909-1912. An example of an extraterrestrial explanation at the time was a 1909 letter to a New Zealand newspaper suggesting "atomic powered spaceships from Mars." From the 1920s, the idea of alien visitation in space ships was commonplace in popular comic strips and radio and movie serials, such as Buck Rogers and Flash Gordon. In particular, the Flash Gordon serials have the Earth being attacked from space by alien meteors, ray beams, and biological weapons. In 1938, a radio broadcast version of The War of the Worlds by Orson Welles, using a contemporary setting for H. G. Wells' Martian invasion, created some public panic in the United States. The 1947 flying saucer wave in America On June 24, 1947, at about 3:00 p.m. local time, pilot Kenneth Arnold reported seeing nine unidentified disk-shaped aircraft flying near Mount Rainier. When no aircraft emerged that seemed to account for what he had seen, Arnold quickly considered the possibility of the objects being extraterrestrial. On July 7, 1947, two stories came out where Arnold was raising the topic of possible extraterrestrial origins, both as his opinion and those who had written to him. In an Associated Press story, Arnold said he had received quantities of fan mail eager to help solve the mystery. Some of them "suggested the discs were visitations from another planet." When the 1947 flying saucer wave hit the United States, there was much speculation in the newspapers about what they might be in news stories, columns, editorials, and letters to the editor. For example, on July 10, U.S. Senator Glen Taylor of Idaho commented, "I almost wish the flying saucers would turn out to be space ships from another planet," because the possibility of hostility "would unify the people of the earth as nothing else could." On July 8, R. DeWitt Miller was quoted by UP saying that the saucers had been seen since the early nineteenth century. If the present discs weren't secret Army weapons, he suggested they could be vehicles from Mars, or other planets, or maybe even "things out of other dimensions of time and space." Other articles brought up the work of Charles Fort, who earlier in the twentieth century had documented numerous reports of unidentified flying objects that had been written up in newspapers and scientific journals. Even if people thought the saucers were real, most were generally unwilling to leap to the conclusion that they were extraterrestrial in origin. 
Various popular theories began to quickly proliferate in press articles, such as secret military projects, Russian spy devices, hoaxes, optical illusions, and mass hysteria. According to journalist Edward R. Murrow, the ETH as a serious explanation for "flying saucers" did not earn widespread attention until about 18 months after Arnold's sighting. These attitudes seem to be reflected in the results of the first U.S. poll of public UFO perceptions released by Gallup on August 14, 1947. The term "flying saucer" was familiar to 90% of the respondents. As to what people thought explained them, the poll further showed, that most people either held no opinion or refused to answer the question (33%), or generally believed that there was a mundane explanation. 29% thought they were optical illusions, mirages, or imagination; 15% a U.S. secret weapon; 10% a hoax; 3% a "weather forecasting device"; 1% of Soviet origin, and 9% had "other explanations," including fulfillment of Biblical prophecy, secret commercial aircraft, or phenomena related to atomic testing. U.S. military investigation and debunkery On July 9, Army Air Forces Intelligence began a secret study of the best saucer reports, including that of Arnold's. A follow-up study by the Air Materiel Command intelligence and engineering departments at Wright Field, Ohio led to the formation of the U.S. Air Force's Project Sign at the end of 1947, the first official U.S. military UFO study. In 1948, Project Sign concluded without endorsing any unified explanation for all UFO reports, and the ETH was rejected by USAF Chief of Staff General Hoyt Vandenberg, citing a lack of physical evidence. Vandenberg dismantled Project Sign, and with this official policy in place, subsequent public Air Force reports concluded, that there was insufficient evidence to warrant further investigation of UFOs. In 1952, Life Magazine published "Have We Visitors From Space?" which popularized the Extraterrestrial Hypothesis and is thought to have triggered the 1952 UFO flap. Immediately following the great UFO wave of 1952 and the military debunking of radar and visual sightings, plus jet interceptions over Washington, D.C. in August, the CIA's Office of Scientific Investigation took particular interest in UFOs. Though the ETH was mentioned, it was generally given little credence. However, others within the CIA, such as the Psychological Strategy Board, were more concerned about how an unfriendly power such as the Soviet Union might use UFOs for psychological warfare purposes, exploit the gullibility of the public for the sensational, and clog intelligence channels. Under a directive from the National Security Council to review the problem, in January 1953, the CIA organized the Robertson Panel, a group of scientists who quickly reviewed the Blue Book's best evidence, including motion pictures and an engineering report that concluded that the performance characteristics were beyond that of earthly craft. After two days' review, all cases were claimed to have conventional explanations. An official policy of public debunkery was recommended using the mass media and authority figures in order to influence public opinion and reduce the number of UFO reports. Evolution of public opinion The early 1950s also saw a number of movies depicting flying saucers and aliens, including The Day the Earth Stood Still (1951), The War of the Worlds (1953), Earth vs. the Flying Saucers (1956), and Forbidden Planet (1956). 
A poll published in Popular Science magazine in August 1951 reported that of the respondents who self-reported as UFO witnesses, 52% believed that they had seen a man-made aircraft, while only 4% believed that they had seen an alien craft; an additional 28% were uncertain, with more than half of these stating they believed they were either man-made aircraft, or "visitors from afar." By 1957, 25% of Americans responded that they either believed or were willing to believe in the ETH, while 53% responded that they were not. 22% reported that they were uncertain. A Roper poll in 2002 reported that 56% of respondents thought UFOs were real, with 48% believing that UFOs had visited Earth. Fewer sightings despite camera phone technology As the proliferation of smartphone camera technology across the population has not led to a significant increase in recorded UFO sightings, the claimed phenomenology of UFOs has been called into question. This goes counter to the predictions of supporters of the extraterrestrial hypothesis, even causing a crisis of confidence among some within the informal UFO research community. Involvement of scientists The scientific community has shown very little support for the ETH, and has largely accepted the explanation that reports of UFOs are the result of people misinterpreting common objects or phenomena, or are the work of hoaxers. Professor Stephen Hawking has expressed skepticism about the ETH. In a 1969 lecture, U.S. astrophysicist Carl Sagan said: "The idea of benign or hostile space aliens from other planets visiting the Earth [is clearly] an emotional idea. There are two sorts of self-deception here: either accepting the idea of extraterrestrial visitation by space aliens in the face of very meager evidence because we want it to be true; or rejecting such an idea out of hand, in the absence of sufficient evidence, because we don't want it to be true. Each of these extremes is a serious impediment to the study of UFOs." Similarly, British astrophysicist Peter A. Sturrock wrote "for many years, discussions of the UFO issue have remained narrowly polarized between advocates and adversaries of a single theory, namely the extraterrestrial hypothesis ... this fixation on the ETH has narrowed and impoverished the debate, precluding an examination of other possible theories for the phenomenon." An informal poll done by Sturrock in 1973 of American Institute of Aeronautics and Astronautics members found that about 10% of them believed that UFOs were vehicles from outer space. In another poll conducted in 1977, Sturrock asked members of the American Astronomical Society to assign probabilities to eight possible explanations for UFOs. The primary scientific arguments against the ETH were summarized by astronomer and UFO researcher J. Allen Hynek during a presentation at the 1983 MUFON Symposium, where he outlined seven key reasons why he could not accept the ETH: Failure of sophisticated surveillance systems to detect incoming or outgoing UFOs Gravitational and atmospheric considerations Statistical considerations Elusive, evasive and absurd behavior of UFOs and their occupants Isolation of the UFO phenomenon in time and space: the Cheshire Cat effect The space unworthiness of UFOs The problem of astronomical distances Hynek argued that: Despite worldwide radar systems and Earth-orbiting satellites, UFOs are alleged to flit in and out of the atmosphere, leaving little to no evidence.
Space aliens are alleged to be overwhelmingly humanoid, and are allegedly able to exist on Earth without much difficulty, often lacking "space suits", even though extra-solar planets would likely have different atmospheres, biospheres, gravity and other factors, and extraterrestrial life would likely be very different from Earthly life. The number of reported UFOs and of purported encounters with UFO-inhabitants outstrips the number of expeditions that an alien civilization (or civilizations) could statistically be expected to mount. The behavior of extraterrestrials reported during alleged abductions is often inconsistent and irrational. UFOs are isolated in time and space: like the Cheshire Cat, they seem to appear and disappear at will, leaving only vague, ambiguous and mocking evidence of their presence. Reported UFOs are often far too small to support a crew traveling through space, and their reported flight behavior is often not representative of a craft under intelligent control (erratic flight patterns, sudden course changes). The distance between planets makes interstellar travel impractical, particularly because of the amount of energy that would be required for interstellar travel using conventional means (according to a NASA estimate, it would take about 7 × 10^19 joules of energy to send the then-current Space Shuttle on a one-way 50-year journey to the nearest star, an enormous amount of energy; a rough order-of-magnitude check of this figure is sketched below) and because of the level of technology that would be required to circumvent conventional energy/fuel/speed limitations using exotic means, such as Einstein-Rosen Bridges as ways to shorten distances from point A to point B (see Faster-than-light travel). According to the personal assessment of Hynek at the time, points 1 through 6 could be argued, but point 7 represented an "insurmountable" barrier to the validity of the ETH. NASA NASA frequently fields questions in regard to the ETH and UFOs. As of 2006, its official standpoint was that the ETH lacks empirical evidence; NASA scientist David Morrison stated that "no one has ever found a single artifact, or any other convincing evidence for such alien visits" and that "As far as I know, no claims of UFOs as being alien craft have any validity -- the claims are without substance, and certainly not proved". Despite public interest, up until 2021, NASA had considered the study of the ETH to be irrelevant to its work because of the number of false leads that a study would provide, and the limited amount of usable scientific data that it would yield. On the History Channel UFO Hunters episode "The NASA Files" (2008), former NASA astronauts commented: Gordon Cooper wrote that NASA and the government "swept these and other sightings under the rug", and Brian O'Leary stated "some of my fellow astronauts and scientists astronauts that did go up and who have observed things, very clearly, they were told - not to report it". In June 2021, NASA Administrator Bill Nelson announced that he had directed NASA scientists to investigate Unidentified Aerial Phenomena. During an interview at the University of Virginia, Bill Nelson explored the possibility that UAP could represent extraterrestrial technology. NASA scientist Ravi Kopparapu advocates studying UAP. In August 2021, at the American Institute of Aeronautics and Astronautics, Kopparapu presented a paper, from the American Association for the Advancement of Science's 134th Meeting General Symposium, that supported the ETH. Kopparapu stated he and his colleagues found the paper "perfectly credible".
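The energy figure quoted in the astronomical-distances argument above can be checked to order of magnitude with simple kinematics. In the Python sketch below, the spacecraft mass of about 1e5 kg (roughly an orbiter-scale mass) and the purely kinetic, non-relativistic treatment are illustrative assumptions, not the inputs of the NASA estimate itself.

LIGHT_YEAR_M = 9.461e15   # metres per light-year
YEAR_S = 3.156e7          # seconds per year

distance_m = 4.24 * LIGHT_YEAR_M   # distance to the nearest star, Proxima Centauri
travel_time_s = 50 * YEAR_S        # one-way, 50-year journey
mass_kg = 1.0e5                    # assumed orbiter-scale mass (illustrative)

speed_ms = distance_m / travel_time_s           # about 2.5e7 m/s, roughly 8% of light speed
kinetic_energy_j = 0.5 * mass_kg * speed_ms**2  # classical kinetic energy

print(f"speed: {speed_ms:.2e} m/s, kinetic energy: {kinetic_energy_j:.1e} J")
# Prints roughly 3e19 J, the same order of magnitude as the ~7e19 J figure quoted above.

This counts only the kinetic energy of the payload; propellant, inefficiencies and any deceleration at the destination would push the real requirement higher, which is the point of the argument.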
Conspiracy theories A frequent concept in ufology and popular culture is that the true extent of information about UFOs is being suppressed by some form of conspiracy of silence, or by an official cover-up that is acting to conceal information. In 1968, American engineer James Harder argued that significant evidence existed to prove UFOs "beyond reasonable doubt," but that the evidence had been suppressed and largely neglected by scientists and the general public, thus preventing sound conclusions from being reached on the ETH. "Over the past 20 years a vast amount of evidence has been accumulating that bears on the existence of UFOs. Most of this is little known to the general public or to most scientists. But on the basis of the data and ordinary rules of evidence, as would be applied in civil or criminal courts, the physical reality of UFOs has been proved beyond a reasonable doubt." (J. A. Harder) A survey carried out by Industrial Research magazine in 1971 showed that more Americans believed the government was concealing information about UFOs (76%) than believed in the existence of UFOs (54%), or in the ETH itself (32%). People have long been curious about extraterrestrial life, and aliens are the subject of numerous urban legends, including claims that they have long been present on Earth or that they may be able to assist humans in resolving certain issues. There is, however, no scientific proof to back up these assertions. Documents and investigations regarding ETH Other private or government studies, some secret, have concluded in favor of the ET hypothesis, or have had members who disagreed with the official conclusions reached by the committees and agencies to which they belonged. The following are examples of sources that have focused specifically on the topic: In 1967, Greek physicist Paul Santorini, a Manhattan Project scientist, publicly stated that a 1947 Greek government investigation into the European ghost rockets of 1946 under his lead quickly concluded that they were not missiles. Santorini claimed the investigation was then quashed by military officials from the U.S., who knew them to be extraterrestrial, because there was no defense against the advanced technology and they feared widespread panic should the results become public. A 1948 Top Secret USAF Europe document states that Swedish air intelligence informed them that at least some of their investigators into the ghost rockets and flying saucers concluded they had extraterrestrial origins: "...Flying saucers have been reported by so many sources and from such a variety of places that we are convinced that they cannot be disregarded and must be explained on some basis which is perhaps slightly beyond the scope of our present intelligence thinking. When officers of this Directorate recently visited the Swedish Air Intelligence Service... their answer was that some reliable and fully technically qualified people have reached the conclusion that 'these phenomena are obviously the result of a high technical skill which cannot be credited to any presently known culture on earth.' They are therefore assuming that these objects originate from some previously unknown or unidentified technology, possibly outside the earth." In 1948, the USAF Project Sign produced a Top Secret Estimate of the Situation, concluding that the ETH was the most likely explanation for the most perplexing unexplained cases.
The study was ordered destroyed by USAF Chief of Staff General Hoyt Vandenberg, citing lack of proof. Knowledge of the existence of the Estimate has come from insiders who said they read a surviving copy, including the later USAF Project Blue Book head Edward J. Ruppelt, and astronomer and USAF consultant J. Allen Hynek. West Germany, in conjunction with other European countries, conducted a secret study from 1951 to 1954, also concluding that UFOs were extraterrestrial. This study was revealed by German rocketry pioneer Hermann Oberth, who headed the study and who also made many public statements supporting the ETH in succeeding years. At the study's conclusion in 1954, Oberth declared: "These objects (UFOs) are conceived and directed by intelligent beings of a very high order. They do not originate in our solar system, perhaps not in our galaxy." Soon afterwards, in an October 24, 1954, article in The American Weekly, Oberth wrote: "It is my thesis that flying saucers are real and that they are space ships from another solar system. I think that they possibly are manned by intelligent observers who are members of a race that may have been investigating our earth for centuries..." The CIA started their own internal scientific review the following day. Some CIA scientists were also seriously considering the ETH. An early memo from August was very skeptical, but also added: "...as long as a series of reports remains 'unexplainable' (interplanetary aspects and alien origin not being thoroughly excluded from consideration) caution requires that intelligence continue coverage of the subject." A report from later that month was similarly skeptical, but nevertheless concluded: "...sightings of UFOs reported at Los Alamos and Oak Ridge, at a time when the background radiation count had risen inexplicably. Here we run out of even 'blue yonder' explanations that might be tenable, and we still are left with numbers of incredible reports from credible observers." A December 1952 memo from the Assistant CIA Director of Scientific Intelligence (O/SI) was much more urgent: "...the reports of incidents convince us that there is something going on that must have immediate attention. Sightings of unexplained objects at great altitudes and traveling at high speeds in the vicinity of U.S. defense installation [sic] are of such nature that they are not attributable to natural phenomena or known types of aerial vehicles." Some of the memos also made it clear, that CIA interest in the subject was not to be made public, partly in fear of possible public panic. (Good, 331–335) The CIA organized the January 1953 Robertson Panel of scientists to debunk the data collected by the Air Force's Project Blue Book. This included an engineering analysis of UFO maneuvers by Blue Book (including a motion picture film analysis by Naval scientists) that had concluded UFOs were under intelligent control and likely extraterrestrial. Extraterrestrial "believers" within Project Blue Book included Major Dewey Fournet, in charge of the engineering analysis of UFO motion, who later became a board member on the civilian UFO organization NICAP. Blue Book director Edward J. Ruppelt privately commented on other firm "pro-UFO" members in the USAF investigations, including some Pentagon generals, such as Charles P. Cabell, USAF Chief of Air Intelligence, who, angry at the inaction and debunkery of Project Grudge, dissolved it in 1951, established Project Blue Book in its place, and made Ruppelt director. 
In 1953, Cabell became deputy director of the CIA. Another defector from the official Air Force party line was consultant J. Allen Hynek, who started out as a staunch skeptic. After 20 years of investigation, he changed positions and generally supported the ETH. He became the most publicly known UFO advocate scientist in the 1970s and 1980s. The first CIA Director, Vice Admiral Roscoe H. Hillenkoetter, stated in a signed statement to Congress, also reported in The New York Times (February 28, 1960): "It is time for the truth to be brought out... Behind the scenes high-ranking Air Force officers are soberly concerned about the UFOs. However, through official secrecy and ridicule, many citizens are led to believe the unknown flying objects are nonsense... I urge immediate Congressional action to reduce the dangers from secrecy about unidentified flying objects." In 1962, in his letter of resignation from NICAP, he told director Donald Keyhoe, "I know the UFOs are not U.S. or Soviet devices. All we can do now is wait for some actions by the UFOs." Although the 1968 Condon Report came to a negative conclusion (written by Condon), it is known that many members of the study strongly disagreed with Condon's methods and biases. Most quit the project in disgust, or were fired for insubordination. A few became ETH supporters. Perhaps the best known example is David Saunders, who in his 1968 book UFOs? Yes lambasted Condon for extreme bias, and for ignoring or misrepresenting critical evidence. Saunders wrote: "It is clear... that the sightings have been going on for too long to explain in terms of straightforward terrestrial intelligence. It's in this sense that ETI (Extra Terrestrial Intelligence) stands as the 'least implausible' explanation of 'real UFOs'." In 1999, the private French COMETA report (written primarily by military defense analysts) stated the conclusion regarding UFO phenomena, that a "single hypothesis sufficiently takes into account the facts and, for the most part, only calls for present-day science. It is the hypothesis of extraterrestrial visitors." The report noted issues with formulating the extraterrestrial hypothesis, likening its study to the study of meteorites, but concluded, that although it was far from the best scientific hypothesis, "strong presumptions exist in its favour". The report also concludes, that the studies it presents, "demonstrate the almost certain physical reality of completely unknown flying objects with remarkable flight performances and noiselessness, apparently operated by intelligent [beings] ... Secret craft definitely of earthly origins (drones, stealth aircraft, etc.) can only explain a minority of cases. If we go back far enough in time, we clearly perceive the limits of this explanation." Jean-Jacques Velasco, the head of the official French UFO investigation SEPRA, wrote a book in 2005, saying, that 14% of the 5800 cases studied by SEPRA were 'utterly inexplicable and extraterrestrial' in origin. However, the CNES own report says 28% of sightings remain unidentified. Yves Sillard, the head of the new official French UFO investigation GEIPAN and former head of French space agency CNES, echoes Velasco's comments and adds, that the United States 'is guilty of covering up this information.' However, this is not the official public posture of SEPRA, CNES, or the French government. (The CNES placed their 5,800 case files on the Internet starting March 2007.) 
Official White House position In November 2011, the White House released an official response to two petitions asking the U.S. government to acknowledge formally that aliens have visited Earth and to disclose any intentional withholding of government interactions with extraterrestrial beings. According to the response, "The U.S. government has no evidence that any life exists outside our planet, or that an extraterrestrial presence has contacted or engaged any member of the human race." Also, according to the response, there is "no credible information to suggest that any evidence is being hidden from the public's eye." The response further noted that efforts, like SETI, the Kepler space telescope and the NASA Mars rover, continue looking for signs of life. The response noted "the odds are pretty high" that there may be life on other planets but "the odds of us making contact with any of them—especially any intelligent ones—are extremely small, given the distances involved." See also Alan F. Alford Ancient astronauts Chariots of the Gods? David Icke James E. McDonald Fermi paradox Giorgio A. Tsoukalos Murry Hope Robert K. G. Temple Zecharia Sitchin Psychosocial hypothesis Interdimensional hypothesis Space animal hypothesis Time-traveler hypothesis Cryptoterrestrial hypothesis
Extraterrestrial UFO hypothesis
[ "Technology" ]
5,912
[ "UFO conspiracy theories", "Science and technology-related conspiracy theories" ]
558,847
https://en.wikipedia.org/wiki/Alien%20abduction
Alien abduction (also called abduction phenomenon, alien abduction syndrome, or UFO abduction) refers to the phenomenon of people reporting what they believe to be the real experience of being kidnapped by extraterrestrial beings and subjected to physical and psychological experimentation. People claiming to have been abducted are usually called "abductees" or "experiencers". Most scientists and mental health professionals explain these experiences by factors such as suggestibility (e.g. false memory syndrome), sleep paralysis, deception, and psychopathology. Skeptic Robert Sheaffer sees similarity between some of the aliens described by abductees and those depicted in science fiction films, in particular Invaders From Mars (1953). Typical claims involve forced medical examinations that emphasize the subject's reproductive systems. Abductees sometimes claim to have been warned against environmental abuses and the dangers of nuclear weapons, or to have engaged in interspecies breeding. The contents of the abduction narrative often seem to vary with the home culture of the alleged abductee. Unidentified flying objects (UFOs), alien abduction, and mind control plots can also be part of radical political apocalyptic and millenarian narratives. Reports of the abduction phenomenon have been made all around the world, but are most common in English-speaking countries, especially the United States. The first alleged alien abduction claim to be widely publicized was the Betty and Barney Hill abduction in 1961. UFO abduction claims have declined since their initial surge in the mid-1970s, and alien abduction narratives have found less popularity in mainstream media. Skeptic Michael Shermer proposed that the ubiquity of camera phones increases the burden of evidence for such claims and may be a cause for their decline. Overview Mainstream scientists reject claims that the phenomenon literally occurs as reported. According to John E. Mack, a psychiatrist who gave credence to such claims, most of those who report alien abductions and believe their experiences were real are sane, common people, and psychopathology was associated only with some cases. Mack reported that some abduction reports are quite detailed, and an entire subculture has developed around the subject, with support groups and a detailed mythos explaining the reasons for abductions: The various aliens (Greys, Reptilians, "Nordics" and so on) are said to have specific roles, origins, and motivations. Abduction claimants do not always attempt to explain the phenomenon, but some take independent research interest in it themselves and explain the lack of greater awareness of alien abduction as the result of either extraterrestrial or governmental interest in cover-up. History Paleo-abductions While the term "alien abduction" did not achieve widespread attention until the 1960s, modern speculation about some older stories interpreted them as possible cases. UFO researcher Jerome Clark dubbed them "paleo-abductions". In the November 27, 1896, edition of the Stockton, California, Daily Mail, Colonel H. G. Shaw claimed he and a friend were harassed by three tall, slender humanoids whose bodies were covered with a fine, downy hair who tried to kidnap the pair. In the October 1953 issue of Man to Man Magazine, an article by Leroy Thorpe titled "Are the Flying Saucers Kidnapping Humans?" 
asks the question "Are an unlucky few of us, and perhaps not so few at that, being captured with the same ease as we would net butterflies, perhaps for zoological specimens, perhaps for vivisection or some other horrible death designed to reveal to our interplanetary invaders what makes us tick?" Rogerson writes that the 1955 publication of Harold T. Wilkins's Flying Saucers Uncensored declared that Karl Hunrath and Wilbur Wilkinson, who had claimed they were contacted by aliens, had disappeared under mysterious circumstances; Wilkins reported speculation that the duo were the victims of "alleged abduction by flying saucers". Two landmark cases An early alien abduction claim occurred in the mid-1950s with the Brazilian Antônio Vilas-Boas case, which did not receive much attention until several years later. Widespread publicity was generated by the Betty and Barney Hill abduction case of 1961, culminating in a made-for-television film broadcast in 1975 (starring James Earl Jones and Estelle Parsons) dramatizing the events. The Hill incident was probably the prototypical abduction case and was perhaps the first in which the claimant described beings that later became widely known as the Greys and in which the beings were said to explicitly identify an extraterrestrial origin. Though these two cases are sometimes viewed as the earliest abductions, skeptic Peter Rogerson notes that these cases established a template that later abductees and researchers would refine but rarely deviate from. Additionally, Rogerson notes purported abductions were cited contemporaneously at least as early as 1954, and that "the growth of the abduction stories is a far more tangled affair than the 'entirely unpredisposed' official history would have us believe." (The phrase "entirely predisposed" appeared in folklorist Thomas E. Bullard's study of alien abduction; he argued that alien abductions as reported in the 1970s and 1980s had little precedent in folklore or fiction.) Later developments R. Leo Sprinkle, a University of Wyoming psychologist, became interested in the abduction phenomenon in the 1960s. Sprinkle became convinced of the phenomenon's actuality and was perhaps the first to suggest a link between abductions and cattle mutilation. Eventually, Sprinkle came to believe that he had been abducted by aliens in his youth; he was forced from his job in 1989. Budd Hopkins had been interested in UFOs for some years. In the 1970s, he became interested in abduction reports and began using hypnosis to extract more details of dimly remembered events. Hopkins soon became a figurehead of the growing abductee subculture. The 1980s brought a major degree of mainstream attention to the subject. Works by Hopkins, novelist Whitley Strieber, historian David M. Jacobs and psychiatrist John E. Mack presented alien abduction as a plausible experience. Also of note in the 1980s was the publication of folklorist Thomas E. Bullard's comparative analysis of nearly 300 alleged abductees. With Hopkins, Jacobs and Mack, accounts of alien abduction became a prominent aspect of ufology. There had been earlier abduction reports (the Hills being the best known), but they were believed to be few and saw rather little attention from ufology (and even less attention from mainstream professionals or academics). Jacobs and Hopkins argued that alien abduction was far more common than earlier suspected; they estimate that tens of thousands (or more) North Americans had been taken by unexplained beings. 
Furthermore, Jacobs and Hopkins argued that there was an elaborate process underway in which aliens were attempting to create human–alien hybrids, the most advanced stage of which in the "human hybridization program" are known as hubrids, though the motives for this effort were unknown. There had been anecdotal reports of phantom pregnancy related to UFO encounters at least as early as the 1960s, but Budd Hopkins and especially David M. Jacobs were instrumental in popularizing the idea of widespread, systematic interbreeding efforts on the part of the alien intruders. The descriptions of alien encounters as researched and presented by Hopkins, Jacobs and Mack were similar, with slight differences in each researcher's emphasis; the process of selective citation of abductee interviews that supported these variations was sometimes criticized – though abductees who presented their own accounts directly, such as Whitley Strieber, fared no better. The involvement of Jacobs and Mack marked something of a sea change in the abduction studies. According to Boston Globe writer Linda Rodriguez McRobbie, "Abduction and contact stories aren’t quite the fodder for daytime talk show and New York Times bestsellers they were a few decades ago...Today, credulous stories of alien visitation rarely crack the mainstream media, however much they thrive on niche TV channels and Internet forums." Skeptic Michael Shermer noted that "the camera-phone age is increasing the burden of evidence on experiencers". John E. Mack Harvard psychiatry professor John E. Mack believed in the credibility of alien abduction claims. Niall Boyce writing in The Lancet called him "a well-meaning man uncritically elaborating on tales of alien abduction, and potentially both cementing and constructing false memories". Boyce observed that Mack's work in hypnotic regression of claimants helped spread the Grey aliens meme into the culture. Mack was a well known, highly esteemed psychiatrist, author of over 150 scientific articles and winner of the Pulitzer Prize for his biography of T. E. Lawrence. Mack became interested in claims of alien abduction in the late 1980s, interviewing over 800 people and eventually writing two books on the subject. Due to Mack's belief and subsequent promotion of the claims of those he interviewed, his professional reputation suffered, prompting Harvard to review his position in 1994. He retained tenure, but "was not taken seriously by his colleagues anymore”. Abductees The precise number of alleged abductees is uncertain. One of the earliest studies of abductions found 1,700 claimants, while contested surveys argued that 5–6 percent of the general population allege to have been abducted. Demographics Although abduction and other UFO-related reports are usually made by adults, sometimes young children report similar experiences. These child-reports often feature very specific details in common with reports of abduction made by adults, including the circumstances, narrative, entities and aftermaths of the alleged occurrences. Often, these young abductees have family members who have reported having abduction experiences. Family involvement in the military, or a residence near a military base is also common among child abduction claimants. Mental health As a category, some studies show that abductees have psychological characteristics that render their testimony suspect, while others show that "as a group, abduction experients are not different from the general population in term of psychopathology prevalence". 
Elizabeth Slater conducted a blind study of nine abduction claimants and found them to be prone to "mildly paranoid thinking", nightmares and having a weak sexual identity, while Richard McNally of Harvard Medical School concluded in a similar study of 10 abductees that "none of them was suffering from any sort of psychiatric illness." Political conspiracy theories Political scientist Michael Barkun, without taking a position on whether UFOs and aliens are real, highlighted links between radical politics and conspiracy theories involving UFOs, alien visitation, environmental pollution, hidden groups, government and world takeover. He observed the rise of a form of eclectic and apocalyptic millenarianism which he termed "improvisational millennialism". UFO and abduction stories can often be part of stigmatized or suppressed knowledge narratives, where alleged orthodoxy is claimed to be maintained in error for nefarious purposes and to keep society in ignorance. UFO and alien-related conspiracy theories emerged in far-right politics from the 1980s onwards. According to Barkun, in popular culture, TV shows like The X-Files and its motion picture not only included aliens as part of coverup conspiracies, with militias and black helicopters, but also featured demonization of FEMA, a common target of conspiracy theorists and millenarian scenarios. One conspiracy theory alleges that FEMA plans to suddenly incarcerate "patriots" in concentration camps during a disaster. Political scientist Jodi Dean noted that the stigma attached to alien abduction stories makes them a seductive way to dismiss "consensus reality" in favor of deviant alternative realities. Self-described abduction victims often join self-help communities of victims and may resort to questionable regression therapy, similarly to other self-reported victims of child sexual abuse or satanic ritual abuse. Some espouse conspiracy theories of sophisticated technological mind control, including the use of implants, to force them to serve an alleged New World Order, or for the purposes of the antichrist, considering it important to warn the world of such imminent danger. Abduction narrative Various researchers have noted common points in report narratives. According to CUFOS's definition of abductee, the person must have been taken against their will by apparent non-human beings, taken to a special place perceived as extraterrestrial or to be a spaceship. They then must experience being subjected to an examination or to engage in some form of communication with the beings (or both). Communication may be perceived as telepathic rather than verbal. The memory of the experience may be conscious or "recovered" through means like hypnosis. Although different cases vary in detail (sometimes significantly), some UFO researchers, such as folklorist Thomas E. Bullard, argue that there is a broad, fairly consistent sequence and description of events that make up the typical "close encounter of the fourth kind" (a popular but unofficial designation building on J. Allen Hynek's classifications). Though the features outlined below are often reported, there is some disagreement as to exactly how often they actually occur. Bullard argues that most abduction accounts feature the following events. They generally follow the sequence noted below, though not all abductions feature all the events: Capture. The abductee is somehow rendered incapable of resisting, and taken from terrestrial surroundings to an apparent alien spacecraft. Examination and Procedures.
Invasive physiological and psychological procedures, and on occasion simulated behavioral situations, training & testing, or sexual liaisons. Conference. The abductors communicate with the abductee or direct them to interact with specific individuals for some purpose, typically telepathically but sometimes using the abductee's native language. Tour. The abductees are given a tour of their captors' vessel, though this is disputed by some researchers who consider this definition a confabulation of intent when just apparently being taken around to multiple places inside the ship. Loss of Time. Abductees often rapidly forget the majority of their experience, either as a result of fear, medical intervention, or both. Return. The abductees are returned to earth, occasionally in a different location from where they were allegedly taken or with new injuries or disheveled clothing. Theophany. Coinciding with their immediate return, abductees may have a profound sense of love, a "high" similar to those induced by certain drugs, or a "mystical experience", accompanied by a feeling of oneness with God, the universe, or their abductors. Whether this is the result of a metaphysical change, Stockholm syndrome, or prior medical tampering is often not scrutinized by the abductees at the time. Aftermath. The abductee must cope with the psychological, physical, and social effects of the experience. When describing the "abduction scenario", David M. Jacobs says: The entire abduction event is precisely orchestrated. All the procedures are predetermined. There is no standing around and deciding what to do next. The beings are task-oriented and there is no indication whatsoever that we have been able to find of any aspect of their lives outside of performing the abduction procedures. Capture Abduction claimants report unusual feelings preceding the onset of an abduction experience. These feelings manifest as a compulsive desire to be at a certain place at a certain time or as expectations that something "familiar yet unknown" will soon occur. Abductees also report feeling severe, undirected anxiety at this point even though nothing unusual has actually occurred yet. This period of foreboding can last for up to several days before the abduction actually takes place or be completely absent. Eventually, the experiencer will undergo an apparent "shift" into an altered state of consciousness. British abduction researchers have called this change in consciousness "the Oz Factor". External sounds cease to have any significance to the experiencer and fall out of perception. They report feeling introspective and unusually calm. This stage marks a transition from normal activity to a state of "limited self-willed mobility". As consciousness shifts one or more lights are alleged to appear, occasionally accompanied by a strange mist. The source and nature of the lights differ by report; sometimes the light emanates from a source outside the house (presumably the abductors' UFO), sometimes the lights are in the bedroom with the experiencer and transform into alien figures. As the alleged abduction proceeds, claimants say they will walk or be levitated into an alien craft, in the latter case often through solid objects such as walls, ceilings or a closed window. Alternatively, they may experience rising through a tunnel or along a beam of light, with or without the abductors accompanying them, into the awaiting craft. 
Examination The examination phase of the so-called "abduction narrative" is characterized by the performance of medical procedures and examinations by apparently alien beings against or irrespective of the will of the experiencer. Such procedures often focus on sex and reproductive biology. However, the literature holds reports of a wide variety of procedures allegedly performed by the beings. The entity that appears to be in charge of the operation is often taller than the others involved and is sometimes described as appearing to be of a different species. Miller notes different areas of emphasis between human medicine and what is reported as being practiced by the abductors. This could result from a difference in the purpose of the examination – routine diagnosis or treatment or both versus scientific examination of an unfamiliar species –, or it could be due to a different level of technology that renders certain kinds of manual procedures unnecessary. The abductors' areas of interest appear to be the cranium, nervous system, skin, reproductive system, and to a lesser degree, the joints. Systems given less attention than a human doctor would – or omitted entirely – include the cardiovascular system, the respiratory system below the pharynx and the lymphatic system. The abductors also appear to ignore the upper region of the abdomen in favor of the lower one. The abductors do not appear to wear gloves during the "examination". Other constants of terrestrial medicine like pills and tablets are missing from abduction narratives, although sometimes abductees are asked to drink liquids. Injections also seem to be rare and IVs are almost completely absent. Miller says he has never heard an abductee claim to have a tongue depressor used on them. Subsequent procedures After the so-called medical exam, the alleged abductees often report other procedures being performed with the entities. Common among these post-examination procedures are what abduction researchers refer to as imaging, envisioning, staging, and testing. "Imaging" procedures consist of an abductee being made to view screens displaying images and scenes that appear to be specially chosen with the intent to provoke certain emotional responses in the abductee. "Envisioning" is a similar procedure, with the primary difference being that the images being viewed, rather than being on a screen, actually seem to be projected into the experiencer's mind. "Staging" procedures have the abductee playing a more active role, according to reports containing this element. It shares vivid hallucination-like mental visualization with the envisioning procedures, but during staging the abductee interacts with the illusionary scenario like a role player or an actor. "Testing" marks something of a departure from the above procedures in that it lacks the emotional analysis feature. During testing the experiencer is placed in front of a complicated electronic device and is instructed to operate it. The experiencer is often confused, saying that they do not know how to operate it. However, when they actually set about performing the task, the abductee will find that they do, in fact, know how to operate the machine. Child presentation Abductees of all ages and genders sometimes report being subjected to a "child presentation". As its name implies, the child presentation involves the abduction claimant being shown a "child". Often the children appear to be neither human, nor the same species as the abductors. 
Instead, the child will almost always share characteristics of both species. These children are labeled by experiencers as hybrids between humans and their abductors, usually Greys. Unlike Budd Hopkins and David Jacobs, folklorist Thomas E. Bullard could not identify a child presentation phase in the abduction narrative, even after undertaking a study of 300 abduction reports. Bullard says that the child presentation "seems to be an innovation in the story" and that "no clear antecedents" to descriptions of the child presentation phase exist before its popularization by Hopkins and Jacobs. Less common elements Bullard also studied the 300 reports of alien abduction in an attempt to observe the less prominent aspects of the claims. He notes the emergence of four general categories of events that recur regularly, although not as frequently as stereotypical happenings like the medical examination. These four types of events are: The conference The tour The journey Theophany Chronologically within abduction reports, these rarer episodes tend to happen in the order listed, between the medical examination and the return. After allegedly displaying cold callous disregard towards the abduction experiencers, sometimes the entities will change drastically in behavior once the initial medical exam is completed. They become more relaxed and hospitable towards their captive and lead him or her away from the site of the examination. The entities then hold a conference with the experiencer, wherein they discuss things relevant to the abduction phenomenon. Bullard notes five general categories of discussion that occur during the conference "phase" of reported abduction narratives: An interrogation session, explanatory segment, task assignment, warnings, and prophecies. Tours of the abductors' craft are a rare but recurring feature of the abduction narrative. The tour seems to be given by the alleged abductors as a courtesy in response to the harshness and physical rigors of the forced medical examination. Sometimes the abductees report traveling on a "journey" to orbit around Earth or to what appear to be other planets. Some abductees find that the experience is terrifying, particularly if the aliens are of a more fearsome species, or if the abductee was subjected to extensive probing and medical testing. Return Eventually, the abductors will return the abductees, usually to exactly the same location and circumstances they were in before being taken. Usually, explicit memories of the abduction experience will not be present, and the abductee will only realize they have experienced "missing time" upon checking a timepiece. Sometimes the alleged abductors appear to make mistakes when returning their captives. UFO researcher Budd Hopkins has joked about "the cosmic application of Murphy's Law" in response to this observation. Hopkins has estimated that these "errors" accompany 4–5 percent of abduction reports. One type of common apparent mistake made by the abductors is failing to return the experiencer to the same spot that they were taken from initially. This can be as simple as a different room in the same house, or abductees can even find themselves outside and all the doors of the house locked from the inside. Another common error is putting the abductee's clothes (e.g. pajamas) on backwards. Realization event Physician and abduction researcher John G. Miller sees significance in the reason a person would come to see themselves as being a victim of the abduction phenomenon. 
He terms the insight or development leading to this shift in identity from non-abductee to abductee the "realization event". The realization event is often a single, memorable experience, but Miller reports that not all abductees experience it as a distinct episode. Either way, the realization event can be thought of as the "clinical horizon" of the abduction experience. Trauma and recovery Most people alleging alien abductions report invasive examinations of their bodies and some ascribe psychological trauma to their experiences. "Post-abduction syndrome" is a term used by abductees to describe the effects of abduction, though it is not recognized by any professional treatment organizations. People who have a false memory which makes them believe that they have been abducted by aliens develop symptoms similar to post-traumatic stress disorder. People who believe they have been abducted by aliens usually have previous New Age beliefs, a vivid fantasy life, and suffer from sleep paralysis, according to a 2003 study by Harvard University. Support groups Support groups for people who believed they were abducted began appearing in the mid-1980s. These groups appear throughout the United States, Canada and Australia. Hypnosis Many alien abductees recall much of their alleged abduction(s) through hypnosis. Due to the extensive use of hypnosis, and other methods which they view as being manipulative, skeptics explain the abduction narratives as false memories and suggestions. Criticism Alleged abductees seek out hypnotherapists to try to resolve issues such as missing time or unexplained physical symptoms such as muscle pain or headaches. This usually involves two phases, an information gathering stage, in which the hypnotherapist asks about unexplained illnesses or unusual phenomena during the patients' lives (caused by or distortions of the alleged abduction), followed by hypnosis and guided imagery to facilitate recall. The information-gathering enhances the likelihood that the events discussed will be incorporated into later abduction "memories". Seven steps are hypothesized to lead to the development of false memories: A person is predisposed to accept the idea that certain puzzling or inexplicable experiences might be telltale signs of UFO abduction. The person seeks out a therapist, whom he or she views as an authority and who is, at the very least, receptive to this explanation and has some prior familiarity with UFO abduction reports. Alternatively, the therapist frames the puzzling experiences in terms of an abduction narrative. Alternative explanations of the experiences are not explored. There is increasing commitment to the abduction explanation and increasing anxiety reduction associated with ambiguity reduction. The therapist legitimates or ratifies the abductee's experience, which constitutes additional positive reinforcement. The client adopts the role of the "victim" or abductee, which becomes integrated into the psychotherapy and the client's view of self. Supportive arguments Harvard psychiatrist John E. Mack counters this argument, noting "It might be useful to restate that a large proportion of the material relating to abductions is recalled without the use of an altered state of consciousness, and that many abduction reporters appear to relive powerful experiences after only the most minimal relaxation exercise, hardly justifying the word hypnosis at all. 
The relaxation exercise is useful to relieve the experiencer's need to attend to the social demands and other stimuli of face-to-face conversation, and to relieve the energies involved in repressing memories and emotion." Perspectives There have been a variety of explanations offered for abduction phenomena, ranging from sharply skeptical appraisals, to uncritical acceptance of all abductee claims, to the demonological, to everything in between. Some have elected not to attempt explanations, noting instead similarities to other phenomena, or simply documenting the development of the alien abduction phenomenon. Others are intrigued by the entire phenomenon but hesitate to draw any definitive conclusions. Psychiatrist John E. Mack concluded: "The furthest you can go at this point is to say there's an authentic mystery here. And that is, I think, as far as anyone ought to go" (emphasis as in original). Mack was unconvinced by piecemeal counterclaims, however, and countered that skeptical explanations naturally need to "take into account the entire range of phenomena associated with abduction experiences", up to and including "missing time", directly contemporaneous UFO sightings, and the occurrence in small children. Putting aside the question of whether abduction reports are literally and objectively "real", literature professor Terry Matheson argues that their popularity and their intriguing appeal are easily understood. Tales of abduction "are intrinsically absorbing; it is hard to imagine a more vivid description of human powerlessness". After experiencing the frisson of delightful terror one may feel from reading ghost stories or watching horror movies, Matheson notes that people "can return to the safe world of their homes, secure in the knowledge that the phenomenon in question cannot follow. But as the abduction myth has stated almost from the outset, there is no avoiding alien abductors". Matheson writes that when compared to the earlier contactee reports, abduction accounts are distinguished by their "relative sophistication and subtlety, which enabled them to enjoy an immediately more favorable reception from the public". Some writers have said abduction experiences bear similarities to pre-20th century accounts of demonic manifestations, noting as many as a dozen similarities. One notable example is the Orthodox monk Fr. Seraphim Rose, who devotes a whole chapter in his book Orthodoxy and the Religion of the Future to the phenomena of UFOs and abductions, which, he concludes, are manifestations of the demonic. Some studies suggest that UFO/alien encounter experiences could be related to dissociative REM sleep states, like lucid dreams, sleep paralysis, and out-of-body experiences. In a 2021 study, published in International Journal of Dream Research, researchers focused on the hypothesis that if some alien abduction stories are products of REM sleep, then they could be deliberately emulated by lucid dreaming practitioners. To check the hypothesis, they instructed a group of volunteers to try to emulate alien encounters via lucid dreams. Of the volunteers, 114 (75%) were able to experience alien encounters. Of the successful cases, 20% were close to reality in terms of the absence of paradoxical dreamlike events, and only within this 20% were sleep paralysis and fear observed, features common in 'real' stories. In theory, random people might spontaneously encounter the same situation during REM sleep and confuse the events with reality.
Testimonials Abduction researcher Brian Thompson claims that a nurse reported to him that in 1957 in Cincinnati she encountered a praying mantis-like entity two days after a V-shaped UFO sighting. This mantis-like creature is reminiscent of the insectoid-type entity reported in some abduction accounts. He related this report to fellow researcher Leonard Stringfield. Stringfield told him of two cases he had in his files where separate witnesses reported identical circumstances in the same place and year. While some corroborated accounts seem to support the literal reality of the abduction experience, others seem to support a psychological explanation for the phenomenon's origins. Jenny Randles and Keith Basterfield both noted at the 1992 MIT alien abduction conference that of the five cases they knew of where an abduction researcher was present at the onset of an abduction experience, the experiencer "didn't physically go anywhere". Brazilian researcher Gilda Moura reported on a similar case, the Sueli case, from her home country. When psychologist and UFO researcher Don Donderi said that these cases were "evidence of psychological processes" that did not "have anything to do with a physical alien abduction", Moura replied: "If the Sueli case is not an abduction, I don't know what is an abduction any more". Gilda Moura noted that in the Brazilian Sueli case, UFOs were observed during the abduction. Later, she claims, the experiencer had eye burns, saw lights, and there seemed to be residual poltergeist activity. Attempts at confirmation It has been argued that if actual "flesh and blood" aliens are abducting humans, there should be some hard evidence that this is occurring. Proponents of the physical reality of the abduction experience have suggested ways that could conceivably confirm abduction reports. One procedure reported as occurring during the alleged examination phase of the experience is the insertion of a long needle-like contraption into a woman's navel. Some have speculated that this could be a form of laparoscopy. If this is true, after the abduction there should be free gas in the woman's abdomen, which could be seen on an X-ray image. The presence of free gas would be extremely abnormal and would help substantiate the claim of some sort of procedure being done to her. Notable abduction claims 1956: Elizabeth Klarer (South Africa) 1957: Antônio Vilas Boas (Brazil) 1961: Betty and Barney Hill (US) 1964: Lonnie Zamora incident 1973: Pascagoula Abduction (US) 1975: Travis Walton (US) 1978: Valentich disappearance (Australia) 1979: Robert Taylor incident (Scotland) 1970s–1980s: Whitley Strieber (US) 1994: Meng Zhaoguo incident (China) Notable figures Raymond E. Fowler Steven M. Greer Budd Hopkins Linda Moulton Howe David Icke David M. Jacobs John E. Mack Riley Martin Whitley Strieber See also Alien abduction claimants Alien abduction entities Alien abduction insurance Alien invasion Alien language Anterograde amnesia Astral projection Confabulation Delirium Grey alien Hallucination Hypnotherapy Incubus List of reported UFO sightings Mare (folklore) Recovered-memory therapy Sexuality in Christian demonology Sleep paralysis Temporal lobe epilepsy The Myth of Repressed Memory Witchcraft – similarities include the involvement of sexual contact with non-human creatures in historical accusations of witchcraft. Footnotes Bibliography Jacobs, David M. (Ph.D.)
(2015), Walking Among Us: The Alien Plan to Control Humanity, Disinformation Books, an imprint of Red Wheel/Weiser, LLC; The Disinformation Company Ltd. C. J. Stevens, The Supernatural Side of Maine, 2002, about alien abductions and people from Maine who faced the supernatural. External links R. Leo Sprinkle papers at the University of Wyoming – American Heritage Center Ballester-Olmos, V.J. and Heiden, Richard W. (Eds.), The Reliability of UFO Witness Testimony. UPIAR, Turin, Italy (2023). ISBN 9791281441002.
Alien abduction
[ "Technology" ]
6,963
[ "UFO conspiracy theories", "Science and technology-related conspiracy theories" ]
558,959
https://en.wikipedia.org/wiki/Railway%20electrification
Railway electrification is the use of electric power for the propulsion of rail transport. Electric railways use either electric locomotives (hauling passengers or freight in separate cars), electric multiple units (passenger cars with their own motors) or both. Electricity is typically generated in large and relatively efficient generating stations, transmitted to the railway network and distributed to the trains. Some electric railways have their own dedicated generating stations and transmission lines, but most purchase power from an electric utility. The railway usually provides its own distribution lines, switches, and transformers. Power is supplied to moving trains with a (nearly) continuous conductor running along the track that usually takes one of two forms: an overhead line, suspended from poles or towers along the track or from structure or tunnel ceilings and contacted by a pantograph, or a third rail mounted at track level and contacted by a sliding "pickup shoe". Both overhead wire and third-rail systems usually use the running rails as the return conductor, but some systems use a separate fourth rail for this purpose. In comparison to the principal alternative, the diesel engine, electric railways offer substantially better energy efficiency, lower emissions, and lower operating costs. Electric locomotives are also usually quieter, more powerful, and more responsive and reliable than diesel. They have no local emissions, an important advantage in tunnels and urban areas. Some electric traction systems provide regenerative braking that turns the train's kinetic energy back into electricity and returns it to the supply system to be used by other trains or the general utility grid. While diesel locomotives burn petroleum products, electricity can be generated from diverse sources, including renewable energy. Historically, concerns of resource independence have played a role in the decision to electrify railway lines. The landlocked Swiss Confederation, which almost completely lacks oil or coal deposits but has plentiful hydropower, electrified its network in part in reaction to supply issues during both World Wars. Disadvantages of electric traction include: high capital costs that may be uneconomic on lightly trafficked routes, a relative lack of flexibility (since electric trains need third rails or overhead wires), and a vulnerability to power interruptions. Electro-diesel locomotives and electro-diesel multiple units mitigate these problems somewhat as they are capable of running on diesel power during an outage or on non-electrified routes. Different regions may use different supply voltages and frequencies, complicating through service and requiring greater complexity of locomotive power. Double-stack rail transport historically raised concerns about clearances under overhead lines, but this is no longer universally true, with both Indian Railways and China Railway regularly operating electric double-stack cargo trains under overhead lines. Railway electrification has increased steadily in recent decades, and as of 2022, electrified tracks account for nearly one-third of total tracks globally. History The practice of powering trains and locomotives with electricity instead of diesel or steam power dates back to the late 19th century, when the first electric tramways were introduced in cities like Berlin, London, and New York City.
In 1881, the first permanent railway electrification in the world was the Gross-Lichterfelde Tramway in Berlin, Germany. Overhead line electrification was first applied successfully by Frank Sprague in Richmond, Virginia in 1887-1888, and led to the electrification of hundreds of additional street railway systems by the early 1890s. The first electrification of a mainline railway was the Baltimore and Ohio Railroad's Baltimore Belt Line in the United States in 1895–96. The early electrification of railways used direct current (DC) power systems, which were limited in terms of the distance they could transmit power. However, in the early 20th century, alternating current (AC) power systems were developed, which allowed for more efficient power transmission over longer distances. In the 1920s and 1930s, many countries worldwide began to electrify their railways. In Europe, Switzerland, Sweden, France, and Italy were among the early adopters of railway electrification. In the United States, the New York, New Haven and Hartford Railroad was one of the first major railways to be electrified. Railway electrification continued to expand throughout the 20th century, with technological improvements and the development of high-speed trains and commuters. Today, many countries have extensive electrified railway networks with of standard lines in the world, including China, India, Japan, France, Germany, and the United Kingdom. Electrification is seen as a more sustainable and environmentally friendly alternative to diesel or steam power and is an important part of many countries' transportation infrastructure. Classification Electrification systems are classified by three main parameters: Voltage Current Direct current (DC) Alternating current (AC) Frequency Contact system Overhead lines (catenary) Third rail Fourth rail Ground-level power supply Selection of an electrification system is based on economics of energy supply, maintenance, and capital cost compared to the revenue obtained for freight and passenger traffic. Different systems are used for urban and intercity areas; some electric locomotives can switch to different supply voltages to allow flexibility in operation. Standardised voltages Six of the most commonly used voltages have been selected for European and international standardisation. Some of these are independent of the contact system used, so that, for example, 750VDC may be used with either third rail or overhead lines. There are many other voltage systems used for railway electrification systems around the world, and the list of railway electrification systems covers both standard voltage and non-standard voltage systems. The permissible range of voltages allowed for the standardised voltages is as stated in standards BSEN50163 and IEC60850. These take into account the number of trains drawing current and their distance from the substation. Direct current Overhead lines 1,500V DC is used in Japan, Indonesia, Hong Kong (parts), Ireland, Australia (parts), France (also using , the Netherlands, New Zealand (Wellington), Singapore (on the North East MRT line), the United States (Chicago area on the Metra Electric district and the South Shore Line interurban line and Link light rail in Seattle, Washington). In Slovakia, there are two narrow-gauge lines in the High Tatras (one a cog railway). In the Netherlands it is used on the main system, alongside 25kV on the HSL-Zuid and Betuwelijn, and 3,000V south of Maastricht. 
In Portugal, it is used in the Cascais Line and in Denmark on the suburban S-train system (1,650 V DC). In the United Kingdom, 1,500 V DC was used in 1954 for the Woodhead trans-Pennine route (now closed); the system used regenerative braking, allowing for transfer of energy between climbing and descending trains on the steep approaches to the tunnel. The system was also used for suburban electrification in East London and Manchester, now converted to 25 kV AC. It is now only used for the Tyne and Wear Metro. In India, 1,500 V DC was the first electrification system, launched in 1925 in the Mumbai area. Between 2012 and 2016, the electrification was converted to 25 kV 50 Hz, which is the countrywide system. 3 kV DC is used in Belgium, Italy, Spain, Poland, Slovakia, Slovenia, South Africa, Chile, the northern portion of the Czech Republic, the former republics of the Soviet Union, and in the Netherlands on a few kilometers between Maastricht and Belgium. It was formerly used by the Milwaukee Road from Harlowton, Montana, to Seattle, across the Continental Divide and including extensive branch and loop lines in Montana, and by the Delaware, Lackawanna and Western Railroad (now New Jersey Transit, converted to 25 kV AC) in the United States, and the Kolkata suburban railway (Bardhaman Main Line) in India, before it was converted to 25 kV 50 Hz. DC voltages between 600 V and 750 V are used by most tramways and trolleybus networks, as well as some metro systems, as the traction motors accept this voltage without the weight of an on-board transformer. Medium-voltage DC Increasing availability of high-voltage semiconductors may allow the use of higher and more efficient DC voltages that heretofore have only been practical with AC. The use of medium-voltage DC electrification (MVDC) would solve some of the issues associated with standard-frequency AC electrification systems, especially possible supply grid load imbalance and the phase separation between the electrified sections powered from different phases, whereas high voltage would make the transmission more efficient. UIC conducted a case study for the conversion of the Bordeaux-Hendaye railway line (France), currently electrified at 1.5 kV DC, to 9 kV DC and found that the conversion would allow the use of less bulky overhead wires (saving €20 million per 100 route-km) and lower the losses (saving 2 GWh per year per 100 route-km; equalling about €150,000 p.a.). The line chosen is one of the lines, totalling 6,000 km, that are in need of renewal. In the 1960s the Soviets experimented with boosting the overhead voltage from 3 to 6 kV. DC rolling stock was equipped with ignitron-based converters to lower the supply voltage to 3 kV. The converters turned out to be unreliable and the experiment was curtailed. In 1970 the Ural Electromechanical Institute of Railway Engineers carried out calculations for railway electrification at , showing that the equivalent loss levels for a system could be achieved with a DC voltage between 11 and 16 kV. In the 1980s and 1990s a higher-voltage DC system was being tested on the October Railway near Leningrad (now Saint Petersburg). The experiments ended in 1995 due to the end of funding. Third rail Most electrification systems use overhead wires, but third rail is an option up to 1,500 V. Third rail systems almost exclusively use DC distribution. The use of AC is usually not feasible due to the dimensions of a third rail being physically very large compared with the skin depth that AC penetrates to in a steel rail.
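To put rough numbers on that skin-depth limitation, here is a minimal Python sketch. The material constants (resistivity and relative permeability of rail steel) and the rail dimensions are assumed illustrative values, not figures from the text:

```python
import math

# Assumed, illustrative properties of rail steel (not taken from the text):
rho = 2.0e-7               # resistivity, ohm-metres (roughly 10x that of copper)
mu_r = 300.0               # relative magnetic permeability of steel (order of magnitude)
mu_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m
area = 65e-4               # assumed rail cross-section, m^2 (about 65 cm^2)
perimeter = 0.6            # assumed rail surface perimeter, m

def skin_depth(freq_hz: float) -> float:
    """Classical skin depth: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(rho / (math.pi * freq_hz * mu_0 * mu_r))

delta_50hz = skin_depth(50.0)        # comes out near 2 mm for these assumptions
ac_area = perimeter * delta_50hz     # thin surface "shell" actually carrying 50 Hz current

print(f"skin depth at 50 Hz: {delta_50hz * 1000:.1f} mm")
print(f"DC uses the full {area * 1e4:.0f} cm^2; 50 Hz AC uses only ~{ac_area * 1e4:.0f} cm^2")
print(f"AC resistance per unit length is roughly {area / ac_area:.0f}x the DC value")
```

Even with generous assumptions, the usable AC cross-section collapses to a thin surface layer of the rail, which is why third-rail schemes stay with DC.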
This effect makes the resistance per unit length unacceptably high compared with the use of DC. Third rail is more compact than overhead wires and can be used in smaller-diameter tunnels, an important factor for subway systems. Fourth rail The London Underground in England is one of few networks that uses a four-rail system. The additional rail carries the electrical return that, on third-rail and overhead networks, is provided by the running rails. On the London Underground, a top-contact third rail is beside the track, energized at , and a top-contact fourth rail is located centrally between the running rails at , which combine to provide a traction voltage of . The same system was used for Milan's earliest underground line, Milan Metro's line 1, whose more recent lines use an overhead catenary or a third rail. The key advantage of the four-rail system is that neither running rail carries any current. This scheme was introduced because of the problems of return currents, intended to be carried by the earthed (grounded) running rail, flowing through the iron tunnel linings instead. This can cause electrolytic damage and even arcing if the tunnel segments are not electrically bonded together. The problem was exacerbated because the return current also had a tendency to flow through nearby iron pipes forming the water and gas mains. Some of these, particularly Victorian mains that predated London's underground railways, were not constructed to carry currents and had no adequate electrical bonding between pipe segments. The four-rail system solves the problem. Although the supply has an artificially created earth point, this connection is derived by using resistors which ensures that stray earth currents are kept to manageable levels. Power-only rails can be mounted on strongly insulating ceramic chairs to minimise current leak, but this is not possible for running rails, which have to be seated on stronger metal chairs to carry the weight of trains. However, elastomeric rubber pads placed between the rails and chairs can now solve part of the problem by insulating the running rails from the current return should there be a leakage through the running rails. The Expo and Millennium Line of the Vancouver SkyTrain use side-contact fourth-rail systems for their supply. Both are located to the side of the train, as the space between the running rails is occupied by an aluminum plate, as part of stator of the linear induction propulsion system used on the Innovia ART system. While part of the SkyTrain network, the Canada Line does not use this system and instead uses more traditional motors attached to the wheels and third-rail electrification. Rubber-tyred systems A few lines of the Paris Métro in France operate on a four-rail power system. The trains move on rubber tyres which roll on a pair of narrow roll ways made of steel and, in some places, of concrete. Since the tyres do not conduct the return current, the two guide bars provided outside the running 'roll ways' become, in a sense, a third and fourth rail which each provide , so at least electrically it is a four-rail system. Each wheel set of a powered bogie carries one traction motor. A side sliding (side running) contact shoe picks up the current from the vertical face of each guide bar. The return of each traction motor, as well as each wagon, is effected by one contact shoe each that slide on top of each one of the running rails. This and all other rubber-tyred metros that have a track between the roll ways operate in the same manner. 
Alternating current Railways and electrical utilities use AC as opposed to DC for the same reason: to use transformers, which require AC, to produce higher voltages. The higher the voltage, the lower the current for the same power (because power is current multiplied by voltage), and power loss is proportional to the current squared. The lower current reduces line loss, thus allowing higher power to be delivered. Alternating current is therefore used with high voltages. Inside the locomotive, a transformer steps the voltage down for use by the traction motors and auxiliary loads. An early advantage of AC is that the power-wasting resistors used in DC locomotives for speed control were not needed in an AC locomotive: multiple taps on the transformer can supply a range of voltages. Separate low-voltage transformer windings supply lighting and the motors driving auxiliary machinery. More recently, the development of very high power semiconductors has caused the classic DC motor to be largely replaced with the three-phase induction motor fed by a variable frequency drive, a special inverter that varies both frequency and voltage to control motor speed. These drives can run equally well on DC or AC of any frequency, and many modern electric locomotives are designed to handle different supply voltages and frequencies to simplify cross-border operation. Low-frequency alternating current Five European countries (Germany, Austria, Switzerland, Norway and Sweden) have standardized on 15 kV 16⅔ Hz (the 50 Hz mains frequency divided by three) single-phase AC. On 16 October 1995, Germany, Austria and Switzerland changed from 16⅔ Hz to 16.7 Hz, which is no longer exactly one-third of the grid frequency. This solved overheating problems with the rotary converters used to generate some of this power from the grid supply. In the US, the New York, New Haven, and Hartford Railroad, the Pennsylvania Railroad and the Philadelphia and Reading Railway adopted 11 kV 25 Hz single-phase AC. Parts of the original electrified network still operate at 25 Hz, with voltage boosted to 12 kV, while others were converted to 12.5 or 25 kV 60 Hz. In the UK, the London, Brighton and South Coast Railway pioneered overhead electrification of its suburban lines in London, London Bridge to Victoria being opened to traffic on 1 December 1909. Victoria to Crystal Palace via Balham and West Norwood opened in May 1911. Peckham Rye to West Norwood opened in June 1912. Further extensions were not made owing to the First World War. Two lines opened in 1925 under the Southern Railway serving Coulsdon North and Sutton railway stations. The lines were electrified at 6.7 kV 25 Hz. It was announced in 1926 that all lines were to be converted to DC third rail and the last overhead-powered electric service ran in September 1929. Standard frequency alternating current AC power is used at 60 Hz in North America (excluding the aforementioned 25 Hz network), western Japan, South Korea and Taiwan; and at 50 Hz in a number of European countries, India, Saudi Arabia, eastern Japan, countries that used to be part of the Soviet Union, and on high-speed lines in much of Western Europe (including countries that still run conventional railways under DC, but not in countries using 16.7 Hz, see above). Most systems like this operate at 25 kV, although 12.5 kV sections exist in the United States, and 20 kV is used on some narrow-gauge lines in Japan.
On "French system" HSLs, the overhead line and a "sleeper" feeder line each carry 25kV in relation to the rails, but in opposite phase so they are at 50kV from each other; autotransformers equalize the tension at regular intervals. Three-phase alternating current Various railway electrification systems in the late nineteenth and twentieth centuries utilised three-phase, rather than single-phase electric power delivery due to ease of design of both power supply and locomotives. These systems could either use standard network frequency and three power cables, or reduced frequency, which allowed for return-phase line to be third rail, rather than an additional overhead wire. Comparisons AC versus DC for mainlines The majority of modern electrification systems take AC energy from a power grid that is delivered to a locomotive, and within the locomotive, transformed and rectified to a lower DC voltage in preparation for use by traction motors. These motors may either be DC motors which directly use the DC or they may be three-phase AC motors which require further conversion of the DC to variable frequency three-phase AC (using power electronics). Thus both systems are faced with the same task: converting and transporting high-voltage AC from the power grid to low-voltage DC in the locomotive. The difference between AC and DC electrification systems lies in where the AC is converted to DC: at the substation or on the train. Energy efficiency and infrastructure costs determine which of these is used on a network, although this is often fixed due to pre-existing electrification systems. Both the transmission and conversion of electric energy involve losses: ohmic losses in wires and power electronics, magnetic field losses in transformers and smoothing reactors (inductors). Power conversion for a DC system takes place mainly in a railway substation where large, heavy, and more efficient hardware can be used as compared to an AC system where conversion takes place aboard the locomotive where space is limited and losses are significantly higher. However, the higher voltages used in many AC electrification systems reduce transmission losses over longer distances, allowing for fewer substations or more powerful locomotives to be used. Also, the energy used to blow air to cool transformers, power electronics (including rectifiers), and other conversion hardware must be accounted for. Standard AC electrification systems use much higher voltages than standard DC systems. One of the advantages of raising the voltage is that, to transmit certain level of power, lower current is necessary (). Lowering the current reduces the ohmic losses and allows for less bulky, lighter overhead line equipment and more spacing between traction substations, while maintaining power capacity of the system. On the other hand, the higher voltage requires larger isolation gaps, requiring some elements of infrastructure to be larger. The standard-frequency AC system may introduce imbalance to the supply grid, requiring careful planning and design (as at each substation power is drawn from two out of three phases). The low-frequency AC system may be powered by separate generation and distribution network or a network of converter substations, adding the expense, also low-frequency transformers, used both at the substations and on the rolling stock, are particularly bulky and heavy. 
The DC system, apart from being limited as to the maximum power that can be transmitted, also can be responsible for electrochemical corrosion due to stray DC currents. Electric versus diesel Energy efficiency Electric trains need not carry the weight of prime movers, transmission and fuel. This is partly offset by the weight of electrical equipment. Regenerative braking returns power to the electrification system so that it may be used elsewhere, by other trains on the same system or returned to the general power grid. This is especially useful in mountainous areas where heavily loaded trains must descend long grades. Central station electricity can often be generated with higher efficiency than a mobile engine/generator. While the efficiencies of power-plant generation and diesel locomotive generation are roughly the same at nominal load, diesel engines become less efficient at partial load, whereas a power plant that needs to generate less power will shut down its least efficient generators, thereby increasing efficiency. The electric train can save energy (as compared to diesel) by regenerative braking and by not needing to consume energy by idling as diesel locomotives do when stopped or coasting. However, electric rolling stock may run cooling blowers when stopped or coasting, thus consuming energy. Large fossil fuel power stations operate at high efficiency, and can be used for district heating or to produce district cooling, leading to a higher total efficiency. Electricity for electric rail systems can also come from renewable energy, nuclear power, or other low-carbon sources, which do not emit pollution. Power output Electric locomotives may easily be constructed with greater power output than most diesel locomotives. For passenger operation it is possible to provide enough power with diesel engines (see e.g. 'ICE TD') but, at higher speeds, this proves costly and impractical. Therefore, almost all high-speed trains are electric. The high power of electric locomotives also gives them the ability to pull freight at higher speed over gradients; in mixed traffic conditions this increases capacity when the time between trains can be decreased. The higher power of electric locomotives can also make electrification a cheaper alternative to building a new, less steep railway if train weights are to be increased on a system. On the other hand, electrification may not be suitable for lines with a low frequency of traffic, because the lower running cost of trains may be outweighed by the high cost of the electrification infrastructure. Therefore, most long-distance lines in developing or sparsely populated countries are not electrified due to the relatively low frequency of trains. Network effect Network effects are a large factor with electrification. When converting lines to electric, the connections with other lines must be considered. Some electrifications have subsequently been removed because of the through traffic to non-electrified lines. If through traffic is to have any benefit, time-consuming engine switches must occur to make such connections or expensive dual-mode engines must be used. This is mostly an issue for long-distance trips, but many lines come to be dominated by through traffic from long-haul freight trains (usually running coal, ore, or containers to or from ports).
In theory, these trains could enjoy dramatic savings through electrification, but it can be too costly to extend electrification to isolated areas, and unless an entire network is electrified, companies often find that they need to continue use of diesel trains even if sections are electrified. The increasing demand for container traffic, which is more efficient when utilizing the double-stack car, also has network effect issues with existing electrifications due to insufficient clearance of overhead electrical lines for these trains, but electrification can be built or modified to have sufficient clearance, at additional cost. A problem specifically related to electrified lines is gaps in the electrification. Electric vehicles, especially locomotives, lose power when traversing gaps in the supply, such as phase change gaps in overhead systems, and gaps over points in third rail systems. These become a nuisance if the locomotive stops with its collector on a dead gap, in which case there is no power to restart. This is less of a problem in trains consisting of two or more multiple units coupled together, since in that case if the train stops with one collector in a dead gap, another multiple unit can push or pull the disconnected unit until it can again draw power. The same applies to the kind of push-pull trains which have a locomotive at each end. Power gaps can be overcome in single-collector trains by on-board batteries or motor-flywheel-generator systems. As of 2014, progress was being made in the use of large capacitors to power electric vehicles between stations, and so avoid the need for overhead wires between those stations. Maintenance costs Maintenance costs of the lines may be increased by electrification, but many systems claim lower costs due to reduced wear-and-tear on the track from lighter rolling stock. There are some additional maintenance costs associated with the electrical equipment around the track, such as power sub-stations and the catenary wire itself, but, if there is sufficient traffic, the savings in track and especially in engine maintenance and running costs significantly exceed the costs of this maintenance. Sparks effect Newly electrified lines often show a "sparks effect", whereby electrification in passenger rail systems leads to significant jumps in patronage / revenue. The reasons may include electric trains being seen as more modern and attractive to ride, faster, quieter and smoother service, and the fact that electrification often goes hand in hand with a general infrastructure and rolling stock overhaul / replacement, which leads to better service quality (in a way that theoretically could also be achieved by doing similar upgrades yet without electrification). Whatever the causes of the sparks effect, it is well established for numerous routes that have been electrified over the decades. This also applies when bus routes with diesel buses are replaced by trolleybuses. The overhead wires make the service "visible" even if no bus is running, and the existence of the infrastructure gives some long-term expectation that the line will remain in operation. Double-stack rail transport Due to the height restriction imposed by the overhead wires, double-stacked container trains have traditionally been difficult and rare to operate under electrified lines. However, this limitation is being overcome by railways in India, China and African countries by laying new tracks with increased catenary height.
Such installations exist in the Western Dedicated Freight Corridor in India, where the wire height has been increased to accommodate double-stack container trains without the need for well-wagons. Advantages There are a number of advantages, including the fact that passengers are not exposed to exhaust from the locomotive, and the lower cost of building, running and maintaining locomotives and multiple units. Electric trains have a higher power-to-weight ratio (no onboard fuel tanks), resulting in fewer locomotives, faster acceleration, a higher practical limit of power, a higher limit of speed, and less noise pollution (quieter operation). The faster acceleration clears lines more quickly to run more trains on the track in urban rail uses. Reduced power loss at higher altitudes (for power loss see Diesel engine) Independence of running costs from fluctuating fuel prices Service to underground stations where diesel trains cannot operate for safety reasons Reduced environmental pollution, especially in highly populated urban areas, even if electricity is produced by fossil fuels Easily accommodates kinetic energy brake reclaim using supercapacitors More comfortable ride on multiple units as trains have no underfloor diesel engines Somewhat higher energy efficiency in part due to regenerative braking and less power lost when "idling" More flexible primary energy source: can use coal, natural gas, nuclear or renewable energy (hydro, solar, wind) as the primary energy source instead of diesel fuel If the entire network is electrified, diesel infrastructure such as fueling stations, maintenance yards and indeed the diesel locomotive fleet can be retired or put to other uses – this is often the business case in favor of electrifying the last few lines in a network where otherwise costs would be too high. Having only one type of motive power also allows greater fleet homogeneity, which can also reduce costs. Disadvantages Electrification cost: electrification requires an entire new infrastructure to be built around the existing tracks at a significant cost. Costs are especially high when tunnels, bridges and other obstructions have to be altered for clearance. Another aspect that can raise the cost of electrification is the alterations or upgrades to railway signalling needed for new traffic characteristics, and to protect signalling circuitry and track circuits from interference by traction current. Electrification typically requires line closures while new equipment is being installed. Appearance: the overhead line structures and cabling can have a significant landscape impact compared with a non-electrified or third rail electrified line that has only occasional signalling equipment above ground level. Fragility and vulnerability: overhead electrification systems can suffer severe disruption due to minor mechanical faults or the effects of high winds causing the pantograph of a moving train to become entangled with the catenary, ripping the wires from their supports. The damage is often not limited to the supply to one track, but extends to those for adjacent tracks as well, causing the entire route to be blocked for a considerable time. Third-rail systems can suffer disruption in cold weather due to ice forming on the conductor rail. Theft: the high scrap value of copper and the unguarded, remote installations make overhead cables an attractive target for scrap metal thieves. Attempts at theft of live 25 kV cables may end in the thief's death from electrocution.
In the UK, cable theft is claimed to be one of the biggest sources of delay and disruption to train services – though this normally relates to signalling cable, which is equally problematic for diesel lines. Incompatibility: Diesel trains can run on any track without electricity or with any kind of electricity (third rail or overhead line, DC or AC, and at any voltage or frequency). Not so for electric trains, which can never run on non-electrified lines, and which even on electrified lines can run only on the single, or the few, electrical system(s) for which they are equipped. Even on fully electrified networks, it is usually a good idea to keep a few diesel locomotives for maintenance and repair trains, for instance to repair broken or stolen overhead lines, or to lay new tracks. However, due to ventilation issues, diesel trains may have to be banned from certain tunnels and underground train stations, somewhat reducing this advantage of diesel trains. Birds may perch on parts with different charges, and animals may also touch the electrification system. Dead animals attract foxes or other scavengers, bringing a risk of collision with trains. In most of the world's railway networks, the height clearance of overhead electrical lines is not sufficient for a double-stack container car or other unusually tall loads. Upgrading electrified lines to the necessary clearances to take double-stacked container trains, besides rebuilding the bridges over them, would normally require special pantographs, violating standardisation and requiring custom-made vehicles. Railway electrification around the world As of 2012, electrified tracks accounted for nearly one third of total tracks globally. As of 2018, there were of railways electrified at 25 kV, either 50 or 60 Hz; electrified at ; electrified at 15 kV 16.7 or Hz and electrified at . As of 2023, the Swiss rail network is the largest fully electrified network in the world, and Switzerland is one of only eleven countries or territories to have achieved full electrification, as listed in List of countries by rail transport network size. The percentage then continues falling in order with Laos, Montenegro, India, Belgium, Georgia, South Korea, Netherlands, and Japan, with all others being less than 75% electrified. Overall, China takes first place, with around of electrified railway, followed by India with over of electrified railway, and continuing with Russia, with over of electrified railway. A number of countries have zero electrified railways, instead relying on diesel multiple units, locomotive-hauled services and many alternate forms of transport. The European Union has the greatest total length of electrified railway, with over of electrified railway, although this makes up only around 55% of its total railway length. Several countries have announced plans to electrify all or most of their railway network, including Indian Railways and Israel Railways. The Trans-Siberian Railway, mainly in Russia, is completely electrified, making it one of the longest stretches of electrified railway in the world.
See also Battery electric multiple unit Battery locomotive Conduit current collection Current collector Dual electrification Electromote Fifth rail system Ground-level power supply History of the electric locomotive Initial Electrification Experiments NY NH HR List of railway electrification systems List of tram systems by gauge and electrification Multi-system (rail) Overhead conductor rails Railroad electrification in the United States Stud contact system Traction current pylon Traction powerstation Traction substation References Further reading Sources English Gomez-Exposito A., Mauricio J.M., Maza-Ortega J.M. "VSC-based MVDC Railway Electrification System" IEEE transactions on power delivery, v. 29, no. 1, Feb. 2014 pp.422–431. (suggests ) (Jane's) Urban Transit Systems Russian Винокуров В.А., Попов Д.А. "Электрические машины железно-дорожного транспорта" (Electrical machinery of railroad transportation), Москва, Транспорт, 1986. , 520 pp. Дмитриев, В.А., "Народнохозяйственная эффективность электрификации железных дорог и применения тепловозной тяги" (National economic effectiveness of railway electrification and application of diesel traction), Москва, Транспорт 1976. Дробинский В.А., Егунов П.М. "Как устроен и работает тепловоз" (How the diesel locomotive works) 3rd ed. Moscow, Транспорт, 1980. Иванова В.Н. (ed.) "Конструкция и динамика тепловозов" (Construction and dynamics of the diesel locomotive). Москва, Транспорт, 1968 (textbook). Калинин, В.К. "Электровозы и электропоезда" (Electric locomotives and electric train sets) Москва, Транспорт, 1991 Мирошниченко, Р.И., "Режимы работы электрифицированных участков" (Regimes of operation of electrified sections [of railways]), Москва, Транспорт, 1982. Перцовский, Л. М.; "Энергетическая эффективность электрической тяги" (Energy efficiency of electric traction), Железнодорожный транспорт (magazine), #12, 1974 p.39+ Плакс, А.В. & Пупынин, В. Н., "Электрические железные дороги" (Electric Railways), Москва "Транспорт" 1993. Сидоров Н.И., Сидорожа Н.Н. "Как устроен и работает электровоз" (How the electric locomotive works) Москва, Транспорт, 1988 (5th ed.). 233 pp, . 1980 (4th ed.). Хомич А.З. Тупицын О.И., Симсон А.Э. "Экономия топлива и теплотехническая модернизация тепловозов" (Fuel economy and the thermodynamic modernization of diesel locomotives). Москва: Транспорт, 1975. 264 pp. External links Electric rail transport Rail transport Trains
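As a rough back-of-the-envelope illustration of the regenerative-braking point made in the energy-efficiency discussion above, the sketch below estimates how much energy a heavily loaded train descending a long grade could return to the electrification system. It is a minimal illustration only: the train mass, descent height and recovery efficiency are hypothetical placeholder values, not figures from this article.

```python
# Hypothetical example: potential energy released by a loaded freight train
# descending a long grade, and the share a regenerative system might return
# to the electrification supply. All numbers are illustrative assumptions.
TRAIN_MASS_KG = 4_000_000        # assumed 4,000 t loaded freight train
DESCENT_M = 500.0                # assumed total height lost on the descent
G = 9.81                         # gravitational acceleration, m/s^2
RECOVERY_EFFICIENCY = 0.6        # assumed fraction actually returned to the supply

potential_energy_j = TRAIN_MASS_KG * G * DESCENT_M
recovered_kwh = potential_energy_j * RECOVERY_EFFICIENCY / 3.6e6  # joules -> kWh

print(f"Potential energy released: {potential_energy_j / 3.6e6:,.0f} kWh")
print(f"Energy returned to the supply (assumed recovery): {recovered_kwh:,.0f} kWh")
```

With these assumed figures the descent releases on the order of a few thousand kilowatt-hours, which gives a sense of why regenerative braking is described as especially useful on long mountain grades.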
Railway electrification
[ "Technology" ]
7,645
[ "Trains", "Transport systems" ]
559,009
https://en.wikipedia.org/wiki/Dragon%20Ball%20%28TV%20series%29
Dragon Ball is a Japanese anime television series produced by Toei Animation that ran for 153 episodes from February 26, 1986, to April 19, 1989, on Fuji TV. The series is an adaptation of the first 194 chapters of the manga series of the same name created by Akira Toriyama, which were published in Weekly Shōnen Jump from 1984 to 1995. It was broadcast in 81 countries worldwide and is the first television series adaptation in the Dragon Ball franchise. The series follows the adventures of Goku, an eccentric young boy with a monkey tail and exceptional strength who has a passion for fighting and battling evil-doers. Film adaptations include: Dragon Ball: Curse of the Blood Rubies (1986), Dragon Ball: Sleeping Princess in Devil's Castle (1987), and Dragon Ball: Mystical Adventure (1988). The series was followed by a sequel, Dragon Ball Z, which had its own follow-ups with Dragon Ball GT and Dragon Ball Super. The English dubbed version of the original Dragon Ball series released in the United States was edited for content and dialogue. Plot Emperor Pilaf Saga The series begins with a young monkey-tailed boy named Goku who lives alone in a forest befriending a teenage girl named Bulma, who is in search of the seven mystical Dragon Balls, one of which is in Goku's possession. Together, they go on an adventure to find the balls, which summon the eternal dragon Shenron and grant whoever summons him any wish. The journey leads Goku to meet Master Roshi and to a confrontation with the shape-shifting pig Oolong, as well as a desert bandit named Yamcha and his companion Pu'ar, and the Ox-King, who all later become allies; Chi-Chi, whom Goku unknowingly agrees to marry; and Emperor Pilaf, a blue-skinned imp who seeks the Dragon Balls to fulfill his desire for world domination. Oolong stops Pilaf from getting his wish by wishing for a pair of perfect panties. After each wish, all the Dragon Balls are scattered all over the world and take one full year to take on their distinctive appearance. World Martial Arts Tournament Saga After finding the Dragon Balls and using them, Goku undergoes rigorous training under world-renowned martial artist Master Roshi in order to fight in the World Martial Arts Tournament, a competitive fighting tournament that attracts fighters from all around the world. A monk named Krillin becomes Goku's training partner and rival, but they quickly become best friends. After training with Master Roshi for a few months, Goku and Krillin enter the tournament, which is held every five years. They battle through various opponents, and Yamcha fights a mysterious man named Jackie Chun, who looks and fights oddly similar to Master Roshi. As the tournament continues, Goku and Jackie Chun are the final fighters, and after hours of battle, Jackie Chun realizes Goku is mimicking all of his moves. Recognizing that Goku is shorter, he launches a flying kick at Goku. Knowing that Goku will do one right back, Jackie Chun relies on his longer leg to reach Goku first and knock him out, defeating him. Red Ribbon Army Saga After the tournament, Goku sets out on his own to recover the Dragon Ball his deceased grandfather left him and encounters a terrorist organization known as the Red Ribbon Army, whose diminutive leader, Commander Red, wants to collect the Dragon Balls so he can use them to become taller. Goku mostly single-handedly defeats the entire group, including Mercenary Tao, a feared assassin hired by the Red Ribbon Army, to whom Goku originally loses but, after training under the hermit Korin, easily beats.
After defeating Tao, Goku sets his sights on the Red Ribbon Army headquarters, where he plans to take the two Dragon Balls in the army's possession. After defeating the Red Ribbon Army, Goku reunites with his friends and they go to Fortuneteller Baba to locate the last remaining Dragon Ball in order to resurrect Upa's father, who was defeated by Tao, but they have to defeat all five of Baba's fighters first. After defeating Baba's fighters and finding the last Dragon Ball, Goku resurrects Upa's father, Bora, and sets out on his own to train for three years. King Piccolo Saga Goku and his friends reunite at the World Martial Arts Tournament three years later and meet Master Roshi's rival and Tao's brother, Master Shen, and his students Tien Shinhan and Chiaotzu, who vow to exact revenge for Tao's apparent death at the hands of Goku. Krillin is murdered after the tournament and Goku tracks down and is defeated by his killer, Tambourine, and the evil Demon King Piccolo, who was freed by Emperor Pilaf after being sealed away by Master Mutaito after destroying and trying to take over the world. Goku meets the overweight samurai Yajirobe, who takes Goku to Korin after being defeated by Tambourine and receives healing and a power boost. Meanwhile, Piccolo kills both Master Roshi and Chiaotzu, and uses the Dragon Balls to give himself eternal youth before destroying Shenron, which results in the Dragon Balls' destruction. As King Piccolo prepares to destroy West City as a show of force, Tien Shinhan arrives to confront him, but is defeated and nearly killed by one of Piccolo's spawns. Goku arrives in time to save Tien and then kills King Piccolo by blasting a hole through his chest. Piccolo Junior Saga Just before Piccolo dies, he spawns his final son, Piccolo Junior. Korin informs Goku that Kami, the creator of the Dragon Balls, might be able to restore Shenron and the Dragon Balls so that Goku can wish his fallen friends back to life, which he does. He also stays and trains under Kami for the next three years, once again reuniting with his friends for the World Martial Arts Tournament, as well as a now-teenaged Chi-Chi and a revived cyborg Mercenary Tao. Piccolo Junior enters the tournament to avenge his father, leading to the final battle between him and Goku. After Goku narrowly wins and defeats Piccolo Junior, he leaves with Chi-Chi and they get married, leading to the events of Dragon Ball Z. Production Kazuhiko Torishima, Toriyama's editor for Dr. Slump and the first half of Dragon Ball, said that because the Dr. Slump anime was not successful in his opinion, he and Shueisha were a lot more hands on for the Dragon Ball anime. Before production even began, they created a huge "bible" for the series detailing even merchandise. He himself studied the best way to present anime and its business side, discussing it with the Shogakukan team for Doraemon. Toriyama had some involvement in the production of the anime. When it began he did mention to the staff that they seemed to be making it too colorful by forcing the color palette of Dr. Slump on it. He also listened to the voice actors' audition tapes before choosing Masako Nozawa to play Goku. He would go on to state that he would hear Nozawa's voice in his head when writing the manga. Toriyama specified Kuririn's voice actress be Mayumi Tanaka after hearing her work as the main character Giovanni in Night on the Galactic Railroad. 
Tōru Furuya remarked that there were not many auditions for the characters because the cast was made up of veteran voice actors. Performing the roles was not without its difficulties, Toshio Furukawa, the voice of Piccolo, said it was difficult to constantly perform with a low voice because his normal lighter voice would break through if he broke concentration. Shunsuke Kikuchi composed the score for Dragon Ball. The opening theme song for all of the episodes is performed by Hiroki Takahashi in Japanese and Jimi Tunnell in English. The ending theme is performed by Ushio Hashimoto in Japanese and Daphne Gere in English. Feeling that the Dragon Ball anime's ratings were gradually declining because it had the same producer that worked on Dr. Slump, who had a "cute and funny" image connected to Toriyama's work and was missing the more serious tone, Torishima asked the studio to change the producer. Impressed with their work on Saint Seiya, he asked its director Kōzō Morishita and writer Takao Koyama to help "reboot" Dragon Ball; which coincided with the beginning of Dragon Ball Z. English localization and broadcasting In 1989 and 1990, Harmony Gold USA licensed the series for an English-language release in North America. In the voice dubbing of the series, Harmony Gold renamed almost all of the characters, including the protagonist Goku, who was renamed "Zero." This dub consisting of 5 episodes and one movie (an 80-minute feature featuring footage of movies 1 and 3 edited together) was cancelled shortly after being test marketed in several US cities and was never broadcast to the general public, thus earning the fan-coined term "The Lost Dub." A subtitled Japanese version of the series was first broadcast in the United States by the Hawaii-based Nippon Golden Network. The series aired in a 6AM slot on Tuesdays from 1992 to 1994, before the network moved on to Dragon Ball Z. In 1995, Funimation (founded a year earlier in California) acquired the license for the distribution of Dragon Ball in the United States as one of its first imports. Licensing director Bob Brennan firmly believed he had found the Japanese equivalent of Mickey Mouse but had trouble convincing Americans of this. They contracted Josanne B. Lovick Productions and voice actors from Ocean Productions to create an English version for the anime and first movie in Vancouver, British Columbia. The dubbed episodes were edited for content, and contained different music. Thirteen episodes aired in first-run syndication during the fall of 1995 before Funimation canceled the project due to low ratings and moved on to Dragon Ball Z. In March 2001, due to the success of their dub of Dragon Ball Z, Funimation announced the return of the original Dragon Ball series to American television, featuring a new English version produced in-house with slightly less editing for broadcast (though the episodes remained uncut for home video releases), and they notably left the original background music intact. The re-dubbed episodes aired on Cartoon Network from August 20, 2001, to December 1, 2003. Funimation also broadcast the series on Colours TV and their own Funimation Channel starting in 2006. This English dub was also broadcast in Australia and New Zealand. In Canada and Europe, an alternative dubbed version was produced by AB Groupe (in association with Blue Water Studios) and was aired in those territories instead of the Funimation version. 
Content edits The US version of Dragon Ball was aired on Cartoon Network with numerous digital cosmetic changes, made to remove nudity and blood, as well as dialogue edits: for example, when Puar explains why Oolong was expelled from shapeshifting school, the reason was changed from stealing the teacher's panties to stealing the teacher's papers. Some scenes were deleted altogether, either to save time or to remove strong violence. Nudity was also covered up; for Goku's bathing scene, Funimation drew a chair to cover his genitals where it was uncensored previously. References to alcohol and drugs were removed; for example, when Jackie Chun (Master Roshi) uses Drunken Fist Kung Fu in the 21st Tenkaichi Budokai, Funimation renamed it the "Mad Cow Attack." Also, the famous "No Balls!" scene was deleted from episode 2, and when Bulma places panties on the fishing hook to get Oolong (in fish form), the panties were digitally painted away and replaced with some money. Some changes also made the context and content of scenes confusing, such as when Bulma helps Goku take a bath. In the Japanese version, the two characters do not cover their privates because Goku is innocent of the differences in gender and Bulma believes Goku to be a little boy. While bathing, Bulma asks Goku his age, and only when Goku reveals himself to be fourteen does Bulma throw things at Goku before kicking him out of the bath. In the BLT and Funimation versions, the dialogue was changed, with Goku remarking that Bulma did not have a tail and that it must be inconvenient for her when bathing. Other media Home media In Japan, Dragon Ball did not receive a proper home video release until July 7, 2004, fifteen years after its broadcast. Pony Canyon announced a remastering of the series in a single 26-disc DVD box set that was made-to-order only, referred to as a "Dragon Box". Since then, the content of this set began being released by Pony Canyon on mass-produced individual 6-episode DVDs on April 4, 2007, finishing with the 26th volume on December 5, 2007. Original releases Dragon Ball's initial VHS release for North America was never completed. Funimation released their initial dub, the edited and censored first thirteen episodes, on six tapes from September 24, 1996, to February 28, 1998, together with Trimark Pictures. These episodes and the first movie were later released in a VHS or DVD box set on October 24, 2000. Funimation began releasing their in-house dub beginning with episode 14 by themselves on December 5, 2001, in both edited and uncut formats, only to cease VHS releases two years later on June 1, 2003, in favor of the DVD box sets. Including the initial 1996–1998 releases with Trimark, 86 episodes of Dragon Ball across 28 volumes were produced on VHS for North America. Funimation released their own in-house dub to ten two-disc DVD box sets between January 28 and August 19, 2003. Each box set, spanning an entire "saga" of the series, included the English and Japanese audio tracks with optional English subtitles, and uncut video and audio. However, they were unable to release the first thirteen episodes at the time, due to Lions Gate Entertainment holding the home video rights to their previous dub of the same episodes, having acquired them from Trimark after the company became defunct.
After Lions Gate Family Entertainment's license and home video distribution rights to the first thirteen episodes expired in 2009, Funimation released and remastered the complete Dragon Ball series on DVD in five individual uncut season box sets, with the first set released on September 15, 2009, and the final on July 27, 2010. Funimation's English dub of Dragon Ball has been distributed in other countries by third parties. Madman Entertainment released the first thirteen episodes of Dragon Ball and the first movie uncut in Australasia in a DVD set on March 10, 2004. They produced two box sets containing the entire series in 2006 and 2007. Manga Entertainment began releasing Funimation's five remastered sets in the United Kingdom in 2014. Dragon Ball: Yo! Son Goku and His Friends Return!! (ドラゴンボール オッス!帰ってきた孫悟空と仲間たち!! Doragon Bōru: Ossu! Kaette Kita Son Gokū to Nakama-tachi!!) is the second Dragon Ball Z OVA and features the first Dragon Ball animation in nearly a decade, following a short story arc in the remade Dr. Slump anime series featuring Goku and the Red Ribbon Army in 1999. The film premiered in Japan on September 21, 2008, at the Jump Super Anime Tour in honor of Weekly Shōnen Jump's fortieth anniversary. Yo! Son Goku and His Friends Return!! is also included on the extra DVD in the Dragon Ball Z: Battle of Gods limited edition, which was released on September 13, 2013. Films During the anime's broadcast, three theatrical animated Dragon Ball films were produced. The first was Curse of the Blood Rubies in 1986, followed by Sleeping Princess in Devil's Castle in 1987, and Mystical Adventure in 1988. In 1996 The Path to Power was produced in order to commemorate the anime's tenth anniversary. Video games Several video games based on Dragon Ball have been created, beginning with Dragon Daihikyō in 1986. Shenlong no Nazo, produced that same year, was the first to be released outside Japan. Its 1988 North American version was titled Dragon Power and was heavily Americanized, with all references to Dragon Ball removed; characters' names and appearances were changed. Additional games based on the series include Advanced Adventure, Dragon Ball: Origins, its sequel, and Revenge of King Piccolo. Soundtracks Dragon Ball has been host to several soundtrack releases, the first being Dragon Ball: Music Collection in 1986. Dragon Ball: Saikyō e no Michi Original Soundtrack is composed entirely of music from the tenth anniversary film. In 1995 Dragon Ball: Original USA TV Soundtrack Recording was released, featuring the music from the Funimation/Ocean American broadcast. Reception The show's initial U.S. broadcast run in 1995 met with mediocre ratings. In 2000, the satellite TV channel Animax, together with Brutus, a men's lifestyle magazine, and Tsutaya, Japan's largest video rental chain, conducted a poll among 200,000 fans on the top anime series, with Dragon Ball coming in fourth. TV Asahi conducted two polls in 2005 on the Top 100 Anime; Dragon Ball came in second in the nationwide survey conducted among multiple age-groups and third in the online poll. On several occasions the Dragon Ball anime has topped Japan's DVD sales. Otaku USA's Joseph Luster called Dragon Ball "one of the most memorable animated action/comedy series of all time." He cited the comedy as a key component of the show, noting that this might surprise those only familiar with Z. Todd Douglass of DVD Talk referred to it as "a classic among classics [that] stands as a genre defining kind of show."
and wrote that "It's iconic in so many ways and should be standard watching for otaku in order to appreciate the genius of Akira Toriyama." He had strong praise for the "deep, insightful, and well-developed" characters, writing "Few shows can claim to have a cast quite like Dragon Ball's, and that's a testament to the creative genius of Toriyama." T.H.E.M. Anime Reviews' Tim Jones gave the show four out of five stars, referring to it as a forerunner to modern fighting anime and still one of the best. He also stated that it has much more character development than its successors Dragon Ball Z and Dragon Ball GT. Carl Kimlinger of Anime News Network summed up Dragon Ball as "an action-packed tale told with rare humor and something even rarer—a genuine sense of adventure." Kimlinger and Theron Martin, also of Anime News Network, noted Funimation's reputation for drastic alterations of the script, but praised the dub. The positive impact of Dragon Ball's characters has manifested itself in the personal taped messages Masako Nozawa has sent to children in the voice of Goku. Nozawa takes pride in her role and sends words of encouragement that have resulted in children in comas responding to the voice of the characters. Notes References External links Dragon Ball anime 1986 Japanese television series debuts 1986 anime television series debuts 1989 Japanese television series endings Adventure anime and manga Anime series based on manga Bruceploitation Chinese mythology in anime and manga Comedy anime and manga Crunchyroll anime Fantasy anime and manga Fiction about size change First-run syndicated television programs in the United States Fuji Television original programming Funimation Japanese martial arts television series Japanese mythology in anime and manga Martial arts anime and manga Shunsuke Kikuchi Television shows based on Journey to the West Toei Animation television
Dragon Ball (TV series)
[ "Physics", "Mathematics" ]
4,093
[ "Fiction about size change", "Quantity", "Physical quantities", "Size" ]
559,066
https://en.wikipedia.org/wiki/Cobra%20probe
A Cobra probe is a device for measuring the pressure and velocity components of a moving fluid. It is a multi-holed pressure probe with the rotational axis of the probe shaft coplanar with the measurement plane of the instrument. Because of this geometry, when the instrument is rotated around the shaft's axis, the measurement elements of the probe remain in the same location. The name cobra probe comes from the shape of the probe head, which gives it this property. Cobra probes come in three-, four-, and five-hole configurations, the first used for two-dimensional flow measurement and the latter two for three-dimensional flow measurement. In the three-hole instrument, there are two yaw-direction tubes which are chamfered and silver-soldered symmetrically on the two sides of a pitot tube. It is otherwise similar to other kinds of yawmeters. In the four- and five-hole configurations, the central pitot tube is surrounded by three or four chamfered tubes, respectively. References Hydraulic engineering
Cobra probe
[ "Physics", "Engineering", "Environmental_science" ]
215
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
559,155
https://en.wikipedia.org/wiki/Metric%20modulation
In music, metric modulation is a change in pulse rate (tempo) and/or pulse grouping (subdivision) which is derived from a note value or grouping heard before the change. Examples of metric modulation may include changes in time signature across an unchanging tempo, but the concept applies more specifically to shifts from one time signature/tempo (metre) to another, wherein a note value from the first is made equivalent to a note value in the second, like a pivot or bridge. The term "modulation" invokes the analogous and more familiar term in analyses of tonal harmony, wherein a pitch or pitch interval serves as a bridge between two keys. In both terms, the pivoting value functions differently before and after the change, but sounds the same, and acts as an audible common element between them. Metric modulation was first described by Richard Franko Goldman while reviewing the Cello Sonata of Elliott Carter, who preferred to call it tempo modulation. Another synonymous term is proportional tempi. Determination of the new tempo The following formula illustrates how to determine the tempo before or after a metric modulation, or, alternatively, how many of the associated note values will be in each measure before or after the modulation: new tempo = old tempo × (number of pivot note values per measure at the new tempo) ÷ (number of pivot note values per measure at the old tempo). Thus if the two half notes in a measure at a tempo of quarter note = 84 are made equivalent with three half notes at a new tempo, that tempo will be quarter note = 84 × 3/2 = 126. Example taken from Carter's Eight Etudes and a Fantasy for woodwind quartet (1950), Fantasy, mm. 16–17. Note that this tempo, quarter note = 126, is equal to dotted-quarter note = 84 (since 126 ÷ 1.5 = 84). A tempo (or metric) modulation causes a change in the hierarchical relationship between the perceived beat subdivision and all potential subdivisions belonging to the new tempo. Benadon has explored some compositional uses of tempo modulations, such as tempo networks and beat subdivision spaces. Three challenges arise when performing metric modulations: Grouping notes of the same speed differently on each side of the barline, ex: (quintuplet = sextuplet) with sixteenth notes before and after the barline Subdivision used on one side of the barline and not the other, ex: (triplet = ) with triplets before and quarter notes after the barline Subdivision used on neither side of the barline but used to establish the modulation, ex: (quintuplet = ) with quarter notes before and after the barline Examples of the use of metric modulation include Carter's Cello Sonata (1948), A Symphony of Three Orchestras (1976), and Björk's "Desired Constellation" ( = ). Beethoven used metric modulation in his Trio for 2 oboes and English horn, Op. 87, 1794. Score notation Metric modulations are generally notated as 'note value' = 'note value', and this notation is normally followed by the new tempo in parentheses. Before the modern concept and notation of metric modulations, composers used the terms doppio più mosso and doppio più lento for double and half speed, and later markings such as (Adagio) = (Allegro), indicating double speed, which would now be marked ( = ). The phrase l'istesso tempo was used for what may now be notated with metric modulation markings; it indicates that the beat remains the same speed across the change. See also Tuplet References Sources Further reading Arlin, Mary I. (2000). "Metric Mutation and Modulation: The Nineteenth-Century Speculations of F.-J. Fétis". Journal of Music Theory 44, no. 2 (Fall): 261–322. Bernard, Jonathan W. (1988). "The Evolution of Elliott Carter's Rhythmic Practice".
Perspectives of New Music 26, no. 2 (Summer): 164–203. Braus, Ira Lincoln (1994). "An Unwritten Metrical Modulation in Brahms's Intermezzo in E minor, op. 119, no. 2". Brahms Studies 1: 161–169. Everett, Walter (2009). "Any Time at All: The Beatles' Free Phrase Rhythms". In The Cambridge Companion to the Beatles, edited by Kenneth Womack, 183–199. Cambridge and New York: Cambridge University Press. Reese, Kirsten (1999). "Ruhelos: Annäherung an Johanna Magdalena Beyer". MusikTexte: Zeitschrift für Neue Musik, nos. 81–82 (December): 6–15. External links Conor Guilfoyle Musical techniques Rhythm and meter
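The tempo arithmetic given under "Determination of the new tempo" above can be expressed as a short function. The sketch below is a minimal Python illustration written for this summary, not drawn from the article's sources; the function name and parameter names are invented for clarity, and the check reproduces the Carter example (two half notes at quarter note = 84 equated with three half notes, giving quarter note = 126).

```python
def modulated_tempo(old_tempo: float, old_count: float, new_count: float) -> float:
    """Return the tempo after a metric modulation.

    old_tempo -- tempo before the modulation (beats per minute)
    old_count -- number of pivot note values filling the span at the old tempo
    new_count -- number of the same pivot note values filling that span
                 at the new tempo
    """
    return old_tempo * new_count / old_count


# Carter, Eight Etudes and a Fantasy, Fantasy, mm. 16-17:
# two half notes at quarter note = 84 are equated with three half notes.
print(modulated_tempo(84, old_count=2, new_count=3))  # 126.0
# As noted above, quarter note = 126 is the same pulse as dotted-quarter = 84,
# because 126 / 1.5 == 84.
```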
Metric modulation
[ "Physics" ]
972
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
559,212
https://en.wikipedia.org/wiki/Entrez
The Entrez Global Query Cross-Database Search System is a federated search engine, or web portal, that allows users to search many discrete health sciences databases at the National Center for Biotechnology Information (NCBI) website. The NCBI is a part of the National Library of Medicine (NLM), which is itself a department of the National Institutes of Health (NIH), which in turn is a part of the United States Department of Health and Human Services. The name "Entrez" (a greeting meaning "Come in" in French) was chosen to reflect the spirit of welcoming the public to search the content available from the NLM. Entrez Global Query is an integrated search and retrieval system that provides access to all databases simultaneously with a single query string and user interface. Entrez can efficiently retrieve related sequences, structures, and references. The Entrez system can provide views of gene and protein sequences and chromosome maps. Some textbooks are also available online through the Entrez system. Features The Entrez front page provides, by default, access to the global query. All databases indexed by Entrez can be searched via a single query string, supporting Boolean operators and search term tags to limit parts of the search statement to particular fields. This returns a unified results page that shows the number of hits for the search in each of the databases, which are also linked to the actual search results for that particular database. Entrez also provides a similar interface for searching each particular database and for refining search results. The Limits feature allows the user to narrow a search using a web-forms interface. The History feature gives a numbered list of recently performed queries. Results of previous queries can be referred to by number and combined via Boolean operators. Search results can be saved temporarily in a Clipboard. Users with a MyNCBI account can save queries indefinitely, and also choose to have updates with new search results e-mailed for saved queries of most databases. It is widely used in the field of biotechnology as a reference tool for students and professionals alike. Databases Entrez searches the following databases: PubMed: biomedical literature citations and abstracts, including Medline—articles from (mainly medical) journals, often including abstracts. Links to PubMed Central and other full-text resources are provided for articles from the 1990s.
PubMed Central: free, full-text journal articles Site Search: NCBI web and FTP web sites Books: online books Online Mendelian Inheritance in Man (OMIM) Nucleotide: sequence database (GenBank) Protein: sequence database (GenPept) Genome: whole genome sequences and mapping Structure: three-dimensional macromolecular structures Taxonomy: organisms in GenBank Taxonomy dbSNP: single nucleotide polymorphism Gene: gene-centered information HomoloGene: eukaryotic homology groups PubChem Compound: unique small molecule chemical structures PubChem Substance: deposited chemical substance records Genome Project: genome project information UniGene: gene-oriented clusters of transcript sequences CDD: conserved protein domain database PopSet: population study data sets (epidemiology) GEO Profiles: expression and molecular abundance profiles GEO DataSets: experimental sets of GEO data Sequence read archive: high-throughput sequencing data Cancer Chromosomes: cytogenetic databases PubChem BioAssay: bioactivity screens of chemical substances Probe: sequence-specific reagents NLM Catalog: NLM bibliographic data for over 1.2 million journals, books, audiovisuals, computer software, electronic resources, and other materials resident in LocatorPlus (updated every weekday). Access In addition to using the search engine forms to query the data in Entrez, NCBI provides the Entrez Programming Utilities (eUtils) for more direct access to query results. The eUtils are accessed by posting specially formed URLs to the NCBI server, and parsing the XML response. There was also an eUtils SOAP interface which was terminated in July 2015. History In 1991, Entrez was introduced in CD form. In 1993, a client-server version of the software provided connectivity with the internet. In 1994, NCBI established a website, and Entrez was a part of this initial release. In 2001, Entrez bookshelf was released and in 2003, the Entrez Gene database was developed. References External links Entrez search engine form Entrez Help Internet properties established in 1993 1991 establishments in the United States Online databases Entrez Biological databases Government-owned websites of the United States Scholarly search services
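As a concrete illustration of the eUtils access pattern described above (forming a query URL and parsing the XML response), the following is a minimal sketch in Python using the ESearch endpoint at eutils.ncbi.nlm.nih.gov and the third-party `requests` library. The search term is an arbitrary example, and details that NCBI recommends for production use, such as supplying contact information and respecting rate limits, are omitted here for brevity.

```python
import xml.etree.ElementTree as ET

import requests

# ESearch: query one Entrez database and get back matching record IDs as XML.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",        # any Entrez database name may be used here
    "term": "captopril",   # example query string
    "retmax": 5,           # maximum number of IDs to return
}
response = requests.get(ESEARCH, params=params, timeout=30)
response.raise_for_status()

# The response root element is <eSearchResult> with <Count> and an <IdList>.
root = ET.fromstring(response.text)
count = root.findtext("Count")
ids = [id_elem.text for id_elem in root.findall("./IdList/Id")]
print(f"{count} records matched; first IDs: {ids}")
```

The same pattern applies to the other eUtils endpoints (for example EFetch for retrieving full records), with only the endpoint name and parameters changing.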
Entrez
[ "Biology" ]
947
[ "Bioinformatics", "Biological databases" ]
559,220
https://en.wikipedia.org/wiki/Captopril
Captopril, sold under the brand name Capoten among others, is an angiotensin-converting enzyme (ACE) inhibitor used for the treatment of hypertension and some types of congestive heart failure. Captopril was the first oral ACE inhibitor found for the treatment of hypertension. Unlike beta-blockers, it does not cause fatigue. Because it can cause hyperkalemia, as seen with most ACE inhibitors, the medication is usually paired with a diuretic. Captopril was patented in 1976 and approved for medical use in 1980. Structure–activity relationship Captopril has an L-proline group, which makes it more bioavailable in oral formulations. The thiol moiety within the molecule has been associated with significant adverse effects, including a hapten-mediated immune response. This immune response, which can manifest as agranulocytosis, may explain the allergic-type adverse drug events seen with captopril, including hives, severe stomach pain, difficulty breathing, and swelling of the face, lips, tongue or throat. In terms of interaction with the enzyme, the molecule's thiol moiety attaches to the binding site of the ACE enzyme. This blocks the site at which the angiotensin I molecule would normally bind, therefore inhibiting the downstream effects within the renin–angiotensin system. Medical uses Captopril's main uses are based on its vasodilation and inhibition of some renal function activities. These benefits are most clearly seen in: Hypertension Cardiac conditions such as congestive heart failure and after myocardial infarction Preservation of kidney function in diabetic nephropathy. Additionally, it has shown mood-elevating properties in some patients. This is consistent with the observation that animal screening models indicate putative antidepressant activity for this compound, although one study has been negative. Formal clinical trials in depressed patients have not been reported. It has also been investigated for use in the treatment of cancer. Captopril stereoisomers were also reported to inhibit some metallo-β-lactamases. Adverse effects Adverse effects of captopril include cough due to an increase in the plasma levels of bradykinin, angioedema, agranulocytosis, proteinuria, hyperkalemia, taste alteration, teratogenicity, postural hypotension, acute renal failure, and leukopenia. Except for postural hypotension, which occurs due to the short and fast mode of action of captopril, most of the side effects mentioned are common to all ACE inhibitors. Among these, cough is the most common adverse effect. Hyperkalemia can occur, especially if captopril is used with other drugs which elevate blood potassium levels, such as potassium-sparing diuretics. Other side effects are: Itching Headache Tachycardia Chest pain Palpitations Dysgeusia Weakness The adverse drug reaction (ADR) profile of captopril is similar to that of other ACE inhibitors, with cough being the most common ADR. However, captopril is also commonly associated with rash and taste disturbances (a metallic taste or loss of taste), which are attributed to the unique thiol moiety. Overdose ACE inhibitor overdose can be treated with naloxone. History In the late 1960s, John Vane of the Royal College of Surgeons of England was working on mechanisms by which the body regulates blood pressure. He was joined by Sérgio Henrique Ferreira of Brazil, who had been studying the venom of a Brazilian pit viper, the jararaca (Bothrops jararaca), and brought a sample of the viper's venom.
Vane's team found that one of the venom's peptides selectively inhibited the action of angiotensin-converting enzyme (ACE), which was thought to function in blood pressure regulation; the snake venom functions by severely depressing blood pressure. During the 1970s, ACE was found to elevate blood pressure by controlling the release of water and salts from the kidneys. Captopril, an analog of the snake venom's ACE-inhibiting peptide, was first synthesized in 1975 by three researchers at the U.S. drug company E.R. Squibb & Sons Pharmaceuticals (now Bristol-Myers Squibb): Miguel Ondetti, Bernard Rubin, and David Cushman. Squibb filed for U.S. patent protection on the drug in February 1976, which was granted in September 1977, and captopril was approved for medical use in 1980. It was the first ACE inhibitor developed and was considered a breakthrough both because of its mechanism of action and also because of the development process. In the 1980s, Vane received the Nobel prize and was knighted for his work and Ferreira received the National Order of Scientific Merit from Brazil. The development of captopril was among the earliest successes of the revolutionary concept of structure-based drug design. The renin–angiotensin–aldosterone system had been extensively studied in the mid-20th century, and this system presented several opportune targets in the development of novel treatments for hypertension. The first two targets that were attempted were renin and ACE. Captopril was the culmination of efforts by Squibb's laboratories to develop an ACE inhibitor. Ondetti, Cushman, and colleagues built on work that had been done in the 1960s by a team of researchers led by John Vane at the Royal College of Surgeons of England. The first breakthrough was made by Kevin K.F. Ng in 1967, when he found the conversion of angiotensin I to angiotensin II took place in the pulmonary circulation instead of in the plasma. In contrast, Sergio Ferreira found bradykinin disappeared in its passage through the pulmonary circulation. The conversion of angiotensin I to angiotensin II and the inactivation of bradykinin were thought to be mediated by the same enzyme. In 1970, using bradykinin potentiating factor (BPF) provided by Sergio Ferreira, Ng and Vane found the conversion of angiotensin I to angiotensin II was inhibited during its passage through the pulmonary circulation. BPF was later found to be a peptide in the venom of a lancehead viper (Bothrops jararaca), which was a “collected-product inhibitor” of the converting enzyme. Captopril was developed from this peptide after it was found via QSAR-based modification that the terminal sulfhydryl moiety of the peptide provided a high potency of ACE inhibition. Captopril gained FDA approval on April 6, 1981. The drug became a generic medicine in the U.S. in February 1996, when the market exclusivity held by Bristol-Myers Squibb for captopril expired. Chemical synthesis A chemical synthesis of captopril by treatment of L-proline with (2S)-3-acetylthio-2-methylpropanoyl chloride under basic conditions (NaOH), followed by aminolysis of the protective acetyl group to unmask the drug's free thiol, is depicted in the figure at right. Procedure 2 taken out of patent US4105776. See examples 28, 29a and 36. Mechanism of action Captopril blocks the conversion of angiotensin I to angiotensin II and prevents the degradation of vasodilatory prostaglandins, thereby inhibiting vasoconstriction and promoting systemic vasodilation. 
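The structural features discussed in the structure–activity relationship section above (a free thiol on a 2-methylpropanoyl group amide-linked to L-proline) can be inspected computationally. The sketch below is a minimal illustration using the open-source RDKit toolkit, assuming it is installed; the SMILES string is supplied here only for illustration and should be verified against an authoritative database rather than treated as definitive.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Assumed SMILES encoding of captopril (thiol-bearing methylpropanoyl group
# amide-linked to proline); verify against an authoritative source.
captopril_smiles = "C[C@@H](CS)C(=O)N1CCC[C@H]1C(=O)O"
mol = Chem.MolFromSmiles(captopril_smiles)

# Check for the two features discussed above: the ACE-binding thiol (-SH)
# and the proline-derived carboxylic acid.
thiol = Chem.MolFromSmarts("[SX2H]")             # sulfur bearing one hydrogen
carboxylic_acid = Chem.MolFromSmarts("C(=O)[OX2H1]")

print("Molecular weight:", round(Descriptors.MolWt(mol), 2))
print("Has free thiol:", mol.HasSubstructMatch(thiol))
print("Has carboxylic acid:", mol.HasSubstructMatch(carboxylic_acid))
```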
Pharmacokinetics Unlike the majority of ACE inhibitors, captopril is not administered as a prodrug (the only other being lisinopril). About 70% of orally administered captopril is absorbed. Bioavailability is reduced by the presence of food in the stomach. It is partly metabolised and partly excreted unchanged in the urine. Captopril also has a relatively poor pharmacokinetic profile: it has a short half-life of 2–3 hours and a duration of action of 12–24 hours, and the short half-life necessitates dosing two or three times per day, which may reduce patient compliance. See also Captopril challenge test Captopril suppression test References External links U.S. Patent 4,046,889 The story of the discovery of Captopril drugdesign.org ACE inhibitors Carboxamides Carboxylic acids Enantiopure drugs Pyrrolidines Thiols Drugs developed by Bristol Myers Squibb Daiichi Sankyo
Captopril
[ "Chemistry" ]
1,785
[ "Stereochemistry", "Thiols", "Functional groups", "Enantiopure drugs", "Carboxylic acids", "Organic compounds" ]
559,305
https://en.wikipedia.org/wiki/Ritualization
Ritualization refers to the process by which a sequence of non-communicative actions or an event is invested with cultural, social or religious significance. This definition emphasizes the transformation of everyday actions into rituals that carry deeper meaning within a cultural or religious context. Rituals are symbolic, repetitive, and often prescribed activities that hold religious or cultural significance for a certain group of people. They serve various purposes: promoting social solidarity by expressing shared values, facilitating the transmission of cultural knowledge, and regulating emotions. History of ritualization The concept of ritualization was first described by Edmund Selous in 1901 and later named ritualization by Julian Huxley in 1914 (Dissanayake 2006; Lorenz 1966). It has been studied in various fields, including animal behavior, anthropology, psychology, sociology and even the cognitive sciences. In the field of animal behavior, ritualization refers to the evolutionary process by which non-communicative behaviors are transformed into communicative behaviors. Niko Tinbergen expanded the concept of ritualization in his 1951 book The Study of Instinct, in which he described how certain animal behaviors, such as courtship and aggression, become more effective forms of communication through a gradual process of selection and refinement. In the social sciences, the study of ritualization can be dated back to the 19th century. Émile Durkheim argued that rituals serve as a means of reinforcing social solidarity (otherwise known as social cohesion) and promoting a shared sense of identity among members of a community. Max Weber focused on the role of ritual in religion and suggested that it played a crucial role in shaping beliefs and values. In the 20th century, the study of ritual became increasingly interdisciplinary, with scholars from anthropology, psychology, and other fields exploring its various dimensions. Victor Turner emphasized the symbolic and cultural aspects of ritual, while Randall Collins explored its psychological and emotional dimensions. In recent years, scholars have continued to study rituals from a variety of perspectives, including the cognitive, evolutionary, and neuroscientific. These studies have shed light on the origins, functions, and effects of ritual behavior and opened up new ways of understanding its role in human society and culture. In non-human animals Ritualization is a behavior that typically occurs in a member of a given species in a highly stereotyped fashion, independent of any direct physiological significance. It is found, in differing forms, both in non-human animals and in humans. Konrad Lorenz, working with greylag geese and other animals such as water shrews, showed that ritualization was an important process in their development. He showed that the geese obsessively displayed a reflexive motor pattern of egg retrieval when stimulated by the sight of an egg outside their nest. Similarly, in the shrews, Lorenz showed that once they had become used to jumping over a stone in their path, they went on jumping at that place after the stone was taken away. This sort of behaviour is analogous to obsessive-compulsive disorder in humans. Oskar Heinroth in 1910 and Lorenz from 1935 onwards studied the triumph ceremony in geese; Lorenz described it as becoming a fixed ritual.
It involves a rolling behaviour (of the head and neck) and cackling with the head stretched forward, and occurs only among geese that know each other, meaning within a family or between mates. The triumph ceremony appears in varied situations, such as when mates meet after having been separated, when disturbed, or after an attack. The behaviour is now known also in other species, such as Canada goose. In humans Functions of ritualization Previous studies mentioned several main functions of ritualization: Social Solidarity Ritualization fosters social solidarity by bringing people together and strengthening social bonds. They create a sense of belonging, shared identity, and unity among participants, contributing to the overall stability of a society. Cultural Transmission Ritualization facilitates the transmission of cultural knowledge, values, and traditions across generations. They help preserve cultural heritage and maintain continuity with the past. By participating in rituals, individuals learn about their culture, internalize its norms, and pass it on to future generations. Emotional Expression and Regulation Rituals provide a structured way for individuals to express and regulate their emotions. They offer a context for processing complex emotions, such as grief, joy, or gratitude, and can help people cope with significant life events, transitions, or loss. Connecting the function to previous literature Émile Durkheim's social solidarity theory In Durkheim's famous writing “The Elementary Forms of the Religious Life (1912)”, he theorized the distinction between traditional and modern societies in terms of social solidarity. He stated social solidarity is the ensemble of beliefs, which acts as the glue that holds society together. Traditional societies and modern societies differ fundamentally in terms of their structure and function and this is where the significance of ritualization becomes apparent. Traditional societies are bound by mechanical solidarity, characterized by a collective conscience. This collective conscience is a shared mindset among all members of the society, forming a moral community. The core of this type of society is a sacred collective ideal that embodies the group's virtues and serves as a source of identity. Consequently, individuals in these societies are united by shared values, norms, and beliefs, which are reinforced through ritualization. In traditional societies, there is a belief in a single, correct way of living, and any deviations are deemed sinful. Ritualization is crucial for maintaining mechanical solidarity. Rituals allow group members to experience the power of the group over the self. Additionally, ritualization in the form of punishment for deviance serves as a potent method for curbing deviant behavior in traditional societies. By enforcing moral boundaries, ritual punishment helps to preserve social cohesion and unity within the group. Later, his supporters, Victor Turner and Randall Collins expanded the theory of ritualization in different directions through their own research papers. Turner expands on Durkheim's ideas by focusing on the roles rituals play in social structure and transition. He emphasizes the importance of “communitas,” a state of social unity and cohesion that emerges during rituals or other shared experiences, transcending the ordinary divisions and hierarchies within society. On this basis, individuals participating in rituals temporarily set aside their social roles and come together as equals. 
In his paper, Collins builds upon Durkheim's ideas and proposes that rituals generate emotional energy, which in turn fosters social solidarity. Through a series of "interaction ritual chains," individuals feel connected to one another and experience a sense of belonging. Structural ritualization theory Ritualization is associated with the work of Catherine Bell. Bell, drawing on the Practice Theory of Pierre Bourdieu, has taken a less functional view of ritual with her elaboration of ritualization. Recent studies More recently, scholars interested in the cognitive science of religion, such as Pascal Boyer, Pierre Liénard, and William W. McCorkle Jr., have been involved in experimental, ethnographic, and archival research on how ritualized actions might inform the study of ritualization and ritual forms of action. Boyer, Liénard, and McCorkle argue that ritualized compulsions are related to an evolved cognitive architecture in which social, cultural, and environmental selection pressures stimulate "hazard-precaution" systems such as predation, contagion, and disgust in human minds. McCorkle argued that these ritualized compulsions (especially in regard to dead bodies vis-à-vis mortuary behavior) were turned into ritual scripts by professional guilds only several thousand years ago with advancements in technology such as the domestication of plants and animals, literacy, and writing. Future insights Ritualization is a crucial process that transforms ordinary actions, behaviors, and events into rituals imbued with cultural, social or religious significance. Understanding the concept of ritualization and its various functions provides valuable insights into human societies and cultural practices. Future research can take a closer look at the psychological and physiological responses involved in the process and their interactions, thereby broadening the scope of ritualization studies. References Sociological Theory: Emile Durkheim and Social Solidarity, Dan Krier, https://www.youtube.com/watch?v=3VwoihGP_i8 Behavioral ecology Ritual
Ritualization
[ "Biology" ]
1,675
[ "Behavior", "Behavioral ecology", "Behavioural sciences", "Ritual", "Ethology", "Human behavior" ]
559,361
https://en.wikipedia.org/wiki/Precancel
A precanceled stamp, or precancel for short, is a postage stamp that has been legitimately cancelled before being affixed to mail. A number of nations of the world use precancels, typically in the form of an overprint on definitive series stamps. Use Precanceled stamps are typically used by mass mailers, who can save the postal system time and effort by prearranging to use the precancels, and delivering the stamped mail ready for sorting. The postal administration will typically offer an incentive in the form of a reduced price for precanceled stamps in volume. Precancels cannot normally be purchased by the general public, although they are often seen in one's daily mail. History Canada Canada used precancels from 1889 to 1982. Initially, they consisted only of waves and bars applied with ink roller, but the town and province was added in 1903, similar to in the United States. In 1922, the precancel was changed to three pairs of horizontal bars. In the 1930s, town names were replaced with a corresponding numeral of either four numbers or three numbers preceded by an 'X'. France Widespread French use of precancels began in 1920 with cancels including the year and city. This was scrapped in 1922 in favour of a standard overprint in a semicircle reading AFFRANCHts. POSTES (Affranchissements Postes). During their time as French colonies, Algeria and Tunisia also issued precancels. Monaco Monaco has issued precancels since 1943, with an identical overprint to that of France. United Nations The United Nations Postal Administration has only issued one precancel, a 1½ cent stamp used from 1952 to 1959. United States The first use of precancels (both in the US and globally) was by Hale & Co., an independent mail company in the United States in the 1840s which undercut the expensive United States Post Office Department (POD). The first precancels were created in 1843 or early 1844 and their complexity varies; most were "crude straight lines" across the stamps, but examples from Portsmouth, New Hampshire were precanceled with "P / N.H." in block letters. Hale & Co., along with all other independent mail carriers, was shut down by an 1845 act of Congress. The POD authorised precanceling of stamps in 1887, and produced standardised guidelines on their design in May 1903. US precancels are generally divided into two groups: Bureaus and Locals. Locals, used unofficially since the 1840s, were prepared by postmasters using stamps and equipment they had on hand. Bureaus are those manufactured by the Bureau of Engraving and Printing, which came into use in the 1910s. Precancels are known from over 20,000 towns across all 50 states, Guam, Puerto Rico and American Samoa. Postal training Around the early twentieth century, some U.S. business colleges used specially pre-cancelled stamps or stamp-like labels to train students in the handling of stamps. Precancels were also used to train Post Office employees in the United Kingdom. Study The Precancel Gazette, a magazine for precancel collectors, was first published in 1919 and the Black Book, a catalog of US precancel stamps, was first published in 1940. The Precancel Stamp Society, formed in 1922 from two previously-existing clubs, specializes in the study of precancels. A number of catalogs list all the types of precancels issued in the countries that use them. Gallery Notes References Postal markings Postal systems Philatelic terminology
Precancel
[ "Technology" ]
748
[ "Transport systems", "Postal systems" ]
559,368
https://en.wikipedia.org/wiki/Lactococcus%20virus%20P008
Lactococcus virus P008 is a phage specific to Lactococcus lactis, a lactic acid bacterium used in the first stage of making cheese. P008 and related species are responsible for significant losses each year in cheese factories. References Bacteriophages
Lactococcus virus P008
[ "Biology" ]
60
[ "Virus stubs", "Viruses" ]
559,622
https://en.wikipedia.org/wiki/Level%20set
In mathematics, a level set of a real-valued function f of n real variables is a set where the function takes on a given constant value c, that is: L_c(f) = {(x_1, ..., x_n) : f(x_1, ..., x_n) = c}. When the number of independent variables is two, a level set is called a level curve, also known as contour line or isoline; so a level curve is the set of all real-valued solutions of an equation in two variables x_1 and x_2. When n = 3, a level set is called a level surface (or isosurface); so a level surface is the set of all real-valued roots of an equation in three variables x_1, x_2 and x_3. For higher values of n, the level set is a level hypersurface, the set of all real-valued roots of an equation in n variables. A level set is a special case of a fiber. Alternative names Level sets show up in many applications, often under different names. For example, an implicit curve is a level curve, which is considered independently of its neighbor curves, emphasizing that such a curve is defined by an implicit equation. Analogously, a level surface is sometimes called an implicit surface or an isosurface. The name isocontour is also used, which means a contour of equal height. In various application areas, isocontours have received specific names, which often indicate the nature of the values of the considered function, such as isobar, isotherm, isogon, isochrone, isoquant and indifference curve. Examples Consider the 2-dimensional Euclidean distance: f(x, y) = sqrt(x^2 + y^2). A level set L_r(f) of this function consists of those points that lie at a distance of r from the origin, which make a circle. For example, (3, 4) ∈ L_5(f), because sqrt(3^2 + 4^2) = 5. Geometrically, this means that the point (3, 4) lies on the circle of radius 5 centered at the origin. More generally, a sphere in a metric space (M, d) with radius r centered at x_0 can be defined as the level set {y : d(y, x_0) = r}. A second example is the plot of Himmelblau's function shown in the figure to the right. Each curve shown is a level curve of the function, and they are spaced logarithmically: if a curve represents the value c, the curve directly "within" represents c/10, and the curve directly "outside" represents 10c. Level sets versus the gradient Theorem: If the function f is differentiable, the gradient of f at a point is either zero, or perpendicular to the level set of f at that point. To understand what this means, imagine that two hikers are at the same location on a mountain. One of them is bold, and decides to go in the direction where the slope is steepest. The other one is more cautious and does not want to either climb or descend, choosing a path which stays at the same height. In our analogy, the above theorem says that the two hikers will depart in directions perpendicular to each other. A consequence of this theorem (and its proof) is that if f is differentiable, a level set is a hypersurface and a manifold outside the critical points of f. At a critical point, a level set may be reduced to a point (for example at a local extremum of f) or may have a singularity such as a self-intersection point or a cusp. Sublevel and superlevel sets A set of the form L_c^-(f) = {(x_1, ..., x_n) : f(x_1, ..., x_n) ≤ c} is called a sublevel set of f (or, alternatively, a lower level set or trench of f). A strict sublevel set of f is {(x_1, ..., x_n) : f(x_1, ..., x_n) < c}. Similarly, L_c^+(f) = {(x_1, ..., x_n) : f(x_1, ..., x_n) ≥ c} is called a superlevel set of f (or, alternatively, an upper level set of f). And a strict superlevel set of f is {(x_1, ..., x_n) : f(x_1, ..., x_n) > c}. Sublevel sets are important in minimization theory. By Weierstrass's theorem, the boundedness of some non-empty sublevel set and the lower-semicontinuity of the function imply that a function attains its minimum. The convexity of all the sublevel sets characterizes quasiconvex functions. 
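The level curves described above are easy to visualize numerically. The following short Python sketch (an illustration only; the grid bounds and the chosen levels 1, 3 and 5 are arbitrary) plots three level curves of the distance function f(x, y) = sqrt(x^2 + y^2), which appear as concentric circles, the outermost being the circle of radius 5 from the example above.

# Plot the level curves L_1(f), L_3(f) and L_5(f) of f(x, y) = sqrt(x^2 + y^2).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-6, 6, 400)
y = np.linspace(-6, 6, 400)
X, Y = np.meshgrid(x, y)
Z = np.sqrt(X**2 + Y**2)  # the distance function evaluated on the grid

contours = plt.contour(X, Y, Z, levels=[1, 3, 5])  # each contour is a level curve
plt.clabel(contours, inline=True)
plt.gca().set_aspect("equal")
plt.title("Level curves of f(x, y) = sqrt(x^2 + y^2)")
plt.show()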
See also Epigraph Level-set method Level set (data structures) References Multivariable calculus Implicit surface modeling
Level set
[ "Mathematics" ]
813
[ "Multivariable calculus", "Calculus" ]
559,640
https://en.wikipedia.org/wiki/Pat%20Roy%20Mooney
Pat Mooney (born in 1947) has worked with civil society organizations on international trade and development issues related to agriculture, biodiversity and emerging technologies for over 40 years. Career Pat Mooney had no formal university training, but, together with Cary Fowler and Hope Shand, he began working on the 'Seeds' issue - the problem that legislation was enabling agribusiness corporations to control access to the seeds needed to grow the decreasing variety of crops that supported the global food supply - in the 1970s. In 1984, the three co-founded RAFI (Rural Advancement Foundation International), whose name was changed to ETC Group (pronounced "etcetera" group) in 2001. ETC Group is a small international CSO addressing the impact of new technologies on vulnerable communities. Mooney’s more recent work has focused on geoengineering, nanotechnology, synthetic biology and global governance of these technologies as well as corporate involvement in their development. He is a member of the International Panel of Experts on Sustainable Food Systems, and led their Long Food Movement project. Awards and recognition 1985 - Right Livelihood Award (with Cary Fowler) for "working to save the world's genetic plant heritage." 1998 - Pearson Medal of Peace Giraffe Heroes award for "people who have the courage to stick their necks out for the common good" 2017 - Honorary Doctorate of Laws from the University of Waterloo, Canada 2017 - Doctor Honoris Causa from 17, Instituto de Estudios Criticos, Mexico. Selected works Personal life Mooney lived on the Canadian prairies for many years; he now resides just outside the village of Wakefield, Quebec with his wife in retirement. He has six children and nine grandchildren. References External links Website of Action Group ETC Pat Mooney on the Dangers of Geoengineering to Combat Climate Change - video report by Democracy Now! 1947 births Canadian environmentalists Canadian non-fiction writers Living people Synthetic biologists
Pat Roy Mooney
[ "Biology" ]
386
[ "Synthetic biology", "Synthetic biologists" ]
559,764
https://en.wikipedia.org/wiki/Roentgen%20equivalent%20man
The roentgen equivalent man (rem) is a CGS unit of equivalent dose, effective dose, and committed dose, which are dose measures used to estimate potential health effects of low levels of ionizing radiation on the human body. Quantities measured in rem are designed to represent the stochastic biological risk of ionizing radiation, which is primarily radiation-induced cancer. These quantities are derived from absorbed dose, which in the CGS system has the unit rad. There is no universally applicable conversion constant from rad to rem; the conversion depends on relative biological effectiveness (RBE). The rem has been defined since 1976 as equal to 0.01 sievert, which is the more commonly used SI unit outside the United States. Earlier definitions going back to 1945 were derived from the roentgen unit, which was named after Wilhelm Röntgen, a German scientist who discovered X-rays. The unit name is misleading, since 1 roentgen actually deposits about 0.96 rem in soft biological tissue, when all weighting factors equal unity. Older units of rem following other definitions are up to 17% smaller than the modern rem. Doses greater than 100 rem received over a short time period are likely to cause acute radiation syndrome (ARS), possibly leading to death within weeks if left untreated. Note that the quantities that are measured in rem were not designed to be correlated to ARS symptoms. The absorbed dose, measured in rad, is a better indicator of ARS. A rem is a large dose of radiation, so the millirem (mrem), which is one thousandth of a rem, is often used for the dosages commonly encountered, such as the amount of radiation received from medical x-rays and background sources. Usage The rem and millirem are CGS units in widest use among the U.S. public, industry, and government. However, the SI unit, the sievert (Sv), is the normal unit outside the United States and is increasingly encountered within the US in academic, scientific, and engineering environments, where it has now virtually replaced the rem. The conventional unit for dose rate is mrem/h. Regulatory limits and chronic doses are often given in units of mrem/yr or rem/yr, where they are understood to represent the total amount of radiation allowed (or received) over the entire year. In many occupational scenarios, the hourly dose rate might fluctuate to levels thousands of times higher for a brief period of time, without infringing on the annual total exposure limits. The annual conversions to a Julian year are: 1 mrem/h = 8,766 mrem/yr 0.1141 mrem/h = 1,000 mrem/yr The International Commission on Radiological Protection (ICRP) once adopted fixed conversions for occupational exposure, although these have not appeared in recent documents: 8 h = 1 day 40 h = 1 week 50 weeks = 1 yr Therefore, for occupational exposures of that time period, 1 mrem/h = 2,000 mrem/yr 0.5 mrem/h = 1,000 mrem/yr The U.S. National Institute of Standards and Technology (NIST) strongly discourages Americans from expressing doses in rem, recommending the SI unit instead. The NIST recommends defining the rem in relation to the SI in every document where this unit is used. Health effects Ionizing radiation has deterministic and stochastic effects on human health. The deterministic effects that can lead to acute radiation syndrome only occur in the case of high doses (> ~10 rad or > 0.1 Gy) and high dose rates (> ~10 rad/h or > 0.1 Gy/h). 
A model of deterministic risk would require different weighting factors (not yet established) than are used in the calculation of equivalent and effective dose. To avoid confusion, deterministic effects are normally compared to absorbed dose in units of rad, not rem. Stochastic effects are those that occur randomly, such as radiation-induced cancer. The consensus of the nuclear industry, nuclear regulators, and governments is that the incidence of cancers caused by ionizing radiation can be modeled as increasing linearly with effective dose at a rate of 0.055% per rem (5.5%/Sv). Individual studies, alternate models, and earlier versions of the industry consensus have produced other risk estimates scattered around this consensus model. There is general agreement that the risk is much higher for infants and fetuses than adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this. There is much less data, and much more controversy, regarding the possibility of cardiac and teratogenic effects, and the modelling of internal dose. The ICRP recommends limiting artificial irradiation of the public to an average of 100 mrem (1 mSv) of effective dose per year, not including medical and occupational exposures. For comparison, radiation levels inside the United States Capitol are 85 mrem/yr (0.85 mSv/yr), close to the regulatory limit, because of the uranium content of the granite structure. The NRC sets the annual total effective dose of full body radiation, or total body radiation (TBR), allowed for radiation workers at 5,000 mrem (5 rem). History The concept of the rem first appeared in literature in 1945 and was given its first definition in 1947. The definition was refined in 1950 as "that dose of any ionizing radiation which produces a relevant biological effect equal to that produced by one roentgen of high-voltage x-radiation." Using data available at the time, the rem was variously evaluated as 83, 93, or 95 erg/gram. Along with the introduction of the rad in 1953, the ICRP decided to continue the use of the rem. The US National Committee on Radiation Protection and Measurements noted in 1954 that this effectively implied an increase in the magnitude of the rem to match the rad (100 erg/gram). The ICRP introduced and then officially adopted the rem in 1962 as the unit of equivalent dose to measure the way different types of radiation distribute energy in tissue and began recommending values of relative biological effectiveness (RBE) for various types of radiation. In practice, the unit of rem was used to denote that an RBE factor had been applied to a number which was originally in units of rad or roentgen. The International Committee for Weights and Measures (CIPM) adopted the sievert in 1980 but never accepted the use of the rem. The NIST recognizes that this unit is outside the SI but temporarily accepts its use in the U.S. with the SI. The rem remains in widespread use as an industry standard in the U.S. The United States Nuclear Regulatory Commission still permits the use of the units curie, rad, and rem alongside SI units. Radiation-related quantities The following table shows radiation quantities in SI and non-SI units: See also Roentgen equivalent physical Banana equivalent dose Health threat from cosmic rays Orders of magnitude (radiation) References Units of radiation dose Radiation health effects Radiobiology Non-SI metric units Equivalent Equivalent units
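The fixed relationships quoted above (1 rem = 0.01 Sv, a Julian year of 8,766 hours, and the consensus risk coefficient of 0.055% per rem) reduce to simple arithmetic. The following minimal Python sketch illustrates them; the function names are chosen here for illustration and are not from any standard dosimetry library.

# Unit-conversion helpers based on the relationships stated in the article.
HOURS_PER_JULIAN_YEAR = 8766      # 365.25 days x 24 h
REM_PER_SIEVERT = 100             # 1 rem = 0.01 Sv

def rem_to_sievert(rem):
    """Convert an equivalent dose from rem to sievert."""
    return rem / REM_PER_SIEVERT

def mrem_per_hour_to_mrem_per_year(rate_mrem_h):
    """Convert a continuous dose rate in mrem/h to mrem per Julian year."""
    return rate_mrem_h * HOURS_PER_JULIAN_YEAR

def excess_cancer_risk(dose_rem, coefficient_per_rem=0.00055):
    """Linear no-threshold estimate of excess cancer incidence (0.055% per rem)."""
    return dose_rem * coefficient_per_rem

# Examples: the ICRP public limit of 100 mrem/yr equals 1 mSv/yr, and a chronic
# dose rate of 0.1141 mrem/h accumulates to roughly 1,000 mrem over a Julian year.
print(rem_to_sievert(0.1))                             # 0.001 Sv = 1 mSv
print(round(mrem_per_hour_to_mrem_per_year(0.1141)))   # ~1000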
Roentgen equivalent man
[ "Chemistry", "Materials_science", "Mathematics", "Biology" ]
1,526
[ "Radiation health effects", "Equivalent quantities", "Units of measurement", "Non-SI metric units", "Radiobiology", "Quantity", "Units of radiation dose", "Equivalent units", "Radiation effects", "Radioactivity" ]
559,838
https://en.wikipedia.org/wiki/Backtick
The backtick is a typographical mark used mainly in computing. It is also known as backquote, grave, or grave accent. The character was designed for typewriters to add a grave accent to a (lower-case) base letter, by overtyping it atop that letter. On early computer systems, however, this physical dead key+overtype function was rarely supported, being functionally replaced by precomposed characters. Consequently, this ASCII symbol was rarely (if ever) used in computer systems for its original aim and became repurposed for many unrelated uses in computer programming. The sign is located at the top left of a US or UK layout keyboard, next to the 1 key. On older keyboards, the Escape key was at this location, and the backtick key was somewhere on the right side of the layout. Provision (if any) of the backtick on other keyboards varies by national keyboard layout and keyboard mapping. History Typewriters On typewriters designed for languages that routinely use diacritics (accent marks), there are two possible solutions. Keys can be dedicated to pre-composed characters or alternatively a dead key mechanism can be provided. With the latter, a mark is made when a dead key is typed but, unlike normal keys, the paper carriage does not move on and thus, the next letter to be typed is printed under the accent. Incorporation into ISO 646 and ASCII The incorporation of the grave symbol into ASCII is a consequence of this prior existence on typewriters. This symbol did not exist independently as a type or hot-lead printing character. Thus, ISO 646 was born and the ASCII standard was updated to include the backtick and other symbols. As surrogate of apostrophe or (opening) single quote Some early typewriters and ASCII peripherals designed the backtick and apostrophe to be mirror images of each other. This allowed them to be used as matching pairs of open and close quotes, and also as grave and acute accents, and allowed the apostrophe to be used as a prime. None of these were considered typographically correct. However, the use of apostrophe for opening quotes, the need on some typewriters to overprint apostrophe and period to get an exclamation mark, and the lack of a mirrored double-quote character tended to change the apostrophe to the modern "typewriter" design that is vertical. Unicode now provides separate characters for opening and closing quotes. This style has remained in use in certain situations, such as in output generated by some UNIX console programs, rendering of man pages within some environments, and old technical documentation. This style is falling out of favor over time, and some institutions that traditionally used it have since abandoned it. Computing Command-line interface languages Many command-line interface languages and scripting (programming) languages like Perl, PHP, Ruby and Julia (though see below) use pairs of backticks to indicate command substitution. A command substitution embeds the standard output from one command into a line of text within another command. For example, using $ as the symbol representing a terminal prompt, the code line: In all POSIX shells (including Bash and Zsh), the use of backticks for command substitution is now largely deprecated in favor of the notation $(...), so that the example above would be re-written: The new syntax allows nesting, for example: Markup languages It is sometimes used in source code comments to indicate code, e.g., /* Use the `printf()` function. */ This is also the format the Markdown formatter uses to indicate code. 
Some variations of Markdown support "fenced code blocks" that span multiple lines of code, starting (and ending) with three backticks in a row (```). TeX: The backtick character represents curly opening quotes. For example, ` is rendered as a single opening curly quote (‘) and `` as a double opening curly quote (“). It also supplies the numeric ASCII value of an ASCII character wherever a number is expected. Programming languages BBC BASIC: The backtick character is valid at the beginning of or within a variable, structure, procedure or function name. D and Go: The backtick surrounds a raw string literal. F#: Surrounding an identifier with double backticks allows the use of identifiers that would not otherwise be allowed, such as keywords, or identifiers containing punctuation or spaces. Haskell: Surrounding a function name by backticks makes it an infix operator. JavaScript: The ECMAScript 6 standard introduced backtick delimiters for string or template literals. Their applications include (but are not limited to): string interpolation (substitution), embedded expressions, and multi-line strings. In the following example, the values of the name and pet variables are substituted into the string enclosed by grave accent characters: const name = "Mary", pet = "lamb"; /* Set variables */ let temp = `${name} has a little ${pet}!`; console.log(temp); // => "Mary has a little lamb!" Lisp macro systems: The backtick character (called quasiquote in Scheme) introduces a quoted expression in which comma-substitution may occur. It is identical to the plain quote, except that a nested expression prefixed with a comma is replaced with the value of that nested expression. If the nested expression happens to be a symbol (that is, a variable name in Lisp), the symbol's value is used. If the expression happens to be program code, the first value returned by that code is inserted at the respective location instead of the comma-prefixed code. This is roughly analogous to the Bourne shell's variable interpolation with $ inside double quotes. Julia: Backticks create a command object, Cmd, that can be run with the run function, as in run(`echo Hello world!`). Julia variables can be interpolated directly, but shell environment variables only indirectly. m4: A backtick together with an apostrophe quotes strings (to suppress or defer macro expansion). MySQL/MariaDB: A backtick in queries is a delimiter for column, table, and database identifiers. OCaml: The backtick indicates polymorphic variants. Pico: The backtick indicates comments in the programming language. PowerShell: The backtick is used as the escape character. For example, a newline character is denoted `n. Most common programming languages use a backslash as the escape character (e.g., \n), but because Windows allows the backslash as a path separator, it is impractical for PowerShell to use backslash for a different purpose. Two backticks produce the ` character itself. For example, the nullable boolean of .NET is specified in PowerShell as [Nullable``1[System.Boolean]]. Python: Prior to version 3.0, backticks were a synonym for the repr() function, which converts its argument to a string suitable for a programmer to view (an example appears below). However, this feature was removed in Python 3.0. Backticks also appear extensively in the reStructuredText plain text markup language (implemented in the Python docutils package). R: The backtick is used to surround non-syntactic variable names. This includes variable names containing special characters or reserved words, among others. 
Racket: The backtick or "Quasiquote" is used to begin creating lists. Scala: An identifier may also be formed by an arbitrary string between backticks. The identifier is then composed of all characters excluding the backticks themselves. Tom: The backtick creates a new term or calls an existing term. Unlambda: The backtick character denotes function application. Verilog HDL: The backtick is used at the beginning of compiler directives. Games In many PC-based computer games in the US and UK, the backtick key is used to open the console so the user can execute script commands via its CLI. This is true for games such as Factorio, Battlefield 3, Half-Life, Halo CE, Quake, Half-Life 2, Blockland, Soldier of Fortune II: Double Helix, Unreal, Counter-Strike, Crysis, Morrowind, Oblivion, Skyrim, Fallout: New Vegas, Fallout 3, Fallout 4, RuneScape, and games based on the Quake engine or Source engine. While not necessarily the original progenitor of the console key concept, Quake is still widely associated with any usage of the key as a toggle for a drop-down console, often being referred to as the "Quake Key". In 2021, Windows Terminal introduced a "Quake Mode" which enables a global keyboard shortcut that opens a terminal window pinned to the top half of the screen. See also Tilde Notes References Typographical symbols
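As noted in the Python entry above, the backtick syntax `x` was shorthand for repr(x) in Python 2 and was removed in Python 3.0. A minimal sketch of the modern equivalent (the value shown is arbitrary):

# In Python 2, `value` was equivalent to repr(value); the backtick syntax was
# removed in Python 3.0, so current code calls repr() directly.
value = [1, "two", 3.0]
print(repr(value))           # [1, 'two', 3.0]
print(f"debug: {value!r}")   # the !r conversion applies repr() inside an f-string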
Backtick
[ "Mathematics" ]
1,911
[ "Symbols", "Typographical symbols" ]
559,858
https://en.wikipedia.org/wiki/BioJava
BioJava is an open-source software project dedicated to providing Java tools to process biological data. BioJava is a set of library functions written in the programming language Java for manipulating sequences, protein structures, file parsers, Common Object Request Broker Architecture (CORBA) interoperability, Distributed Annotation System (DAS), access to AceDB, dynamic programming, and simple statistical routines. BioJava supports a range of data, starting from DNA and protein sequences to the level of 3D protein structures. The BioJava libraries are useful for automating many daily and mundane bioinformatics tasks such as parsing a Protein Data Bank (PDB) file, interacting with Jmol, and many more. This application programming interface (API) provides various file parsers, data models and algorithms to facilitate working with the standard data formats and enables rapid application development and analysis. Additional projects from BioJava include rcsb-sequenceviewer, biojava-http, biojava-spark, and rcsb-viewers. Features BioJava provides software modules for many of the typical tasks of bioinformatics programming. These include: Accessing nucleotide and peptide sequence data from local and remote databases Transforming formats of database/file records Protein structure parsing and manipulation Manipulating individual sequences Searching for similar sequences Creating and manipulating sequence alignments History and publications The BioJava project grew out of work by Thomas Down and Matthew Pocock to create an API to simplify development of Java-based Bioinformatics tools. BioJava is an active open source project that has been developed over more than 12 years and by more than 60 developers. BioJava is one of a number of Bio* projects designed to reduce code duplication. Examples of such projects that fall under Bio* apart from BioJava are BioPython, BioPerl, BioRuby, EMBOSS, etc. In October 2012, the first paper on BioJava was published. This paper detailed BioJava's modules, functionalities, and purpose. As of November 2018 Google Scholar counts more than 130 citations. The most recent paper on BioJava was written in February 2017. This paper detailed a new tool named BioJava-ModFinder. This tool can be used for identification and subsequent mapping of protein modifications to 3D structures in the Protein Data Bank (PDB). The package was also integrated with the RCSB PDB web application and added protein modification annotations to the sequence diagram and structure display. More than 30,000 structures with protein modifications were identified by using BioJava-ModFinder and can be found on the RCSB PDB website. In the year 2008, BioJava's first Application note was published. It was migrated from its original CVS repository to GitHub in April 2013. The project has been moved to a separate repository, BioJava-legacy, and is still maintained for minor changes and bug fixes. Version 3 was released in December 2010. It was a major update to the prior versions. The aim of this release was to rewrite BioJava so that it could be modularized into small, reusable components. This allowed developers to contribute more easily and reduced dependencies. The new approach seen in BioJava 3 was modeled after the Apache Commons. Version 4 was released in January 2015. This version brought many new features and improvements to the packages biojava-core, biojava-structure, biojava-structure-gui, biojava-phylo, as well as others. 
BioJava 4.2.0 was the first release to be available using Maven from Maven Central. Version 5 was released in March 2018. This represents a major milestone for the project. BioJava 5.0.0 is the first release based on Java 8, which introduces the use of lambda functions and streaming API calls. There were also major changes to the biojava-structure module. Also, the previous data models for macro-molecular structures have been adapted to more closely represent the mmCIF data model. This was the first release in over two years. Some of the other improvements include optimizations in the biojava-structure module to improve symmetry detection and added support for MMTF formats. Other general improvements include Javadoc updates, updated dependency versions, and migration of all tests to JUnit 4. The release contains 1,170 commits from 19 contributors. Modules During 2014-2015, large parts of the original code base were rewritten. BioJava 3 is a clear departure from the version 1 series. It now consists of several independent modules built using an automation tool called Apache Maven. These modules provide state-of-the-art tools for protein structure comparison, pairwise and multiple sequence alignments, working with DNA and protein sequences, analysis of amino acid properties, detecting protein modifications, predicting disordered regions in proteins, and parsers for common file formats using a biologically meaningful data model. The original code has been moved into a separate BioJava legacy project, which is still available for backward compatibility. BioJava 5 introduced new features to two modules, biojava-alignment and biojava-structure. The following sections will describe several of the new modules and highlight some of the new features that are included in the latest version of BioJava. Core Module This module provides Java classes to model amino acid or nucleotide sequences. The classes were designed so that the names are familiar and make sense to biologists and also provide a concrete representation of the steps in going from a gene sequence to a protein sequence for computer scientists and programmers. A major change between the legacy BioJava project and BioJava3 lies in the way the framework has been designed to exploit then-new innovations in Java. A sequence is defined as a generic interface allowing the rest of the modules to create any utility that operates on all sequences. Specific classes for common sequences such as DNA and proteins have been defined in order to improve usability for biologists. The translation engine leverages this work by allowing conversions between DNA, RNA and amino acid sequences. This engine can handle details such as choosing the codon table, converting start codons to methionine, trimming stop codons, specifying the reading frame and handling ambiguous sequences. Special attention has been paid to designing the storage of sequences to minimize space needs. Special design patterns such as the Proxy pattern allowed the developers to create the framework such that sequences can be stored in memory, fetched on demand from a web service such as UniProt, or read from a FASTA file as needed. The latter two approaches save memory by not loading sequence data until it is referenced in the application. This concept can be extended to handle very large genomic datasets, such as NCBI GenBank or a proprietary database. Protein structure modules The protein structure modules provide tools to represent and manipulate 3D biomolecular structures. 
They focus on protein structure comparison. The following algorithms have been implemented and included in BioJava. FATCAT algorithm for flexible and rigid body alignment. The standard Combinatorial Extension (CE) algorithm. A new version of CE that can detect circular permutations in proteins. These algorithms are used to provide the RCSB Protein Data Bank (PDB) Protein Comparison Tool as well as systematic comparisons of all proteins in the PDB on a weekly basis. Parsers for PDB and mmCIF file formats allow the loading of structure data into a reusable data model. This feature is used by the SIFTS project to map between UniProt sequences and PDB structures. Information from the RCSB PDB can be dynamically fetched without the need to manually download data. For visualization, an interface to the 3D viewer Jmol is provided. Genome and Sequencing modules This module is focused on the creation of gene sequence objects from the core module. This is realized by supporting the parsing of the following popular standard file formats generated by open source gene prediction applications: GTF files generated by GeneMark GFF2 files generated by GeneID GFF3 files generated by Glimmer The gene sequence objects are then written out in GFF3 format and imported into GMOD. These file formats are well defined but what gets written in the file is very flexible. A separate sequencing module provides input-output support for several common variants of the FASTQ file format from next-generation sequencers. Alignment module This module contains several classes and methods that allow users to perform pairwise and multiple sequence alignment. Sequences can be aligned in both a single and multi-threaded fashion. BioJava implements the Needleman-Wunsch algorithm for optimal global alignments and the Smith–Waterman algorithm for local alignments. The outputs of both local and global alignments are available in standard formats. In addition to these two algorithms, there is an implementation of the Guan–Uberbacher algorithm, which performs global sequence alignment very efficiently since it only uses linear memory. For Multiple Sequence Alignment, any of the methods discussed above can be used to progressively perform a multiple sequence alignment. ModFinder module The ModFinder module provides new methods to identify and classify protein modifications in protein 3D structures. Over 400 different types of protein modifications such as phosphorylation, glycosylation, disulfide bonds, metal chelation, etc. were collected and curated based on annotations in PSI-MOD, RESID and RCSB PDB. The module also provides an API for detecting pre-, co-, and post-translational protein modifications within protein structures. This module can also identify phosphorylation and print all pre-loaded modifications from a structure. Amino acid properties module This module attempts to provide accurate physicochemical properties of proteins. The properties that can be calculated using this module are as follows: Molecular mass Extinction coefficient Instability index Aliphatic index Grand average of hydropathy Isoelectric point Amino acid composition The precise molecular weights for common isotopically labelled amino acids are included in this module. There also exists flexibility to define new amino acid molecules with their molecular weights using simple XML configuration files. 
This can be useful where the precise mass is of high importance, such as in mass spectrometry experiments. Protein disorder module The goal of this module is to provide users with ways to find disordered regions in protein molecules. BioJava includes a Java implementation of the RONN predictor. BioJava 3.0.5 makes use of Java's support for multithreading to improve performance by up to 3.2 times, on a modern quad-core machine, as compared to the legacy C implementation. There are two ways to use this module: Using library function calls Using command line Some features of this module include: Calculating the probability of disorder for every residue in a sequence Calculating the probability of disorder for every residue in the sequence for all proteins from a FASTA input file Getting the disordered regions of the protein for a single protein sequence or for all the proteins from a FASTA input file Web service access module As per the current trends in bioinformatics, web-based tools are gaining popularity. The web service module allows bioinformatics services to be accessed using REST protocols. Currently, two services are implemented: NCBI Blast through the Blast URLAPI (previously known as QBlast) and the HMMER web service. Comparisons with other alternatives The need for customized software in the field of bioinformatics has been addressed by several groups and individuals. Similar to BioJava, open-source software projects such as BioPerl, BioPython, and BioRuby all provide tool-kits with multiple functions that make it easier to create customized pipelines or analyses. As the names suggest, the projects mentioned above use different programming languages. All of these APIs offer similar tools, so on what criteria should one base their choice? For programmers who are experienced in only one of these languages, the choice is straightforward. However, for a well-rounded bioinformaticist who knows all of these languages and wants to choose the best language for a job, the choice can be made based on the following guidelines given by a software review done on the Bio* tool-kits. In general, for small programs (<500 lines) that will be used by only an individual or small group, it is hard to beat Perl and BioPerl. These constraints probably cover the needs of 90 per cent of personal bioinformatics programming. For beginners, and for writing larger programs in the Bio domain, especially those to be shared and supported by others, Python’s clarity and brevity make it very attractive. For those who might be leaning towards a career in bioinformatics and who want to learn only one language, Java has the widest general programming support, very good support in the Bio domain with BioJava, and is now the de facto language of business (the new COBOL, for better or worse). Apart from these Bio* projects there is another project called STRAP which uses Java and aims for similar goals. The STRAP toolbox, similar to BioJava, is also a Java toolkit for the design of bioinformatics programs and scripts. The similarities and differences between BioJava and STRAP are as follows: Similarities Both provide comprehensive collections of methods for protein sequences. Both are used by Java programmers to code bioinformatics algorithms. Both separate implementations and definitions by using Java interfaces. Both are open source projects. Both can read and write many sequence file formats. Differences BioJava is applicable to nucleotide and peptide sequences and can be applied for entire genomes. 
STRAP cannot cope with single sequences as long as an entire chromosome. Instead STRAP manipulates peptide sequences and 3D structures of the size of single proteins. Nevertheless, it can hold a high number of sequences and structures in memory. STRAP is designed for protein sequences but can read coding nucleotide files, which are then translated to peptide sequences. STRAP is very fast since the graphical user interface must be highly responsive. BioJava is used where speed is less critical. BioJava is well designed in terms of type safety, ontology and object design. BioJava uses objects for sequences, annotations and sequence positions. Even single amino acids or nucleotides are object references. To enhance speed, STRAP avoids frequent object instantiations and invocation of non-final object-methods. In BioJava peptide sequences and nucleotide sequences are lists of symbols. The symbols can be retrieved one after the other with an iterator or sub-sequences can be obtained. The advantages are that the entire sequence does not necessarily reside in memory and that programs are less susceptible to programming errors. Symbol objects are immutable elements of an alphabet. In STRAP, however, simple byte arrays are used for sequences and float arrays for coordinates. Besides speed, the low memory consumption is an important advantage of basic data types. Classes in STRAP expose internal data. Therefore, programmers might commit programming errors like manipulating byte arrays directly instead of using the setter methods. Another disadvantage is that no checks are performed in STRAP as to whether the characters in sequences are valid with respect to an underlying alphabet. In BioJava sequence positions are realized by the class Location. Discontiguous Location objects are composed of several contiguous RangeLocation objects or PointLocation objects. For the class StrapProtein however, single residue positions are indicated by integer numbers between 0 and countResidues()-1. Multiple positions are given by boolean arrays. True at a given index means selected whereas false means not selected. BioJava throws exceptions when methods are invoked with invalid parameters. STRAP avoids the time-consuming creation of Throwable objects. Instead, errors in methods are indicated by the return values NaN, -1 or null. From the point of view of program design, however, Throwable objects are nicer. In BioJava a Sequence object is either a peptide sequence or a nucleotide sequence. A StrapProtein can hold both at the same time if a coding nucleotide sequence was read and translated into protein. Both the nucleotide sequence and the peptide sequence are contained in the same StrapProtein object. The coding or non-coding regions can be changed and the peptide sequence alters accordingly. Projects using BioJava The following projects make use of BioJava. Metabolic Pathway Builder: Software suite dedicated to the exploration of connections among genes, proteins, reactions and metabolic pathways DengueInfo: a dengue genome information portal that uses BioJava in the middleware and talks to a BioSQL database. Dazzle: A BioJava-based DAS server. BioSense: A plug-in for the InforSense Suite, an analytics software platform by IDBS that utilizes BioJava. Bioclipse: A free, open source workbench for chemo- and bioinformatics with powerful editing and visualizing abilities for molecules, sequences, proteins, spectra, etc. PROMPT: A free, open source framework and application for the comparison and mapping of protein sets. 
Uses BioJava for handling most input data formats. Cytoscape: An open source bioinformatics software platform to visualize molecular interaction networks. BioWeka: An open source biological data mining application. Geneious: A molecular biology toolkit. MassSieve: An open source application to analyze mass spec proteomics data. STRAP: A tool for multiple sequence alignment and sequence-based structure alignment. Jstacs: A Java framework for statistical analysis and classification of biological sequences jLSTM: "Long Short-Term Memory" for protein classification LaJolla: An open source structural alignment tool for RNA and proteins using an index structure for fast alignment of thousands of structures; includes an easy-to-use command line interface. GenBeans: A rich client platform for bioinformatics primarily focused on molecular biology and sequence analysis. JEnsembl: A version-aware Java API to Ensembl data systems. MUSI: An integrated system to identify multiple specificity from very large peptide or nucleic acid data sets. Bioshell: A utility library for structural bioinformatics See also Open Bioinformatics Foundation BioPerl, Biopython, BioRuby Bioclipse Comparison of software for molecular mechanics modeling References External links Bioinformatics software Java platform software Free bioinformatics software
BioJava
[ "Biology" ]
3,792
[ "Bioinformatics", "Bioinformatics software" ]
559,864
https://en.wikipedia.org/wiki/BioPerl
BioPerl is a collection of Perl modules that facilitate the development of Perl scripts for bioinformatics applications. It has played an integral role in the Human Genome Project. Background BioPerl is an active open source software project supported by the Open Bioinformatics Foundation. The first set of Perl code for BioPerl was created by Tim Hubbard and Jong Bhak at MRC Centre Cambridge, where the first genome sequencing was carried out by Fred Sanger. MRC Centre was one of the hubs and birthplaces of modern bioinformatics as it had a large quantity of DNA sequences and 3D protein structures. Hubbard was using the th_lib.pl Perl library, which contained many useful Perl subroutines for bioinformatics. Bhak, Hubbard's first PhD student, created jong_lib.pl. Bhak merged the two Perl subroutine libraries into Bio.pl. The name BioPerl was coined jointly by Bhak and Steven Brenner at the Centre for Protein Engineering (CPE). In 1995, Brenner organized a BioPerl session at the Intelligent Systems for Molecular Biology conference, held in Cambridge. BioPerl gained some users in the coming months, including Georg Fuellen, who organized a training course in Germany. Fuellen's colleagues and students greatly extended BioPerl; this was further expanded by others, including Steve Chervitz, who was actively developing Perl code for his yeast genome database. The major expansion came when Cambridge student Ewan Birney joined the development team. The first stable release was on 11 June 2002; the most recent stable (in terms of API) release is 1.7.2 from 7 September 2017. There are also developer releases produced periodically. Version series 1.7.x is considered to be the most stable (in terms of bugs) version of BioPerl and is recommended for everyday use. In order to take advantage of BioPerl, the user needs a basic understanding of the Perl programming language including an understanding of how to use Perl references, modules, objects, and methods. Features and examples BioPerl provides software modules for many of the typical tasks of bioinformatics programming. These include: Accessing nucleotide and peptide sequence data from local and remote databases Example of accessing GenBank to retrieve a sequence: Transforming formats of database/file records Example code for transforming formats Manipulating individual sequences Example of gathering statistics for a given sequence Searching for similar sequences Creating and manipulating sequence alignments Searching for genes and other structures on genomic DNA Developing machine-readable sequence annotations Usage In addition to being used directly by end-users, BioPerl has also provided the base for a wide variety of bioinformatic tools, including amongst others: SynBrowse GeneComber TFBS MIMOX BioParser Degenerate primer design Querying the public databases Current Comparative Table New tools and algorithms from external developers are often integrated directly into BioPerl itself: Dealing with phylogenetic trees and nested taxa FPC Web tools Advantages BioPerl was one of the first biological module repositories that increased its usability. It has modules that are very easy to install, along with a flexible global repository. BioPerl uses good test modules for a large variety of processes. Disadvantages There are many ways to use BioPerl, from simple scripting to very complex object programming. This can make code unclear and sometimes hard to understand. Of the many modules BioPerl has, some do not always work the way they are intended. 
Related libraries in other programming languages Several related bioinformatics libraries implemented in other programming languages exist as part of the Open Bioinformatics Foundation, including: Biopython BioJava BioRuby BioPHP BioJS Bioconductor References Perl software Free bioinformatics software Bioinformatics software
BioPerl
[ "Biology" ]
801
[ "Bioinformatics", "Bioinformatics software" ]
559,868
https://en.wikipedia.org/wiki/Biopython
The Biopython project is an open-source collection of non-commercial Python tools for computational biology and bioinformatics, created by an international association of developers. It contains classes to represent biological sequences and sequence annotations, and it is able to read and write to a variety of file formats. It also allows for a programmatic means of accessing online databases of biological information, such as those at NCBI. Separate modules extend Biopython's capabilities to sequence alignment, protein structure, population genetics, phylogenetics, sequence motifs, and machine learning. Biopython is one of a number of Bio* projects designed to reduce code duplication in computational biology. History Biopython development began in 1999 and it was first released in July 2000. It was developed during a similar time frame and with analogous goals to other projects that added bioinformatics capabilities to their respective programming languages, including BioPerl, BioRuby and BioJava. Early developers on the project included Jeff Chang, Andrew Dalke and Brad Chapman, though over 100 people have made contributions to date. In 2007, a similar Python project, namely PyCogent, was established. The initial scope of Biopython involved accessing, indexing and processing biological sequence files. While this is still a major focus, over the following years, added modules have extended its functionality to cover additional areas of biology (see Key features and examples). As of version 1.77, Biopython no longer supports Python 2. Design Wherever possible, Biopython follows the conventions used by the Python programming language to make it easier for users familiar with Python. For example, Seq and SeqRecord objects can be manipulated via slicing, in a manner similar to Python's strings and lists. It is also designed to be functionally similar to other Bio* projects, such as BioPerl. Biopython is able to read and write most common file formats for each of its functional areas, and its license is permissive and compatible with most other software licenses, which allows Biopython to be used in a variety of software projects. Key features and examples Sequences A core concept in Biopython is the biological sequence, and this is represented by the Seq class. A Biopython Seq object is similar to a Python string in many respects: it supports the Python slice notation, can be concatenated with other sequences and is immutable. In addition, it includes sequence-specific methods and specifies the particular biological alphabet used. >>> # This script creates a DNA sequence and performs some typical manipulations >>> from Bio.Seq import Seq >>> from Bio.Alphabet import IUPAC >>> dna_sequence = Seq("AGGCTTCTCGTA", IUPAC.unambiguous_dna) >>> dna_sequence Seq('AGGCTTCTCGTA', IUPACUnambiguousDNA()) >>> dna_sequence[2:7] Seq('GCTTC', IUPACUnambiguousDNA()) >>> dna_sequence.reverse_complement() Seq('TACGAGAAGCCT', IUPACUnambiguousDNA()) >>> rna_sequence = dna_sequence.transcribe() >>> rna_sequence Seq('AGGCUUCUCGUA', IUPACUnambiguousRNA()) >>> rna_sequence.translate() Seq('RLLV', IUPACProtein()) Sequence annotation The SeqRecord class describes sequences, along with information such as name, description and features in the form of SeqFeature objects. Each SeqFeature object specifies the type of the feature and its location. Feature types can be ‘gene’, ‘CDS’ (coding sequence), ‘repeat_region’, ‘mobile_element’ or others, and the position of features in the sequence can be exact or approximate. 
>>> # This script loads an annotated sequence from file and views some of its contents. >>> from Bio import SeqIO >>> seq_record = SeqIO.read("pTC2.gb", "genbank") >>> seq_record.name 'NC_019375' >>> seq_record.description 'Providencia stuartii plasmid pTC2, complete sequence.' >>> seq_record.features[14] SeqFeature(FeatureLocation(ExactPosition(4516), ExactPosition(5336), strand=1), type='mobile_element') >>> seq_record.seq Seq("GGATTGAATATAACCGACGTGACTGTTACATTTAGGTGGCTAAACCCGTCAAGC...GCC", IUPACAmbiguousDNA()) Input and output Biopython can read and write to a number of common sequence formats, including FASTA, FASTQ, GenBank, Clustal, PHYLIP and NEXUS. When reading files, descriptive information in the file is used to populate the members of Biopython classes, such as SeqRecord. This allows records of one file format to be converted into others. Very large sequence files can exceed a computer's memory resources, so Biopython provides various options for accessing records in large files. They can be loaded entirely into memory in Python data structures, such as lists or dictionaries, providing fast access at the cost of memory usage. Alternatively, the files can be read from disk as needed, with slower performance but lower memory requirements. >>> # This script loads a file containing multiple sequences and saves each one in a different format. >>> from Bio import SeqIO >>> genomes = SeqIO.parse("salmonella.gb", "genbank") >>> for genome in genomes: ... SeqIO.write(genome, genome.id + ".fasta", "fasta") Accessing online databases Through the Bio.Entrez module, users of Biopython can download biological data from NCBI databases. Each of the functions provided by the Entrez search engine is available through functions in this module, including searching for and downloading records. >>> # This script downloads genomes from the NCBI Nucleotide database and saves them in a FASTA file. >>> from Bio import Entrez >>> from Bio import SeqIO >>> output_file = open("all_records.fasta", "w") >>> Entrez.email = "my_email@example.com" >>> records_to_download = ["FO834906.1", "FO203501.1"] >>> for record_id in records_to_download: ... handle = Entrez.efetch(db="nucleotide", id=record_id, rettype="gb") ... seqRecord = SeqIO.read(handle, format="gb") ... handle.close() ... output_file.write(seqRecord.format("fasta")) Phylogeny The Bio.Phylo module provides tools for working with and visualising phylogenetic trees. A variety of file formats are supported for reading and writing, including Newick, NEXUS and phyloXML. Common tree manipulations and traversals are supported via the Tree and Clade objects. Examples include converting and collating tree files, extracting subsets from a tree, changing a tree's root, and analysing branch features such as length or score. Rooted trees can be drawn in ASCII or using matplotlib (see Figure 1), and the Graphviz library can be used to create unrooted layouts (see Figure 2). Genome diagrams The GenomeDiagram module provides methods of visualising sequences within Biopython. Sequences can be drawn in a linear or circular form (see Figure 3), and many output formats are supported, including PDF and PNG. Diagrams are created by making tracks and then adding sequence features to those tracks. By looping over a sequence's features and using their attributes to decide if and how they are added to the diagram's tracks, one can exercise much control over the appearance of the final diagram. 
Cross-links can be drawn between different tracks, allowing one to compare multiple sequences in a single diagram. Macromolecular structure The Bio.PDB module can load molecular structures from PDB and mmCIF files, and was added to Biopython in 2003. The Structure object is central to this module, and it organises macromolecular structure in a hierarchical fashion: Structure objects contain Model objects which contain Chain objects which contain Residue objects which contain Atom objects. Disordered residues and atoms get their own classes, DisorderedResidue and DisorderedAtom, that describe their uncertain positions. Using Bio.PDB, one can navigate through individual components of a macromolecular structure file, such as examining each atom in a protein. Common analyses can be carried out, such as measuring distances or angles, comparing residues and calculating residue depth. Population genetics The Bio.PopGen module adds support to Biopython for Genepop, a software package for statistical analysis of population genetics. This allows for analyses of Hardy–Weinberg equilibrium, linkage disequilibrium and other features of a population's allele frequencies. This module can also carry out population genetic simulations using coalescent theory with the fastsimcoal2 program. Wrappers for command line tools Many of Biopython's modules contain command line wrappers for commonly used tools, allowing these tools to be used from within Biopython. These wrappers include BLAST, Clustal, PhyML, EMBOSS and SAMtools. Users can subclass a generic wrapper class to add support for any other command line tool. See also Open Bioinformatics Foundation BioPerl BioRuby BioJS BioJava References External links Biopython Tutorial and Cookbook (PDF) Biopython source code on GitHub Articles with example Python (programming language) code Bioinformatics software Computational science Python (programming language) scientific libraries Free bioinformatics software
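The Structure, Model, Chain, Residue and Atom hierarchy described above for Bio.PDB maps directly onto nested iteration. The following minimal Python sketch (the file name 1abc.pdb is a placeholder for any local PDB file) parses a structure and walks down to the atom level:

# Parse a PDB file and traverse the hierarchy: Structure -> Model -> Chain -> Residue -> Atom.
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)                            # QUIET suppresses parser warnings
structure = parser.get_structure("example", "1abc.pdb")   # "1abc.pdb" is a placeholder path

for model in structure:
    for chain in model:
        for residue in chain:
            for atom in residue:
                # atom.get_name() is e.g. "CA"; atom.coord is a NumPy array of (x, y, z)
                print(chain.id, residue.get_resname(), atom.get_name(), atom.coord)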
Biopython
[ "Mathematics", "Biology" ]
2,236
[ "Computational science", "Applied mathematics", "Bioinformatics", "Bioinformatics software" ]
559,871
https://en.wikipedia.org/wiki/Open%20Bioinformatics%20Foundation
The Open Bioinformatics Foundation is a non-profit, volunteer-run organization focused on supporting open source programming in bioinformatics. The mission of the foundation is to support the development of open source toolkits for bioinformatics, organise developer-centric hackathon events and generally assist in the development and promotion of open source software development in the life sciences. The foundation also organises and runs the annual Bioinformatics Open Source Conference, a satellite meeting of the Intelligent Systems for Molecular Biology conference. The foundation participates in the Google Summer of Code, acting as an umbrella organisation for individual bioinformatics-related projects. The Open Bioinformatics Foundation was started in 2001, arising from the BioJava, BioPerl and BioPython projects. A formal membership for the foundation was created in 2005. In October 2012, the foundation began an association with Software in the Public Interest (SPI), a US-based non-profit which aids other organizations in the creation and distribution of free and open-source software. The association with SPI allows financial donations to the foundation (these are 501(c)3 tax-exempt in the US). The foundation is governed by a board of directors, representing various Bio* projects. As of 2019, the OBF President is Peter Cock (BioPython). Previous OBF presidents include Ewan Birney and Hilmar Lapp (NESCent), previous Board members include Steven E. Brenner. Projects The foundation hosts servers for mailing lists, websites, and code repositories for a number of bioinformatics-related open source projects, including: BioJava – Java toolkit BioMOBY – Data and application execution through web services BioPerl – Perl toolkit BioPython – Python toolkit BioRuby – Ruby toolkit BioPHP EMBOSS – Sequence analysis toolkit. See also List of open-source bioinformatics software Generic Model Organism Database References External links Open Bioinformatics Foundation website Bioinformatics organizations Free software project foundations
Open Bioinformatics Foundation
[ "Biology" ]
432
[ "Bioinformatics", "Bioinformatics organizations" ]
559,970
https://en.wikipedia.org/wiki/Jetty
A jetty is a man-made structure that protrudes from land out into water. A jetty may serve as a breakwater, as a walkway, or both; or, in pairs, as a means of constricting a channel. The term derives from the French word jetée, "thrown", signifying something thrown out. For regulating rivers Wing dams Wing dams are one form of jetty: they are extended out, opposite one another, from each bank of a river, at intervals, to contract a wide channel and concentrate the current to deepen the channel. At the outlet of tideless rivers Jetties have been constructed on each side of the outlet river of some of the rivers flowing into the Baltic, with the objective of prolonging the scour of the river and protecting the channel from being shoaled by the littoral drift along the shore. Another application of parallel jetties is in lowering the bar in front of one of the mouths of a deltaic river flowing into a tideless sea — a virtual prolongation of its banks, by extending the scour of the river out to the bar. Jetties prolonging the Sulina branch of the Danube into the Black Sea, and the south pass of the Mississippi River into the Gulf of Mexico, formed of rubble stone and concrete blocks, and respectively, have enabled the discharge of these rivers to scour away the bars obstructing the access to them; and they have also carried the sediment-bearing waters sufficiently far out to come under the influence of littoral currents, which, by conveying away some of the sediment, postpone the eventual formation of a fresh bar farther out (see river engineering). At the mouth of tidal rivers Where a river is narrow near its mouth, has a generally feeble discharge and a small tidal range, the sea is liable on an exposed coast to block up its outlet during severe storms. The river is thus forced to seek another exit at a weak spot of the beach, which along a low coast may be at some distance off; and this new outlet in its turn may be blocked up, so that the river from time to time shifts the position of its mouth. This inconvenient cycle of changes may be stopped by fixing the outlet of the river at a suitable site, by carrying a jetty on each side of this outlet across the beach, thereby concentrating its discharge in a definite channel and protecting the mouth from being blocked up by littoral drift. This system was long ago applied to the shifting outlet of the river Yare to the south of Yarmouth, and has also been successfully employed for fixing the wandering mouth of the Adur near Shoreham, and of the Adour flowing into the Bay of Biscay below Bayonne. When a new channel was cut across the Hook of Holland to provide a straighter and deeper outlet channel for the river Meuse, forming the approach channel to Rotterdam, low, broad, parallel jetties, composed of fascine mattresses weighted with stone, were carried across the foreshore into the sea on either side of the new mouth of the river, to protect the jetty channel from littoral drift, and cause the discharge of the river to maintain it out to deep water. The channel, also, beyond the outlet of the river Nervion into the Bay of Biscay has been regulated by jetties; and by extending the south-west jetty out for nearly with a curve concave towards the channel the outlet has not only been protected to some extent from the easterly drift, but the bar in front has been lowered by the scour produced by the discharge of the river following the concave bend of the southwest jetty. 
As the outer portion of this jetty was exposed to westerly storms from the Bay of Biscay before the outer harbour was constructed, it has been given the form and strength of a breakwater situated in shallow water. For berthing at docks Where docks are given sloping sides, openwork timber jetties are generally carried across the slope, at the ends of which vessels can lie in deep water, or more solid structures are erected over the slope for supporting coal-tips. Pilework jetties are also constructed in the water outside the entrances to docks on each side, so as to form an enlarging trumpet-shaped channel between the entrance, lock or tidal basin and the approach channel, in order to guide vessels in entering or leaving the docks. Solid jetties, moreover, lined with quay walls, are sometimes carried out into a wide dock, at right angles to the line of quays at the side, to enlarge the accommodation; and they also serve, when extended on a large scale from the coast of a tideless sea under shelter of an outlying breakwater, to form the basins in which vessels lie when discharging and taking in cargoes in such a port as Marseille. At entrances to jetty harbors The approach channel to some ports situated on sandy coasts is guided and protected across the beach by parallel jetties. In some cases, these are made solid up to a little above low water of neap tides, on which open timber-work is erected, provided with a planked platform at the top raised above the highest tides. In other cases, they consist entirely of solid material without timber-work. The channel between the jetties was originally maintained by tidal scour from low-lying areas close to the coast, and subsequently by the current from sluicing basins; but it is now often considerably deepened by sand-pump dredging. It is protected to some extent by the solid portion of the jetties from the inroad of sand from the adjacent beach, and from the levelling action of the waves; while the upper open portion serves to indicate the channel and to guide the vessels, if necessary (see harbor). The bottom part of the older jetties, in such long-established jetty ports as Calais, Dunkirk and Ostend, was composed of clay or rubble stone, covered on the top by fascine-work or pitching, but the deepening of the jetty channel by dredging and the need that arose for its enlargement led to the reconstruction of the jetties at these ports. The new jetties at Dunkirk were founded in the sandy beach, by the aid of compressed air, at a depth below low water of spring tides; and their solid masonry portion, on a concrete foundation, was raised above low water of neap tides. At lagoon outlets A small tidal rise spreading tidal water over a large expanse of lagoon or inland backwater causes the influx and efflux of the tide to maintain a deep channel through a narrows; but the issuing current, no longer confined by a bank on each side, becomes dispersed, and owing to the reduction of its scouring force, is no longer able at a moderate distance from the shore effectually to resist the littoral drift tending to form a continuous beach in front of the outlet. Hence a bar is produced that diminishes the available depth in the approach channel. By carrying a solid jetty out over the bar on each side of the outlet, however, the tidal currents are concentrated in the channel across the bar, and lower it by scour. 
Thus the approach channels to Venice through the Malamocco and Lido outlets from the Venetian Lagoon have been deepened by several feet over their bars by jetties of rubble, carried out across the foreshore into deep water on both sides of the channel. Other examples are provided by the long jetties extended into the sea in front of the entrance to Charleston harbour, formerly constructed of fascines weighed down with stone and logs, but subsequently of rubble stone, and by the two converging rubble jetties carried out from each shore of Dublin Bay for deepening the approach to Dublin harbour. Jetties have the adverse effect of endangering surf culture as a whole, because they can destroy surf breaks. See also Breakwater Dock (maritime) Groyne Jettied floors in medieval houses Mole Pier Port Spiral Jetty Wharf Citations General and cited references Humboldt Bay Recreation & Conservation District: Humboldt Bay Harbor Setting External links Coastal construction Water transport
Jetty
[ "Engineering" ]
1,671
[ "Construction", "Coastal construction" ]
560,061
https://en.wikipedia.org/wiki/Psilocybe%20semilanceata
Psilocybe semilanceata, commonly known as the liberty cap, is a species of fungus that produces the psychoactive compounds psilocybin, psilocin and baeocystin. It is both one of the most widely distributed psilocybin mushrooms in nature, and one of the most potent. The mushrooms have a distinctive conical to bell-shaped cap, up to in diameter, with a small nipple-like protrusion on the top. They are yellow to brown, covered with radial grooves when moist, and fade to a lighter color as they mature. Their stipes tend to be slender and long, and the same color or slightly lighter than the cap. The gill attachment to the stipe is adnexed (narrowly attached), and they are initially cream-colored before tinting purple to black as the spores mature. The spores are dark purplish-brown en masse, ellipsoid in shape, and measure 10.5–15 by 6.5–8.5 micrometres. The mushroom grows in grassland habitats, especially wetter areas. Unlike P. cubensis, however, the fungus does not grow directly on dung; rather, it is a saprobic species that feeds off decaying grass roots. It is widely distributed in the temperate areas of the Northern Hemisphere, particularly in Europe, and has been reported occasionally in temperate areas of the Southern Hemisphere as well. The earliest reliable history of P. semilanceata intoxication dates back to 1799 in London, and in the 1960s the mushroom was the first European species confirmed to contain psilocybin. The possession or sale of psilocybin mushrooms is illegal in many countries. Taxonomy and naming The species was first described by Elias Magnus Fries as Agaricus semilanceatus in his 1838 work Epicrisis Systematis Mycologici. Paul Kummer transferred it to Psilocybe in 1871 when he raised many of Fries's sub-groupings of Agaricus to the level of genus. Panaeolus semilanceatus, named by Jakob Emanuel Lange in both 1936 and 1939 publications, is a synonym. According to the taxonomical database MycoBank, several taxa once considered varieties of P. semilanceata are synonymous with the species now known as Psilocybe strictipes: the caerulescens variety described by Pier Andrea Saccardo in 1887 (originally named Agaricus semilanceatus var. coerulescens by Mordecai Cubitt Cooke in 1881), the microspora variety described by Rolf Singer in 1969, and the obtusata variety described by Marcel Bon in 1985. Several molecular studies published in the 2000s demonstrated that Psilocybe, as it was defined then, was polyphyletic. The studies supported the idea of dividing the genus into two clades, one consisting of the bluing, hallucinogenic species in the family Hymenogastraceae, and the other the non-bluing, non-hallucinogenic species in the family Strophariaceae. However, the generally accepted lectotype (a specimen later selected when the original author of a taxon name did not designate a type) of the genus as a whole was Psilocybe montana, which is a non-bluing, non-hallucinogenic species. If the non-bluing, non-hallucinogenic species in the study were segregated, the hallucinogenic clade would have been left without a valid name. To resolve this dilemma, several mycologists proposed in a 2005 publication to conserve the name Psilocybe, with P. semilanceata as the type. As they explained, conserving the name Psilocybe in this way would prevent nomenclatural changes to a well-known group of fungi, many species of which are "linked to archaeology, anthropology, religion, alternate life styles, forensic science, law enforcement, laws and regulation". Further, the name P. 
semilanceata had historically been accepted as the lectotype by many authors in the period 1938–68. The proposal to conserve the name Psilocybe, with P. semilanceata as the type was accepted unanimously by the Nomenclature Committee for Fungi in 2009. The mushroom takes its common name from the Phrygian cap, also known as the "liberty cap", which it resembles; P. semilanceata shares its common name with P. pelliculosa, a species from which it is more or less indistinguishable in appearance. The Latin word for Phrygian cap is pileus, nowadays the technical name for what is commonly known as the "cap" of a fungal fruit body. In the 18th century, Phrygian caps were placed on Liberty poles, which resemble the stipe of the mushroom. The generic name is derived from Ancient Greek psilos (ψιλός) 'smooth, bare' and Byzantine Greek kubê (κύβη) 'head'. The specific epithet comes from Latin semi 'half, somewhat' and lanceata, from lanceolatus 'spear-shaped'. Description The cap of P. semilanceata is in diameter and tall. It varies in shape from sharply conical to bell-shaped, often with a prominent papilla (a nipple-shaped structure), and does not change shape considerably as it ages. The cap margin is initially rolled inward but unrolls to become straight or even curled upwards in maturity. The cap is hygrophanous, meaning it assumes different colors depending on its state of hydration. When it is moist, the cap is ochraceous to pale brown to dark chestnut brown, but darker in the center, often with a greenish-blue tinge. When moist, radial grooves (striations) can be seen on the cap that correspond to the positions of the gills underneath. When the cap is dry, it becomes much paler, a light yellow-brown color. Moist mushrooms have sticky surfaces that result from a thin gelatinous film called a pellicle. This film becomes apparent if a piece of the cap is broken by bending it back and peeling away the piece. When the cap dries from exposure to the sun, the film turns whitish and is no longer peelable. On the underside of the mushroom's cap, there are between 15 and 27 individual narrow gills that are moderately crowded together, and they have a narrowly adnexed to almost free attachment to the stipe. Their color is initially pale brown, but becomes dark gray to purple-brown with a lighter edge as the spores mature. The slender yellowish-brown stipe is long by thick, and usually slightly thicker towards the base. The mushroom has a thin cobweb-like partial veil that does not last long before disappearing; sometimes, the partial veil leaves an annular zone on the stipe that may be darkened by spores. The flesh is thin and membrane-like, and roughly the same color as the surface tissue. It has a farinaceous (similar to freshly ground flour) odor and taste. All parts of the mushroom will stain a bluish color if handled or bruised, and it may naturally turn blue with age. Microscopic characteristics In deposit, the spores are a deep reddish purple-brown color. The use of an optical microscope can reveal further details: the spores are oblong when seen in side view, and oblong to oval in frontal view, with dimensions of 10.5–15 by 6.5–8.5 μm. The basidia (spore bearing cells of the hymenium), are 20–31 by 5–9 μm, four-spored, and have clamps at their bases; there are no basidia found on the sterile gill edge. The cheilocystidia (cystidia on the gill edge) measure 15–30 by 4–7 μm, and are flask-shaped with long thin necks that are 1–3.5 μm wide. P. 
semilanceata does not have pleurocystidia (cystidia on the gill face). The cap cuticle is up to 90 μm thick, and is made of a tissue layer called an ixocutis—a gelatinized layer of hyphae lying parallel to the cap surface. The hyphae comprising the ixocutis are cylindrical, hyaline, and 1–3.5 μm wide. Immediately under the cap cuticle is the subpellis, made of hyphae that are 4–12 μm wide with yellowish-brown encrusted walls. There are clamp connections present in the hyphae of all tissues. Other forms The anamorphic form of P. semilanceata is an asexual stage in the fungus's life cycle involved in the development of mitotic diaspores (conidia). In culture, grown in a petri dish, the fungus forms a white to pale orange cottony or felt-like mat of mycelia. The conidia formed are straight to curved, measuring 2.0–8.0 by 1.1–2.0 μm, and may contain one to several small intracellular droplets. Although little is known of the anamorphic stage of P. semilanceata beyond the confines of laboratory culture, in general, the morphology of the asexual structures may be used as classical characters in phylogenetic analyses to help understand the evolutionary relationships between related groups of fungi. Scottish mycologist Roy Watling described sequestrate (truffle-like) or secotioid versions of P. semilanceata he found growing in association with regular fruit bodies. These versions had elongated caps, long and wide at the base, with the inward curved margins closely hugging the stipe from the development of membranous flanges. Their gills were narrow, closely crowded together, and anastomosed (fused together in a vein-like network). The color of the gills was sepia with a brownish vinaceous (red wine-colored) cast, and a white margin. The stipes of the fruit bodies were long by thick, with about of stipe length covered by the extended cap. The thick-walled ellipsoid spores were 12.5–13.5 by 6.5–7 μm. Despite the significant differences in morphology, molecular analysis showed the secotioid version to be the same species as the typical morphotype. Similar species There are several other Psilocybe species that may be confused with P. semilanceata due to similarities in physical appearance. P. strictipes is a slender grassland species that is differentiated macroscopically from P. semilanceata by the lack of a prominent papilla. P. mexicana, commonly known as the "Mexican liberty cap", is also similar in appearance, but is found in manure-rich soil in subtropical grasslands in Mexico. It has somewhat smaller spores than P. semilanceata, typically 8–9.9 by 5.5–7.7 μm. Another lookalike species is P. samuiensis, found in Thailand, where it grows in well-manured clay-like soils or among paddy fields. This mushroom can be distinguished from P. semilanceata by its smaller cap, up to in diameter, and its rhomboid-shaped spores. P. pelliculosa is physically similar to such a degree that it may be indistinguishable in the field. It differs from P. semilanceata by virtue of its smaller spores, measuring 9–13 by 5–7 μm. P. semilanceata has also been confused with the toxic muscarine-containing species Inocybe geophylla, a whitish mushroom with a silky cap, yellowish-brown to pale grayish gills, and a dull yellowish-brown spore print. Ecology and habitat Psilocybe semilanceata fruits solitarily or in groups on rich and acidic soil, typically in grasslands, such as meadows, pastures, or lawns. 
It is often found in pastures that have been fertilized with sheep or cow dung, although it does not typically grow directly on the dung. P. semilanceata, like all other species of the genus Psilocybe, is a saprobic fungus, meaning it obtains nutrients by breaking down organic matter. The mushroom is also associated with sedges in moist areas of fields, and it is thought to live on the decaying root remains. Like some other grassland psilocybin mushroom species such as P. mexicana, P. tampanensis and Conocybe cyanopus, P. semilanceata may form sclerotia, a dormant form of the fungus, which affords it some protection from wildfires and other natural disasters. Laboratory tests have shown P. semilanceata to suppress the growth of the soil-borne water mold Phytophthora cinnamomi, a virulent plant pathogen that causes the disease root rot. When grown in dual culture with other saprobic fungi isolated from the rhizosphere of grasses from its habitat, P. semilanceata significantly suppresses their growth. This antifungal activity, which can be traced at least partly to two phenolic compounds it secretes, helps it compete successfully with other fungal species in the intense competition for nutrients provided by decaying plant matter. Using standard antimicrobial susceptibility tests, Psilocybe semilanceata was shown to strongly inhibit the growth of the human pathogen methicillin-resistant Staphylococcus aureus (MRSA). The source of the antimicrobial activity is unknown. Distribution Psilocybe authority Gastón Guzmán, in his 1983 monograph on psilocybin mushrooms, considered Psilocybe semilanceata the world's most widespread psilocybin mushroom species, as it has been reported in 18 countries. In Europe, P. semilanceata has a widespread distribution, and is found in Austria, Belarus, Belgium, Bulgaria, the Channel Islands, the Czech Republic, Denmark, Estonia, the Faroe Islands, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, India, Ireland, Italy, Latvia, Lithuania, the Netherlands, Norway, Poland, Romania, Russia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, the United Kingdom and Ukraine. It is generally agreed that the species is native to Europe; Watling has demonstrated that there exists little difference between specimens collected from Spain and Scotland, at both the morphological and genetic level. The mushroom also has a widespread distribution in North America. In Canada it has been collected from British Columbia, New Brunswick, Newfoundland, Nova Scotia, Prince Edward Island, Ontario and Quebec. In the United States, it is most common in the Pacific Northwest, west of the Cascade Mountains, where it fruits abundantly in autumn and early winter; fruiting has also been reported to occur infrequently during spring months. Charles Horton Peck reported the mushroom to occur in New York in the early 20th century, and consequently, much literature published since then has reported the species to be present in the eastern United States. Guzmán later examined Peck's herbarium specimen, and in his comprehensive 1983 monograph on Psilocybe, concluded that Peck's specimen was actually the species now known as Panaeolina foenisecii. P. semilanceata is much less common in South America, where it has been recorded in Chile. It is also known in Australia (where it may be an introduced species) and New Zealand, where it grows in high-altitude grasslands. In 2000, it was reported from Golaghat, in the Indian state of Assam. 
In 2017, it was reported from Charsadda, in the Pakistani province of Khyber Pakhtunkhwa. Psychoactive use The first reliably documented report of Psilocybe semilanceata intoxication involved a British family who, in 1799, prepared a meal with mushrooms they had picked in London's Green Park. According to the chemist Augustus Everard Brande, the father and his four children experienced typical symptoms associated with ingestion, including pupil dilation, spontaneous laughter and delirium. The identification of the species responsible was made possible by James Sowerby's 1803 book Coloured Figures of English Fungi or Mushrooms, which included a description of the fungus, then known as Agaricus glutinosus (originally described by William Curtis in 1780). According to German mycologist Jochen Gartz, the description of the species is "fully compatible with current knowledge about Psilocybe semilanceata." In the early 1960s, the Swiss scientist Albert Hofmann—known for the synthesis of the psychedelic drug LSD—chemically analyzed P. semilanceata fruit bodies collected in Switzerland and France by the botanist Roger Heim. Using the technique of paper chromatography, Hofmann confirmed the presence of 0.25% (by weight) psilocybin in dried samples. Their 1963 publication was the first report of psilocybin in a European mushroom species; previously, it had been known only in Psilocybe species native to Mexico, Asia and North America. This finding was confirmed in the late 1960s with specimens from Scotland and England, Czechoslovakia (1973), Germany (1977), Norway (1978), and Belgium and Finland (1984). In 1965, forensic characterization of psilocybin-containing mushrooms seized from college students in British Columbia identified P. semilanceata—the first recorded case of intentional recreational use of the mushroom in Canada. The presence of the psilocybin analog baeocystin was confirmed in 1977. Several studies published since then support the idea that the variability of psilocybin content in P. semilanceata is low, regardless of country of origin. Properties Several studies have quantified the amounts of hallucinogenic compounds found in the fruit bodies of Psilocybe semilanceata. In 1993, Gartz reported an average of 1% psilocybin (expressed as a percentage of the dry weight of the fruit bodies), ranging from a minimum of 0.2% to a maximum of 2.37%, making it one of the most potent species (though significantly less potent than Panaeolus cyanescens). In an earlier analysis, Tjakko Stijve and Thom Kuyper (1985) found a high concentration in a single specimen (1.7%) in addition to a relatively high concentration of baeocystin (0.36%). Smaller specimens tend to have the highest percent concentrations of psilocybin, but the absolute amount is highest in larger mushrooms. A Finnish study assayed psilocybin concentrations in old herbarium specimens, and concluded that although psilocybin concentration decreased linearly over time, it was relatively stable. They were able to detect the chemical in specimens that were 115 years old. Michael Beug and Jeremy Bigwood, analyzing specimens from the Pacific Northwest region of the United States, reported psilocybin concentrations ranging from 0.62% to 1.28%, averaging 1.0 ±0.2%. They concluded that the species was one of the most potent, as well as the most constant in psilocybin levels. 
In a 1996 publication, Paul Stamets defined a "potency rating scale" based on the total content of psychoactive compounds (including psilocybin, psilocin, and baeocystin) in 12 species of Psilocybe mushrooms. Although there are certain caveats with this technique—such as the erroneous assumption that these compounds contribute equally to psychoactive properties—it serves as a rough comparison of potency between species. Despite its small size, Psilocybe semilanceata is considered a "moderately active to extremely potent" hallucinogenic mushroom (meaning the combined percentage of psychoactive compounds typically ranges from 0.25% to greater than 2%), and of the 12 mushrooms Stamets compared, only three were more potent: P. azurescens, P. baeocystis, and P. bohemica. However, these data have become obsolete over the years as more potent cultivars have been discovered for numerous species, especially Panaeolus cyanescens, which holds the current world record for the most potent mushrooms described in published research. According to Gartz (1995), P. semilanceata is Europe's most popular psychoactive species. Several reports have been published in the literature documenting the effects of consumption of P. semilanceata. Typical symptoms include visual distortions of color, depth and form, progressing to visual hallucinations. The effects are similar to the experience following consumption of LSD, although milder. Common side effects of mushroom ingestion include pupil dilation, increased heart rate, unpleasant mood, and overresponsive reflexes. As is typical of the symptoms associated with psilocybin mushroom ingestion, "the effect on mood in particular is dependent on the subject's pre-exposure personality traits", and "identical doses of psilocybin may have widely differing effects in different individuals." Although most cases of intoxication resolve without incident, there have been isolated cases with severe consequences, especially after higher dosages or persistent use. In one case reported in Poland in 1998, an 18-year-old man developed Wolff–Parkinson–White syndrome and arrhythmia, and suffered a myocardial infarction, after ingesting P. semilanceata frequently over the period of a month. The cardiac damage and myocardial infarction were suggested to be the result of either coronary vasoconstriction or of platelet hyperaggregation and occlusion of small coronary arteries. Danger of misidentification One danger of attempting to consume hallucinogenic or other wild mushrooms, especially for novice mushroom hunters, is the possibility of confusion with toxic species. In one noted case, an otherwise healthy young Austrian man mistook the poisonous Cortinarius rubellus for P. semilanceata. As a result, he suffered end-stage kidney failure, and required a kidney transplant. In another instance, a young man developed cardiac abnormalities similar to those seen in Takotsubo cardiomyopathy, characterized by a sudden temporary weakening of the myocardium. A polymerase chain reaction-based test to specifically identify P. semilanceata was reported by Polish scientists in 2007. Poisonous Psathyrella species can easily be misidentified as liberty caps. Legal status The legal status of psilocybin mushrooms varies worldwide. Psilocybin and psilocin are listed as Class A (United Kingdom) or Schedule I (US) drugs under the United Nations 1971 Convention on Psychotropic Substances. The possession and use of psilocybin mushrooms, including P. semilanceata, is therefore prohibited by extension. 
Although many European countries remained open to the use and possession of hallucinogenic mushrooms after the US ban, starting in the 2000s there has been a tightening of laws and enforcement. In the Netherlands, where the drug was once routinely sold in licensed cannabis coffee shops and smart shops, laws were instituted in October 2008 to prohibit the possession or sale of psychedelic mushrooms, making it the final European country to do so. They are legal in Jamaica and Brazil and decriminalised in Portugal. In the United States, the city of Denver, Colorado, voted in May 2019 to decriminalize the use and possession of psilocybin mushrooms. In November 2020, voters passed Oregon Ballot Measure 109, making Oregon the first state both to decriminalize psilocybin and to legalize it for therapeutic use. Ann Arbor, Michigan, and its surrounding county have decriminalized magic mushrooms. Possession, sale and use are now legal within the county. In 2021, the city councils of Somerville, Northampton, and Cambridge in Massachusetts, as well as Seattle, Washington, voted for decriminalization. Sweden The Riksdag added Psilocybe semilanceata to the Narcotic Drugs Punishments Act under Swedish schedule I ("substances, plant materials and fungi which normally do not have medical use") as of 1 October 1997, published by the Medical Products Agency (MPA) in regulation LVFS 1997:12, listed as Psilocybe semilanceata (toppslätskivling). See also List of Psilocybe species Mushroom hunting References Cited texts Entheogens Fungi described in 1838 Fungi of Asia Fungi of Australia Fungi of Europe Fungi of New Zealand Fungi of North America Fungi of South America Fungi of Sweden Fungi of Finland semilanceata Psychedelic tryptamine carriers Psychoactive fungi Taxa named by Elias Magnus Fries Fungi of Iceland Fungus species
Psilocybe semilanceata
[ "Biology" ]
5,126
[ "Fungi", "Fungus species" ]
560,165
https://en.wikipedia.org/wiki/Psilocybe%20cyanescens
Psilocybe cyanescens, commonly known as the wavy cap or potent psilocybe, is a species of potent psychedelic mushroom. The main compounds responsible for its psychedelic effects are psilocybin and psilocin. It belongs to the family Hymenogastraceae. A formal description of the species was published by Elsie Wakefield in 1946 in the Transactions of the British Mycological Society, based on a specimen she had recently collected at Kew Gardens. She had begun collecting the species as early as 1910. The mushroom is not generally regarded as being physically dangerous to adults. Since all the psychoactive compounds in P. cyanescens are water-soluble, the fruiting bodies can be rendered non-psychoactive through parboiling, allowing their culinary use. However, since most people find them overly bitter and they are too small to have great nutritive value, this is not frequently done. Psilocybe cyanescens can sometimes fruit in colossal quantities; more than 100,000 individual mushrooms were found growing in a single patch at a racetrack in England. Description Appearance Psilocybe cyanescens has a hygrophanous pileus (cap) that is caramel to chestnut-brown when moist, fading to pale buff or slightly yellowish when dried. Caps generally measure from 1.5–5 cm (½" to 2") across, and are normally distinctly wavy in maturity. The color of the pileus is rarely seen in mushrooms outside of the P. cyanescens species complex. Most parts of the mushroom, including the cap and lamellae (the gills underneath the cap), can stain blue when touched or otherwise disturbed, probably due to the oxidation of psilocin. The lamellae are adnate, and light brown to dark purple brown in maturity, with lighter gill edges. There is no distinct annulus, but immature P. cyanescens specimens do have a cobwebby veil which may leave an annular zone in maturity. Both the odor and taste are farinaceous. P. cyanescens has elliptical spores which measure 9–12 x 5–8 μm. According to some authors, the holotype collection of the species from Kew Gardens featured no pleurocystidia, but North American collections are characterized by common clavate-mucronate pleurocystidia. However, pleurocystidia are present in the holotype collection (though they are not easy to observe since the hymenium is collapsed). In European collections of P. cyanescens, pleurocystidia are common and their shape is identical to those known from the United States. In 2012, an epitype from Hamburg, Germany, was designated. Fresh sporocarps and mycelia of P. cyanescens generally bruise bluish or blue-green where damaged, and the staining remains visible after drying. This staining is most noticeable on the stem (which is white when undisturbed) but can also occur on other parts of the mushroom, including the gills, cap, and mycelium. This staining is due primarily to the oxidation of psilocin. (Psilocybin cannot be oxidized directly, but is quickly converted via enzymatic action to psilocin at injury sites which can then be oxidized, so even specimens with little psilocin still generally stain blue.) Related species Other related species may include P. weraroa, and these relatives are collectively referred to as the "Psilocybe cyanescens complex" or as the "caramel-capped psilocybe complex," due to their extremely similar appearance and habit. There is phylogenetic evidence that there are two distinct clades in the complex, one consisting of P. cyanescens and P. azurescens and allies, and the other consisting of P. serbica and allies (European taxa). 
It has also been shown that Psilocybe weraroa (previously known as Weraroa novae-zelandiae) is very closely related to P. cyanescens despite its vastly dissimilar appearance. A very close relative of P. cyanescens is Psilocybe allenii (described in 2012), formerly known as Psilocybe cyanofriscosa, a mushroom found in California and Washington. It can be distinguished by macromorphological features and/or sequencing of the rDNA ITS molecular marker. It is often difficult or impossible to distinguish between members of the P. cyanescens complex except by range without resorting to microscopic or molecular characters. Although not closely related, Psilocybe cyanescens has been at least occasionally confused with Galerina marginata with fatal results. The two mushrooms have generally similar habits and appearances, and bear a superficial resemblance to each other such that inexperienced mushroom-seekers may confuse the two. The two species can grow side-by-side, which may add to the chance of confusion. The two mushrooms have different-colored spores, making a spore print essential to proper identification. Habitat and distribution Psilocybe cyanescens grows today primarily on wood chips, especially in and along the perimeter of mulched plant beds in urban areas, but can also grow on other lignin-rich substrates. P. cyanescens does not grow on substrate that is not lignin-rich. Fruitings have been reported in natural settings previously (although most appear to be migrations from mulched plant beds). The species does not typically grow on mulch that is made from bark. In the United States, P. cyanescens occurs mainly in the Pacific Northwest, stretching south to the San Francisco Bay Area. It can also be found in areas such as New Zealand, Western Europe, Central Europe, and parts of west Asia (Iran). The range in which P. cyanescens occurs is rapidly expanding, especially in areas where it is not native, as the use of mulch to control weeds has been popularized. This rapid expansion of range may be due in part to the simple expedient of P. cyanescens mycelium having colonized the distribution network of woodchip suppliers and thus being distributed on a large scale with commercial mulch. It has been documented to fruit in spring on the East Coast of the United States. Although it has been speculated that the native habitat of P. cyanescens is the coniferous woodlands of the north-western United States or coastal dunes in the PNW, and the type specimen was described from mulch beds in Kew Gardens, the natural distribution of P. cyanescens in the wild remains unknown. Fruiting is dependent on a drop in temperature. In the San Francisco Bay Area, this means that fruiting typically occurs between late October and February, and fruiting in other areas generally occurs in fall, when temperatures are between . Psilocybe cyanescens often fruits gregariously or in cespitose clusters, sometimes in great numbers. 100,000 P. cyanescens fruits were once found growing on a racetrack in the south of England. Solitary fruits are sometimes also found. Indole content The fruits of P. cyanescens have been shown to contain many different indole alkaloids including psilocybin, psilocin, and baeocystin. It has also been shown that P. cyanescens mycelium will contain detectable levels of psilocin and psilocybin, but only after the formation of primordia. Indole content has been shown to be higher in North American specimens of P. cyanescens than in European ones. 
This was, however, caused by the fact that Gartz did not analyze the genuine P. cyanescens but P. serbica. North American fruiting bodies of P. cyanescens have been shown to have between 0.66% and 1.96% total indole content by dry weight. European fruiting bodies have been shown to have between 0.39% and 0.75% total indole content by dry weight. North American specimens of P. cyanescens are among the most potent of psychedelic mushrooms. This potency means that the species is widely sought after by users of recreational drugs in those areas where it grows naturally. Cultivation Fruiting begins with simulation of a fall environment at temperatures between . Psilocybe cyanescens, like many other psilocybin-containing mushrooms, is sometimes cultivated. Due to the fruiting requirements of the species, it is challenging but possible to get P. cyanescens to produce fruits indoors. Outdoor cultivation in an appropriate climate is relatively easy. Yield per pound of substrate is low when compared to other psilocybin-containing mushrooms for both indoor and outdoor cultivation. The combination of poor yield and difficulty may explain why P. cyanescens is grown less frequently than some other psilocybin-containing mushrooms. Psilocybe cyanescens mycelium is easier to grow than actual fruits are, can be grown indoors, and is robust enough that it can be transplanted in order to start new patches. Mycelium can also be propagated via stem butt transplantation. Many of the cultivation techniques used with other members of the genus Psilocybe can be used to grow P. cyanescens as well. Cultivated P. cyanescens contain approximately the same concentration of psilocin and psilocybin as natural examples do. Legal status Psilocybe cyanescens specimens do not fall under the Convention on Psychotropic Substances because the convention does not cover naturally occurring plants or fungi that incidentally contain a scheduled drug. However, many countries choose to prohibit possession of psilocybin-containing mushrooms, including P. cyanescens, under their domestic laws. Countries that have banned or severely regulated the possession of P. cyanescens include the United States, Germany, New Zealand, and many others. Such bans are difficult to enforce, however, since no species of Psilocybe mushroom has spores containing psilocybin or psilocin. Because of this, Psilocybe cyanescens spores are not illegal to possess in many US states. (It is illegal to possess spores in Georgia and Idaho, and illegal to possess them with the intent to produce mushrooms in California.) Gallery References External links Psilocybe cyanescens at MykoWeb Entheogens Psychoactive fungi cyanescens Psychedelic tryptamine carriers Fungi of Europe Fungi of North America Fungus species
Psilocybe cyanescens
[ "Biology" ]
2,190
[ "Fungi", "Fungus species" ]
560,402
https://en.wikipedia.org/wiki/Fang%20Lizhi
Fang Lizhi (; February 12, 1936 – April 6, 2012) was a Chinese astrophysicist, vice-president of the University of Science and Technology of China, and activist whose liberal ideas inspired the pro-democracy student movement of 1986–87 and, finally, the Tiananmen Square protests of 1989. Fang was considered one of the leaders of the New Enlightenment in the 1980s. Because of his activism, he was expelled from the Chinese Communist Party in January 1987. For his work, Fang was a recipient of the annual Robert F. Kennedy Human Rights Award in 1989. He was elected an academician of the Chinese Academy of Sciences in 1980, but his membership was revoked after 1989. Life and career in China Fang was born on 12 February 1936 in Beijing. His father worked on the railway. In 1948, a year before the People's Liberation Army took over the city, as a student at the Beijing No. 4 High School, he joined an underground youth organization that was associated with the Chinese Communist Party (CCP). One of his extracurricular activities was assembling radio receivers from used parts. In 1952, he enrolled in the Physics Department at Peking University, where he met his future wife, Li Shuxian (). Both Fang and Li were among the top students in their class. After graduating, he joined the CCP, started working at the Institute of Modern Physics and became involved in China's secret atomic bomb program, while Li stayed at Peking University as a junior faculty member. In 1957, during the Hundred Flowers Campaign, people were strongly encouraged by the CCP to openly express their opinions and criticisms. As party members, Li, Fang and another person in the physics department planned to write a letter to the party to offer their suggestions on education. This letter was still unfinished by the time the Hundred Flowers Campaign abruptly came to an end and the Anti-Rightist Campaign started. The opinions and criticisms solicited during the earlier campaign were then interpreted as "attacks on the party", and those who had expressed such opinions were labelled "rightist" and persecuted. Although no one knew about the unfinished letter, out of loyalty to the party, Fang, Li and their friend confessed to writing it; Li also confessed her doubts about the party. Li was expelled from the CCP, and sentenced to hard labour in Zhaitang, a town near Beijing. Fang was not immediately expelled from the party, because he played a lesser role in writing the letter, and also because he had left Peking University, where the punishment was particularly severe. However, he was removed from the nuclear program and sent to do hard labour in Zanhuang, Hebei province, from December 1957 to August 1958. Under political pressure, Li and Fang put their relationship on hold until early 1959, when Fang was also expelled from the CCP. In August 1958, Fang was reassigned to the faculty of the University of Science and Technology of China (USTC), which was located in Beijing at the time. In 1961 he married Li, who remained a faculty member of Peking University. In spite of his experience in the anti-Rightist campaign, Fang published an article in the Guangming Daily, encouraging the independent thinking of students. Fang published his first research paper on nuclear physics in Acta Physica Sinica 17, p. 57 (1961) under the pseudonym Wang Yunran, since as a rightist he was not entitled to publish research papers. 
Later, on the recommendation of Qian Linzhao, he became an associated member of a research group led by Li Yinyuan at the Institute of Physics, Chinese Academy of Sciences. Since Li's group was at a different institute, this arrangement took advantage of a loophole in management rules, allowing him to publish papers under his own name. In the late 1950s and early 1960s, Fang conducted research in particle physics, solid state physics and laser physics. By 1965, he had published 13 research papers and was considered one of the most productive physics researchers in China. That year, as part of the effort of cleansing Beijing of "undesirable elements", Fang was to be removed from the faculty of USTC and sent to work in an electronics factory in Liaoning province. Learning about this, vice president Yan Jici intervened on Fang's behalf; he pleaded the case to the party secretary of USTC at the time, Liu Da, who cancelled the cleansing order for Fang and other faculty members of USTC. Academic activities were interrupted when the Cultural Revolution broke out in 1966. In 1969, along with other universities and research institutes, USTC was ordered to be evacuated out of Beijing, ostensibly in anticipation of an impending invasion by the Soviet Union. USTC was moved to Hefei, the capital of Anhui Province, where it remains to this day. Upon arriving in Hefei in 1969, Fang, along with other "problematic members" of the faculty, were sent to do hard labour for "re-education by the worker class" in a coal mine. Fang secretly brought with him one physics book, the "Classical Theory of Fields" by Lev Landau and learned the theory of general relativity by reading this book in the evening. Later, in 1971, along with a number of other faculty members, he was assigned to do labour work in a brick factory, which produced the bricks for constructing the new USTC campus buildings. Research in astrophysics and cosmology In 1972, the worst chaos of the Cultural Revolution was over and scientific research resumed. Fang found an opportunity to read some recent astrophysics papers in western journals, and soon wrote his first paper on cosmology, "A Cosmological Solution in Scalar-tensor Theory with Mass and Blackbody Radiation", which was published on the journal Wu Li (Physics), Vol. 1, 163 (1972). This was the first modern cosmological research paper in mainland China. Fang assembled a group of young faculty members of USTC around him to conduct astrophysics research. At the time, conducting research on relativity theory and cosmology in China was very risky politically, because these theories were considered to be "idealistic" theories in contradiction with dialectical materialism, a central component of the Communist Party's ideology. According to the dialectical materialism philosophy, both time and space must be infinite, while the Big Bang theory allows the possibility of the finiteness of space and time. During the Cultural Revolution, campaigns were waged against Albert Einstein and the Theory of Relativity in Beijing and Shanghai. Once Fang published his theory, some of the critics of the Theory of Relativity, especially a group based in Shanghai, prepared to attack Fang politically. However, by this time the "leftist" line was declining in the Chinese academia. Professor Dai Wensai, the most well-known Chinese astronomer at the time and chair of the Astronomy Department of Nanjing University, also supported Fang. 
Many of the members of the "Theory of Relativity Criticism Group" changed to study the theory and conduct research in it. Subsequently, Fang was regarded as the father of cosmological research in China. Fang published a large number of papers on astrophysics and cosmology. In the late 1970s, he and his group used the luminosity of selected radio quasars to measure the Hubble diagram, and with data available at the time, suggested that the universe may be closed (Fang et al., Acta Astronomica Sinica 17, 134 (1977)). This work was noticed by researchers outside China; a Nature article noted that it obtained similar results to, but appeared earlier than, the paper by Davidsen et al., Nature 269, 203 (1977). Fang also carried out research on topics including neutron stars, black holes, inflation and quantum cosmology. He soon gained international recognition, and as China began to open up in the late 1970s, he was invited to international conferences outside the country. In 1985, together with H. Sato of Kyoto University, Japan, Fang won the first prize of the Gravity Research Foundation essay competition by proposing that the periodic distribution of quasars observed can be explained if the Universe is multiply-connected, i.e. has a non-trivial topology. He was elected as the youngest member of the Chinese Academy of Science in 1980. His membership was, however, revoked after the Tiananmen Square protest of 1989. He helped promote international academic exchange in China. Together with Remo Ruffini, he organized the first major international scientific conference in China: the 3rd Marcel Grossmann meeting in 1982. During this meeting, Tsvi Piran and T.G. Horowitz became the first two Israeli scientists to enter the People's Republic of China; at the time, there were no diplomatic relations between China and Israel. He invited Stephen Hawking to visit China in 1985, and organized the International Astronomical Union conference IAU-124 on "Observational Cosmology" in Beijing in 1986. Fang also trained many younger colleagues and students in the field of astrophysics and cosmology; he was considered an excellent teacher. Fang and Li coauthored "Introduction to Mechanics", an introductory book on Newtonian mechanics and special theory of relativity. This book has been considered a classic by many teachers and students, although few students are aware of it in recent years. Fang was also the first scientist in China to write popular accounts of contemporary astrophysical developments, such as cosmology and black holes. Fang's book, "Creation of the Universe" (Yuzhou de chuangsheng in Chinese) which was published in 1987, introduced basic cosmological ideas, and influenced a large number of physics and astronomy students growing up in the 1980s in China. Political activism During the Anti-Rightist Campaign, Fang was expelled from the Chinese Communist Party for his "reactionary activities", viz. publishing an article critical of the government's policies on science education. He was rehabilitated after the reform of China in late 1970s, and resumed his party membership. During this time, he held many academic positions, including the director of the astrophysics research group of USTC, and director of the science history research group, chief editor of the USTC academic journal, chair of the Chinese society of gravity and relativistic astrophysics. In 1984, Fang was appointed as the vice president of the USTC under president Guan Weiyan. 
Fang was very active in this role; for example, he helped to set up the telex service for USTC. He was very popular among the students. Fang also began to write essays for publication in popular magazines and to give lectures on a variety of topics at universities, though usually not at USTC. Many such essays and lectures expressed his liberal views on politics, reflections on history, and criticisms of CCP dogma. He also emphasized the social responsibility of intellectuals. In late 1986, Fang, together with Xu Liangying and Liu Binyan, wrote letters to a number of well-known "Rightists" from the 1957 Anti-Rightist campaign, suggesting a meeting in memory of that event. In December 1986, college students demonstrated in over a dozen Chinese cities, demanding greater economic and political freedoms. Fang was against the student demonstration, believing it would be suppressed by the CCP; he tried to persuade the USTC students not to go off-campus. After two straight weeks of student demonstrations, believing that the student movement was a result of "bourgeois liberalization", Deng Xiaoping named three Communist Party members to be expelled: Fang, Liu Binyan and Wang Ruowang. Deng directed then-CCP General Secretary Hu Yaobang to expel them from the Party, but Hu refused. Because of his refusal, Hu Yaobang was dismissed from his position as General Secretary in January 1987, effectively ending his period of influence within the Chinese government. Fang was again expelled from the Chinese Communist Party in January 1987, and removed from his position as the vice president of the university. He was moved to Beijing as a research scientist at the Beijing Astronomical Observatory, now a part of the National Astronomical Observatory of China, and reunited with his wife, Li Shuxian, a professor at Peking University. He gained fame and notoriety after his essays were collected by the Chinese Communist Party and distributed to many of its regional offices, with the directive to its members to criticize the essays. 1989 democracy movement and exile In February 1989, Fang mobilized a number of well-known intellectuals to write an open letter to Deng Xiaoping, requesting amnesty for the human rights activist Wei Jingsheng, who was then in prison. His wife, Li, was elected to become the people's representative of Haidian District, where Peking University is located. Fang and his wife had exchanged ideas about Chinese politics with some students of Peking University, including Wang Dan and Liu Gang. Some of those students became student leaders during the Tiananmen Square protests of 1989, though Fang and Li did not actively participate in the protest itself. During U.S. President George H.W. Bush's February 1989 visit to China, the U.S. embassy invited Fang to a banquet that Bush was hosting at the Sheraton Great Wall Hotel in honor of Deng Xiaoping. Deng had a negative view of Fang. Public Security stopped Fang on his way to the banquet and prevented him from attending. On June 5, 1989, the day after the government cracked down on the Tiananmen Square protests, Fang and Li sought asylum at the U.S. embassy in Beijing, accompanied by U.S. academic Perry Link. Fang and Li were initially turned away, but Jeffrey A. Bader, then acting director of the Office of Chinese and Mongolian Affairs at the State Department, used very strong language to order the embassy to reverse its decision. That night, Fang and his family were smuggled into the embassy in the back of a van. 
The Chinese government put Fang and Li at the top of the "wanted" list of the people involved in the protest. During his time in the U.S. embassy, Fang wrote an essay titled The Chinese Amnesia, criticizing the Chinese Communist Party's repression of human rights and the outside world's turning a blind eye to it. Fang's continued presence in the US Embassy following the protests became, according to U.S. Ambassador James Lilley, "a living symbol of our [US] conflict with China over human rights." Fang and his wife remained in the US Embassy until 25 June 1990, when they were allowed by Chinese authorities to leave the embassy and board a U.S. Air Force C-135 transport plane to Britain. This resolution partly came about after confidential negotiations between Henry Kissinger, acting on behalf of US President George H. W. Bush, and Deng. Other factors were a false confession by Fang, an attempted intervention by US National Security Adviser Brent Scowcroft, and an offer from the Japanese government to resume loans to the PRC in return for the resolution of "the Fang Lizhi problem." In 1989, he was a recipient of the Robert F. Kennedy Human Rights Award. In 1991, he gave a conference on the issue of Tibet in New York, one of the first open dialogues between Chinese and Tibetans. He also was an advisor for the International Campaign for Tibet. Later life in the US After some time at Cambridge University and Princeton, Fang later moved to Tucson, Arizona, where he worked as Professor of Physics at the University of Arizona. In campus speeches, Fang spoke on topics such as human rights and democracy as matters of social responsibility. He also served as a board member and co-chair of the New York-based organization Human Rights in China. Fang continued to do research in astrophysics and cosmology. He published research papers even during his stay in the US Embassy in Beijing. His later research includes the study of non-Gaussianity in the cosmic microwave background anisotropy, Lyman alpha forest, application of wavelet in cosmology, turbulence in intergalactic medium, and the 21cm radiation during the Reionization. He continued to train students and younger scientists who visited him from China and was very active in research to the end of his life, publishing multiple research papers each year. Death He died in his home in Tucson on April 6, 2012, aged 76, from undisclosed causes. He was buried at East Lawn Palms Mortuary & Cemetery on April 14. Further reading Essays: Memoir: , translated by Perry Link. See also List of Chinese dissidents Richard Baum References External links Personal Homepage Scientific Articles of Li-Zhi Fang (Fang Lizhi) Since 1989 Collection of Articles by Li-Zhi Fang (Fang Lizhi), maintained by his former students 1936 births 2012 deaths Chinese dissidents University of Arizona faculty Chinese astrophysicists Relativity theorists Chinese emigrants to the United States Chinese human rights activists Chinese democracy activists University of Science and Technology of China alumni Academic staff of the University of Science and Technology of China Writers from Beijing 20th-century Chinese science writers Educators from Beijing Physicists from Beijing Expelled members of the Chinese Communist Party Victims of the Anti-Rightist Campaign Members of the Chinese Academy of Sciences Robert F. Kennedy Human Rights Award laureates Charter 08 signatories
Fang Lizhi
[ "Physics" ]
3,513
[ "Relativity theorists", "Theory of relativity" ]
560,502
https://en.wikipedia.org/wiki/Chromosomal%20translocation
In genetics, chromosome translocation is a phenomenon that results in unusual rearrangement of chromosomes. This includes balanced and unbalanced translocation, with two main types: reciprocal, and Robertsonian translocation. Reciprocal translocation is a chromosome abnormality caused by exchange of parts between non-homologous chromosomes. Two detached fragments of two different chromosomes are switched. Robertsonian translocation occurs when two non-homologous chromosomes get attached, meaning that given two healthy pairs of chromosomes, one of each pair "sticks" and blends together homogeneously. A gene fusion may be created when the translocation joins two otherwise-separated genes. It is detected on cytogenetics or a karyotype of affected cells. Translocations can be balanced (in an even exchange of material with no genetic information extra or missing, and ideally full functionality) or unbalanced (where the exchange of chromosome material is unequal resulting in extra or missing genes). Reciprocal translocations Reciprocal translocations are usually an exchange of material between non-homologous chromosomes and occur in about 1 in 491 live births. Such translocations are usually harmless, as they do not result in a gain or loss of genetic material, though they may be detected in prenatal diagnosis. However, carriers of balanced reciprocal translocations may create gametes with unbalanced chromosome translocations during meiotic chromosomal segregation. This can lead to infertility, miscarriages or children with abnormalities. Genetic counseling and genetic testing are often offered to families that may carry a translocation. Most balanced translocation carriers are healthy and do not have any symptoms. It is important to distinguish between chromosomal translocations that occur in germ cells, due to errors in meiosis (i.e. during gametogenesis), and those that occur in somatic cells, due to errors in mitosis. The former results in a chromosomal abnormality featured in all cells of the offspring, as in translocation carriers. Somatic translocations, on the other hand, result in abnormalities featured only in the affected cell and its progenitors, as in chronic myelogenous leukemia with the Philadelphia chromosome translocation. Nonreciprocal translocation Nonreciprocal translocation involves the one-way transfer of genes from one chromosome to another nonhomologous chromosome. Robertsonian translocations Robertsonian translocation is a type of translocation caused by breaks at or near the centromeres of two acrocentric chromosomes. The reciprocal exchange of parts gives rise to one large metacentric chromosome and one extremely small chromosome that may be lost from the organism with little effect because it contains few genes. The resulting karyotype in humans leaves only 45 chromosomes, since two chromosomes have fused together. This has no direct effect on the phenotype, since the only genes on the short arms of acrocentrics are common to all of them and are present in variable copy number (nucleolar organiser genes). Robertsonian translocations have been seen involving all combinations of acrocentric chromosomes. The most common translocation in humans involves chromosomes 13 and 14 and is seen in about 0.97 / 1000 newborns. Carriers of Robertsonian translocations are not associated with any phenotypic abnormalities, but there is a risk of unbalanced gametes that lead to miscarriages or abnormal offspring. 
For example, carriers of Robertsonian translocations involving chromosome 21 have a higher risk of having a child with Down syndrome. This is known as a 'translocation Downs'. This is due to a mis-segregation (nondisjunction) during gametogenesis. The mother has a higher (10%) risk of transmission than the father (1%). Robertsonian translocations involving chromosome 14 also carry a slight risk of uniparental disomy 14 due to trisomy rescue. Role in disease Some human diseases caused by translocations are: Cancer: Several forms of cancer are caused by acquired translocations (as opposed to those present from conception); this has been described mainly in leukemia (acute myelogenous leukemia and chronic myelogenous leukemia). Translocations have also been described in solid malignancies such as Ewing's sarcoma. Infertility: One of the would-be parents carries a balanced translocation, where the parent is asymptomatic but conceived fetuses are not viable. Down syndrome is caused in a minority (5% or less) of cases by a Robertsonian translocation of the chromosome 21 long arm onto the long arm of chromosome 14. Chromosomal translocations between the sex chromosomes can also result in a number of genetic conditions, such as XX male syndrome: caused by a translocation of the SRY gene from the Y to the X chromosome By chromosome Denotation The International System for Human Cytogenetic Nomenclature (ISCN) is used to denote a translocation between chromosomes. The designation t(A;B)(p1;q2) is used to denote a translocation between chromosome A and chromosome B. The information in the second set of parentheses, when given, gives the precise location within the chromosome for chromosomes A and B respectively—with p indicating the short arm of the chromosome, q indicating the long arm, and the numbers after p or q referring to regions, bands and sub-bands seen when staining the chromosome with a staining dye. See also the definition of a genetic locus. Translocation is the mechanism by which a gene can move from one linkage group to another. Examples of translocations on human chromosomes History In 1938, Karl Sax, at the Harvard University Biological Laboratories, published a paper entitled "Chromosome Aberrations Induced by X-rays", which demonstrated that radiation could induce major genetic changes by causing chromosomal translocations. The paper is thought to mark the beginning of the field of radiation cytology, and led him to be called "the father of radiation cytology". DNA double-strand break repair The initiating event in the formation of a translocation is generally a double-strand break in chromosomal DNA. A type of DNA repair that has a major role in generating chromosomal translocations is the non-homologous end joining pathway. When this pathway functions appropriately it repairs a DNA double-strand break by reconnecting the originally broken ends, but when it acts inappropriately it may join ends incorrectly, resulting in genomic rearrangements including translocations. In order for the illegitimate joining of broken ends to occur, the DNA of the exchange partners needs to be physically close to each other in the 3D genome. See also Accipitridae Aneuploidy Chromosome abnormalities DbCRID Fusion gene Pseudodiploid Takifugu rubripes References External links Chromosomal abnormalities Cytogenetics Modification of genetic information
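As an illustration of the ISCN designation described in the Denotation section above, the following minimal sketch parses a t(A;B)(p1;q2) string into its components. The parser, its function name, and the simplified band handling are illustrative assumptions, not part of the ISCN standard or any cytogenetics library; the example designation is the Philadelphia chromosome translocation mentioned earlier in the article.

import re

# Minimal, illustrative parser for ISCN-style reciprocal translocation
# designations of the form t(A;B)(p1;q2). The regex deliberately handles only
# this simple form; real ISCN strings can be far more complex.
PATTERN = re.compile(
    r"t\((?P<chr_a>[0-9XY]+);(?P<chr_b>[0-9XY]+)\)"       # chromosomes A and B
    r"\((?P<bp_a>[pq][0-9.]+);(?P<bp_b>[pq][0-9.]+)\)"     # breakpoints: arm + region/band
)

def parse_translocation(designation: str) -> dict:
    """Split a designation such as 't(9;22)(q34;q11)' into its components."""
    match = PATTERN.fullmatch(designation.strip())
    if match is None:
        raise ValueError(f"Not a recognised t(A;B)(p1;q2) designation: {designation!r}")
    arm = lambda bp: "short arm (p)" if bp.startswith("p") else "long arm (q)"
    return {
        "chromosome_a": match["chr_a"], "breakpoint_a": match["bp_a"], "arm_a": arm(match["bp_a"]),
        "chromosome_b": match["chr_b"], "breakpoint_b": match["bp_b"], "arm_b": arm(match["bp_b"]),
    }

if __name__ == "__main__":
    # The Philadelphia chromosome translocation mentioned in the article.
    print(parse_translocation("t(9;22)(q34;q11)"))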
Chromosomal translocation
[ "Biology" ]
1,468
[ "Modification of genetic information", "Molecular genetics" ]
560,782
https://en.wikipedia.org/wiki/Yerkes%E2%80%93Dodson%20law
The Yerkes–Dodson law is an empirical relationship between arousal and performance, originally developed by psychologists Robert M. Yerkes and John Dillingham Dodson in 1908. The law dictates that performance increases with physiological or mental arousal, but only up to a point. When levels of arousal become too high, performance decreases. The process is often illustrated graphically as a bell-shaped curve which increases and then decreases with higher levels of arousal. The original paper (a study of the Japanese house mouse, described as the "dancing mouse") was only referenced ten times over the next half century, yet in four of the citing articles, these findings were described as a psychological "law". Levels of arousal Researchers have found that different tasks require different levels of arousal for optimal performance. For example, difficult or intellectually demanding tasks may require a lower level of arousal (to facilitate concentration), whereas tasks demanding stamina or persistence may be performed better with higher levels of arousal (to increase motivation). Because of task differences, the shape of the curve can be highly variable. For simple or well-learned tasks, the relationship is monotonic, and performance improves as arousal increases. For complex, unfamiliar, or difficult tasks, the relationship between arousal and performance reverses after a point, and performance thereafter declines as arousal increases. The effect of task difficulty led to the hypothesis that the Yerkes–Dodson law can be decomposed into two distinct factors, as in a bathtub curve. The upward part of the inverted U can be thought of as the energizing effect of arousal. The downward part is caused by negative effects of arousal (or stress) on cognitive processes like attention (e.g., "tunnel vision"), memory, and problem-solving. There has been research indicating that the correlation suggested by Yerkes and Dodson exists (such as that of Broadhurst (1959), Duffy (1957), and Anderson et al. (1988)), but a cause of the correlation has not yet successfully been established (Anderson, Revelle, & Lynch, 1989). Alternative models Other theories and models of arousal do not affirm the Hebb or Yerkes–Dodson curve. The widely supported theory of optimal flow presents a less simplistic understanding of arousal and skill-level match. Reversal theory actively opposes the Yerkes–Dodson law by demonstrating how the psyche operates on the principle of bistability rather than homeostasis. Relationship to glucocorticoids A 2007 review by Lupien et al. of the effects of stress hormones (glucocorticoids, GC) and human cognition revealed that memory performance vs. circulating levels of glucocorticoids does manifest an upside-down U-shaped curve, and the authors noted the resemblance to the Yerkes–Dodson curve. For example, long-term potentiation (LTP) (the process of forming long-term memories) is optimal when glucocorticoid levels are mildly elevated, whereas significant decreases of LTP are observed after adrenalectomy (low GC state) or after exogenous glucocorticoid administration (high GC state). This review also revealed that in order for a situation to induce a stress response, it has to be interpreted as one or more of the following: novel; unpredictable; not controllable by the individual; or a social evaluative threat (negative social evaluation possibly leading to social rejection).
It has also been shown that elevated levels of glucocorticoids enhance memory for emotionally arousing events but lead more often than not to poor memory for material unrelated to the source of stress/emotional arousal. See also Drive theory Emotion Emotion and memory Flashbulb memory Low arousal theory References External links Behavioral concepts
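The two-factor decomposition described in the Levels of arousal section above can be made concrete with a toy numerical sketch. This is a hedged illustration only, not a model fitted to Yerkes and Dodson's data or to any cited study; the function names, parameter ranges, and functional forms are assumptions chosen purely for clarity.

import math

def performance(arousal: float, task_difficulty: float) -> float:
    """Energizing benefit of arousal minus a cognitive-interference cost.

    arousal and task_difficulty are unitless values roughly in [0, 1]. For easy
    tasks (low difficulty) the cost term stays small, so performance rises almost
    monotonically; for hard tasks the cost dominates at high arousal, producing
    the inverted U described in the article.
    """
    energizing_effect = 1.0 - math.exp(-3.0 * arousal)   # saturating benefit of arousal
    interference_cost = task_difficulty * arousal ** 2    # grows quickly at high arousal
    return energizing_effect - interference_cost

if __name__ == "__main__":
    for difficulty in (0.1, 0.9):                          # simple vs. complex task
        best = max((performance(a / 100, difficulty), a / 100) for a in range(0, 201))
        print(f"difficulty={difficulty}: peak performance {best[0]:.2f} at arousal {best[1]:.2f}")

Running the sketch shows the peak shifting to lower arousal as task difficulty rises, which is the qualitative pattern the article attributes to difficult or intellectually demanding tasks.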
Yerkes–Dodson law
[ "Biology" ]
772
[ "Behavior", "Behavioral concepts", "Behaviorism" ]
560,807
https://en.wikipedia.org/wiki/Feedlot
A feedlot or feed yard is a type of animal feeding operation (AFO) which is used in intensive animal farming, notably beef cattle, but also swine, horses, sheep, turkeys, chickens or ducks, prior to slaughter. Large beef feedlots are called concentrated animal feeding operations (CAFO) in the United States and intensive livestock operations (ILOs) or confined feeding operations (CFO) in Canada. They may contain thousands of animals in an array of pens. The basic purpose of the feedlot is to increase the amount of fat gained by each animal as quickly as possible; if animals are kept in confined quarters rather than being allowed to range freely over grassland, they will gain weight more quickly and efficiently, with the added benefit of economies of scale. Regulation Most feedlots require some type of governmental approval to operate, which generally consists of an agricultural site permit. Feedlots also typically have an environmental plan in place to deal with the large amount of waste generated by the numerous livestock housed. The environmental farm plan is set in place to raise awareness about the environment and covers 23 different aspects of the farm that may affect the environment. The Environmental Protection Agency has authority under the Clean Water Act to regulate all animal feeding operations in the United States. This authority is delegated to individual states in some cases. In Canada, regulation of feedlots is shared between all levels of government. Certain provinces are required by law to have a nutrient management plan, which looks at everything the farm is going to feed to its animals, down to the minerals. New farms are required to complete and obtain a license under the livestock operations act, which looks at proper manure storage as well as proper distance from other farms or dwellings. An RFID tag is mandatory for every animal that passes through a Canadian feedlot; these are called CCIA tags (Canadian Cattle Identification Agency) and are overseen by the Canadian Food Inspection Agency (CFIA). In Australia this role is handled by the National Feedlot Accreditation Scheme (NFAS). Scheduling The cattle industry works in sequence: prior to entering a feedlot, young calves are typically born in the spring and spend the summer with their mothers in a pasture or on rangeland. These producers are called cow-calf operations and are essential for feedlot operations to run. Once the young calves reach a certain weight, they are rounded up and either sold directly to feedlots, or sent to cattle auctions for feedlots to bid on them. Once transferred to a feedlot, they are housed and looked after for the next six to eight months, during which they are fed a total mixed ration to gain weight. Feedlot diets encourage growth of muscle mass and the distribution of some fat (known as marbling in butchered meat). The marbling is desirable to consumers, as it contributes to flavour and tenderness. These animals may gain an additional 400–600 pounds (180–270 kg) during their approximately 200 days in the feedlot, depending on their entrance weight into the lot and how well they gain muscle. Once cattle are fattened up to their finished weight, the fed cattle are transported to a slaughterhouse. Diet Typically the total mixed ration (TMR) consists of forage, grains, minerals, and supplements to benefit the animals' health and to maximize feed efficiency.
These rations are also known to contain various other forms of feed such as a specialized animal feed which consists of corn, corn byproducts (some of which is derived from ethanol and high fructose corn syrup production), milo, barley, and various grains. Some rations may also contain roughage such as corn stalks, straw, sorghum, or other hay; cottonseed meal; and premixes which may contain, among other things, antibiotics, fermentation products, micro and macro minerals and other essential ingredients that are purchased from mineral companies, usually in sacked form, for blending into commercial rations. Many feed companies can add a prescribed drug into a farm's feed if required by a vet. Farmers generally work with nutritionists who aid in the formulation of these rations to ensure their animals are getting the recommended levels of minerals and vitamins, and also to make sure the animals are not wasting feed in their manure. In the American northwest and Canada, barley, low grade durum wheat, chick peas (garbanzo beans), oats and occasionally potatoes are used as feed. In a typical feedlot, a cow's diet is roughly 62% roughage, 31% grain, 5% supplements (minerals and vitamins), and 2% premix. High-grain diets lower the pH in the animals' rumen. Due to the stressors of these conditions, and due to some illnesses, it may be necessary to give the animals antibiotics on occasion. Animal health and welfare A feedlot is highly dependent on the health of its livestock, as disease can have a great impact on the animals, and controlling sickness can be difficult with numerous animals living together. Many feedlots will have an entrance protocol in which new animals entering the lot are given vaccines to protect them against potential sickness that may arise in the first few weeks in the feedlot. These entrance protocols are usually discussed and created with the farm's veterinarian, as there are numerous factors that can impact the health of feedlot cattle. One challenging but crucial role on a feedlot is to identify any sick cattle and treat them in order to return them to health. Knowing when an animal is sick is sometimes difficult, as cattle are prey animals and will try to hide their weakness from potential threats. A sick animal will generally look gaunt, may have a snotty and/or dry nose, and will have droopy ears; catching these symptoms early may be the key to successfully treating an animal. The best indicator of health is an animal's body temperature, but measuring it is not always possible when looking over many animals per day. The diet of the animals and the different ingredients within the ration are controversial. Cattle in feedlots are fed grain rather than more natural forage. This is designed to make them gain weight faster, but it leads to internal abscesses and discomfort. Grain-based diets can also lead to the growth of harmful bacteria such as Clostridium perfringens and E. coli. Too much grain in the diet can cause cattle to have issues such as bloating, diarrhea and digestive discomfort, which is why close monitoring of the animals, as well as working with ruminant nutritionists, is very important for farmers. Animal welfare is a major controversy for farms today, as consumers have shown their concern for the welfare of these animals. Indoor feedlots with concrete surfaces can cause leg problems including swollen joints.
On outdoor feedlots, welfare issues include mud in rainy areas; heat stress in feedlots that are not shaded; insufficient water to drink; excessive cold; and problems with cattle handling (e.g. electric prods). Water troughs shared among many cattle can increase the spread of diseases including bovine respiratory disease. Waste recycling There are a few common methods of waste recycling within feedlots, with the most common being spreading it back on the cropping fields used to feed the livestock. Generally, feedlots provide bedding for their animals such as straw, sawdust, wood shavings, or other byproducts from crops (soybean chaff, corn chaff), which are then mixed in with the manure as the livestock use the bedding. Once the bedding has outlasted its use, the manure is either spread directly on the fields or stockpiled to break down and begin composting. A less common type of recycling in the feedlot industry is liquid manure handling, in which minimal bedding is mixed with the manure, so it stays liquid and is then spread on the fields in that form. Increasing numbers of cattle feedlots are utilizing out-wintering pads made of timber residue bedding in their operations. Nutrients are retained in the waste timber and livestock effluent and can be recycled within the farm system after use. Biogas plants are also able to use livestock manure to create biofuels; these anaerobic digestion systems capture methane in a usable form while concentrating nitrogen, a valuable nutrient found in the manure, which farmers can then spread on their fields. History Cattle feeding on a large scale was first introduced in the early 1960s, when a demand for higher quality beef in large quantities emerged. Farmers started becoming familiar with the finishing of beef, but also showed interest in various other aspects associated with the feedlot such as soil health, crop management, and how to manage labour costs. From the early 1960s to the 1990s, feeding beef cattle in the feedlot style showed immense growth, and even today the feedlot industry is constantly being upgraded with new knowledge and science as well as technology. In the early 20th century, feeder operations were separate from all other related operations and feedlots were non-existent. They appeared in the 1950s and 1960s as a result of hybrid grains and irrigation techniques; the ensuing larger grain crops led to abundant grain harvests. It was suddenly possible to feed large numbers of cattle in one location and so, to cut transportation costs, grain farms and feedlot locations merged. Cattle were no longer sent from all across the southern states to places like California, where large slaughterhouses were located. In the 1980s, meat packers followed the path of feedlots and are now located close to them as well. Marketing There are many methods used to sell cattle to meat packers. Spot, or cash, marketing is the traditional and most commonly used method. Prices are influenced by current supply & demand and are determined by live weight or per head. Similar to this is forward contracting, in which prices are determined the same way but are not directly influenced by market demand fluctuations. Forward contracts determine the selling price between the two parties negotiating for a set amount of time. However, this method is the least used because it requires some knowledge of production costs and the willingness of both sides to take a risk in the futures market.
Another method, formula pricing, is becoming the most popular process, as it more accurately represents the value of meat received by the packer. This requires trust between the packers and feedlots though, and is under criticism from the feedlots because the amount paid to the feedlots is determined by the packers’ assessment of the meat received. Finally, live- or carcass-weight based formula pricing is most common. Other types include grid pricing and boxed beef pricing. The most controversial marketing method stems from the vertical integration of packer-owned feedlots, which still represents less than 10% of all methods, but has been growing over the years. Alternatives The alternative to feedlots is to allow cattle to graze on grass throughout their lives, but this is not efficient and can be very challenging. For Canada and the Northern USA, year round grazing is not possible due to the severe winter weather conditions. Controlled grazing methods of this sort necessitate higher beef prices and the cattle take longer to reach market weight. See also Intensive fish farm Golden Triangle of Meat-packing Livestock Managed intensive grazing Temple Grandin References Further reading Encyclopedia of Oklahoma History and Culture – Feedlots External links Canada Beef Inc Texas Cattle Feeders Association Clean Water and Factory Farms – Inhumane Treatment of Farm Animals Australian Lot Feeders Association "Power Steer", Michael Pollan, New York Times, March 31, 2002 Broken Bow South Lot, possibly the world's largest capacity Livestock Meat industry Intensive farming Cruelty to animals
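The ration proportions quoted in the Diet section above can be turned into per-animal feed amounts with simple arithmetic. The sketch below is illustrative only; the 25 lb/day intake figure is a hypothetical example value, not taken from the article, and the function name is an assumption.

# Illustrative arithmetic: apply the ration proportions quoted in the Diet
# section (62% roughage, 31% grain, 5% supplements, 2% premix) to an assumed
# daily intake for one animal.
RATION = {"roughage": 0.62, "grain": 0.31, "supplements": 0.05, "premix": 0.02}

def daily_amounts(intake_lb: float, ration: dict = RATION) -> dict:
    """Return pounds of each ration component for one animal per day."""
    total = sum(ration.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"Ration proportions must sum to 1.0, got {total}")
    return {component: round(share * intake_lb, 2) for component, share in ration.items()}

if __name__ == "__main__":
    # Hypothetical 25 lb/day intake -> {'roughage': 15.5, 'grain': 7.75, 'supplements': 1.25, 'premix': 0.5}
    print(daily_amounts(25.0))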
Feedlot
[ "Chemistry" ]
2,401
[ "Eutrophication", "Intensive farming" ]
560,876
https://en.wikipedia.org/wiki/Pharmaceutical%20industry
The pharmaceutical industry is a medical industry that discovers, develops, produces and markets pharmaceutical goods such as medications and medical devices. Medications are then administered to (or self-administered by) patients for curing or prevention of disease, as well as alleviating symptoms of illness or injury. Pharmaceutical companies may deal in generic drugs, branded drugs, or both, within different contexts. Generic materials are without the involvement of intellectual property, whereas branded materials are protected by chemical patents. The industry's various subdivisions include distinct areas, such as manufacturing biologics or total synthesis. The industry is subject to a variety of laws and regulations that govern the patenting, efficacy testing, safety evaluation, and marketing of these drugs. The global pharmaceutical market produced treatments worth a total of $1,228.45 billion in 2020. The sector showed a compound annual growth rate (CAGR) of 1.8% in 2021, including the effects of the COVID-19 pandemic. In historical terms, the pharmaceutical industry, as an intellectual concept, arose in the middle to late 1800s in nation-states with developed economies such as Germany, Switzerland, and the United States. Some businesses engaging in synthetic organic chemistry, such as several firms generating dyestuffs derived from coal tar on a large scale, were seeking out new applications of their artificial materials in terms of human health. This trend to increased capital investment occurred in tandem with the scholarly study of pathology as a field advancing significantly, and a variety of businesses set up cooperative relationships with academic laboratories evaluating human injury and disease. Examples of industrial companies with a pharmaceutical focus that have endured to this day after such distant beginnings include Bayer (based out of Germany) and Pfizer (based out of the U.S.). History Mid-1800s–1945 The modern era of the pharmaceutical industry began with local apothecaries that expanded their traditional role of distributing botanical drugs such as morphine and quinine to wholesale manufacture in the mid-1800s. Intentional drug discovery from plants began with the extraction of morphine – an analgesic and sleep-inducing agent – from opium by the German apothecary assistant Friedrich Sertürner somewhere between 1803 and 1805. Sertürner later named this compound after the Greek god of dreams, Morpheus. Multinational corporations including Merck, Hoffman-La Roche, Burroughs-Wellcome (now part of GSK), Abbott Laboratories, Eli Lilly and Upjohn (now part of Pfizer) began as local apothecary shops in the mid-1800s. By the late 1880s, German dye manufacturers had perfected the purification of individual organic compounds from tar and other mineral sources and had also established rudimentary methods in organic chemical synthesis. The development of synthetic chemical methods allowed scientists to systematically vary the structure of chemical substances, and growth in the emerging science of pharmacology expanded their ability to evaluate the biological effects of these structural changes. Epinephrine, norepinephrine, and amphetamine By the 1890s, the profound effect of adrenal extracts on many different tissue types had been discovered, setting off a search both for the mechanism of chemical signaling and efforts to exploit these observations for the development of new drugs. 
The blood pressure raising and vasoconstrictive effects of adrenal extracts were of particular interest to surgeons as hemostatic agents and as treatment for shock, and a number of companies developed products based on adrenal extracts containing varying purities of the active substance. In 1897, John Abel at the Johns Hopkins University identified the active principle as epinephrine, which he isolated in an impure state as the sulfate salt. Industrial chemist Jōkichi Takamine later developed a method for obtaining epinephrine in a pure state, and licensed the technology to Parke-Davis. Parke-Davis marketed epinephrine under the trade name Adrenalin. Injected epinephrine proved to be especially efficacious for the acute treatment of asthma attacks, and an inhaled version was sold in the United States until 2011 (Primatene Mist). By 1929 epinephrine had been formulated into an inhaler for use in the treatment of nasal congestion. While highly effective, the requirement for injection limited the use of epinephrine, and orally active derivatives were sought. A structurally similar compound, ephedrine, was identified by Japanese chemists in the Ma Huang plant and marketed by Eli Lilly as an oral treatment for asthma. Following the work of Henry Dale and George Barger at Burroughs-Wellcome, academic chemist Gordon Alles synthesized amphetamine and tested it in asthma patients in 1929. The drug proved to have only modest anti-asthma effects but produced sensations of exhilaration and palpitations. Amphetamine was developed by Smith, Kline and French as a nasal decongestant under the trade name Benzedrine Inhaler. Amphetamine was eventually developed for the treatment of narcolepsy, post-encephalitic parkinsonism, and mood elevation in depression and other psychiatric indications. It received approval as a New and Nonofficial Remedy from the American Medical Association for these uses in 1937, and remained in common use for depression until the development of tricyclic antidepressants in the 1960s. Discovery and development of the barbiturates In 1903, Hermann Emil Fischer and Joseph von Mering disclosed their discovery that diethylbarbituric acid, formed from the reaction of diethylmalonic acid, phosphorus oxychloride and urea, induces sleep in dogs. The discovery was patented and licensed to Bayer pharmaceuticals, which marketed the compound under the trade name Veronal as a sleep aid beginning in 1904. Systematic investigations of the effect of structural changes on potency and duration of action led to the discovery of phenobarbital at Bayer in 1911 and the discovery of its potent anti-epileptic activity in 1912. Phenobarbital was among the most widely used drugs for the treatment of epilepsy through the 1970s, and as of 2014, remains on the World Health Organization's list of essential medications. The 1950s and 1960s saw increased awareness of the addictive properties and abuse potential of barbiturates and amphetamines and led to increasing restrictions on their use and growing government oversight of prescribers. Today, amphetamine is largely restricted to use in the treatment of attention deficit disorder and phenobarbital in the treatment of epilepsy. In 1958, Leo Sternbach discovered the first benzodiazepine, chlordiazepoxide (Librium). Dozens of other benzodiazepines have been developed and are in use, some of the more popular drugs being diazepam (Valium), alprazolam (Xanax), clonazepam (Klonopin), and lorazepam (Ativan).
Due to their far superior safety and therapeutic properties, benzodiazepines have largely replaced the use of barbiturates in medicine, except in certain special cases. When it was later discovered that benzodiazepines, like barbiturates, significantly lose their effectiveness and can have serious side effects when taken long-term, Heather Ashton researched benzodiazepine dependence and developed a protocol to discontinue their use. Insulin A series of experiments performed from the late 1800s to the early 1900s revealed that diabetes is caused by the absence of a substance normally produced by the pancreas. In 1889, Oskar Minkowski and Joseph von Mering found that diabetes could be induced in dogs by surgical removal of the pancreas. In 1921, Canadian professor Frederick Banting and his student Charles Best repeated this study and found that injections of pancreatic extract reversed the symptoms produced by pancreas removal. Soon, the extract was demonstrated to work in people, but development of insulin therapy as a routine medical procedure was delayed by difficulties in producing the material in sufficient quantity and with reproducible purity. The researchers sought assistance from industrial collaborators at Eli Lilly and Co. based on the company's experience with large scale purification of biological materials. Chemist George B. Walden of Eli Lilly and Company found that careful adjustment of the pH of the extract allowed a relatively pure grade of insulin to be produced. Under pressure from the University of Toronto and a potential patent challenge by academic scientists who had independently developed a similar purification method, an agreement was reached for non-exclusive production of insulin by multiple companies. Prior to the discovery and widespread availability of insulin therapy, the life expectancy of diabetics was only a few months. Early anti-infective research: salvarsan, prontosil, penicillin and vaccines The development of drugs for the treatment of infectious diseases was a major focus of early research and development efforts; in 1900, pneumonia, tuberculosis, and diarrhea were the three leading causes of death in the United States, and mortality in the first year of life exceeded 10%. In 1911 arsphenamine, the first synthetic anti-infective drug, was developed by Paul Ehrlich and chemist Alfred Bertheim of the Institute of Experimental Therapy in Berlin. The drug was given the commercial name Salvarsan. Ehrlich, noting both the general toxicity of arsenic and the selective absorption of certain dyes by bacteria, hypothesized that an arsenic-containing dye with similar selective absorption properties could be used to treat bacterial infections. Arsphenamine was prepared as part of a campaign to synthesize a series of such compounds, and was found to exhibit partially selective toxicity. Arsphenamine proved to be the first effective treatment for syphilis, a disease that until then had been incurable and led inexorably to severe skin ulceration, neurological damage, and death. Ehrlich's approach of systematically varying the chemical structure of synthetic compounds and measuring the effects of these changes on biological activity was pursued broadly by industrial scientists, including Bayer scientists Josef Klarer, Fritz Mietzsch, and Gerhard Domagk. This work, also based on the testing of compounds available from the German dye industry, led to the development of Prontosil, the first representative of the sulfonamide class of antibiotics.
Compared to arsphenamine, the sulfonamides had a broader spectrum of activity and were far less toxic, rendering them useful for infections caused by pathogens such as streptococci. In 1939, Domagk received the Nobel Prize in Medicine for this discovery. Nonetheless, the dramatic decrease in deaths from infectious diseases that occurred prior to World War II was primarily the result of improved public health measures such as clean water and less crowded housing, and the impact of anti-infective drugs and vaccines was significant mainly after World War II. In 1928, Alexander Fleming discovered the antibacterial effects of penicillin, but its exploitation for the treatment of human disease awaited the development of methods for its large scale production and purification. These were developed by a U.S. and British government-led consortium of pharmaceutical companies during the world war. There was early progress toward the development of vaccines throughout this period, primarily in the form of academic and government-funded basic research directed toward the identification of the pathogens responsible for common communicable diseases. In 1885, Louis Pasteur and Pierre Paul Émile Roux created the first rabies vaccine. The first diphtheria vaccines were produced in 1914 from a mixture of diphtheria toxin and antitoxin (produced from the serum of an inoculated animal), but the safety of the inoculation was marginal and it was not widely used. The United States recorded 206,000 cases of diphtheria in 1921, resulting in 15,520 deaths. In 1923, parallel efforts by Gaston Ramon at the Pasteur Institute and Alexander Glenny at the Wellcome Research Laboratories (later part of GlaxoSmithKline) led to the discovery that a safer vaccine could be produced by treating diphtheria toxin with formaldehyde. In 1944, Maurice Hilleman of Squibb Pharmaceuticals developed the first vaccine against Japanese Encephalitis. Hilleman later moved to Merck, where he played a key role in the development of vaccines against measles, mumps, chickenpox, rubella, hepatitis A, hepatitis B, and meningitis. Unsafe drugs and early industry regulation Prior to the 20th century, drugs were generally produced by small scale manufacturers with little regulatory control over manufacturing or claims of safety and efficacy. To the extent that such laws did exist, enforcement was lax. In the United States, increased regulation of vaccines and other biological drugs was spurred by tetanus outbreaks and deaths caused by the distribution of contaminated smallpox vaccine and diphtheria antitoxin. The Biologics Control Act of 1902 required that federal government grant premarket approval for every biological drug and for the process and facility producing such drugs. This was followed in 1906 by the Pure Food and Drugs Act, which forbade the interstate distribution of adulterated or misbranded foods and drugs. A drug was considered misbranded if it contained alcohol, morphine, opium, cocaine, or any of several other potentially dangerous or addictive drugs, and if its label failed to indicate the quantity or proportion of such drugs. The government's attempts to use the law to prosecute manufacturers for making unsupported claims of efficacy were undercut by a Supreme Court ruling restricting the federal government's enforcement powers to cases of incorrect specification of the drug's ingredients. In 1937 over 100 people died after ingesting "Elixir Sulfanilamide" manufactured by S.E. Massengill Company of Tennessee. 
The product was formulated in diethylene glycol, a highly toxic solvent that is now widely used as antifreeze. Under the laws extant at that time, prosecution of the manufacturer was possible only under the technicality that the product had been called an "elixir", which literally implied a solution in ethanol. In response to this episode, the U.S. Congress passed the Federal Food, Drug, and Cosmetic Act of 1938, which for the first time required pre-market demonstration of safety before a drug could be sold, and explicitly prohibited false therapeutic claims. 1945–1970 Further advances in anti-infective research The aftermath of World War II saw an explosion in the discovery of new classes of antibacterial drugs including the cephalosporins (developed by Eli Lilly based on the seminal work of Giuseppe Brotzu and Edward Abraham), streptomycin (discovered during a Merck-funded research program in Selman Waksman's laboratory), the tetracyclines (discovered at Lederle Laboratories, now a part of Pfizer), erythromycin (discovered at Eli Lilly and Co.) and their extension to an increasingly wide range of bacterial pathogens. Streptomycin, discovered during a Merck-funded research program in Selman Waksman's laboratory at Rutgers in 1943, became the first effective treatment for tuberculosis. At the time of its discovery, sanitoriums for the isolation of tuberculosis-infected people were an ubiquitous feature of cities in developed countries, with 50% dying within 5 years of admission. A Federal Trade Commission report issued in 1958 attempted to quantify the effect of antibiotic development on American public health. The report found that over the period 1946–1955, there was a 42% drop in the incidence of diseases for which antibiotics were effective and only a 20% drop in those for which antibiotics were not effective. The report concluded that "it appears that the use of antibiotics, early diagnosis, and other factors have limited the epidemic spread and thus the number of these diseases which have occurred". The study further examined mortality rates for eight common diseases for which antibiotics offered effective therapy (syphilis, tuberculosis, dysentery, scarlet fever, whooping cough, meningococcal infections, and pneumonia), and found a 56% decline over the same period. Notable among these was a 75% decline in deaths due to tuberculosis. During the years 1940–1955, the rate of decline in the U.S. death rate accelerated from 2% per year to 8% per year, then returned to the historical rate of 2% per year. The dramatic decline in the immediate post-war years has been attributed to the rapid development of new treatments and vaccines for infectious disease that occurred during these years. Vaccine development continued to accelerate, with the most notable achievement of the period being Jonas Salk's 1954 development of the polio vaccine under the funding of the non-profit National Foundation for Infantile Paralysis. The vaccine process was never patented but was instead given to pharmaceutical companies to manufacture as a low-cost generic. In 1960 Maurice Hilleman of Merck Sharp & Dohme identified the SV40 virus, which was later shown to cause tumors in many mammalian species. It was later determined that SV40 was present as a contaminant in polio vaccine lots that had been administered to 90% of the children in the United States. The contamination appears to have originated both in the original cell stock and in monkey tissue used for production. 
In 2004 the National Cancer Institute announced that it had concluded that SV40 is not associated with cancer in people. Other notable new vaccines of the period include those for measles (1962, John Franklin Enders of Children's Medical Center Boston, later refined by Maurice Hilleman at Merck), rubella (1969, Hilleman, Merck) and mumps (1967, Hilleman, Merck). The United States incidences of rubella, congenital rubella syndrome, measles, and mumps all fell by >95% in the immediate aftermath of widespread vaccination. The first 20 years of licensed measles vaccination in the U.S. prevented an estimated 52 million cases of the disease, 17,400 cases of mental retardation, and 5,200 deaths. Development and marketing of antihypertensive drugs Hypertension is a risk factor for atherosclerosis, heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease, and is the most important risk factor for cardiovascular morbidity and mortality in industrialized countries. Prior to 1940 approximately 23% of all deaths among persons over age 50 were attributed to hypertension. Severe cases of hypertension were treated by surgery. Early developments in the field of treating hypertension included quaternary ammonium ion sympathetic nervous system blocking agents, but these compounds were never widely used due to their severe side effects, because the long-term health consequences of high blood pressure had not yet been established, and because they had to be administered by injection. In 1952 researchers at Ciba discovered the first orally available vasodilator, hydralazine. A major shortcoming of hydralazine monotherapy was that it lost its effectiveness over time (tachyphylaxis). In the mid-1950s Karl H. Beyer, James M. Sprague, John E. Baer, and Frederick C. Novello of Merck and Co. discovered and developed chlorothiazide, which remains the most widely used antihypertensive drug today. This development was associated with a substantial decline in the mortality rate among people with hypertension. The inventors were recognized by a Public Health Lasker Award in 1975 for "the saving of untold thousands of lives and the alleviation of the suffering of millions of victims of hypertension". A 2009 Cochrane review concluded that thiazide antihypertensive drugs reduce the risk of death (RR 0.89), stroke (RR 0.63), coronary heart disease (RR 0.84), and cardiovascular events (RR 0.70) in people with high blood pressure. In the ensuing years other classes of antihypertensive drugs were developed and found wide acceptance in combination therapy, including loop diuretics (Lasix/furosemide, Hoechst Pharmaceuticals, 1963), beta blockers (ICI Pharmaceuticals, 1964), ACE inhibitors, and angiotensin receptor blockers. ACE inhibitors reduce the risk of new onset kidney disease (RR 0.71) and death (RR 0.84) in diabetic patients, irrespective of whether they have hypertension. Oral contraceptives Prior to the Second World War, birth control was prohibited in many countries, and in the United States even the discussion of contraceptive methods sometimes led to prosecution under Comstock laws. The history of the development of oral contraceptives is thus closely tied to the birth control movement and the efforts of activists Margaret Sanger, Mary Dennett, and Emma Goldman. Based on fundamental research performed by Gregory Pincus and synthetic methods for progesterone developed by Carl Djerassi at Syntex and by Frank Colton at G.D.
Searle & Co., the first oral contraceptive, Enovid, was developed by G.D. Searle & Co. and approved by the FDA in 1960. The original formulation incorporated vastly excessive doses of hormones, and caused severe side effects. Nonetheless, by 1962, 1.2 million American women were on the pill, and by 1965 the number had increased to 6.5 million. The availability of a convenient form of temporary contraceptive led to dramatic changes in social mores including expanding the range of lifestyle options available to women, reducing the reliance of women on men for contraceptive practice, encouraging the delay of marriage, and increasing pre-marital co-habitation. Thalidomide and the Kefauver-Harris amendments In the U.S., a push for revisions of the FD&C Act emerged from Congressional hearings led by Senator Estes Kefauver of Tennessee in 1959. The hearings covered a wide range of policy issues, including advertising abuses, questionable efficacy of drugs, and the need for greater regulation of the industry. While momentum for new legislation temporarily flagged under extended debate, a new tragedy emerged that underscored the need for more comprehensive regulation and provided the driving force for the passage of new laws. On 12 September 1960, an American licensee, the William S. Merrell Company of Cincinnati, submitted a new drug application for Kevadon (thalidomide), a sedative that had been marketed in Europe since 1956. The FDA medical officer in charge of reviewing the compound, Frances Kelsey, believed that the data supporting the safety of thalidomide was incomplete. The firm continued to pressure Kelsey and the FDA to approve the application until November 1961, when the drug was pulled off the German market because of its association with grave congenital abnormalities. Several thousand newborns in Europe and elsewhere suffered the teratogenic effects of thalidomide. Without approval from the FDA, the firm distributed Kevadon to over 1,000 physicians in the United States under the guise of investigational use. Over 20,000 Americans received thalidomide in this "study," including 624 pregnant patients, and about 17 known newborns suffered the effects of the drug. The thalidomide tragedy resurrected Kefauver's bill to enhance drug regulation that had stalled in Congress, and the Kefauver-Harris Amendment became law on 10 October 1962. Manufacturers henceforth had to prove to the FDA that their drugs were effective as well as safe before they could go on the US market. The FDA received authority to regulate the advertising of prescription drugs and to establish good manufacturing practices. The law required that all drugs introduced between 1938 and 1962 had to be effective. A collaborative study by the U.S. Food and Drug Administration and the National Academy of Sciences showed that nearly 40 percent of these products were not effective. A similarly comprehensive study of over-the-counter products began ten years later. 1970–1990s Statins In 1971, Akira Endo, a Japanese biochemist working for the pharmaceutical company Sankyo, identified mevastatin (ML-236B), a molecule produced by the fungus Penicillium citrinum, as an inhibitor of HMG-CoA reductase, a critical enzyme used by the body to produce cholesterol. Animal trials showed a very good inhibitory effect, as did clinical trials; however, a long-term study in dogs found toxic effects at higher doses, and as a result mevastatin was believed to be too toxic for human use.
Mevastatin was never marketed, because of its adverse effects of tumors, muscle deterioration, and sometimes death in laboratory dogs. P. Roy Vagelos, chief scientist and later CEO of Merck & Co, was interested, and made several trips to Japan starting in 1975. By 1978, Merck had isolated lovastatin (mevinolin, MK803) from the fungus Aspergillus terreus, first marketed in 1987 as Mevacor. In April 1994, the results of a Merck-sponsored study, the Scandinavian Simvastatin Survival Study, were announced. Researchers tested simvastatin, later sold by Merck as Zocor, on 4,444 patients with high cholesterol and heart disease. After five years, the study concluded the patients saw a 35% reduction in their cholesterol, and their chances of dying of a heart attack were reduced by 42%. In 1995, Zocor and Mevacor both made Merck over US$1 billion. Endo was awarded the 2006 Japan Prize and the Lasker-DeBakey Clinical Medical Research Award in 2008 for his "pioneering research into a new class of molecules" for "lowering cholesterol". 21st Century For several decades, biologics have been rising in importance in comparison with small-molecule treatments. The biotech subsector, animal health and the Chinese pharmaceutical sector have also grown substantially. On the organisational side, big international pharma corporations have experienced a substantial decline in their value share. Also, the core generic sector (substitutions for off-patent brands) has been devalued due to competition. Torreya estimated the pharmaceutical industry to have a market valuation of US$7.03 trillion as of February 2021, of which US$6.1 trillion is the value of the publicly traded companies. The small-molecule modality had 58.2% of the valuation share, down from 84.6% in 2003, while biologics were up to 30.5% from 14.5%. The valuation share of Chinese pharma grew from 1% in 2003 to 12% in 2021, overtaking Switzerland, which is now ranked third with 7.7%. The United States still had by far the most highly valued pharmaceutical industry, with 40% of the global valuation. 2023 was a year of layoffs for at least 10,000 people across 129 public biotech firms globally, albeit mostly small firms; this significant increase in reductions versus 2022 was in part due to worsening global financial conditions and a reduction in investment by "generalist investors". Private firms also saw a significant reduction in venture capital investment in 2023, continuing a downward trend started in 2021, which also led to a reduction in initial public offerings being floated. Impact of mergers and acquisitions A 2022 article articulated the importance of deal-making, typically referred to as pharmaceutical M&A (mergers and acquisitions), succinctly by saying "In the business of drug development, deals can be just as important as scientific breakthroughs". It highlighted that some of the most impactful remedies of the early 21st century were only made possible through M&A activity, specifically noting Keytruda and Humira. Research and development Drug discovery is the process by which potential drugs are discovered or designed. In the past, most drugs have been discovered either by isolating the active ingredient from traditional remedies or by serendipitous discovery. Modern biotechnology often focuses on understanding the metabolic pathways related to a disease state or pathogen, and manipulating these pathways using molecular biology or biochemistry.
A great deal of early-stage drug discovery has traditionally been carried out by universities and research institutions. Drug development refers to activities undertaken after a compound is identified as a potential drug in order to establish its suitability as a medication. Objectives of drug development are to determine appropriate formulation and dosing, as well as to establish safety. Research in these areas generally includes a combination of in vitro studies, in vivo studies, and clinical trials. The cost of late stage development has meant it is usually done by the larger pharmaceutical companies. The pharmaceutical and biotechnology industry spends more than 15% of its net sales on research and development, which is by far the highest share in comparison with other industries. Often, large multinational corporations exhibit vertical integration, participating in a broad range of drug discovery and development, manufacturing and quality control, marketing, sales, and distribution. Smaller organizations, on the other hand, often focus on a specific aspect such as discovering drug candidates or developing formulations. Often, collaborative agreements between research organizations and large pharmaceutical companies are formed to explore the potential of new drug substances. More recently, multinationals are increasingly relying on contract research organizations to manage drug development. The cost of innovation Drug discovery and development are very expensive; of all compounds investigated for use in humans, only a small fraction are eventually approved in most nations by government-appointed medical institutions or boards, who have to approve new drugs before they can be marketed in those countries. In 2010, the FDA approved 18 NMEs (New Molecular Entities) and three biologics, or 21 in total, which is down from 26 in 2009 and 24 in 2008. On the other hand, there were only 18 approvals in total in 2007 and 22 back in 2006. Since 2001, the Center for Drug Evaluation and Research has averaged 22.9 approvals a year. This approval comes only after heavy investment in pre-clinical development and clinical trials, as well as a commitment to ongoing safety monitoring. Drugs which fail part-way through this process often incur large costs, while generating no revenue in return. If the cost of these failed drugs is taken into account, the cost of developing a successful new drug (new chemical entity, or NCE) has been estimated at US$1.3 billion (not including marketing expenses). Professors Light and Lexchin reported in 2012, however, that the rate of approval for new drugs has been a relatively stable average of 15 to 25 per year for decades. Industry-wide research and investment reached a record $65.3 billion in 2009. While the cost of research in the U.S. rose by about $34.2 billion between 1995 and 2010, revenues rose faster (by $200.4 billion in that time). A study by the consulting firm Bain & Company reported that the cost for discovering, developing and launching (which factored in marketing and other business expenses) a new drug (along with the prospective drugs that fail) rose over a five-year period to nearly $1.7 billion in 2003. According to Forbes, by 2010 development costs were between $4 billion and $11 billion per drug. Some of these estimates also take into account the opportunity cost of investing capital many years before revenues are realized (see Time-value of money).
Because of the very long time needed for discovery, development, and approval of pharmaceuticals, these costs can accumulate to nearly half the total expense. A direct consequence within the pharmaceutical industry value chain is that major pharmaceutical multinationals tend to increasingly outsource risks related to fundamental research, which somewhat reshapes the industry ecosystem with biotechnology companies playing an increasingly important role, and overall strategies being redefined accordingly. Some approved drugs, such as those based on re-formulation of an existing active ingredient (also referred to as Line-extensions) are much less expensive to develop. Product approval In the United States, new pharmaceutical products must be approved by the Food and Drug Administration (FDA) as being both safe and effective. This process generally involves submission of an Investigational New Drug (IND) filing with sufficient pre-clinical data to support proceeding with human trials. Following IND approval, three phases of progressively larger human clinical trials may be conducted. Phase I generally studies toxicity using healthy volunteers. Phase II can include pharmacokinetics and dosing in patients, and Phase III is a very large study of efficacy in the intended patient population. Following the successful completion of Phase III testing, a New Drug Application is submitted to the FDA. The FDA reviews the data and if the product is seen as having a positive benefit-risk assessment, approval to market the product in the US is granted. A fourth phase of post-approval surveillance is also often required due to the fact that even the largest clinical trials cannot effectively predict the prevalence of rare side-effects. Postmarketing surveillance ensures that after marketing the safety of a drug is monitored closely. In certain instances, its indication may need to be limited to particular patient groups, and in others the substance is withdrawn from the market completely. The FDA provides information about approved drugs at the Orange Book site. In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) approves and evaluates drugs for use. Normally an approval in the UK and other European countries comes later than one in the USA. Then it is the National Institute for Health and Care Excellence (NICE), for England and Wales, who decides if and how the National Health Service (NHS) will allow (in the sense of paying for) their use. The British National Formulary is the core guide for pharmacists and clinicians. In many non-US western countries, a 'fourth hurdle' of cost effectiveness analysis has developed before new technologies can be provided. This focuses on the 'efficacy price tag' (in terms of, for example, the cost per QALY) of the technologies in question. In England and Wales NICE decides whether and in what circumstances drugs and technologies will be made available by the NHS, whilst similar arrangements exist with the Scottish Medicines Consortium in Scotland, and the Pharmaceutical Benefits Advisory Committee in Australia. A product must pass the threshold for cost-effectiveness if it is to be approved. Treatments must represent 'value for money' and a net benefit to society. Orphan drugs There are special rules for certain rare diseases ("orphan diseases") in several major drug regulatory territories. 
For example, diseases involving fewer than 200,000 patients in the United States, or larger populations in certain circumstances are subject to the Orphan Drug Act. Because medical research and development of drugs to treat such diseases is financially disadvantageous, companies that do so are rewarded with tax reductions, fee waivers, and market exclusivity on that drug for a limited time (seven years), regardless of whether the drug is protected by patents. Global sales In 2011, global spending on prescription drugs topped $954 billion, even as growth slowed somewhat in Europe and North America. The United States accounts for more than a third of the global pharmaceutical market, with $340 billion in annual sales followed by the EU and Japan. Emerging markets such as China, Russia, South Korea and Mexico outpaced that market, growing a huge 81 percent. The top ten best-selling drugs of 2013 totaled $75.6 billion in sales, with the anti-inflammatory drug Humira being the best-selling drug worldwide at $10.7 billion in sales. The second and third best selling were Enbrel and Remicade, respectively. The top three best-selling drugs in the United States in 2013 were Abilify ($6.3 billion,) Nexium ($6 billion) and Humira ($5.4 billion). The best-selling drug ever, Lipitor, averaged $13 billion annually and netted $141 billion total over its lifetime before Pfizer's patent expired in November 2011. IMS Health publishes an analysis of trends expected in the pharmaceutical industry in 2007, including increasing profits in most sectors despite loss of some patents, and new 'blockbuster' drugs on the horizon. Patents and generics Depending on a number of considerations, a company may apply for and be granted a patent for the drug, or the process of producing the drug, granting exclusivity rights typically for about 20 years. However, only after rigorous study and testing, which takes 10 to 15 years on average, will governmental authorities grant permission for the company to market and sell the drug. Patent protection enables the owner of the patent to recover the costs of research and development through high profit margins for the branded drug. When the patent protection for the drug expires, a generic drug is usually developed and sold by a competing company. The development and approval of generics is less expensive, allowing them to be sold at a lower price. Often the owner of the branded drug will introduce a generic version before the patent expires in order to get a head start in the generic market. Restructuring has therefore become routine, driven by the patent expiration of products launched during the industry's "golden era" in the 1990s and companies' failure to develop sufficient new blockbuster products to replace lost revenues. Prescriptions In the U.S., the value of prescriptions increased over the period of 1995 to 2005 by 3.4 billion annually, a 61 percent increase. Retail sales of prescription drugs jumped 250 percent from $72 billion to $250 billion, while the average price of prescriptions more than doubled from $30 to $68. Marketing Advertising is common in healthcare journals as well as through more mainstream media routes. In some countries, notably the US, they are allowed to advertise directly to the general public. Pharmaceutical companies generally employ salespeople (often called 'drug reps' or, an older term, 'detail men') to market directly and personally to physicians and other healthcare providers. 
In some countries, notably the US, pharmaceutical companies also employ lobbyists to influence politicians. Marketing of prescription drugs in the US is regulated by the federal Prescription Drug Marketing Act of 1987. The pharmaceutical marketing plan sets out the budgets, channels, and ideas that will carry the drug company, and its products and services, forward in the current landscape.

To healthcare professionals
The book Bad Pharma also discusses the influence of drug representatives, how ghostwriters are employed by the drug companies to write papers for academics to publish, how independent the academic journals really are, how the drug companies finance doctors' continuing education, and how patients' groups are often funded by industry.

Direct to consumer advertising
Since the 1980s, new methods of marketing prescription drugs to consumers have become important. Direct-to-consumer media advertising was legalised in the FDA Guidance for Industry on Consumer-Directed Broadcast Advertisements.

Controversies

Drug marketing and lobbying
There has been increasing controversy surrounding pharmaceutical marketing and influence. There have been accusations and findings of influence on doctors and other health professionals through drug reps, including the constant provision of marketing 'gifts' and biased information to health professionals; highly prevalent advertising in journals and conferences; funding independent healthcare organizations and health promotion campaigns; lobbying physicians and politicians (more than any other industry in the US); sponsorship of medical schools or nurse training; sponsorship of continuing educational events, with influence on the curriculum; and hiring physicians as paid consultants on medical advisory boards. Some advocacy groups, such as No Free Lunch and AllTrials, have criticized the effect of drug marketing to physicians because they say it biases physicians to prescribe the marketed drugs even when others might be cheaper or better for the patient. There have been related accusations of disease mongering (over-medicalising) to expand the market for medications. An inaugural conference on that subject took place in Australia in 2006. In 2009, the Government-funded National Prescribing Service launched the "Finding Evidence – Recognising Hype" program, aimed at education in methods for independent drug analysis. Meta-analyses have shown that psychiatric studies sponsored by pharmaceutical companies are several times more likely to report positive results, and if a drug company employee is involved the effect is even larger. Influence has also extended to the training of doctors and nurses in medical schools, which is being contested. It has been argued that the design of the Diagnostic and Statistical Manual of Mental Disorders and the expansion of its criteria represent an increasing medicalization of human nature, or "disease mongering", driven by drug company influence on psychiatry. The potential for direct conflict of interest has been raised, partly because roughly half the authors who selected and defined the DSM-IV psychiatric disorders had, or previously had, financial relationships with the pharmaceutical industry. 
In the US, starting in 2013, under the Physician Financial Transparency Reports (part of the Sunshine Act), the Centers for Medicare & Medicaid Services has to collect information from applicable manufacturers and group purchasing organizations in order to report information about their financial relationships with physicians and hospitals. Data are made public on the Centers for Medicare & Medicaid Services website. The expectation is that the relationship between doctors and the pharmaceutical industry will become fully transparent. According to a report by OpenSecrets, there were more than 1,100 lobbyists working in some capacity for the pharmaceutical business in 2017. In the first quarter of 2017, the health products and pharmaceutical industry spent $78 million on lobbying members of the United States Congress.

Medication pricing
The pricing of pharmaceuticals is becoming a major challenge for health systems. A November 2020 study by the West Health Policy Center stated that more than 1.1 million senior citizens in the U.S. Medicare program are expected to die prematurely over the next decade because they will be unable to afford their prescription medications, requiring an additional $17.7 billion to be spent annually on avoidable medical costs due to health complications.

Regulatory issues
Ben Goldacre has argued that regulators – such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK, or the Food and Drug Administration (FDA) in the United States – advance the interests of the drug companies rather than the interests of the public, because of the revolving-door exchange of employees between the regulators and the companies and the friendships that develop between regulator and company employees. He argues that regulators do not require that new drugs offer an improvement over what is already available, or even that they be particularly effective. Others have argued that excessive regulation suppresses therapeutic innovation and that the current cost of regulator-required clinical trials prevents the full exploitation of new genetic and biological knowledge for the treatment of human disease. A 2012 report by the President's Council of Advisors on Science and Technology made several key recommendations to reduce regulatory burdens to new drug development, including 1) expanding the FDA's use of accelerated approval processes, 2) creating an expedited approval pathway for drugs intended for use in narrowly defined populations, and 3) undertaking pilot projects designed to evaluate the feasibility of a new, adaptive drug approval process.

Pharmaceutical fraud
Pharmaceutical fraud involves deceptions which bring financial gain to a pharmaceutical company. It affects individuals and public and private insurers. There are several different schemes used to defraud the health care system which are particular to the pharmaceutical industry. These include: Good Manufacturing Practice (GMP) Violations, Off Label Marketing, Best Price Fraud, CME Fraud, Medicaid Price Reporting, and Manufactured Compound Drugs. In FY 2010, $2.5 billion was recovered through False Claims Act cases. Examples of fraud cases include the GlaxoSmithKline $3 billion settlement, the Pfizer $2.3 billion settlement and the Merck & Co. $650 million settlement. Damages from fraud can be recovered by use of the False Claims Act, most commonly under the qui tam provisions, which reward an individual for being a "whistleblower", or relator. 
Every major company selling atypical antipsychotics—Bristol-Myers Squibb, Eli Lilly and Company, Pfizer, AstraZeneca and Johnson & Johnson—has either settled recent government cases, under the False Claims Act, for hundreds of millions of dollars or is currently under investigation for possible health care fraud. Following charges of illegal marketing, two of the settlements set records in 2009 for the largest criminal fines ever imposed on corporations. One involved Eli Lilly's antipsychotic Zyprexa, and the other involved Bextra, an anti-inflammatory medication used for arthritis. In the Bextra case, the government also charged Pfizer with illegally marketing another antipsychotic, Geodon; Pfizer settled that part of the claim for $301 million, without admitting any wrongdoing. In July 2012, GlaxoSmithKline pleaded guilty to criminal charges and agreed to a $3 billion settlement of the largest health-care fraud case in the U.S. and the largest payment by a drug company. The settlement is related to the company's illegal promotion of prescription drugs, its failure to report safety data, bribing doctors, and promoting medicines for uses for which they were not licensed. The drugs involved were Paxil, Wellbutrin, Advair, Lamictal, and Zofran for off-label, non-covered uses. Those and the drugs Imitrex, Lotronex, Flovent, and Valtrex were involved in the kickback scheme. The following is a list of the four largest settlements reached with pharmaceutical companies from 1991 to 2012, rank ordered by the size of the total settlement. Legal claims against the pharmaceutical industry have varied widely over the past two decades, including Medicare and Medicaid fraud, off-label promotion, and inadequate manufacturing practices. Physician roles In May 2015, the New England Journal of Medicine emphasized the importance of pharmaceutical industry-physician interactions for the development of novel treatments, and argued that moral outrage over industry malfeasance had unjustifiably led many to overemphasize the problems created by financial conflicts of interest. The article noted that major healthcare organizations, such as National Center for Advancing Translational Sciences of the National Institutes of Health, the President's Council of Advisors on Science and Technology, the World Economic Forum, the Gates Foundation, the Wellcome Trust, and the Food and Drug Administration had encouraged greater interactions between physicians and industry in order to improve benefits to patients. Response to COVID-19 In November 2020 several pharmaceutical companies announced successful trials of COVID-19 vaccines, with efficacy of 90 to 95% in preventing infection. Per company announcements and data reviewed by external analysts, these vaccines are priced at $3 to $37 per dose. The Wall Street Journal ran an editorial calling for this achievement to be recognized with a Nobel Peace Prize. Doctors Without Borders warned that high prices and monopolies on medicines, tests, and vaccines would prolong the pandemic and cost lives. They urged governments to prevent profiteering, using compulsory licenses as needed, as had already been done by Canada, Chile, Ecuador, Germany, and Israel. On 20 February, 46 US lawmakers called for the US government not to grant monopoly rights when giving out taxpayer development money for any coronavirus vaccines and treatments, to avoid giving exclusive control of prices and availability to private manufacturers. 
In the United States, the government signed agreements in which research and development or the building of manufacturing plants for potential COVID-19 therapeutics was subsidized. Typically, the agreement involved the government taking ownership of a certain number of doses of the product without further payment. For example, under the auspices of Operation Warp Speed in the United States, the government subsidized research related to COVID-19 vaccines and therapeutics at Regeneron, Johnson and Johnson, Moderna, AstraZeneca, Novavax, Pfizer, and GSK. Typical terms involved research subsidies of $400 million to $2 billion, and included government ownership of the first 100 million doses of any COVID-19 vaccine successfully developed. American pharmaceutical company Gilead sought and obtained orphan drug status for remdesivir from the US Food and Drug Administration (FDA) on 23 March 2020. This provision is intended to encourage the development of drugs affecting fewer than 200,000 Americans by granting strengthened and extended legal monopoly rights to the manufacturer, along with waivers on taxes and government fees. Remdesivir is a candidate for treating COVID-19; at the time the status was granted, fewer than 200,000 Americans had COVID-19, but numbers were climbing rapidly as the COVID-19 pandemic reached the US, and crossing the threshold soon was considered inevitable. Remdesivir was developed by Gilead with over $79 million in U.S. government funding. In May 2020, Gilead announced that it would provide the first 940,000 doses of remdesivir to the federal government free of charge. After facing strong public reactions, Gilead gave up the "orphan drug" status for remdesivir on 25 March. Gilead retains 20-year remdesivir patents in more than 70 countries. In May 2020, the company further announced that it was in discussions with several generics companies to provide rights to produce remdesivir for developing countries, and with the Medicines Patent Pool to provide broader generic access. Developing world Patents Patents have been criticized in the developing world, as they are thought to reduce access to existing medicines. Reconciling patents and universal access to medicine would require an efficient international policy of price discrimination. Moreover, under the TRIPS agreement of the World Trade Organization, countries must allow pharmaceutical products to be patented. In 2001, the WTO adopted the Doha Declaration, which indicates that the TRIPS agreement should be read with the goals of public health in mind, and allows some methods for circumventing pharmaceutical monopolies: via compulsory licensing or parallel imports, even before patent expiration. In March 2001, 40 multi-national pharmaceutical companies brought litigation against South Africa for its Medicines Act, which allowed the generic production of antiretroviral drugs (ARVs) for treating HIV, despite the fact that these drugs were on-patent. HIV was and is an epidemic in South Africa, and ARVs at the time cost between US$10,000 and US$15,000 per patient per year. This was unaffordable for most South African citizens, and so the South African government committed to providing ARVs at prices closer to what people could afford. To do so, they would need to ignore the patents on drugs and produce generics within the country (using a compulsory license), or import them from abroad. 
After international protest in favour of public health rights (including the collection of 250,000 signatures by Médecins Sans Frontières), the governments of several developed countries (including The Netherlands, Germany, France, and later the US) backed the South African government, and the case was dropped in April of that year. In 2016, GlaxoSmithKline (the world's sixth largest pharmaceutical company) announced that it would be dropping its patents in poor countries so as to allow independent companies to make and sell versions of its drugs in those areas, thereby widening the public access to them. GlaxoSmithKline published a list of 50 countries they would no longer hold patents in, affecting one billion people worldwide. Charitable programs In 2011 four of the top 20 corporate charitable donations and eight of the top 30 corporate charitable donations came from pharmaceutical manufacturers. The bulk of corporate charitable donations (69% as of 2012) comes by way of non-cash charitable donations, the majority of which again were donations contributed by pharmaceutical companies. Charitable programs and drug discovery & development efforts by pharmaceutical companies include: "Merck's Gift", wherein billions of river blindness drugs were donated in Africa Pfizer's gift of free/discounted fluconazole and other drugs for AIDS in South Africa GSK's commitment to give free albendazole tablets to the WHO for, and until, the elimination of lymphatic filariasis worldwide. In 2006, Novartis committed US$755 million in corporate citizenship initiatives around the world, particularly focusing on improving access to medicines in the developing world through its Access to Medicine projects, including donations of medicines to patients affected by leprosy, tuberculosis, and malaria; Glivec patient assistance programs; and relief to support major humanitarian organisations with emergency medical needs. See also References External links Pharmacology Pharmacy Industries (economics)
Pharmaceutical industry
[ "Chemistry", "Biology" ]
10,709
[ "Pharmacology", "Life sciences industry", "Pharmacy", "Pharmaceutical industry", "Medicinal chemistry" ]
560,941
https://en.wikipedia.org/wiki/FASat-Alfa
FASat-Alfa was to become the first Chilean satellite, and was constructed under a Technology Transfer Program between the Chilean Air Force (FACH) and Surrey Satellite Technology Ltd (SSTL) of the United Kingdom. The primary goal of the Program was to obtain for Chile the basic scientific and technological experience required to continue with more advanced steps. The purposes of the FASat-Alfa mission were to create a group of engineers with aerospace experience, to place the first Chilean satellite in orbit, and to install and operate the Mission Control Station (ECM-Santiago) in Chile. There were two satellites: FASat-Alfa and FASat-Bravo. The Alfa satellite was launched on 31 August 1995 on a Tsyklon rocket from Plesetsk. Its orbit was intended to be 682 x 651 km, inclined at 82.53 degrees; however, the spacecraft failed to separate from the Ukrainian satellite it was attached to. The Bravo satellite was launched on 10 July 1998, on a Tsyklon rocket from Baikonur. Its intended orbit was 682 x 651 km, inclined at 82.53 degrees. It was to operate for 13,000 orbits, until 2002.

Characteristics
Bus: SSTL microbus, fifth generation
Owner: Fuerza Aérea de Chile (Air Force), Chile
Payloads: Ozone Monitoring Experiment, Remote sensing, Data Transfer Experiment, Advanced Digital Signal Processing Payload, Orbital Positioning via GPS, Educational support

External links
Official FASat-Alfa project page (in Spanish, with some articles in English)
Spanish Wikipedia article

Satellites orbiting Earth Satellites of Chile Spacecraft launched in 1995 First artificial satellites of a country
FASat-Alfa
[ "Astronomy" ]
327
[ "Astronomy stubs", "Spacecraft stubs" ]
561,018
https://en.wikipedia.org/wiki/Cyanohydrin
In organic chemistry, a cyanohydrin or hydroxynitrile is a functional group found in organic compounds in which a cyano and a hydroxy group are attached to the same carbon atom. The general formula is , where R is H, alkyl, or aryl. Cyanohydrins are industrially important precursors to carboxylic acids and some amino acids. Cyanohydrins can be formed by the cyanohydrin reaction, which involves treating a ketone or an aldehyde with hydrogen cyanide (HCN) in the presence of excess amounts of sodium cyanide (NaCN) as a catalyst: In this reaction, the nucleophilic ion attacks the electrophilic carbonyl carbon in the ketone, followed by protonation by HCN, thereby regenerating the cyanide anion. Cyanohydrins are also prepared by displacement of sulfite by cyanide salts: Cyanohydrins are intermediates in the Strecker amino acid synthesis. In aqueous acid, they are hydrolyzed to the α-hydroxy acid. Acetone cyanohydrins Acetone cyanohydrin, (CH3)2C(OH)CN is the cyanohydrin of acetone. It is generated as an intermediate in the industrial production of methyl methacrylate. In the laboratory, this liquid serves as a source of HCN, which is inconveniently volatile. Thus, acetone cyanohydrin can be used for the preparation of other cyanohydrins, for the transformation of HCN to Michael acceptors, and for the formylation of arenes. Treatment of this cyanohydrin with lithium hydride affords anhydrous lithium cyanide: Preparative methods Cyanohydrins were first prepared by the addition of HCN and a catalyst (base or enzyme) to the corresponding carbonyl. On a laboratory scale the use of HCN (toxic) is largely not encouraged, for this reason other less dangerous cyanation reagents are sought out. In situ formation of HCN can be sourced using precursors such as acetone cyanohydrin. Alternatively, cyano-silyl derivatives such as TMS-CN allows for both the cyanation and protection in one step without the need for HCN. Similar procedures relying on ester, phosphate and carbonate formation have been reported. Other cyanohydrins Mandelonitrile, with the formula C6H5CH(OH)CN, occurs in small amounts in the pits of some fruits. Related cyanogenic glycosides are known, such as amygdalin. Glycolonitrile, also called hydroxyacetonitrile or formaldehyde cyanohydrin, is the organic compound with the formula HOCH2CN. It is the simplest cyanohydrin, being derived from formaldehyde. See also Halohydrin Benzoin condensation References External links IUPACs Gold Book definition of cyanohydrins Functional groups
Cyanohydrin
[ "Chemistry" ]
660
[ "Functional groups" ]
561,065
https://en.wikipedia.org/wiki/Bachelor%20herd
A bachelor herd is a herd of (usually) juvenile male animals who are still sexually immature or 'harem'-forming animals who have been thrown out of their parent groups but not yet formed a new family group. It may also refer to a group of males who are not currently territorial or mating with females. Examples include seals, dolphins, lions, and many herbivores such as deer, horses, and elephants. Bachelor herds are thought to provide useful protection for social animals against more established herd competition or aggressive, dominant males. Males in bachelor herds are sometimes closely related to each other. Some animals, for example New Zealand fur seals, live in a bachelor herd all year except for the mating season, when there is a substantial increase in aggression and competition. In many species, males and females move in separate groups, often coming together at mating time, or to fight for territory or mating partners. In many species it is common for males to leave or be driven from the group as they mature, and they may wander as lone animals or form a bachelor group for the time being. This arrangement may be long term and stable, or short term until they find a new group to join. Types The social structure, aggression level, population size, and duration of presence of these herds across species varies greatly. Bachelor herds are most often found in mammals and are especially common in the grasslands. Impala Male impala form small bachelor herds during both the wet and dry seasons. These bachelor herds are generally smaller than herds of females, numbering around 4 members, compared to upwards of 10. Juvenile males begin to join bachelor herds at 8 months of age. In the Serengeti, immature or older males will usually form their own bachelor herds, while males of reproductive age are more often in mixed groups with females. Being actively territorial in the Serengeti is physically demanding for male impala, so males occupy this role for about 3 months. Males will then join a bachelor herd, though this results in them occupying a social dominance status at the bottom of the linear rank hierarchy until their physical condition returns to pre-territorial levels. Bachelor herds may coexist with territorial males in the same area, but these individual males are always dominant above bachelor males. Within the herds, bachelor males are less territorial toward each other than males in mixed herds. These males maintain, on average, a relatively large distance of approximately between them. However, bachelor males exhibit reciprocal grooming despite occasional aggressive interactions between bachelors. Fur seals Male fur seals, as a family, commonly live in bachelor herds during the non-breeding season. During the breeding season (April–September in the Northern Hemisphere, September–January in the Southern Hemisphere), the size of herds greatly diminishes. These bachelor herds are large in size, ranging from 15,000 to more than 20,000 seals living in one area, referred to as a rookery. The grounds occupied by fur seal bachelor herds are generally far away from breeding grounds, anywhere from or more. Members of the group range from seal that are one year old, called yearlings, up through older seals. There appears to be no rigid social structure during the non-breeding season and there is little competition for food or mates. The male fur seals are also mostly non-aggressive. 
Fur seal bachelor herds are frequently targets of the seal hunt due to large populations being concentrated in a relatively small area. There are few regulations in regard to adult male seal hunting due to limited effects on the future population. Cape mountain zebras Cape mountain zebra male foals often leave the breeding herd they were born in after the birth of siblings or around age 2, though the stallion of the breeding herd does not force them out. In fact, it has been observed that stallions often try to prevent foals from leaving the herd. These males then often go on to form their own bachelor herd or join an existing one. Males then stay in these bachelor herds until age 5, when they leave to become the stallion of their own breeding herd with one or more mares. Within Cape mountain zebra bachelor herds, there is usually no social hierarchy. Dominance is given to the more senior members of the herd and when the oldest males leave to form a breeding herd, the next oldest bachelors take on the leadership role. There is minimal intragroup aggression and no observed fighting between members for a higher social position. Bachelor herds often move with a breeding herd that is occupying a nearby area. At least one member of the bachelor herd in this case is usually the offspring of a mare in the breeding herd. Fillies also often temporarily join bachelor herds after leaving their maternal herd at the onset of their first estrus. The fillies then stay with the group until they join an existing breeding herd or make their own breeding herd with a bachelor male from the herd. Red deer Red deer males leave their mothers between 1 and 2 years of age. They then join bachelor herds, in which they spend most of the year. These herds are smaller (less than 50 members) and more unstable than the female herds and they follow a linear dominance hierarchy. This hierarchy is determined by both body size and the size of the stag's antlers, with older stags having on average larger antlers. The older stags in the herd maintain their dominance from one year to the next. Aggression within the herd is low until the stags shed their antlers, usually in early April. Intragroup clashes then increase as the females go into estrus. Males compete with members of their own bachelor herd for the attention of the females. This is called the “rutting season” and it lasts only a few weeks before males and females separate into their respective herds. The level of aggression within the bachelor herd then decreases substantially. References Ethology Animal sexuality
Bachelor herd
[ "Biology" ]
1,174
[ "Behavior", "Animals", "Behavioural sciences", "Animal sexuality", "Ethology", "Sexuality" ]
561,173
https://en.wikipedia.org/wiki/Kim%20Maltman
Kim Maltman (born 1951) is a Canadian poet and physicist who lives in Toronto, Ontario. He is a professor of applied mathematics at York University and pursues research in theoretical nuclear and particle physics. He served as a judge for the 2019 Griffin Poetry Prize.

Works
The Country of the Mapmakers (1977)
The Sicknesses of Hats (1982)
Branch Lines (1982)
Softened Violence (1984)
The Transparence of November / Snow (1985), (with Roo Borson)
Technologies/Installations (1990)
Introduction to the Introduction to Wang Wei (2000), (by Pain Not Bread)

External links
Archives of Kim Maltman (Roo Borson and Kim Maltman fonds, R12759) are held at Library and Archives Canada

1951 births Living people 20th-century Canadian poets Scientists from Toronto Poets from Toronto Particle physicists Academic staff of York University Canadian male poets 20th-century Canadian male writers 21st-century Canadian male writers 21st-century Canadian poets Canadian nuclear physicists Canadian particle physicists 20th-century Canadian physicists 21st-century Canadian scientists
Kim Maltman
[ "Physics" ]
225
[ "Particle physicists", "Particle physics" ]
561,445
https://en.wikipedia.org/wiki/Naugahyde
Naugahyde is an American brand of artificial leather. Naugahyde is a composite of a knit fabric backing and an expanded polyvinyl chloride (PVC) coating. It was developed by Byron A. Hunter, a senior chemist at the United States Rubber Company, and is now manufactured and sold by the corporate spin-off Uniroyal Engineered Products LLC. Its name, first used as a trademark in 1936, comes from the name of Naugatuck, Connecticut, where it was first produced. It is now manufactured in Stoughton, Wisconsin.

Uses
The primary use for Naugahyde is as a substitute for leather in upholstery. In this application it is very durable and can be easily maintained by wiping with a damp sponge or cloth. Being a synthetic product, it is supplied in long rolls, allowing large sections of furniture to be covered seamlessly, unlike animal hides. General Motors used the material in several of its vehicles for several decades, under the name "Cordaveen" and later "Madrid-grain vinyl" for Buick, "Morocceen" for Oldsmobile, and "Morrokide" for Pontiac vehicles, while Chevrolet did not use a brand name and simply listed it in sales brochures as "vinyl interior".

Marketing
A marketing campaign of the 1960s and 1970s asserted humorously that Naugahyde was obtained from the skin of an animal called a "Nauga". The claim became an urban myth. The campaign emphasized that, unlike other animals, which must typically be slaughtered to obtain their hides, Naugas can shed their skin without harm to themselves. The Nauga doll, a squat, horned monster with a wide, toothy grin, became popular in the 1960s and is still sold today.

References

External links
The Naugahyde Company

Artificial leather
Naugahyde
[ "Chemistry" ]
370
[ "Artificial leather", "Synthetic materials" ]
561,460
https://en.wikipedia.org/wiki/Hack%20and%20slash
Hack and slash, also known as hack and slay (H&S or HnS) or slash 'em up, refers to a type of gameplay that emphasizes combat with melee-based weapons (such as swords or blades). They may also feature projectile-based weapons as well (such as guns) as secondary weapons. It is a sub-genre of beat 'em up games, which focuses on melee combat, usually with swords. The term "hack and slash" was originally used to describe a play style in tabletop role-playing games, carrying over from there to MUDs, massively multiplayer online role-playing games, and role-playing video games. In arcade and console style action video games, the term has an entirely different usage, specifically referring to action games with a focus on real-time combat with hand-to-hand weapons as opposed to guns or fists. The two types of hack-and-slash games are largely unrelated, though action role-playing games may combine elements of both. Types of hack-and-slash games Action video games In the context of action video games, the terms "hack and slash" or "slash 'em up" refer to melee weapon-based action games that are a sub-genre of beat 'em ups. Traditional 2D side-scrolling examples include Taito's The Legend of Kage (1985) and Rastan (1987), Sega's arcade video game series Shinobi (1987 debut) and Golden Axe (1989 debut), Data East's arcade game Captain Silver (1987), Tecmo's early Ninja Gaiden (Shadow Warriors) 2D games (1988 debut), Capcom's Strider (1989), the Sega Master System game Danan: The Jungle Fighter (1990), Taito's Saint Sword (1991), Vivid Image's home computer game First Samurai (1991), and Vanillaware's Dragon's Crown (2013). The term "hack-and-slash" in reference to action-adventure games dates back to 1987, when Computer Entertainer reviewed The Legend of Zelda and said it had "more to offer than the typical hack-and-slash" epics. In the early 21st century, journalists covering the video game industry often use the term "hack and slash" to refer to a distinct genre of 3D, third-person, weapon-based, melee action games. Examples include Capcom's Devil May Cry, Onimusha, and Sengoku Basara franchises, Koei Tecmo's Dynasty Warriors and 3D Ninja Gaiden games, Sony's Genji: Dawn of the Samurai and God of War, as well as Bayonetta, Darksiders, Dante's Inferno, and No More Heroes. The genre is sometimes known as "character action" games, and represent a modern evolution of traditional arcade action games. This subgenre of games was largely defined by Hideki Kamiya, creator of Devil May Cry and Bayonetta. In turn, Devil May Cry (2001) was influenced by earlier hack-and-slash games, including Onimusha: Warlords (2001) and Strider. Role-playing games The term "hack and slash" itself has roots in "pen and paper" role-playing games such as Dungeons & Dragons (D&D), denoting campaigns of violence with no other plot elements or significant goal. The term itself dates at least as far back as 1980, as shown in a Dragon article by Jean Wells and Kim Mohan which includes the following statement: "There is great potential for more than hacking and slashing in D&D or AD&D; there is the possibility of intrigue, mystery and romance involving both sexes, to the benefit of all characters in a campaign." Hack and slash made the transition from the tabletop to role-playing video games, usually starting in D&D-like worlds. This form of gameplay influenced a wide range of action role-playing games, including games such as Xanadu and Diablo. 
See also Action role-playing game Beat 'em up Dungeon crawl List of beat 'em ups, including hack-and-slash games Powergaming Roguelike Slasher film References 20th-century neologisms MUD terminology Role-playing game terminology Video game genres Video game terminology
Hack and slash
[ "Technology" ]
883
[ "Computing terminology", "Video game terminology" ]
561,585
https://en.wikipedia.org/wiki/Master%20theorem%20%28analysis%20of%20algorithms%29
In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis for many recurrence relations that occur in the analysis of divide-and-conquer algorithms. The approach was first presented by Jon Bentley, Dorothea Blostein (née Haken), and James B. Saxe in 1980, where it was described as a "unifying method" for solving such recurrences. The name "master theorem" was popularized by the widely used algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein. Not all recurrence relations can be solved by this theorem; its generalizations include the Akra–Bazzi method.

Introduction
Consider a problem that can be solved using a recursive algorithm such as the following:

    procedure p(input x of size n):
        if n < some constant k:
            Solve x directly without recursion
        else:
            Create a subproblems of x, each having size n/b
            Call procedure p recursively on each subproblem
            Combine the results from the subproblems

The above algorithm divides the problem into a number of subproblems recursively, each subproblem being of size n/b. Its solution tree has a node for each recursive call, with the children of that node being the other calls made from that call. The leaves of the tree are the base cases of the recursion, the subproblems (of size less than k) that do not recurse. The above example would have a child nodes at each non-leaf node. Each node does an amount of work that corresponds to the size of the subproblem n passed to that instance of the recursive call and given by f(n). The total amount of work done by the entire algorithm is the sum of the work performed by all the nodes in the tree.

The runtime of an algorithm such as the above on an input of size n, usually denoted T(n), can be expressed by the recurrence relation

    T(n) = a T(n/b) + f(n),

where f(n) is the time to create the subproblems and combine their results in the above procedure. This equation can be successively substituted into itself and expanded to obtain an expression for the total amount of work done. The master theorem allows many recurrence relations of this form to be converted to Θ-notation directly, without doing an expansion of the recursive relation.

Generic form
The master theorem always yields asymptotically tight bounds for recurrences from divide and conquer algorithms that partition an input into smaller subproblems of equal sizes, solve the subproblems recursively, and then combine the subproblem solutions to give a solution to the original problem. The time for such an algorithm can be expressed by adding the work that they perform at the top level of their recursion (to divide the problems into subproblems and then combine the subproblem solutions) together with the time taken in the recursive calls of the algorithm. If T(n) denotes the total time for the algorithm on an input of size n, and f(n) denotes the amount of time taken at the top level of the recurrence, then the time can be expressed by a recurrence relation that takes the form:

    T(n) = a T(n/b) + f(n)

Here n is the size of an input problem, a is the number of subproblems in the recursion, and b is the factor by which the subproblem size is reduced in each recursive call (b > 1). Crucially, a and b must not depend on n. The theorem below also assumes that, as a base case for the recurrence, T(n) = Θ(1) when n is less than some bound κ > 0, the smallest input size that will lead to a recursive call. Recurrences of this form often satisfy one of the three following regimes, based on how the work to split/recombine the problem f(n) relates to the critical exponent c_crit = log_b a. 
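The case analysis below can be made concrete with a small decision procedure. The following sketch is not part of the original article; it assumes the driving function has the common polynomial-with-logarithm form f(n) = Θ(n^c (log n)^k), and the function name and example recurrences are chosen here purely for illustration.

    import math

    def master_theorem_bound(a: float, b: float, c: float, k: float = 0.0) -> str:
        """Asymptotic order of T(n) = a*T(n/b) + f(n), assuming f(n) = Theta(n^c * (log n)^k).

        Covers case 1, the (extended) case 2, and case 3; for a driving function of this
        polynomial-with-logarithm form, the case 3 regularity condition holds automatically
        whenever c > log_b(a)."""
        if a < 1 or b <= 1:
            raise ValueError("the theorem requires a >= 1 and b > 1")
        c_crit = math.log(a, b)                        # critical exponent log_b(a)
        if math.isclose(c, c_crit, abs_tol=1e-12):
            # Case 2 (balanced work), extended to any exponent k on the log factor.
            if math.isclose(k, -1.0, abs_tol=1e-12):
                return f"Theta(n^{c_crit:g} * log log n)"
            if k > -1.0:
                return f"Theta(n^{c_crit:g} * (log n)^{k + 1:g})"
            return f"Theta(n^{c_crit:g})"
        if c < c_crit:
            # Case 1: the work is dominated by the leaves of the recursion tree.
            return f"Theta(n^{c_crit:g})"
        # Case 3: the work is dominated by the top-level call.
        return f"Theta(n^{c:g} * (log n)^{k:g})" if k else f"Theta(n^{c:g})"

    print(master_theorem_bound(8, 2, 2))   # T(n) = 8T(n/2) + Theta(n^2)  -> Theta(n^3)
    print(master_theorem_bound(2, 2, 1))   # T(n) = 2T(n/2) + Theta(n)    -> Theta(n^1 * (log n)^1)
    print(master_theorem_bound(2, 2, 2))   # T(n) = 2T(n/2) + Theta(n^2)  -> Theta(n^2)

For driving functions outside this polynomial-with-logarithm form, the theorem's conditions must be checked directly, or a generalization such as the Akra–Bazzi method used instead.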
The three cases, stated in standard big O notation, are:

Case 1: If f(n) = O(n^c) for some c < c_crit, then T(n) = Θ(n^(c_crit)).
Case 2: If f(n) = Θ(n^(c_crit) (log n)^k) for some k ≥ 0, then T(n) = Θ(n^(c_crit) (log n)^(k+1)).
Case 3: If f(n) = Ω(n^c) for some c > c_crit, and if a f(n/b) ≤ k f(n) for some constant k < 1 and all sufficiently large n (the regularity condition), then T(n) = Θ(f(n)).

A useful extension of Case 2 handles all values of k: if f(n) = Θ(n^(c_crit) (log n)^k), then T(n) = Θ(n^(c_crit) (log n)^(k+1)) for k > −1, T(n) = Θ(n^(c_crit) log log n) for k = −1, and T(n) = Θ(n^(c_crit)) for k < −1.

Examples

Case 1 example
T(n) = 8T(n/2) + 1000n^2
As one can see from the formula above: a = 8, b = 2, f(n) = 1000n^2, so f(n) = O(n^c), where c = 2.
Next, we see if we satisfy the case 1 condition: c_crit = log_b a = log_2 8 = 3, and indeed c = 2 < 3.
It follows from the first case of the master theorem that T(n) = Θ(n^(c_crit)) = Θ(n^3). (This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 1001n^3 − 1000n^2, assuming T(1) = 1.)

Case 2 example
T(n) = 2T(n/2) + 10n
As we can see in the formula above the variables get the following values: a = 2, b = 2, f(n) = 10n, where f(n) = Θ(n^c (log n)^k) with c = 1 and k = 0.
Next, we see if we satisfy the case 2 condition: c_crit = log_b a = log_2 2 = 1, and therefore, c and c_crit are equal.
So it follows from the second case of the master theorem: T(n) = Θ(n^(c_crit) log n) = Θ(n log n).
Thus the given recurrence relation was in Θ(n log n). (This result is confirmed by the exact solution of the recurrence relation, which is T(n) = n + 10n log_2 n, assuming T(1) = 1.)

Case 3 example
T(n) = 2T(n/2) + n^2
As we can see in the formula above the variables get the following values: a = 2, b = 2, f(n) = n^2, where f(n) = Ω(n^c) with c = 2.
Next, we see if we satisfy the case 3 condition: c_crit = log_b a = log_2 2 = 1, and therefore, yes, c > c_crit.
The regularity condition also holds: 2(n/2)^2 = n^2/2 ≤ k n^2, choosing k = 1/2.
So it follows from the third case of the master theorem: T(n) = Θ(f(n)) = Θ(n^2).
Thus the given recurrence relation was in Θ(n^2), that complies with the f(n) of the original formula. (This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 2n^2 − n, assuming T(1) = 1.)

Inadmissible equations
The following equations cannot be solved using the master theorem:
T(n) = 2^n T(n/2) + n^n — a is not a constant; the number of subproblems should be fixed
T(n) = 2T(n/2) + n/log n — non-polynomial difference between f(n) and n^(log_b a) (see below; extended version applies)
T(n) = 64T(n/8) − n^2 log n — f(n), which is the combination time, is not positive
T(n) = T(n/2) + n(2 − cos n) — case 3 but regularity violation.

In the second inadmissible example above, the difference between f(n) and n^(log_b a) can be expressed with the ratio f(n)/n^(log_b a) = (n/log n)/n = 1/log n. It is clear that 1/log n < n^ε for any constant ε > 0. Therefore, the difference is not polynomial and the basic form of the Master Theorem does not apply. The extended form (case 2b) does apply, giving the solution T(n) = Θ(n log log n).

Application to common algorithms
Common applications include binary search (one subproblem of half size and constant combine work, giving Θ(log n) by case 2), binary tree traversal (two subproblems of half size and constant combine work, giving Θ(n) by case 1), and merge sort (two subproblems of half size and linear merge work, giving Θ(n log n) by case 2).

See also
Akra–Bazzi method
Asymptotic complexity

Notes

References
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 2001. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.
Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. The master theorem (including the version of Case 2 included here, which is stronger than the one from CLRS) is on pp. 268–270.

Asymptotic analysis Theorems in computational complexity theory Recurrence relations Analysis of algorithms
Master theorem (analysis of algorithms)
[ "Mathematics" ]
1,355
[ "Mathematical analysis", "Recurrence relations", "Theorems in discrete mathematics", "Mathematical relations", "Asymptotic analysis", "Theorems in computational complexity theory" ]
561,590
https://en.wikipedia.org/wiki/Positive%20pressure
Positive pressure is a pressure within a system that is greater than that of the environment surrounding the system. Consequently, if there is any leak from the positively pressured system, it will egress into the surrounding environment. This is in contrast to a negative pressure room, where air is sucked in. Positive pressure is also used to ensure there is no ingress of the environment into a supposedly closed system. A typical example of the use of positive pressure is a habitat located in an area where flammable gases may be present, such as on an oil platform, or a laboratory cleanroom. This kind of positive pressure is also used in operating theaters and in vitro fertilisation (IVF) labs. Hospitals may have positive pressure rooms for patients with compromised immune systems. Air will flow out of the room instead of in, so that any airborne microorganisms (e.g., bacteria) that may infect the patient are kept away. Positive pressure is also important in human and chick development: pressure created by the closure of the anterior and posterior neuropores of the neural tube during neurulation is a requirement for brain development. Amphibians use this process to respire, using positive pressure to inflate their lungs.

Historic utility

Industrial utility
Positive pressure systems are commonly used in industry to ventilate confined spaces containing dust, fumes, pollutants and/or high temperatures.

Clinical utility
Many hospitals are equipped with negative and positive pressure rooms just for the purposes described. Negative pressure rooms are used to help keep airborne pathogens (e.g., aerosolized COVID-19 and active TB) from escaping into surrounding areas, thereby preventing the spread of airborne pathogens to outside the room. Positive pressure rooms are used for immunocompromised persons (e.g., neutropenic patients), whereby air of controlled quality is sent into the room to prevent random (and potentially polluted) air from entering the room. The CDC recommends a positive pressure differential of at least 2.5 Pa between the positively pressured room and the adjoining hallway.

See also
Filtered air positive pressure
Negative pressure (disambiguation)
Negative room pressure
Overpressure (CBRN protection)
Plenum chamber
Positive pressure enclosure

References

Classical mechanics Gas technologies Pressure
Positive pressure
[ "Physics" ]
476
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Pressure", "Classical mechanics", "Mechanics", "Wikipedia categories named after physical quantities" ]
561,597
https://en.wikipedia.org/wiki/Battlefield%20Vietnam
Battlefield Vietnam is a 2004 first-person shooter video game developed by Digital Illusions Canada and published by Electronic Arts for Microsoft Windows. It is the second installment of the Battlefield franchise, coming after Battlefield 1942. Battlefield Vietnam takes place during the Vietnam War and features a large variety of maps based on historical settings, such as the Ho Chi Minh Trail, Battle of Huế, Ia Drang Valley, Operation Flaming Dart, the Battle of Khe Sanh and Fall of Saigon. On 15 March 2005, EA re-released the game as Battlefield Vietnam: Redux, which includes new vehicles, maps and an EA-produced World War II mod, based on the previous installment Battlefield 1942. Gameplay In the game's playable maps, the player's primary objective is to occupy Control Points to enable allies and controllable vehicles to spawn. Battlefield Vietnam employs similar point-by-point objectives to its prequel, Battlefield 1942, as well as a form of asymmetrical warfare gameplay. The two teams, the U.S. and North Vietnam, are provided different equipment and vehicles. The U.S. relies on heavy vehicles, employing heavy tanks, helicopters, and bombers. The Vietnamese rely on infantry tactics, utilizing anti-tank weapons. The developers intended to reflect the actual conditions of war throughout the game. The game features a "Sipi Hole" as a mobile spawn point, which is representative of the vast tunnel networks utilized by Vietnam forces. Similar to previous games in the Battlefield series, spawn tickets (reinforcements) play a vital role in defeating the opposing team. Battlefield Vietnam features the United States with Marines, Army and the Navy; South Vietnam with Army of the Republic of Vietnam; and North Vietnam with People's Army of Vietnam and the Viet Cong. Built on a modified version of the Battlefield 1942 engine, Battlefield Vietnam has new and improved features compared to its predecessor. The game gives the player a variety of weapons based on the war and features various contemporary weapons and concepts, such as the AK47 assault rifle and punji stick traps. The game introduced several vehicle improvements over the prequel, such as air-lifting vehicles and working vehicle radios. The radios feature 1960s music and an option for the player to import their own audio files into a designated directory. Unlike the prequel, players are able to fire their weapons from vehicles when in the passenger seat of a vehicle. The game is the first in the Battlefield series to utilize a 3D map, allowing players to see icons that represent the position of control points or friendly units, giving the player increased situational awareness. Reception In June 2004, Battlefield Vietnam received a "Gold" certification from the Verband der Unterhaltungssoftware Deutschland, indicating sales of at least 100,000 units across Germany, Switzerland and Austria. Overall sales of Battlefield Vietnam reached 990,000 copies by that November, by which time the Battlefield series had sold 4.4 million copies. The game received "generally favorable reviews" according to the review aggregation website Metacritic. Battlefield Vietnam was a runner-up for Computer Games Magazines list of the 10 best computer games of 2004. It won the magazine's special award for "Best Soundtrack". It also won GameSpot's 2004 "Best Licensed Music" award. 
Notes References External links 2004 video games Asymmetrical multiplayer video games 02 Cold War video games Electronic Arts games First-person shooter multiplayer online games Inactive multiplayer online games Multiplayer and single-player video games Multiplayer online games Video games about the United States Marine Corps Video games developed in Canada Video games set in the 1960s Video games set in the 1970s Video games set in Cambodia Video games set in Vietnam Vietnam War video games Windows games Windows-only games
Battlefield Vietnam
[ "Physics" ]
742
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
561,717
https://en.wikipedia.org/wiki/Dimethylmercury
Dimethylmercury is an extremely toxic organomercury compound with the formula (CH3)2Hg. A volatile, flammable, dense and colorless liquid, dimethylmercury is one of the strongest known neurotoxins. Less than 0.1 mL is capable of inducing severe mercury poisoning resulting in death. Synthesis, structure, and reactions The compound was one of the earliest organometallics reported, reflecting its considerable stability. The compound was first prepared by George Buckton in 1857 by a reaction of methylmercury iodide with potassium cyanide: 2 CH3HgI + 2 KCN → Hg(CH3)2 + 2 KI + (CN)2 + Hg Later, Edward Frankland discovered that it could be synthesized by treating sodium amalgam with methyl halides: Hg + 2 Na + 2 CH3I → Hg(CH3)2 + 2 NaI It can also be obtained by alkylation of mercuric chloride with methyllithium: HgCl2 + 2 LiCH3 → Hg(CH3)2 + 2 LiCl The molecule adopts a linear structure with Hg–C bond lengths of 2.083 Å. Reactivity and physical properties Dimethylmercury is stable in water and reacts with mineral acids at a significant rate only at elevated temperatures, whereas the corresponding organocadmium and organozinc compounds (and most metal alkyls in general) hydrolyze rapidly. The difference reflects the high electronegativity of Hg (Pauling EN = 2.00) and the low affinity of Hg(II) for oxygen ligands. The compound undergoes a redistribution reaction with mercuric chloride to give methylmercury chloride: (CH3)2Hg + HgCl2 → 2 CH3HgCl Whereas dimethylmercury is a volatile liquid, methylmercury chloride is a crystalline solid. Use Dimethylmercury has few applications because of the risks involved. It has been studied for reactions involving bonding methylmercury cations to target molecules, forming potent bactericides, but methylmercury's bioaccumulation and ultimate toxicity has led to it being largely abandoned in favor of the less toxic ethylmercury and diethylmercury compounds, which perform a similar function without the bioaccumulation hazard. In toxicology, it still finds limited use as a reference toxin. It is also used to calibrate NMR instruments for detection of mercury (δ 0 ppm for 199Hg NMR), although diethylmercury and less toxic mercury salts are now preferred. Around 1960, Phil Pomerantz, a man working at the Bureau of Naval Weapons, suggested that dimethylmercury be used as a fuel mix with red fuming nitric acid. This was never done although it did lead to testing a red fuming nitric acid-Unsymmetrical dimethylhydrazine rocket with elemental mercury being injected into the combustion chamber at the Naval Ordnance Test Station. Safety Dimethylmercury is extremely toxic and dangerous to handle. Absorption of doses as low as 0.1 mL can result in severe mercury poisoning. The risks are enhanced because of the compound's high vapor pressure. Since it is highly lipophilic, it absorbs through the skin and into body fat very easily and can permeate many materials, including many plastics and rubber compounds. Permeation tests showed that several types of disposable latex or polyvinyl chloride gloves (typically, about 0.1 mm thick), commonly used in most laboratories and clinical settings, had high and maximal rates of permeation by dimethylmercury within 15 seconds. The American Occupational Safety and Health Administration advises handling dimethylmercury with highly resistant laminated gloves with an additional pair of abrasion-resistant gloves worn over the laminate pair, and also recommends using a face shield and working in a fume hood. 
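For a sense of scale of the 0.1 mL figure cited above, the volume can be converted to mass and molar quantities. The sketch below is illustrative only and is not from the article; the density value it uses (roughly 3 g/mL) is an assumed approximate literature figure.

    # Rough scale of a 0.1 mL exposure, assuming a density of ~3.0 g/mL
    # (approximate literature value for dimethylmercury; not stated in this article).
    VOLUME_ML = 0.1
    DENSITY_G_PER_ML = 3.0          # assumed
    MOLAR_MASS_DMHG = 230.66        # g/mol for (CH3)2Hg: Hg 200.59 + 2 x (12.011 + 3 x 1.008)
    MOLAR_MASS_HG = 200.59          # g/mol

    mass_g = VOLUME_ML * DENSITY_G_PER_ML        # about 0.3 g of the compound
    moles = mass_g / MOLAR_MASS_DMHG             # about 1.3e-3 mol
    mercury_mg = moles * MOLAR_MASS_HG * 1000    # roughly 260 mg of mercury
    print(f"{mass_g:.2f} g of compound, roughly {mercury_mg:.0f} mg of mercury")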
Dimethylmercury is metabolized after several days to methylmercury. Methylmercury crosses the blood–brain barrier easily, probably owing to formation of a complex with cysteine. It easily absorbs into the body, and has a tendency to bioaccumulate. The symptoms of poisoning may be delayed by months, resulting in cases in which a diagnosis is ultimately discovered, but only at a point in which it is too late or almost too late for an effective treatment regimen to be successful. Methylmercury poisoning is also known as Minamata disease. Incidents As early as 1865, two workers in the laboratory of Edward Frankland died after exhibiting progressive neurological symptoms following accidental exposure to the compound. Karen Wetterhahn, a professor of chemistry at Dartmouth College, died in 1997, ten months after spilling only a few drops of dimethylmercury onto her latex gloves. This incident resulted in improved awareness of the substance's extreme toxicity, and its ability to easily penetrate latex, compared to less porous materials such as nitrile. New OSHA material-handling guidelines were published, many institutions purged their supplies of the compound, and it became almost impossible to buy. Christoph Bulwin, a 40-year-old German database administrator for IG Bergbau, Chemie, Energie, claimed to have been attacked with a syringe-tipped umbrella on 15 July 2011 in Hanover, Germany. Bulwin, who died a year later from mercury poisoning, had said he confiscated the syringe, which was later found to contain dimethylmercury. According to a 2020 article in Forensic Science, Medicine and Pathology, police investigations revealed a syringe containing a typical mercury thallium compound in Bulwin's car, and mercury and thallium in thermometers at his workplace. Inconclusive antemortem and postmortem blood, urine, and tissue analysis cast doubts on the assault account. The absence of an identified assailant or motive, as well as the presence of different mercury compounds in Bulwin's car, led police to conclude that the intoxication was likely self-administered, thereby terminating the preliminary investigation. The Forensic Science, Medicine and Pathology account is contradicted by other reports, including the 2022 episode of the German TV program Aktenzeichen XY .. ungelöst that was co-edited by the Hannover criminal police. References External links ATSDR – ToxFAQs: Mercury ATSDR – Public Health Statement: Mercury ATSDR – MMG: Mercury ATSDR – Toxicological Profile: Mercury National Pollutant Inventory – Mercury and compounds Fact Sheet Neurotoxins Organomercury compounds Methyl complexes
Dimethylmercury
[ "Chemistry" ]
1,406
[ "Neurochemistry", "Neurotoxins" ]
561,845
https://en.wikipedia.org/wiki/Jacob%20Bekenstein
Jacob David Bekenstein (; May 1, 1947 – August 16, 2015) was a Mexican-born American-Israeli theoretical physicist who made fundamental contributions to the foundation of black hole thermodynamics and to other aspects of the connections between information and gravitation. Biography Jacob Bekenstein was born in Mexico City to Joseph and Esther (née Vladaslavotsky), Polish Jews who immigrated to Mexico. He moved to the United States during his early life, gaining U.S. citizenship in 1968. He was also a citizen of Israel. Bekenstein attended the Polytechnic Institute of Brooklyn, now known as the New York University Tandon School of Engineering, obtaining both an undergraduate degree and a Master of Science degree in 1969. He went on to receive a Doctor of Philosophy degree from Princeton University, working under the direction of John Archibald Wheeler, in 1972. Bekenstein had three children with his wife, Bilha. All three children, Yehonadav, Uriya and Rivka Bekenstein, became scientists. Bekenstein was known as a religious man and a believer, being quoted as saying: "I look at the world as a product of God, He set very specific laws and we delight in discovering them through scientific work." Scientific career By 1972, Bekenstein had published three influential papers about the black hole stellar phenomenon, postulating the no-hair theorem and presenting a theory on black hole thermodynamics. In the years to come, Bekenstein continued his exploration of black holes, publishing papers on their entropy and quantum mass. Bekenstein was a postdoctoral fellow at the University of Texas at Austin from 1972 to 1974. He then immigrated to Israel to lecture and teach at Ben-Gurion University in Beersheba. In 1978, he became a full professor and in 1983, head of the astrophysics department. In 1990, he became a professor at the Hebrew University of Jerusalem and was appointed head of its theoretical physics department three years later. He was elected to the Israel Academy of Sciences and Humanities in 1997. He was a visiting scholar at the Institute for Advanced Study in 2009 and 2010. In addition to lectures and residencies around the world, Bekenstein continued to serve as Polak professor of theoretical physics at the Hebrew University until his death at the age of 68, in Helsinki, Finland. He died unexpectedly on August 16, 2015, just months after receiving the American Physical Society's Einstein Prize "for his ground-breaking work on black hole entropy, which launched the field of black hole thermodynamics and transformed the long effort to unify quantum mechanics and gravitation". Contributions to physics In 1972, Bekenstein was the first to suggest that black holes should have a well-defined entropy. He wrote that a black hole's entropy was proportional to the area of its (the black hole's) event horizon. Bekenstein also formulated the generalized second law of thermodynamics, black hole thermodynamics, for systems including black holes. Both contributions were affirmed when Stephen Hawking (and, independently, Zeldovich and others) proposed the existence of Hawking radiation two years later. Hawking had initially opposed Bekenstein's idea on the grounds that a black hole could not radiate energy and therefore could not have entropy. However, in 1974, Hawking performed a lengthy calculation that convinced him that particles can indeed be emitted from black holes. Today this is known as Hawking radiation. 
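As a numerical illustration of the area–entropy proportionality described above (not part of the original text), the Bekenstein–Hawking formula S = k_B c^3 A / (4 G ħ) can be evaluated for a Schwarzschild black hole; the solar-mass example and the rounded constants below are choices made here for the sketch.

    import math

    # Physical constants (SI, rounded)
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    hbar = 1.055e-34     # reduced Planck constant, J s
    M_sun = 1.989e30     # solar mass, kg

    def bekenstein_hawking_entropy(mass_kg: float) -> float:
        """Entropy, in units of the Boltzmann constant, of a Schwarzschild black hole."""
        r_s = 2 * G * mass_kg / c**2        # Schwarzschild radius
        area = 4 * math.pi * r_s**2         # horizon area
        planck_area = G * hbar / c**3       # squared Planck length
        return area / (4 * planck_area)     # S / k_B = A / (4 l_p^2)

    s_over_kB = bekenstein_hawking_entropy(M_sun)
    print(f"S/k_B for a solar-mass black hole: {s_over_kB:.2e}")   # on the order of 1e77
    print(f"equivalent information content in bits: {s_over_kB / math.log(2):.2e}")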
Bekenstein's doctoral adviser, John Archibald Wheeler, also worked with him to develop the no-hair theorem, a reference to Wheeler's saying that "black holes have no hair," in the early 1970s. Bekenstein's suggestion was proven unstable, but it was influential in the development of the field. Based on his black-hole thermodynamics work, Bekenstein also demonstrated the Bekenstein bound: there is a maximum to the amount of information that can potentially be stored in a given finite region of space which has a finite amount of energy (which is similar to the holographic principle). In 1982, Bekenstein developed a rigorous framework to generalize the laws of electromagnetism to handle inconstant physical constants. His framework replaces the fine-structure constant by a scalar field. However, this framework for changing constants did not incorporate gravity. In 2004, Bekenstein boosted Mordehai Milgrom's theory of Modified Newtonian Dynamics (MOND) by developing a relativistic version. It is known as TeVeS for Tensor/Vector/Scalar and it introduces three different fields in space time to replace the one gravitational field. Awards and recognition Ernst David Bergmann Prize for Science (Israel) in 1977 Landau Prize for Research in Physics (Israel) in 1981 First prize essay for the Gravity Research Foundation (United States) in 1981 Rothschild Prize in the Physical Sciences in 1988 Elected to the Israel Academy of Sciences and Humanities in 1997 Second prize essay for the Gravity Research Foundation in 2001 Elected to the World Jewish Academy of Sciences in 2003 Israel Prize in Physics in 2005 Weizmann Prize in the Exact Sciences (Tel Aviv, Israel) in 2011 Wolf Prize in Physics in 2012 Einstein Prize of the American Physical Society in 2015 Published works J. D. Bekenstein, Information in the Holographic Universe. Scientific American, Volume 289, Number 2, August 2003, p. 61. J. D. Bekenstein and M. Schiffer, Quantum Limitations on the Storage and Transmission of Information, Int. J. of Modern Physics 1:355–422 (1990). J. D. Bekenstein, Entropy content and information flow in systems with limited energy, Phys. Rev. D 30:1669–1679 (1984) . [citeseer] J. D. Bekenstein, Communication and energy, Phys. Rev A 37(9):3437–3449 (1988) . [citeseer] J. D. Bekenstein, Entropy bounds and the second law for black holes, Phys. Rev. D 27(10):2262–2270 (1983). [citeseer] J. D. Bekenstein, Specific entropy and the sign of the energy, Phys. Rev. D 26(4):950–953 (1982). [citeseer] J. D. Bekenstein, Black holes and everyday physics, General Relativity and Gravitation, 14(4):355–359 (1982). [citeseer] J. D. Bekenstein, Universal upper bound to entropy-to-energy ratio for bounded systems, Phys. Rev. D 23:287–298 (1981). [citeseer] J. D. Bekenstein, Energy cost of information transfer, Phys. Rev. Lett 46:623–626. (1981). [citeseer] J. D. Bekenstein, Black-hole thermodynamics, Physics Today, 24–31 (Jan. 1980). J. D. Bekenstein, Statistical black hole thermodynamics, Phys. Rev. D 12:3077–3085 (1975). [citeseer] J. D. Bekenstein, Generalized second law of thermodynamics in black hole physics, Phys. Rev. D 9:3292–3300 (1974) . [citeseer] J. D. Bekenstein, Black holes and entropy, Phys. Rev. D 7:2333–2346 (1973) . [citeseer] J. D. Bekenstein, Black holes and the second law, Nuovo Cimento Letters 4:737–740 (1972). J. D. Bekenstein, Nonexistence of baryon number of static black holes, Phys. Rev. D 5:2403–2412 (1972). 
[citeseer] Notes References External links Bekenstein's papers list at ArXiv with links to the full papers Israel Prize Official Site – CV of Jacob Bekenstein 1947 births 2015 deaths Israel Prize in physics recipients Israeli physicists Israeli astronomers American relativity theorists Thermodynamicists Princeton University alumni Polytechnic Institute of New York University alumni Academic staff of the Hebrew University of Jerusalem Members of the Israel Academy of Sciences and Humanities Institute for Advanced Study visiting scholars Mexican emigrants to Israel Mexican Jews Scientists from Mexico City Wolf Prize in Physics laureates Jewish American physicists Mexican people of Polish-Jewish descent Mexican emigrants to the United States Burials at Har HaMenuchot
Jacob Bekenstein
[ "Physics", "Chemistry" ]
1,776
[ "Thermodynamics", "Thermodynamicists" ]
561,885
https://en.wikipedia.org/wiki/Armillaria%20mellea
Armillaria mellea, commonly known as honey fungus, is an edible basidiomycete fungus in the genus Armillaria. It is a plant pathogen and part of a cryptic species complex of closely related and morphologically similar species. It causes Armillaria root rot in many plant species and produces mushrooms around the base of trees it has infected. The symptoms of infection appear in the crowns of infected trees as discoloured foliage, reduced growth, dieback of the branches and death. The mushrooms are edible but some people may be intolerant to them. This species is capable of producing light via bioluminescence in its mycelium. Armillaria mellea is widely distributed in temperate regions of the Northern Hemisphere. The fruit body or mushroom, commonly known as stump mushroom, stumpie, honey mushroom, pipinky or pinky, grows typically on hardwoods but may be found around and on other living and dead wood or in open areas. Taxonomy The species was originally named Agaricus melleus by Danish-Norwegian botanist Martin Vahl in 1790; it was transferred to the genus Armillaria in 1871 by Paul Kummer. Numerous subtaxa have been described: Similar species Armillaria mellea once included a range of species with similar features that have since been reclassified. The following are reassigned subtaxa, mostly variety-level entries from the 19th century: Description The basidiocarp of each has a smooth cap in diameter, convex at first but becoming flattened with age often with a central raised umbo, later becoming somewhat dish-shaped. The margins of the cap are often arched at maturity and the surface is sticky when wet. Though typically honey-coloured, this fungus is rather variable in appearance and sometimes has a few dark, hairy scales near the centre somewhat radially arranged. The gills are white at first, sometimes becoming pinkish-yellow or discoloured with age, broad and fairly distant, attached to the stipe at right angles or are slightly decurrent. The stipe is of variable length, up to about long and in diameter. It is fibrillose and of a firm spongy consistency at first but later becomes hollow. It is cylindrical and tapers to a point at its base where it is fused to the stipes of other mushrooms in the clump. It is whitish at the upper end and brownish-yellow below, often with a very dark-coloured base. There is a broad persistent skin-like ring attached to the upper part of the stipe. This has a velvety margin and yellowish fluff underneath and extends outwards as a white partial veil protecting the gills when young. The flesh of the cap is whitish and has a sweetish odour and flavour with a tinge of bitterness. Under the microscope, the spores are approximately elliptical, 7–9 by 6–7 μm, inamyloid with prominent apiculi (short, pointed projections) at the base. The spore print is white. The basidia (spore-producing structures) lack basal clamps. The main part of the fungus is underground where a mat of mycelial threads may extend for great distances. They are bundled together in rhizomorphs that are black in this species. The fungal body is not bioluminescent but its mycelia are luminous when in active growth. Pathogenesis Armillaria mellea infects new hosts through rhizomorphs and basidiospores. It is rare for basidiospores to be successful in infecting new hosts and often colonize woody debris instead, but rhizomorphs, however, can grow up to ten feet long in order to find a new host. Distribution and habitat Armillaria mellea is widespread in northern temperate zones. 
It has been found in North America, Europe and northern Asia, and it has been introduced to South Africa. The fungus grows parasitically on a large number of broadleaf trees. It fruits in dense clusters at the base of trunks or stumps. It has been reported in almost every state within the continental United States. Ecology Armillaria mellea prefers moist soil and lower soil temperatures but it can also withstand extreme temperatures, such as forest fires, due to the protection of the soil. It is found in many kinds of landscapes, including gardens, parks, vineyards, tree production areas, and natural landscapes. Armillaria mellea typically is symbiotic with hardwood trees and conifers; its hosts include orchards, planted forests, vineyards, and a few herbaceous plants. There are few signs, and the ones that do exist are often hard to find. The most prominent sign is honey-coloured mushrooms at the base of the infected plant. Additional signs include white, fan-shaped mycelia and black rhizomorphs with diameters between 1/32nd of an inch and 1/8th of an inch. These usually are not as noticeable because they occur beneath the bark and in the soil, respectively. The symptoms are much more numerous, including slower growth, dieback of branches, yellowing foliage, rotted wood at the base and/or roots, external cankers, cracking bark, bleeding stem, leaf wilting, defoliation, and rapid death. Leaf wilting, defoliation, and dieback occur after the destruction of the cambium. It is one of the most common causes of death in trees and shrubs in both natural and human-cultivated habitats, and causes steady and substantial losses. Disease cycle Armillaria mellea infects hosts both through basidiospores and through penetration by rhizomorphs, which can grow long distances through the soil each year to find new, living tissue to infect. However, infection of living host tissue through basidiospores is quite rare. Two basidiospores must germinate and fuse to be viable and produce mycelium. In the late summer and autumn, Armillaria mellea produces mushrooms with notched gills, a ring near the cap base, and a white to golden color. They do not always appear, but when they do they can be found on both living and dead trees near the ground. These mushrooms produce and release the sexually produced basidiospores, which are dispersed by the wind. This is the only spore-bearing phase. The fungus overwinters as either rhizomorphs or vegetative mycelium. Infected wood is weakened through decay in roots and tree base after destruction of the vascular cambium and underlying wood. Trees become infected by A. mellea when rhizomorphs growing through the soil encounter uninfected roots. Alternatively, when infected roots come into contact with uninfected ones the fungal mycelium may grow across. The rhizomorphs invade the trunk, growing between the bark and the wood and causing wood decay, growth reduction and mortality. Trees that are already under stress are more likely to be attacked but healthy trees may also be parasitized. The foliage becomes sparse and discoloured, twig growth slows down and branches may die back. When they are attacked, the Douglas-fir, western larch and some other conifers often produce an extra large crop of cones shortly before dying. Coniferous trees also tend to ooze resin from infected areas whereas broad-leaved trees sometimes develop sunken cankers. A growth of fruiting bodies near the base of the trunk confirms the suspicion of Armillaria root rot.
In 1893, the American mycologist Charles Horton Peck reported finding Armillaria fruiting bodies that were "aborted", in a similar way to specimens of Entoloma abortivum. It was not until 1974 that Roy Watling showed that the aborted specimens included cells of both Armillaria mellea and Entoloma abortivum. He thought that the Armillaria was parasitizing the Entoloma, a plausible hypothesis given its pathogenic behaviour. However, a 2001 study by Czederpiltz, Volk and Burdsall showed that the Entoloma was in fact the parasite. The whitish-grey malformed fruit bodies known as carpophoroids were the result of E. abortivum hyphae penetrating the Armillaria and disrupting its normal development. The main part of the fungus is underground, where a mat of mycelial threads may extend for great distances. The rhizomorphs of A. mellea are initiated from the mycelium as multicellular apices; the rhizomorphs are multicellular vegetative organs that exclude soil from the interior of their tissues. The rhizomorphs spread far greater distances through the ground than the mycelium does. The rhizomorphs are black in this species. The fungal body is not bioluminescent but its mycelia and rhizomorphs are luminous when in active growth. By producing rhizomorphs, A. mellea is parasitic on woody plants of many species, especially shrubs and hardwood and evergreen trees. In one example, A. mellea spread by rhizomorphs from an initially infected tree killed 600 trees in a prune orchard in 6 years. Each infected tree was immediately adjacent to an already infected one, the spread occurring by rhizomorphs through the tree roots and soil. Management There are no fungicides or management practices known to kill A. mellea after infection without damaging the infected plant, although possible treatments are still being studied. There are practices that can extend the life of the plant and prevent further spreading. The best way to extend the plant's life is to improve the host condition through supplemental watering and fertilization. To prevent further spread, regulate irrigation to avoid water stress, keep the root collar dry, control defoliating pathogens, remove stumps, fertilize adequately, avoid physical root damage and soil compaction, and don't plant trees that are especially susceptible to the disease in places where Armillaria mellea has been recorded. There is also some evidence that biological control using the fungus genus Trichoderma may help. Trichoderma is a predator of A. mellea and is often found in woodchips. Therefore, chipping or grinding dead and infected roots will give Trichoderma its preferred habitat and help it proliferate. Solarization will also create an ideal habitat, as dry soil and higher soil temperatures are preferable for Trichoderma but poor conditions for A. mellea. Edibility Armillaria mellea mushrooms are considered good edibles, though not preferred by some, and the tough stalks are usually excluded. They are best collected when young and thoroughly cooked. Some individuals have reported "allergic" reactions that result in stomach upsets. Some authors suggest not collecting mushrooms from the wood of various trees, including hemlock, buckeye, eucalyptus, and locust. They may have been used medicinally by indigenous peoples as a laxative. The mushrooms have a taste that has been described as slightly sweet and nutty, with a texture ranging from chewy to crunchy, depending on the method of preparation.
Parboiling mushrooms before consuming removes the bitter taste present in some specimens, and may reduce the amount of gastrointestinal irritants. According to one guide, they must be cooked before eating. Drying the mushrooms preserves and intensifies their flavour, although reconstituted mushrooms tend to be tough to eat. The mushrooms can also be pickled and roasted. Chemistry Several bioactive compounds have been isolated and identified from the fruit bodies. The triterpenes 3β-hydroxyglutin-5-ene, friedelane-2α,3β-diol, and friedelin were reported in 2011. Indole compounds include tryptamine and serotonin. The fungus produces cytotoxic compounds known as melleolides. Melleolides are made from orsellinic acid and protoilludane sesquiterpene alcohols via esterification. A polyketide synthase gene, termed ArmB, was identified in the genome of the fungus and was found to be expressed during melleolide production. The gene shares c. 42% similarity with the orsellinic acid synthase gene (OrsA) in Aspergillus nidulans. Characterization of the gene product showed that it produces orsellinic acid in vitro. It is a non-reducing iterative type-1 polyketide synthase. Co-incubation of free orsellinic acid with alcohols and ArmB showed cross-coupling activity; the enzyme therefore has transesterification activity. Other auxiliary factors are also suspected to control substrate specificity. Additionally, halogen modifications have been observed: overexpression of annotated halogenases (termed ArmH1-5) and characterization of the resulting enzymes showed that all five chlorinate melleolide F. In vitro reactions with free-standing substrates showed that the enzymes do not require auxiliary carrier proteins for substrate delivery. See also Forest pathology List of Armillaria species List of bioluminescent fungi References Bioluminescent fungi mellea Edible fungi Fungi described in 1790 Fungi of Africa Fungi of Asia Fungi of Europe Fungi of North America Parasitic fungi Fungal grape diseases Fungal tree pathogens and diseases Taxa named by Martin Vahl Fungus species
Armillaria mellea
[ "Biology" ]
2,783
[ "Fungi", "Fungus species" ]
561,940
https://en.wikipedia.org/wiki/Yasumasa%20Kanada
Yasumasa Kanada (1949–2020) was a Japanese computer scientist best known for his numerous world records, set over three decades, for calculating digits of π. He set the record 11 of the past 21 times. Career Kanada was a professor in the Department of Information Science at the University of Tokyo in Tokyo, Japan, until 2015. Pi records From 2002 until 2009, Kanada held the world record for calculating the number of digits in the decimal expansion of pi – exactly 1.2411 trillion digits. The calculation took more than 600 hours on 64 nodes of a HITACHI SR8000/MPP supercomputer. Some of his competitors in recent years include Jonathan and Peter Borwein and the Chudnovsky brothers. See also Chronology of computation of π References External links 1949 births 2020 deaths 20th-century Japanese mathematicians 21st-century Japanese mathematicians People from Himeji, Hyōgo Pi-related people Tohoku University alumni Academic staff of the University of Tokyo
Yasumasa Kanada
[ "Mathematics" ]
188
[ "Pi-related people", "Pi" ]
561,962
https://en.wikipedia.org/wiki/Upsilon%20Pi%20Epsilon
Upsilon Pi Epsilon (ΥΠΕ) is the first honor society dedicated to the computing and information disciplines. Informally known as UPE, Upsilon Pi Epsilon was founded in 1967 at Texas A&M University. It has more than 300 chapters worldwide. About Upsilon Pi Epsilon was established at Texas A&M University in January 1967 as an honor society for the computing and information disciplines. It was founded with 22 original members. Dr. Dan Drew, head of the university's Department of Computer Science, was the society's advisor and, later, its national president. The purpose of Upsilon Pi Epsilon was "the promotion of high scholarship and original investigation in the field of computer science and the advancement of the art and profession of computer science and related endeavors." It also recognized talent and sought to maintain high standards in the field. It was the first society developed for computer science in the United States. A second chapter, Alpha of Pennsylvania, was formed at Pennsylvania State University in December 1969. The group expanded to other colleges in the United States and abroad, becoming the first international honor society for the computer and information disciplines. Upsilon Pi Epsilon became a member of the Association of College Honor Societies in 1997. Upsilon Pi Epsilon is endorsed by the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers Computer Society (IEEE-CS). It was also a founding member of the International Federation of Engineering Education Societies. In 2012, Upsilon Pi Epsilon had 247 active collegiate chapters and 235 active alumni chapters. That same year, its annual initiations totaled 2,138 and its total membership was 229,800. Symbols The Greek name Upsilon Pi Epsilon was selected for the first letters of the Greek words for computer, information, and science. The UPE emblem or key features three symbols that are important historically to the computing and information disciplines: the zero, the one, and the abacus. The numbers are arranged as eleven binary bits. During the society's initiation ceremony, inductees sit in front of a table that features nine lit candles and two unlit candles that are arranged as the eleven binary bits of its key. The colors of Upsilon Pi Epsilon are maroon and white. Its symbol is the abacus. Its quarterly publication is the UPE NewsBrief. Activities Upsilon Pi Epsilon holds an annual convention. The society gives out several scholarships for its active student members. It also cosponsors the International Collegiate Programming Contest with the Association for Computing Machinery. Membership Membership is available to undergraduate and graduate students in computer science who have a high grade point average. Eligible undergraduates must complete 48 credits, rank in the upper third of their class, and have a 3.25 GPA overall. Graduate students must have completed 15 units of graduate coursework in computing and must be in the top third of their class. Alumni who majored in computer science can also be offered membership in the fraternity, along with faculty members. Faculty must have taught in the field for one year. Membership in UPE is lifetime. Chapters Upsilon Pi Epsilon has more than 300 chapters in the United States and overseas. Notable members Thomas G.
Dietterich, emeritus professor of computer science at Oregon State University and a pioneer of the field of machine learning Devin Gaines, student at the University of Connecticut who earned five bachelor's degrees simultaneously Timothy P. McNamara, psychologist and the Gertrude Conaway Vanderbilt Chair in social and natural sciences at Vanderbilt University Joseph Monroe, computer scientist and academic Andrea Grimes Parker, computer scientist and professor at Georgia Tech Bryan Simonaire, Maryland state senator Angela Y. Wu, computer scientist and a professor emerita at American University See also Honor cords References External links UPE Homepage Association of College Honor Societies Student organizations established in 1967 1967 establishments in Texas Engineering honor societies
Upsilon Pi Epsilon
[ "Engineering" ]
774
[ "Engineering societies", "Engineering honor societies" ]
562,007
https://en.wikipedia.org/wiki/Magnesium%20chloride
Magnesium chloride is an inorganic compound with the formula MgCl2. It forms hydrates MgCl2·nH2O, where n can range from 1 to 12. These salts are colorless or white solids that are highly soluble in water. These compounds and their solutions, both of which occur in nature, have a variety of practical uses. Anhydrous magnesium chloride is the principal precursor to magnesium metal, which is produced on a large scale. Hydrated magnesium chloride is the form most readily available. Production Magnesium chloride can be extracted from brine or sea water. In North America, it is produced primarily from Great Salt Lake brine. In the Jordan Valley, it is obtained from the Dead Sea. The mineral bischofite (MgCl2·6H2O) is extracted (by solution mining) out of ancient seabeds, for example, the Zechstein seabed in northwest Europe. Some deposits result from the high content of magnesium chloride in the primordial ocean. Some magnesium chloride is made from evaporation of seawater. In the Dow process, magnesium chloride is regenerated from magnesium hydroxide using hydrochloric acid: Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O. It can also be prepared from magnesium carbonate by a similar reaction. Structure MgCl2 crystallizes in the cadmium chloride motif. The hydrates lose water upon heating: n = 12 (−16.4 °C), 8 (−3.4 °C), 6 (116.7 °C), 4 (181 °C), 2 (about 300 °C). In the hexahydrate, the Mg2+ center is also octahedral, being coordinated to six water ligands. The octahydrate and the dodecahydrate can be crystallized from water below 298 K. As verified by X-ray crystallography, these "higher" hydrates also feature [Mg(H2O)6]2+ ions. A decahydrate has also been crystallized. Preparation, general properties Anhydrous MgCl2 is produced industrially by heating the complex salt hexamminemagnesium dichloride, [Mg(NH3)6]Cl2. The thermal dehydration of the hydrates MgCl2·nH2O (n = 6, 12) does not occur straightforwardly. As suggested by the existence of hydrates, anhydrous MgCl2 is a Lewis acid, although a weak one. One derivative is tetraethylammonium tetrachloromagnesate, [N(C2H5)4]2[MgCl4]. Adducts with other Lewis bases are also known. In a related coordination polymer, Mg adopts an octahedral geometry. The Lewis acidity of magnesium chloride is reflected in its deliquescence, meaning that it attracts moisture from the air to the extent that the solid turns into a liquid. Applications Precursor to metallic magnesium Anhydrous MgCl2 is the main precursor to metallic magnesium. The reduction of MgCl2 into metallic Mg is performed by electrolysis in molten salt. As is also the case for aluminium, electrolysis in aqueous solution is not possible, as the metallic magnesium produced would immediately react with water; in other words, the water would be reduced to gaseous hydrogen before Mg reduction could occur. So, the direct electrolysis of molten MgCl2 in the absence of water is required, because the reduction potential needed to obtain Mg lies below the stability domain of water on an Eh–pH diagram (Pourbaix diagram). The production of metallic magnesium at the cathode (reduction reaction) is accompanied by the oxidation of the chloride anions at the anode with release of gaseous chlorine. This process is developed at a large industrial scale. Dust and erosion control Magnesium chloride is one of many substances used for dust control, soil stabilization, and wind erosion mitigation. When magnesium chloride is applied to roads and bare soil areas, both positive and negative performance issues occur which are related to many application factors.
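As a rough illustrative calculation (an added sketch, not part of the article), Faraday's law fixes how much charge the molten-salt electrolysis requires: each Mg2+ ion takes up two electrons at the cathode, while two chloride ions are oxidized to chlorine at the anode. The 1 kg production target below is an arbitrary example figure.

```python
# Hypothetical back-of-the-envelope estimate for the electrolysis of molten MgCl2:
# Mg2+ + 2 e- -> Mg at the cathode, 2 Cl- -> Cl2 + 2 e- at the anode.

FARADAY = 96485.0               # C per mole of electrons
M_MG = 24.305                   # g/mol, molar mass of magnesium
M_CL = 35.453                   # g/mol, molar mass of chlorine
M_MGCL2 = M_MG + 2 * M_CL       # g/mol, molar mass of MgCl2

def electrolysis_estimate(mass_mg_grams: float) -> dict:
    """Estimate feedstock, charge and chlorine by-product for a given mass of Mg."""
    n_mg = mass_mg_grams / M_MG          # moles of Mg produced
    charge = 2 * n_mg * FARADAY          # coulombs (2 electrons per Mg atom)
    feed = n_mg * M_MGCL2                # grams of MgCl2 consumed
    cl2 = n_mg * 2 * M_CL                # grams of Cl2 released at the anode
    return {"charge_C": charge, "MgCl2_g": feed, "Cl2_g": cl2}

if __name__ == "__main__":
    est = electrolysis_estimate(1000.0)  # 1 kg of magnesium metal (example target)
    print(f"Charge required: {est['charge_C']:.2e} C (~{est['charge_C'] / 3600:.0f} A·h)")
    print(f"MgCl2 consumed: {est['MgCl2_g'] / 1000:.2f} kg")
    print(f"Cl2 released at the anode: {est['Cl2_g'] / 1000:.2f} kg")
```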
Catalysis Ziegler-Natta catalysts, used commercially to produce polyolefins, often contain as a catalyst support. The introduction of supports increases the activity of traditional catalysts and allowed the development of highly stereospecific catalysts for the production of polypropylene. Magnesium chloride is also a Lewis acid catalyst in aldol reactions. Ice control Magnesium chloride is used for low-temperature de-icing of highways, sidewalks, and parking lots. When highways are treacherous due to icy conditions, magnesium chloride is applied to help prevent ice from bonding to the pavement, allowing snow plows to clear treated roads more efficiently. For the purpose of preventing ice from forming on pavement, magnesium chloride is applied in three ways: anti-icing, which involves spreading it on roads to prevent snow from sticking and forming; prewetting, which means a liquid formulation of magnesium chloride is sprayed directly onto salt as it is being spread onto roadway pavement, wetting the salt so that it sticks to the road; and pretreating, when magnesium chloride and salt are mixed together before they are loaded onto trucks and spread onto paved roads. Calcium chloride damages concrete twice as fast as magnesium chloride. The amount of magnesium chloride is supposed to be controlled when it is used for de-icing as it may cause pollution to the environment. Nutrition and medicine Magnesium chloride is used in nutraceutical and pharmaceutical preparations. The hexahydrate is sometimes advertised as "magnesium oil". Cuisine Magnesium chloride (E511) is an important coagulant used in the preparation of tofu from soy milk. In Japan it is sold as nigari (にがり, derived from the Japanese word for "bitter"), a white powder produced from seawater after the sodium chloride has been removed, and the water evaporated. In China, it is called lushui (卤水). Nigari or Iushui is, in fact, natural magnesium chloride, meaning that it is not completely refined (it contains up to 5% magnesium sulfate and various minerals). The crystals originate from lakes in the Chinese province of Qinghai, to be then reworked in Japan. Gardening and horticulture Because magnesium is a mobile nutrient, magnesium chloride can be effectively used as a substitute for magnesium sulfate (Epsom salt) to help correct magnesium deficiency in plants via foliar feeding. The recommended dose of magnesium chloride is smaller than the recommended dose of magnesium sulfate (20 g/L). This is due primarily to the chlorine present in magnesium chloride, which can easily reach toxic levels if over-applied or applied too often. It has been found that higher concentrations of magnesium in tomato and some pepper plants can make them more susceptible to disease caused by infection of the bacterium Xanthomonas campestris, since magnesium is essential for bacterial growth. Wastewater treatment It is used to supply the magnesium necessary to precipitate phosphorus in the form of struvite from agricultural waste as well as human urine. Occurrence Magnesium concentrations in natural seawater are between 1250 and 1350 mg/L, around 3.7% of the total seawater mineral content. Dead Sea minerals contain a significantly higher magnesium chloride ratio, 50.8%. Carbonates and calcium are essential for all growth of corals, coralline algae, clams, and invertebrates. Magnesium can be depleted by mangrove plants and the use of excessive limewater or by going beyond natural calcium, alkalinity, and pH values. 
The most common mineral form of magnesium chloride is its hexahydrate, bischofite. Anhydrous compound occurs very rarely, as chloromagnesite. Magnesium chloride-hydroxides, korshunovskite and nepskoeite, are also very rare. Toxicology Magnesium ions are bitter-tasting, and magnesium chloride solutions are bitter in varying degrees, depending on the concentration. Magnesium toxicity from magnesium salts is rare in healthy individuals with a normal diet, because excess magnesium is readily excreted in urine by the kidneys. A few cases of oral magnesium toxicity have been described in persons with normal renal function ingesting large amounts of magnesium salts, but it is rare. If a large amount of magnesium chloride is eaten, it will have effects similar to magnesium sulfate, causing diarrhea, although the sulfate also contributes to the laxative effect in magnesium sulfate, so the effect from the chloride is not as severe. Plant toxicity Chloride () and magnesium () are both essential nutrients important for normal plant growth. Too much of either nutrient may harm a plant, although foliar chloride concentrations are more strongly related with foliar damage than magnesium. High concentrations of ions in the soil may be toxic or change water relationships such that the plant cannot easily accumulate water and nutrients. Once inside the plant, chloride moves through the water-conducting system and accumulates at the margins of leaves or needles, where dieback occurs first. Leaves are weakened or killed, which can lead to the death of the tree. See also Acceptable daily intake Sorel cement Notes and references Notes References Handbook of Chemistry and Physics, 71st edition, CRC Press, Ann Arbor, Michigan, 1990. External links Magnesium Chloride as a De-Icing Agent MSDS file for Magnesium Chloride Hexahydrate Chlorides Magnesium compounds Alkaline earth metal halides Deliquescent materials Food additives E-number additives
Magnesium chloride
[ "Chemistry" ]
1,869
[ "Deliquescent materials", "Chlorides", "Inorganic compounds", "Salts" ]
562,061
https://en.wikipedia.org/wiki/Sorting%20network
In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks. Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. In order to sort larger numbers of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea. Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units. Introduction A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounters a comparator, the comparator swaps the values if and only if the top wire's value is greater than or equal to the bottom wire's value. In a formula, if the top wire carries x and the bottom wire carries y, then after hitting a comparator the wires carry x′ = min(x, y) and y′ = max(x, y), respectively, so the pair of values is sorted. A network of wires and comparators that will correctly sort all possible inputs into ascending order is called a sorting network or Kruskal hub. By reflecting the network, it is also possible to sort all inputs into descending order. The full operation of a simple sorting network is shown below. It is evident why this sorting network will correctly sort the inputs; note that the first four comparators will "sink" the largest value to the bottom and "float" the smallest value to the top. The final comparator sorts out the middle two wires. Depth and efficiency The efficiency of a sorting network can be measured by its total size, meaning the number of comparators in the network, or by its depth, defined (informally) as the largest number of comparators that any input value can encounter on its way through the network. Noting that sorting networks can perform certain comparisons in parallel (represented in the graphical notation by comparators that lie on the same vertical line), and assuming all comparisons to take unit time, it can be seen that the depth of the network is equal to the number of time steps required to execute it. Insertion and Bubble networks We can easily construct a network of any size recursively using the principles of insertion and selection.
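Before turning to the insertion and bubble constructions, here is a minimal sketch (not taken from any reference implementation) of how a comparator network can be represented and applied in software: the network is a fixed list of wire pairs, and applying it never branches on the outcome of earlier comparisons. The five-comparator, four-wire network described above is written out explicitly; the exact wire indices used are an illustrative choice.

```python
from typing import List, Sequence, Tuple

Comparator = Tuple[int, int]  # (top wire index, bottom wire index), top < bottom

def apply_network(network: Sequence[Comparator], values: Sequence[int]) -> List[int]:
    """Run a fixed comparator network over the input values.

    Each comparator leaves the minimum on the top wire and the maximum on the
    bottom wire; the schedule of comparisons never depends on the data, which
    is what makes it suitable for hardware or parallel execution.
    """
    wires = list(values)
    for top, bottom in network:
        if wires[top] > wires[bottom]:
            wires[top], wires[bottom] = wires[bottom], wires[top]
    return wires

# A 4-wire, 5-comparator sorting network of depth 3: the first four comparators
# sink the maximum to the bottom wire and float the minimum to the top wire,
# and the final comparator orders the middle two wires.
FOUR_WIRE_NETWORK: List[Comparator] = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

if __name__ == "__main__":
    print(apply_network(FOUR_WIRE_NETWORK, [3, 1, 4, 2]))  # -> [1, 2, 3, 4]
```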
Assuming we have a sorting network of size n, we can construct a network of size n + 1 by "inserting" an additional number into the already sorted subnet (using the principle underlying insertion sort). We can also accomplish the same thing by first "selecting" the lowest value from the inputs and then sorting the remaining values recursively (using the principle underlying bubble sort). The structure of these two sorting networks is very similar. A construction of the two different variants, which collapses together comparators that can be performed simultaneously, shows that, in fact, they are identical. The insertion network (or equivalently, bubble network) has a depth of 2n − 3, where n is the number of values. This is better than the O(n log n) time needed by random-access machines, but it turns out that there are much more efficient sorting networks with a depth of just O(log² n), as described below. Zero-one principle While it is easy to prove the validity of some sorting networks (like the insertion/bubble sorter), it is not always so easy. There are n! permutations of numbers in an n-wire network, and to test all of them would take a significant amount of time, especially when n is large. The number of test cases can be reduced significantly, to 2^n, using the so-called zero-one principle. While still exponential, this is smaller than n! for all n ≥ 4, and the difference grows quite quickly with increasing n. The zero-one principle states that, if a sorting network can correctly sort all 2^n sequences of zeros and ones, then it is also valid for arbitrary ordered inputs. This not only drastically cuts down on the number of tests needed to ascertain the validity of a network, it is of great use in creating many constructions of sorting networks as well. The principle can be proven by first observing the following fact about comparators: when a monotonically increasing function f is applied to the inputs, i.e., x and y are replaced by f(x) and f(y), then the comparator produces min(f(x), f(y)) = f(min(x, y)) and max(f(x), f(y)) = f(max(x, y)). By induction on the depth of the network, this result can be extended to a lemma stating that if the network transforms the sequence a1, ..., an into b1, ..., bn, it will transform f(a1), ..., f(an) into f(b1), ..., f(bn). Suppose that some input contains two items ai < aj, and the network incorrectly swaps these in the output. Then it will also incorrectly sort f(a1), ..., f(an) for the function f(x) = 1 if x > ai, and f(x) = 0 otherwise. This function is monotonic, so we have the zero-one principle as the contrapositive. Constructing sorting networks Various algorithms exist to construct sorting networks of depth O(log² n) (hence size O(n log² n)) such as Batcher odd–even mergesort, bitonic sort, Shell sort, and the Pairwise sorting network. These networks are often used in practice. It is also possible to construct networks of depth O(log n) (hence size O(n log n)) using a construction called the AKS network, after its discoverers Ajtai, Komlós, and Szemerédi. While an important theoretical discovery, the AKS network has very limited practical application because of the large linear constant hidden by the Big-O notation, which is partly due to the construction's use of expander graphs. A simplified version of the AKS network was described by Paterson in 1990, who noted that "the constants obtained for the depth bound still prevent the construction being of practical value". A more recent construction called the zig-zag sorting network, of size O(n log n), was discovered by Goodrich in 2014. While its size is much smaller than that of AKS networks, its O(n log n) depth makes it unsuitable for a parallel implementation.
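A companion sketch (again an illustration, not taken from a library) covering the two ideas in this passage: the recursive insertion/bubble construction and an exhaustive zero-one check, which tests only the 2^n binary inputs instead of all n! permutations. The small apply_network helper from the previous sketch is restated so the snippet runs on its own.

```python
from itertools import product
from typing import List, Tuple

Comparator = Tuple[int, int]

def apply_network(network, values):
    # Same data-oblivious evaluation as in the earlier sketch.
    wires = list(values)
    for top, bottom in network:
        if wires[top] > wires[bottom]:
            wires[top], wires[bottom] = wires[bottom], wires[top]
    return wires

def insertion_network(n: int) -> List[Comparator]:
    """Build the insertion (equivalently, bubble) network on n wires.

    Each new element is bubbled into the already sorted prefix; the result
    uses n*(n-1)/2 comparators and, after collapsing comparators that can
    run in parallel, has depth 2n - 3.
    """
    network: List[Comparator] = []
    for k in range(1, n):
        # Insert wire k into the sorted prefix 0..k-1 by successive swaps.
        for i in range(k, 0, -1):
            network.append((i - 1, i))
    return network

def is_sorting_network(network: List[Comparator], n: int) -> bool:
    """Zero-one principle: checking all 2^n binary inputs suffices."""
    return all(
        apply_network(network, bits) == sorted(bits)
        for bits in product((0, 1), repeat=n)
    )

if __name__ == "__main__":
    for n in range(2, 8):
        net = insertion_network(n)
        assert is_sorting_network(net, n)
        print(f"n={n}: {len(net)} comparators, valid sorting network")
```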
Optimal sorting networks For small, fixed numbers of inputs n, optimal sorting networks can be constructed, with either minimal depth (for maximally parallel execution) or minimal size (number of comparators). These networks can be used to increase the performance of larger sorting networks resulting from the recursive constructions of, e.g., Batcher, by halting the recursion early and inserting optimal nets as base cases. The following table summarizes the optimality results for small networks for which the optimal depth is known: For larger networks neither the optimal depth nor the optimal size is currently known. The bounds known so far are provided in the table below: The first sixteen depth-optimal networks are listed in Knuth's Art of Computer Programming, and have been since the 1973 edition; however, while the optimality of the first eight was established by Floyd and Knuth in the 1960s, this property wasn't proven for the final six until 2014 (the cases nine and ten having been decided in 1991). For one to twelve inputs, minimal (i.e. size-optimal) sorting networks are known, and for higher values, lower bounds on their sizes can be derived inductively using a lemma due to Van Voorhis (p. 240): S(n) ≥ S(n − 1) + ⌈log2 n⌉, where S(n) denotes the minimum number of comparators in a sorting network on n inputs. The first ten optimal networks have been known since 1969, with the first eight again being known as optimal since the work of Floyd and Knuth, but optimality of the cases n = 9 and n = 10 took until 2014 to be resolved. The optimality of the smallest known sorting networks for n = 11 and n = 12 was resolved in 2020. Some work in designing optimal sorting networks has been done using genetic algorithms: D. Knuth mentions that the smallest known sorting network for n = 13 was found by Hugues Juillé in 1995 "by simulating an evolutionary process of genetic breeding" (p. 226), and that the minimum depth sorting networks for n = 9 and n = 11 were found by Loren Schwiebert in 2001 "using genetic methods" (p. 229). Complexity of testing sorting networks Unless P=NP, the problem of testing whether a candidate network is a sorting network is likely to remain difficult for networks of large sizes, due to the problem being co-NP-complete. References External links List of smallest sorting networks for given number of inputs Sorting Networks CHAPTER 28: SORTING NETWORKS Sorting Networks Tool for generating and graphing sorting networks Sorting networks and the END algorithm Sorting Networks validity Computer engineering Sorting algorithms
Sorting network
[ "Mathematics", "Technology", "Engineering" ]
1,869
[ "Electrical engineering", "Order theory", "Computer engineering", "Sorting algorithms" ]
562,067
https://en.wikipedia.org/wiki/Brillouin%20zone
In mathematics and solid state physics, the first Brillouin zone (named after Léon Brillouin) is a uniquely defined primitive cell in reciprocal space. In the same way the Bravais lattice is divided up into Wigner–Seitz cells in the real lattice, the reciprocal lattice is broken up into Brillouin zones. The boundaries of this cell are given by planes related to points on the reciprocal lattice. The importance of the Brillouin zone stems from the description of waves in a periodic medium given by Bloch's theorem, in which it is found that the solutions can be completely characterized by their behavior in a single Brillouin zone. The first Brillouin zone is the locus of points in reciprocal space that are closer to the origin of the reciprocal lattice than they are to any other reciprocal lattice points (see the derivation of the Wigner–Seitz cell). Another definition is as the set of points in k-space that can be reached from the origin without crossing any Bragg plane. Equivalently, this is the Voronoi cell around the origin of the reciprocal lattice. There are also second, third, etc., Brillouin zones, corresponding to a sequence of disjoint regions (all with the same volume) at increasing distances from the origin, but these are used less frequently. As a result, the first Brillouin zone is often called simply the Brillouin zone. In general, the n-th Brillouin zone consists of the set of points that can be reached from the origin by crossing exactly n − 1 distinct Bragg planes. A related concept is that of the irreducible Brillouin zone, which is the first Brillouin zone reduced by all of the symmetries in the point group of the lattice (point group of the crystal). The concept of a Brillouin zone was developed by Léon Brillouin (1889–1969), a French physicist. Within the Brillouin zone, a constant-energy surface represents the loci of all the k-points (that is, all the electron momentum values) that have the same energy. The Fermi surface is a special constant-energy surface that separates the unfilled orbitals from the filled ones at zero kelvin. Critical points Several points of high symmetry are of special interest – these are called critical points. Other lattices have different types of high-symmetry points. They can be found in the illustrations below. See also Fundamental pair of periods Fundamental domain References Bibliography External links Brillouin Zone simple lattice diagrams by Thayer Watkins Brillouin Zone 3d lattice diagrams by Technion. DoITPoMS Teaching and Learning Package – "Brillouin Zones" Aflowlib.org consortium database (Duke University) AFLOW Standardization of VASP/QUANTUM ESPRESSO input files (Duke University) Crystallography Electronic band structures Vibrational spectroscopy
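As an added illustration (not part of the article), the reciprocal lattice vectors follow from the direct lattice vectors via b1 = 2π (a2 × a3) / (a1 · (a2 × a3)) and cyclic permutations, and the Voronoi-cell definition above gives a direct membership test for the first Brillouin zone: a point k belongs to it if no other reciprocal lattice point is closer to k than the origin is. The simple cubic lattice used below is an arbitrary example.

```python
import numpy as np
from itertools import product

def reciprocal_lattice(a1, a2, a3):
    """Reciprocal lattice vectors b_i = 2*pi * (a_j x a_k) / (a1 . (a2 x a3))."""
    volume = np.dot(a1, np.cross(a2, a3))
    b1 = 2 * np.pi * np.cross(a2, a3) / volume
    b2 = 2 * np.pi * np.cross(a3, a1) / volume
    b3 = 2 * np.pi * np.cross(a1, a2) / volume
    return b1, b2, b3

def in_first_brillouin_zone(k, b1, b2, b3, search=1):
    """Voronoi-cell test: k is in the first zone if it is at least as close to
    the origin as to every nearby nonzero reciprocal lattice point G.
    (search=1 checks first neighbours, enough for nearly cubic lattices.)"""
    k = np.asarray(k, dtype=float)
    for n1, n2, n3 in product(range(-search, search + 1), repeat=3):
        if (n1, n2, n3) == (0, 0, 0):
            continue
        G = n1 * b1 + n2 * b2 + n3 * b3
        if np.linalg.norm(k - G) < np.linalg.norm(k):
            return False
    return True

if __name__ == "__main__":
    a = 1.0  # simple cubic lattice constant (arbitrary units)
    a1, a2, a3 = a * np.eye(3)
    b1, b2, b3 = reciprocal_lattice(a1, a2, a3)
    # For a simple cubic lattice the first zone is a cube of side 2*pi/a.
    print(in_first_brillouin_zone([0.9 * np.pi / a, 0, 0], b1, b2, b3))  # True
    print(in_first_brillouin_zone([1.1 * np.pi / a, 0, 0], b1, b2, b3))  # False
```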
Brillouin zone
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
583
[ "Electron", "Spectrum (physical sciences)", "Materials science", "Spectroscopy", "Crystallography", "Electronic band structures", "Condensed matter physics", "Vibrational spectroscopy" ]
562,077
https://en.wikipedia.org/wiki/Toothing
Toothing was originally a hoax claim that Bluetooth-enabled mobile phones or PDAs were being used to arrange random sexual encounters, perpetrated as a prank on the media who reported it. The hoax was created by Ste Curran, then Editor at Large at the gaming magazine Edge, and ex-journalist Simon Byron. They based it on two concepts, dogging and bluejacking, that were popular at the time. The creators started a forum in March 2004 where they wrote fake news articles about toothing with other members and then sent them off to well-known Internet-based news services. The point of the hoax was to "highlight how journalists are happy to believe something is true without necessarily checking the facts". Dozens of news organizations, including BBC News, Wired News, and The Independent thought the toothing story was real and printed it. On April 4, 2005, Curran and Byron admitted that the whole thing was a hoax. There have, however, been real Bluetooth dating devices since. Conception Devised by Swedish telecommunication company Ericsson, Bluetooth is an open wireless protocol for exchanging data over short distances from mobile devices such as mobile phones, laptops, and personal computers. Originally, Bluetooth was only intended for wireless exchanging of files between these devices, but it was later claimed that it could also be used for arranging sexual encounters. The hoax concept of toothing started around March 2004 in the form of a forum designed by Ste Curran, then Editor at Large at games magazine Edge, and ex-journalist Simon Byron. Toothing was conceived as a merger of two concepts, dogging and bluejacking, both of which were frequently mentioned in the UK media around that time. Byron said he and Curran were "idly messaging about the Stan Collymore dogging scandal, and how this stupid sexual buzzword had (apparently) come from nowhere," when they came up with the concept. "We wondered if we could create our own. We wonder a lot of things, and rarely push them past concept, because we're as collectively creative as we are frustratingly idle. This particular concept was simple enough to outstrip the temptations of grinning, saying 'Yeah!', and wandering off to see what was on [television]." Several newspapers have also compared toothing to dogging. In toothing, a Bluetooth device is used to find other Bluetooth-enabled devices within a close distance (on trains or buses, for example), and then send the expression "toothing?" as an initial greeting, letting the person with the enabled Bluetooth device know you are looking for sex. If sending of text messages via Bluetooth is not possible, the Bluetooth name of the mobile phone can be set to "toothing?" or something else to indicate interest. The pair of hoaxers wrote fake news articles on the forum about toothing and sent them off to well-known Internet-based news services. Byron said he had to write "Penthouse-letters-page style sexual adventure stories" for articles and interviews with the media. The point of the hoax was, according to Byron, to "highlight how journalists are happy to believe something is true without necessarily checking the facts." Spread in media The concept of toothing quickly reached a large audience, even in countries outside of the UK. Curran and Byron said they kept a record from the start of all their mentions in the media, "but there were soon too many to record in full."
They agreed to do an interview with The Daily Telegraph and "many papers read that and followed up, broadsheet and tabloid, regional, national, all over the planet." One of the hoaxers made an appearance on BBC Radio 5 Live, and a member of the Parliament of the United Kingdom reportedly declared his interest in toothing as a way of meeting women. The couple also received offers to license official toothing merchandise such as sex lines, websites, and mobile-phone software. Dozens of news organizations, including BBC, Wired News, Infosyncworld, and The Independent fell for the story and printed it. The Guardian also printed the story, but the article's author suspected it to be an April Fools' Day prank. The BBC wrote in their article: One practitioner is Jon, a "Toother" living near London. "One morning I received an anonymous text message via bluetooth," he told BBC News. "I didn't understand what had happened, but that evening I did some research and worked out how to send my own." The pair started to exchange messages on a train station platform; messages which got gradually more flirty. "Eventually she asked me if I fancied a quickie in the toilets at the station we were travelling to. "It happened, but I never saw her again." Since that day Jon - who claims to have had Toothing success five times - has set up a website dedicated to the practice but he admits it takes a degree of perseverance. Aftermath On April 4, 2005, the creators of the forum admitted that the whole thing was a hoax. Though the concept of toothing is possible, the hoaxers never intended for it to turn into something real. The couple said: "It's like going into a crowded nightclub, throwing a brick at the dance floor with a love letter attached, and hoping that the person it hits will agree to sleep with you." When announcing the hoax, Curran and Byron reassured that toothing was nothing more than a practical joke gone too far and despite all the articles in newspapers and tabloids, "no one has ever ever, ever toothed." Shanna Petersen, a sexologist, disagreed with the hoaxers' statement that no-one has ever toothed: "It's simple, doesn't take a lot of guts and rejection is nowhere as personal. Of course it's popular. Show people a new way through which they have a chance to have more sex and they'll do it. No matter how much effort goes into it or how meager the results." Multiple forums were in fact created throughout Europe, Asia and America within months of the original post of toothing. People signed up to the forums looking for good locations in their area to tooth, and to share their toothing stories with other members. There have later been real Bluetooth dating devices to hit the market. University of Bath psychologist Linda Blair said the practice of toothing is down to the human need to take risks: "I think we protect ourselves too much in modern society, and risk is a human need. We need motivation. In some ways this is a tame way of picking people up, it's almost a natural follow up from randomly picking people's names out of the phone book. It's voluntary at all stages, and has choice. As long as that's there and it's legal, then people should be able to do what they want." Sue Peters of the Terrence Higgins Trust worried that anonymous sex made possible by toothing would cause an increase of sexually transmitted diseases such as chlamydia in the United Kingdom. University of Amsterdam sociologist Albert Benschop researched the hoax. 
He said toothing is "the next logical step" in dating and that the "old game is just adapting to new times". Benschop added that toothing is "just like picking up people in bars but without the silly time-consuming conventions of decorum that people are obliged to keep to these days. This is much more direct. You both know what you want." He also sees it as a way for people "to satisfy their need for intimacy. As long as it helps people out of loneliness and gives them more to enjoy in life, I think it's a very good development." The term "toothing" was included in the 2006 version of The New Partridge Dictionary of Slang and Unconventional English. It was described as an "anonymous casual sexual activity with any partner arranged over Bluetooth radio technology enabled mobile phones." In addition, toothing is listed in the Sex Slang dictionary, authored by Tom Dalzell and Terry Victor, with an explanation similar to the one in the New Partridge Dictionary. Toothing was referenced in an episode of the American television series CSI: Miami, called "Killer Date", that aired in the United States on April 18, 2005. See also Gel bracelet Grindr References 2004 hoaxes Bluetooth Internet hoaxes Sexual urban legends Sexuality and computing Online dating
Toothing
[ "Technology" ]
1,724
[ "Wireless networking", "Computing and society", "Sexuality and computing", "Bluetooth" ]
562,079
https://en.wikipedia.org/wiki/European%20Latsis%20Prize
The European Latsis Prize was awarded annually by the European Science Foundation for "outstanding and innovative contributions in a selected field of European research". The prize was worth 100,000 Swiss francs and was awarded in a different scientific field each year. It was inaugurated in 1999 by the Latsis Foundation and ended in 2012. Laureates See also Swiss Science Prize Latsis Notes and references External links European Science Foundation Awards established in 1999 European science and technology awards
European Latsis Prize
[ "Technology" ]
104
[ "Science and technology awards", "Science award stubs" ]
562,163
https://en.wikipedia.org/wiki/John%20Stott
John Robert Walmsley Stott (27 April 1921 – 27 July 2011) was a British Anglican priest and theologian who was noted as a leader of the worldwide evangelical movement. He was one of the principal authors of the Lausanne Covenant in 1974. In 2005, Time magazine ranked Stott among the 100 most influential people in the world. Life Early life and education John Robert Walmsley Stott was born on 27 April 1921 in London, England, to Sir Arnold and Emily "Lily" Stott (née Holland). His father was a leading physician at Harley Street and an agnostic, while his mother had been raised Lutheran and attended the nearby Church of England church, All Souls, Langham Place. Stott was sent to boarding schools at eight years old, initially to a prep school, Oakley Hall. In 1935, he went on to Rugby School. While at Rugby School in 1938, Stott heard Eric Nash (nicknamed "Bash"), director of the Iwerne camps, deliver a sermon entitled "What Then Shall I Do with Jesus, Who Is Called the Christ?" After this talk, Nash pointed Stott to Revelation 3:20, "Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me." Stott later described the impact this verse had upon him as follows: Stott was mentored by Nash, who wrote a weekly letter to him, advising him on how to develop and grow in his Christian life, as well as practicalities such as leading the Christian Union at his school. At this time, also, Stott was a pacifist and a member of the Anglican Pacifist Fellowship. In later life he withdrew from pacifism, adopting a 'just war' stance. Stott studied modern languages at Trinity College, Cambridge, where he graduated with double first-class honours in French and theology. At university, he was active in the Cambridge Inter-Collegiate Christian Union, where the executive committee considered him too invaluable a person to be asked to commit his time by joining the committee. After Trinity he transferred to Ridley Hall Theological College, affiliated to the University of Cambridge, to train for ordination as an Anglican cleric. He later received a Lambeth Doctorate of Divinity in 1983. Ministry Stott was ordained as a deacon in 1945 and became a curate at All Souls Church, Langham Place (1945–1950), then rector (1950–1975). This was the church in which he had grown up and where he spent almost his whole life apart from a few years spent in Cambridge. In 1956, he appointed Frances Whitehead as his secretary and the pair remained as a working partnership until his death, with "Auntie Frances" as the right hand to "Uncle John". While rector, he became increasingly influential on a national and international basis, most notably being a key player in the 1966–1967 dispute about the appropriateness of evangelicals remaining in the Church of England. He had founded the Church of England Evangelical Council (CEEC) in 1960 to bring together the different strands of evangelicals. In 1970, in response to increasing demands on his time from outside the All Souls congregation, he appointed a vicar of All Souls, to enable him to work on other projects. In 1975 Stott resigned as rector and Michael Baughen, the then vicar, was appointed in his place; Stott remained at the church and was appointed rector emeritus. In 1969, he founded Langham Trust, and in 1982 the London Institute for Contemporary Christianity of which he remained honorary president until his death. 
During his presidency he gathered together leading evangelical intellectuals to shape courses and programmes communicating the Christian faith into a secular context. He was regularly accompanied by a leading paediatrician, John Wyatt, and the institute director, the broadcaster Elaine Storkey, when they spoke across the country to large audiences on "Matters of Life and Death". Following his chairmanship of the second National Evangelical Anglican Congress in April 1977, the Nottingham statement was published which said, "Seeing ourselves and Roman Catholics as fellow-Christians, we repent of attitudes that have seemed to deny it." This aroused controversy amongst some evangelicals at the time. Retirement and death Stott announced his retirement from public ministry in April 2007 at the age of 86. He took up residence in the College of St Barnabas, Lingfield, Surrey, a retirement community for Anglican clergy but remained as rector emeritus of All Souls Church. Stott died on 27 July 2011 at the College of St Barnabas in Lingfield at 3:15 pm local time. He was surrounded by family and close friends and they were reading the Bible and listening to Handel's Messiah when he peacefully died. An obituary in Christianity Today reported that his death was due to age-related complications and that he had been in discomfort for several weeks. The obituary described him as "An architect of 20th-century evangelicalism [who] shaped the faith of a generation." His status was such that his death was reported in the secular media. The BBC referred to him as someone who could "explain complex theology in a way lay people could easily understand". Obituaries were published in The Daily Telegraph and The New York Times. Tributes were paid to Stott by a number of leaders and other figures within the Christian community. The American evangelist Billy Graham released a statement saying, "The evangelical world has lost one of its greatest spokesmen, and I have lost one of my close personal friends and advisors. I look forward to seeing him again when I go to heaven." The Archbishop of Canterbury, Rowan Williams, wrote: Stott's funeral was held on 8 August 2011 at All Souls Church. It was reported that the church was full with people queuing for a considerable time before the service started. A memorial website remembrance book (closed 2017) attracted comments from over one thousand individuals. Memorial services for Stott were held at St Paul's Cathedral, London; Holy Trinity Cathedral, Auckland, New Zealand; St Andrew's Cathedral, Sydney, Australia; College Church, Wheaton, Illinois, United States; Anglican Network Church of the Good Shepherd, Vancouver, Canada; St. Paul's, Bloor Street, Toronto, Canada; as well as in cities across Africa, Asia and Latin America. Upon his death, he was cremated, his ashes were interred at Dale Cemetery, in Pembrokeshire, Wales. Frances Whitehead, his secretary since 1956, was the executor of his will and she ensured that all his papers were deposited in the Lambeth Palace archive the year after his death. Influence Stott has had considerable influence in evangelicalism. In a November 2004 editorial on Stott, the New York Times columnist David Brooks cited Michael Cromartie of the Ethics and Public Policy Center as saying that "if evangelicals could elect a pope, Stott is the person they would likely choose". Writing He wrote over 50 books, some of which appear only in Chinese, Korean, or Spanish, as well as many articles and papers. 
One of these is Basic Christianity, a book which seeks to explain the message of Christianity, and convince its readers of its truth and importance. The Preacher's Portrait: Some New Testament Word Studies, published in 1961, it was an important reference for clergy. He was also the author of The Cross of Christ (), of which J. I. Packer stated, "No other treatment of this supreme subject says so much so truly and so well." Other books he wrote include Essentials: A Liberal–Evangelical Dialogue, a dialogue with the liberal cleric and theologian David L. Edwards, over whether what evangelicals hold as essential should be seen as such. In 2005, he produced Evangelical Truth, which summarises what he perceives as being the central claims of Christianity, essential for evangelicalism. Upon his formal retirement from public engagements, he continued to engage in regular writing until his death. In 2008, he produced The Anglican Evangelical Doctrine of Infant Baptism with J. Alec Motyer. An introduction to his thought can be found in his two final substantial publications, which act as a summation of his thinking. Both were published by the publishing house with which he had a lifelong association, IVP. In 2007, his reflections on the life of the church: The Living Church: Convictions of a Lifelong Pastor. In January 2010, at the age of 88, he saw the launch of what would be his final book: The Radical Disciple. It concludes with a poignant farewell and appeal for his legacy to be continued through the work of the Langham Partnership International. Anglican evangelicalism Stott's churchmanship fell within the conservative evangelical wing of the Church of England. He played a key role as a leader of evangelicalism within the Church of England, and was regarded as instrumental in persuading evangelicals to play an active role in the Church of England rather than leaving for exclusively evangelical denominations. There were two major events where he played a key role in this regard. He was chairing the National Assembly of Evangelicals in 1966, a convention organised by the Evangelical Alliance, when Martyn Lloyd-Jones made an unexpected call for evangelicals to unite as evangelicals and no longer stay within their "mixed" denominations. This view was motivated by a belief that true Christian fellowship requires evangelical views on central topics such as the atonement and the inspiration of Scripture. Lloyd-Jones was a key figure to many in the free churches, and evangelical Anglicans regarded Stott similarly. The two leaders publicly disagreed, as Stott, though not scheduled as a speaker that evening, used his role as chairman to refute Lloyd-Jones, saying that his opinion went against history and the Bible. The following year saw the first National Evangelical Anglican Congress, which was held at Keele University. At this conference, largely due to Stott's influence, evangelical Anglicans committed themselves to full participation in the Church of England, rejecting the separationist approach proposed by Lloyd-Jones. These two conferences effectively fixed the direction of a large part of the British evangelical community. Although there is an ongoing debate as to the exact nature of Lloyd-Jones's views, they undoubtedly caused the two groupings to adopt diametrically opposed positions. These positions, and the resulting split, continue largely unchanged to this day. Honours Stott was appointed a Chaplain to Queen Elizabeth II in 1959 and, on his retirement in 1991, an Extra Chaplain. 
He was appointed a Commander of the Order of the British Empire (CBE) in the New Year Honours 2006. He received a Lambeth Doctorate of Divinity in 1983, as well as five honorary degrees, including doctorates from Trinity Evangelical Divinity School (1971), Wycliffe College, Toronto (1993), and Brunel University (1997). Annihilationism Stott tentatively held to annihilationism, which is the view that the final state of the unsaved, known as hell, is death and destruction, rather than everlasting conscious torment. Stott said that: "the ultimate annihilation of the wicked should at least be accepted as a legitimate, biblically founded alternative to their eternal conscious torment." This led to a heated debate within mainstream evangelical Christianity: some writers criticised Stott in very strong terms while others supported his views. Anti-Zionism Stott stated his firm opposition to Zionism: "Political Zionism and Christian Zionism are anathema to Christian faith ... The true Israel today is neither Jews nor Israelis, but believers in the Messiah, even if they are Gentiles ..." Personal life Stott never married and had no children but remained celibate his entire life. He said, "The gift of singleness is more a vocation than an empowerment, although to be sure God is faithful in supporting those he calls." He lived simply and gave his wealth away. 'Pride is without doubt the greatest temptation of Christian leaders', he said. When asked what he would change if he had his time again he replied 'I would pray more'. Stott's favourite relaxation was birdwatching; his book The Birds Our Teachers draws on this interest. Bibliography The books of John Stott See also International Fellowship of Evangelical Students Notes References Citations Works cited Further reading External links John Stott website: biography, publications, etc. Langham Partnership International The London Institute for Contemporary Christianity 1996 Christianity Today interview 2006 Christianity Today interview Papers of John Stott at Lambeth Palace Library Holders of a Lambeth degree 1921 births 2011 deaths 20th-century Anglican theologians 20th-century British Anglican priests 20th-century English male writers 20th-century English non-fiction writers 20th-century British Christian theologians 20th-century evangelicals 21st-century Anglican theologians 21st-century British Anglican priests 21st-century English male writers 21st-century English non-fiction writers 21st-century British Christian theologians 21st-century evangelicals Alumni of Ridley Hall, Cambridge Alumni of Trinity College, Cambridge Anglican pacifists Anglican writers Annihilationists Bible commentators British Anglican theologians British anti-Zionists British Christian pacifists English evangelicals English male non-fiction writers British religious writers Commanders of the Order of the British Empire Christian apologists Christian humanists Evangelical Anglican clergy Evangelical Anglican theologians Evangelicalism in the Church of England Environmental writers People educated at Rugby School Theistic evolutionists Writers about religion and science
John Stott
[ "Biology" ]
2,736
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
562,353
https://en.wikipedia.org/wiki/Butterworth%20filter
The Butterworth filter is a type of signal processing filter designed to have a frequency response that is as flat as possible in the passband. It is also referred to as a maximally flat magnitude filter. It was first described in 1930 by the British engineer and physicist Stephen Butterworth in his paper entitled "On the Theory of Filter Amplifiers". Original paper Butterworth had a reputation for solving very complex mathematical problems thought to be 'impossible'. At the time, filter design required a considerable amount of designer experience due to limitations of the theory then in use. The filter was not in common use for over 30 years after its publication. Butterworth stated that: Such an ideal filter cannot be achieved, but Butterworth showed that successively closer approximations were obtained with increasing numbers of filter elements of the right values. At the time, filters generated substantial ripple in the passband, and the choice of component values was highly interactive. Butterworth showed that a low-pass filter could be designed whose gain as a function of frequency (i.e., the magnitude of its frequency response) is G(ω) = 1/√(1 + ω^(2n)), where ω is the angular frequency in radians per second and n is the number of poles in the filter—equal to the number of reactive elements in a passive filter. Its cutoff frequency (the half-power point of approximately −3 dB or a voltage gain of 1/√2 ≈ 0.7071) is normalized to ω = 1 radian per second. Butterworth only dealt with filters with an even number of poles in his paper, though odd-order filters can be created with the addition of a single-pole filter applied to the output of the even-order filter. He built his higher-order filters from 2-pole filters separated by vacuum tube amplifiers. His plot of the frequency response of 2-, 4-, 6-, 8-, and 10-pole filters is shown as A, B, C, D, and E in his original graph. Butterworth solved the equations for two-pole and four-pole filters, showing how the latter could be cascaded when separated by vacuum tube amplifiers and so enabling the construction of higher-order filters despite inductor losses. In 1930, low-loss core materials such as molypermalloy had not been discovered and air-cored audio inductors were rather lossy. Butterworth discovered that it was possible to adjust the component values of the filter to compensate for the winding resistance of the inductors. He used coil forms of 1.25″ diameter and 3″ length with plug-in terminals. Associated capacitors and resistors were contained inside the wound coil form. The coil formed part of the plate load resistor. Two poles were used per vacuum tube and RC coupling was used to the grid of the following tube. Butterworth also showed that the basic low-pass filter could be modified to give low-pass, high-pass, band-pass and band-stop functionality. Overview The frequency response of the Butterworth filter is maximally flat (i.e., has no ripples) in the passband and rolls off towards zero in the stopband. When viewed on a logarithmic Bode plot, the response slopes off linearly towards negative infinity. A first-order filter's response rolls off at −6 dB per octave (−20 dB per decade) (all first-order lowpass filters have the same normalized frequency response). A second-order filter decreases at −12 dB per octave, a third-order at −18 dB and so on. Butterworth filters have a monotonically changing magnitude function with ω, unlike other filter types that have non-monotonic ripple in the passband and/or the stopband. 
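As a rough numerical illustration of the normalized gain formula above, the following minimal Python sketch (assuming NumPy is available; the chosen frequencies and orders are arbitrary illustrative values) confirms the −3 dB point at ω = 1 and shows how the roll-off steepens with increasing order:

```python
import numpy as np

def butterworth_gain(omega, n):
    """Magnitude response of a normalized (cutoff at 1 rad/s) n-pole Butterworth low-pass filter."""
    return 1.0 / np.sqrt(1.0 + omega ** (2 * n))

omega = np.logspace(-1, 2, 4)   # 0.1, 1, 10, 100 rad/s
for n in (1, 2, 4, 10):
    gain_db = 20 * np.log10(butterworth_gain(omega, n))
    print(f"n={n:2d}  gain at 1 rad/s: {gain_db[1]:6.2f} dB, "
          f"at 10 rad/s: {gain_db[2]:8.2f} dB")
# Every order gives about -3.01 dB at the cutoff, and the attenuation one
# decade above the cutoff grows by roughly 20 dB per additional pole.
```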
Compared with a Chebyshev Type I/Type II filter or an elliptic filter, the Butterworth filter has a slower roll-off, and thus will require a higher order to implement a particular stopband specification, but Butterworth filters have a more linear phase response in the passband than Chebyshev Type I/Type II and elliptic filters can achieve. Example A transfer function of a third-order low-pass Butterworth filter design shown in the figure on the right looks like this: A simple example of a Butterworth filter is the third-order low-pass design shown in the figure on the right, with  = 4/3 F,  = 1 Ω,  = 3/2 H, and  = 1/2 H. Taking the impedance of the capacitors to be and the impedance of the inductors to be , where is the complex frequency, the circuit equations yield the transfer function for this device: The magnitude of the frequency response (gain) is given by obtained from and the phase is given by The group delay is defined as the negative derivative of the phase shift with respect to angular frequency and is a measure of the distortion in the signal introduced by phase differences for different frequencies. The gain and the delay for this filter are plotted in the graph on the left. There are no ripples in the gain curve in either the passband or the stopband. The log of the absolute value of the transfer function is plotted in complex frequency space in the second graph on the right. The function is defined by the three poles in the left half of the complex frequency plane. These are arranged on a circle of radius unity, symmetrical about the real axis. The gain function will have three more poles on the right half-plane to complete the circle. By replacing each inductor with a capacitor and each capacitor with an inductor, a high-pass Butterworth filter is obtained. A band-pass Butterworth filter is obtained by placing a capacitor in series with each inductor and an inductor in parallel with each capacitor to form resonant circuits. The value of each new component must be selected to resonate with the old component at the frequency of interest. A band-stop Butterworth filter is obtained by placing a capacitor in parallel with each inductor and an inductor in series with each capacitor to form resonant circuits. The value of each new component must be selected to resonate with the old component at the frequency that is to be rejected. Transfer function Like all filters, the typical prototype is the low-pass filter, which can be modified into a high-pass filter, or placed in series with others to form band-pass and band-stop filters, and higher order versions of these. The gain of an th-order Butterworth low-pass filter is given in terms of the transfer function as where is the order of filter, is the cutoff frequency (approximately the −3 dB frequency), and is the DC gain (gain at zero frequency). It can be seen that as approaches infinity, the gain becomes a rectangle function and frequencies below will be passed with gain , while frequencies above will be suppressed. For smaller values of , the cutoff will be less sharp. We wish to determine the transfer function where (from Laplace transform). Because and, as a general property of Laplace transforms at , , if we select such that: then, with , we have the frequency response of the Butterworth filter. The poles of this expression occur on a circle of radius at equally-spaced points, and symmetric around the negative real axis. 
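The pole placement just described can be checked numerically. A small sketch, assuming NumPy, places the three left-half-plane poles of the normalized third-order filter on the unit circle using the standard Butterworth pole expression (poles equally spaced and symmetric about the negative real axis, as described above) and multiplies the factors out into the denominator polynomial s³ + 2s² + 2s + 1:

```python
import numpy as np

n = 3  # filter order, matching the third-order example above

# Left-half-plane Butterworth poles: equally spaced on the unit circle,
# symmetric about the negative real axis (cutoff normalized to 1 rad/s).
k = np.arange(1, n + 1)
poles = np.exp(1j * np.pi * (2 * k + n - 1) / (2 * n))

# Multiply the pole factors out to obtain the denominator polynomial in s.
denom = np.real(np.poly(poles))
print(np.round(poles, 4))   # [-0.5+0.866j, -1, -0.5-0.866j]
print(np.round(denom, 4))   # [1, 2, 2, 1]  ->  s^3 + 2s^2 + 2s + 1
```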
For stability, the transfer function, , is therefore chosen such that it contains only the poles in the negative real half-plane of . The -th pole is specified by and hence The transfer (or system) function may be written in terms of these poles as . where is the product of a sequence operator. The denominator is a Butterworth polynomial in . Normalized Butterworth polynomials The Butterworth polynomials may be written in complex form as above, but are usually written with real coefficients by multiplying pole pairs that are complex conjugates, such as and . The polynomials are normalized by setting . The normalized Butterworth polynomials then have the general product form Factors of Butterworth polynomials of order 1 through 10 are shown in the following table (to six decimal places). Factors of Butterworth polynomials of order 1 through 6 are shown in the following table (Exact). where the Greek letter phi ( or ) represents the golden ratio. It is an irrational number that is a solution to the quadratic equation with a value of The th Butterworth polynomial can also be written as a sum with its coefficients given by the recursion formula and by the product formula where Further, . The rounded coefficients for the first 10 Butterworth polynomials are: The normalized Butterworth polynomials can be used to determine the transfer function for any low-pass filter cut-off frequency , as follows , where Transformation to other bandforms are also possible, see prototype filter. Maximal flatness Assuming and , the derivative of the gain with respect to frequency can be shown to be which is monotonically decreasing for all since the gain is always positive. The gain function of the Butterworth filter therefore has no ripple. The series expansion of the gain is given by In other words, all derivatives of the gain up to but not including the 2-th derivative are zero at , resulting in "maximal flatness". If the requirement to be monotonic is limited to the passband only and ripples are allowed in the stopband, then it is possible to design a filter of the same order, such as the inverse Chebyshev filter, that is flatter in the passband than the "maximally flat" Butterworth. High-frequency roll-off Again assuming , the slope of the log of the gain for large is In decibels, the high-frequency roll-off is therefore 20 dB/decade, or 6 dB/octave (the factor of 20 is used because the power is proportional to the square of the voltage gain; see 20 log rule.) Minimum order To design a Butterworth filter using the minimum required number of elements, the minimum order of the Butterworth filter may be calculated as follows. where: and are the pass band frequency and attenuation at that frequency in dB. and are the stop band frequency and attenuation at that frequency in dB. is the minimum number of poles, the order of the filter. denotes the ceiling function. Nonstandard cutoff attenuation The cutoff attenuation for Butterworth filters is usually defined to be −3.01 dB. If it is desired to use a different attenuation at the cutoff frequency, then the following factor may be applied to each pole, whereupon the poles will continue to lie on a circle, but the radius will no longer be unity. The cutoff attenuation equation may be derived through algebraic manipulation of the Butterworth defining equation stated at the top of the page. where: is the relocated pole positioned to set the desired cutoff attenuation. is a −3.01 dB cutoff pole that lies on the unit circle. 
is the desired attenuation at the cutoff frequency in dB (1 dB, 10 dB, etc.). is the number of poles, the order of the filter. Filter implementation and design There are several different filter topologies available to implement a linear analogue filter. The most often used topology for a passive realisation is the Cauer topology, and the most often used topology for an active realisation is the Sallen–Key topology. Cauer topology The Cauer topology uses passive components (shunt capacitors and series inductors) to implement a linear analog filter. The Butterworth filter having a given transfer function can be realised using a Cauer 1-form. The k-th element is given by The filter may start with a series inductor if desired, in which case the Lk are k odd and the Ck are k even. These formulae may usefully be combined by making both Lk and Ck equal to gk. That is, gk is the immittance divided by s. These formulae apply to a doubly terminated filter (that is, the source and load impedance are both equal to unity) with ωc = 1. This prototype filter can be scaled for other values of impedance and frequency. For a singly terminated filter (that is, one driven by an ideal voltage or current source) the element values are given by where and Voltage driven filters must start with a series element and current driven filters must start with a shunt element. These forms are useful in the design of diplexers and multiplexers. Sallen–Key topology The Sallen–Key topology uses active and passive components (noninverting buffers, usually op amps, resistors, and capacitors) to implement a linear analog filter. Each Sallen–Key stage implements a conjugate pair of poles; the overall filter is implemented by cascading all stages in series. If there is a real pole (in the case where is odd), this must be implemented separately, usually as an RC circuit, and cascaded with the active stages. For the second-order Sallen–Key circuit shown to the right the transfer function is given by We wish the denominator to be one of the quadratic terms in a Butterworth polynomial. Assuming that , this will mean that and This leaves two undefined component values that may be chosen at will. Butterworth lowpass filters with Sallen–Key topology of third and fourth order, using only one op amp, are described by Huelsman, and further single-amplifier Butterworth filters also of higher order are given by Jurišić et al. Digital implementation Digital implementations of Butterworth and other filters are often based on the bilinear transform method or the matched Z-transform method, two different methods to discretize an analog filter design. In the case of all-pole filters such as the Butterworth, the matched Z-transform method is equivalent to the impulse invariance method. For higher orders, digital filters are sensitive to quantization errors, so they are often calculated as cascaded biquad sections, plus one first-order or third-order section for odd orders. Comparison with other linear filters Properties of the Butterworth filter are: Monotonic amplitude response in both passband and stopband Quick roll-off around the cutoff frequency, which improves with increasing order Considerable overshoot and ringing in step response, which worsens with increasing order Slightly non-linear phase response Group delay largely frequency-dependent Here is an image showing the gain of a discrete-time Butterworth filter next to other common filter types. All of these filters are fifth-order. 
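Returning briefly to the digital implementation described above, here is a minimal sketch of that workflow, assuming SciPy is available and using purely illustrative sampling-rate and band-edge values: the minimum order is chosen from passband/stopband specifications (as in the Minimum order section) and the filter is realized via the bilinear transform as cascaded second-order sections for numerical robustness.

```python
import numpy as np
from scipy import signal

fs = 48_000.0               # sampling rate in Hz (illustrative)
wp, ws = 3_000.0, 6_000.0   # passband and stopband edge frequencies in Hz
gpass, gstop = 1.0, 40.0    # max passband loss / min stopband attenuation in dB

# Minimum order satisfying the specification.
order, wn = signal.buttord(wp, ws, gpass, gstop, fs=fs)

# Digital Butterworth low-pass, returned as cascaded biquad (SOS) sections.
sos = signal.butter(order, wn, btype='low', output='sos', fs=fs)

w, h = signal.sosfreqz(sos, worN=2048, fs=fs)
idx = np.argmin(np.abs(w - ws))
print(f"order = {order}, attenuation at {ws:.0f} Hz ≈ "
      f"{-20 * np.log10(abs(h[idx])):.1f} dB")
```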
The Butterworth filter rolls off more slowly around the cutoff frequency than the Chebyshev filter or the Elliptic filter, but without ripple. See also Bessel filter Chebyshev filter Comb filter Elliptic filter Filter design References Linear filters Network synthesis filters Electronic design
Butterworth filter
[ "Engineering" ]
3,053
[ "Electronic design", "Electronic engineering", "Design" ]
7,066,527
https://en.wikipedia.org/wiki/Height%20Modernization
Height Modernization is the name of a series of state-by-state programs recently begun by the United States' National Geodetic Survey, a division of the National Oceanic and Atmospheric Administration. The goal of each state program is to place GPS base stations at various locations within each participating state to measure topographic changes in the directions of latitude and longitude caused by subsidence or earthquakes, as well as to measure changes in height (elevation). References Arizona Height Modernization – Arizona Geographic Information Council Texas Height Modernization – Texas A&M University – Corpus Christi Geodesy
Height Modernization
[ "Mathematics" ]
114
[ "Applied mathematics", "Geodesy" ]
7,066,763
https://en.wikipedia.org/wiki/Shack%E2%80%93Hartmann%20wavefront%20sensor
A Shack–Hartmann (or Hartmann–Shack) wavefront sensor (SHWFS) is an optical instrument used for characterizing an imaging system. It is a wavefront sensor commonly used in adaptive optics systems. It consists of an array of lenses (called lenslets) of the same focal length. Each is focused onto a photon sensor (typically a CCD array or CMOS array or quad-cell). If the sensor is placed at the geometric focal plane of the lenslet, and is uniformly illuminated, then, the integrated gradient of the wavefront across the lenslet is proportional to the displacement of the centroid. Consequently, any phase aberration can be approximated by a set of discrete tilts. By sampling the wavefront with an array of lenslets, all of these local tilts can be measured and the whole wavefront reconstructed. Since only tilts are measured the Shack–Hartmann cannot detect discontinuous steps in the wavefront. The design of this sensor improves upon an array of holes in a mask that had been developed in 1904 by Johannes Franz Hartmann as a means of tracing individual rays of light through the optical system of a large telescope, thereby testing the quality of the image. In the late 1960s, Roland Shack and Ben Platt modified the Hartmann screen by replacing the apertures in an opaque screen by an array of lenslets. The terminology as proposed by Shack and Platt was Hartmann screen. The fundamental principle seems to be documented even before Huygens by the Jesuit philosopher, Christopher Scheiner, in Austria. Shack–Hartmann sensors are used in astronomy to measure telescopes and in medicine to characterize eyes for corneal treatment of complex refractive errors. Recently, Pamplona et al. developed and patented an inverse of the Shack–Hartmann system to measure one's eye lens aberrations. While Shack–Hartmann sensors measure the localized slope of the wavefront error using spot displacement in the sensor plane, Pamplona et al. replace the sensor plane with a high resolution visual display (e.g. a mobile phone screen) that displays spots that the user views through a lenslet array. The user then manually shifts the displayed spots (i.e. the generated wavefront) until the spots align. The magnitude of this shift provides data to estimate the first-order parameters such as radius of curvature and hence error due to defocus and spherical aberration. References See also Optical Telescope Element (used this sensor in development of the James Webb Space Telescope) Sensors Optical metrology
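A minimal sketch of the basic processing step described above — computing each lenslet spot's intensity-weighted centroid and converting its displacement from a reference position into a local wavefront slope — is given below, assuming NumPy; the function name, array shapes and parameters are illustrative assumptions rather than part of any standard API. A zonal or modal (e.g., Zernike) reconstruction of the wavefront from these slopes would be the subsequent step.

```python
import numpy as np

def local_slopes(spot_images, ref_centroids, focal_length, pixel_pitch):
    """Estimate per-lenslet wavefront slopes from Shack-Hartmann spot images.

    spot_images   : array (n_lenslets, h, w) of sub-aperture images
    ref_centroids : array (n_lenslets, 2) of reference (unaberrated) spot positions in pixels
    focal_length  : lenslet focal length (same length unit as pixel_pitch)
    pixel_pitch   : detector pixel size
    """
    n, h, w = spot_images.shape
    ys, xs = np.mgrid[0:h, 0:w]
    slopes = np.empty((n, 2))
    for i, img in enumerate(spot_images):
        total = img.sum()
        cx = (img * xs).sum() / total   # intensity-weighted centroid (x)
        cy = (img * ys).sum() / total   # intensity-weighted centroid (y)
        dx = (cx - ref_centroids[i, 0]) * pixel_pitch
        dy = (cy - ref_centroids[i, 1]) * pixel_pitch
        # Spot displacement divided by focal length approximates the average
        # wavefront slope over that lenslet (small-angle regime).
        slopes[i] = dx / focal_length, dy / focal_length
    return slopes
```

In practice each sub-image would be cropped from the full detector frame around its lenslet's reference position before the centroid is computed.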
Shack–Hartmann wavefront sensor
[ "Technology", "Engineering" ]
522
[ "Sensors", "Measuring instruments" ]
7,066,954
https://en.wikipedia.org/wiki/Low-energy%20transfer
A low-energy transfer, or low-energy trajectory, is a route in space that allows spacecraft to change orbits using significantly less fuel than traditional transfers. These routes work in the Earth–Moon system and also in other systems, such as between the moons of Jupiter. The drawback of such trajectories is that they take longer to complete than higher-energy (more-fuel) transfers, such as Hohmann transfer orbits. Low-energy transfers are also known as Weak Stability Boundary trajectories, and include ballistic capture trajectories. Low-energy transfers follow special pathways in space, sometimes referred to as the Interplanetary Transport Network. Following these pathways allows for long distances to be traversed for little change in velocity, or delta-v (Δv). Example missions Missions that have used low-energy transfers include: Hiten, from JAXA SMART-1, from ESA Genesis, from NASA. GRAIL, from NASA. Danuri, from KARI Ongoing missions that use low-energy transfers include: BepiColombo, from ESA/JAXA CAPSTONE, from NASA SLIM, from JAXA Proposed missions using low-energy transfers include: European Student Moon Orbiter (ESMO) Mars Direct History Low-energy transfers to the Moon were first demonstrated in 1991 by the Japanese spacecraft Hiten, which was designed to swing by the Moon but not to enter orbit. The Hagoromo subsatellite was released by Hiten on its first swing-by and may have successfully entered lunar orbit, but suffered a communications failure. Edward Belbruno and James Miller of the Jet Propulsion Laboratory had heard of the failure, and helped to salvage the mission by developing a ballistic capture trajectory that would enable the main Hiten probe to itself enter lunar orbit. The trajectory they developed for Hiten used Weak Stability Boundary Theory and required only a small perturbation to the elliptical swing-by orbit, sufficiently small to be achievable by the spacecraft's thrusters. This course would result in the probe being captured into temporary lunar orbit using zero Δv, but required five months instead of the usual three days for a Hohmann transfer. Delta-v savings From low Earth orbit to lunar orbit, the savings approach 25% on the burn applied after leaving low Earth orbit, compared to the retrograde burn applied near the Moon in the traditional approach, and allow for a doubling of payload. Robert Farquhar had described a 9-day route from low Earth orbit to lunar capture that takes 3.5 km/s. Belbruno's routes from low Earth orbit require a 3.1 km/s burn for trans-lunar injection, a delta-v saving of not more than 0.4 km/s. However, the latter require no large delta-v change after leaving low Earth orbit, which may have operational benefits if using an upper stage with limited restart or in-orbit endurance capability, which would require the spacecraft to have a separate main propulsion system for capture. For rendezvous with the Martian moons, the savings are 12% for Phobos and 20% for Deimos. Rendezvous is targeted because the stable pseudo-orbits around the Martian moons do not spend much time within 10 km of the surface. See also Bi-elliptic transfer Gravity assist Interplanetary Transport Network Orbital mechanics References External links Celestial Mechanics Theory Meets the Nitty-Gritty of Trajectory Design Earth-to-Moon Low Energy Transfers Targeting L1 Hyperbolic Transit Orbit June 2005 Low Energy Trajectories and Chaos: Applications to Astrodynamics and Dynamical Astronomy Navigating Celestial Currents Astrodynamics
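For context on the delta-v figures quoted above, the following back-of-the-envelope sketch (plain Python; the 200 km parking-orbit altitude and the physical constants are illustrative assumptions, not values from this article) uses the vis-viva equation to estimate the trans-lunar injection burn of a classical Hohmann-style transfer, which lands near the ~3.1 km/s figure cited above:

```python
import math

mu_earth = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
r_leo = 6_378e3 + 200e3     # 200 km circular low Earth orbit (illustrative)
r_moon = 384_400e3          # mean Earth-Moon distance, m

# Vis-viva: speed on the transfer ellipse at perigee vs. circular LEO speed.
v_circ = math.sqrt(mu_earth / r_leo)
a_transfer = (r_leo + r_moon) / 2
v_perigee = math.sqrt(mu_earth * (2 / r_leo - 1 / a_transfer))

dv_tli = v_perigee - v_circ
print(f"Hohmann-style trans-lunar injection burn ≈ {dv_tli / 1000:.2f} km/s")
# Prints roughly 3.1 km/s, in the same range as the figures quoted above.
```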
Low-energy transfer
[ "Engineering" ]
725
[ "Astrodynamics", "Aerospace engineering" ]
7,067,238
https://en.wikipedia.org/wiki/Danishefsky%27s%20diene
Danishefsky's diene (Kitahara diene) is an organosilicon compound and a diene with the formal name trans-1-methoxy-3-trimethylsilyloxy-buta-1,3-diene named after Samuel J. Danishefsky. Because the diene is very electron-rich it is a very reactive reagent in Diels-Alder reactions. This diene reacts rapidly with electrophilic alkenes, such as maleic anhydride. The methoxy group promotes highly regioselective additions. The diene is known to react with amines, aldehydes, alkenes and alkynes. Reactions with imines and nitro-olefins have been reported. It was first synthesized by the reaction of trimethylsilyl chloride with 4-methoxy-3-buten-2-one and zinc chloride: The diene has two features of interest: the substituents promote regiospecific addition to unsymmetrical dienophiles and the resulting adduct is amenable to further functional group manipulations after the addition reaction. High regioselectivity is obtained with unsymmetrical alkenes with a preference for a 1,2-relation of the ether group with the electron-deficient alkene-carbon. All this is exemplified in this aza Diels-Alder reaction: In the cycloaddition product, the silyl ether is a synthon for a carbonyl group through the enol. The methoxy group is susceptible to an elimination reaction enabling the formation of a new alkene group. Applications in asymmetric synthesis have been reported. Derivatives have been reported. References Conjugated dienes Trimethylsilyl compounds Reagents for organic chemistry
Danishefsky's diene
[ "Chemistry" ]
392
[ "Functional groups", "Trimethylsilyl compounds", "Reagents for organic chemistry" ]
7,067,420
https://en.wikipedia.org/wiki/%CE%92-Methylamino-L-alanine
β-Methylamino-L-alanine, or BMAA, is a non-proteinogenic amino acid produced by cyanobacteria. BMAA is a neurotoxin. Its potential role in various neurodegenerative disorders is the subject of scientific research. Structure and properties BMAA is a derivative of the amino acid alanine with a methylamino group on the side chain. This non-proteinogenic amino acid is classified as a polar base. Sources and detection BMAA is produced by cyanobacteria in marine, freshwater, and terrestrial environments. In cultured non-nitrogen-fixing cyanobacteria, BMAA production increases in a nitrogen-depleted medium. The biosynthetic pathway in cyanobacteria is unknown, but involvement of BMAA and its structural analog 2,4-diaminobutanoic acid (2,4-DAB) in environmental iron scavenging has been hypothesized. BMAA has been found in aquatic organisms and in plants with cyanobacterial symbionts such as certain lichens, the floating fern Azolla, the leaf petioles of the tropical flowering plant Gunnera, and cycads, as well as in animals that eat the fleshy covering of cycad seeds, including flying foxes. High concentrations (144 to 1836 ng/mg of flesh) of BMAA are present in shark fins. Because BMAA is a neurotoxin, consumption of shark fin soup and cartilage pills may therefore pose a health risk. The toxin can be detected via several laboratory methods, including liquid chromatography, high-performance liquid chromatography, mass spectrometry, amino acid analyzer, capillary electrophoresis, and NMR spectroscopy. Neurotoxicity BMAA can cross the blood–brain barrier in rats. It takes longer to get into the brain than into other organs, but once there, it is trapped in proteins, forming a reservoir for slow release over time. Mechanisms Although the mechanisms by which BMAA causes motor neuron dysfunction and death are not entirely understood, current research suggests that there are multiple mechanisms of action. Acutely, BMAA can act as an excitotoxin on glutamate receptors, such as NMDA, calcium-dependent AMPA, and kainate receptors. The activation of the metabotropic glutamate receptor 5 is believed to induce oxidative stress in the neuron by depletion of glutathione. BMAA can be misincorporated into nascent proteins in place of L-serine, possibly causing protein misfolding and aggregation, both hallmarks of tangle diseases, including Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis (ALS), progressive supranuclear palsy (PSP), and Lewy body disease. In vitro research has shown that protein association of BMAA may be inhibited in the presence of excess L-serine. Effects A study performed in 2015 with vervet monkeys (Chlorocebus sabaeus) in St. Kitts, which are homozygous for the apoE4 gene (a condition which in humans is a risk factor for Alzheimer's disease), found that vervets that were administered BMAA orally developed hallmark histopathology features of Alzheimer's disease, including amyloid beta plaques and neurofibrillary tangle accumulation. Vervets in the trial fed smaller doses of BMAA were found to have correlative decreases in these pathology features. Additionally, vervets that were co-administered BMAA with serine were found to have 70% fewer beta-amyloid plaques and neurofibrillary tangles than those administered BMAA alone, suggesting that serine may be protective against the neurotoxic effects of BMAA. 
This experiment represents the first in-vivo model of Alzheimer's disease that features both beta-amyloid plaques and hyperphosphorylated tau protein. This study also demonstrates that BMAA, an environmental toxin, can trigger neurodegenerative disease as a result of a gene-environment interaction. Degenerative locomotor diseases have been described in animals grazing on cycad species, fueling interest in a possible link between the plant and the etiology of ALS/PDC. Subsequent laboratory investigations discovered the presence of BMAA. BMAA induced severe neurotoxicity in rhesus macaques, including: limb muscle atrophy nonreactive degeneration of anterior horn cells degeneration and partial loss of pyramidal neurons of the motor cortex behavioral dysfunction conduction deficits in the central motor pathway neuropathological changes of motor cortex Betz cells There are reports that low BMAA concentrations can selectively kill cultured motor neurons from mouse spinal cords and produce reactive oxygen species. Scientists have also found that newborn rats treated with BMAA show a progressive neurodegeneration in the hippocampus, including intracellular fibrillar inclusions, and impaired learning and memory as adults. BMAA has been reported to be excreted into rodent breast milk, and subsequently transferred to the suckling offspring, suggesting mothers' and cows' milk might be other possible exposure routes. Human cases Chronic dietary exposure to BMAA is now considered to be a cause of the amyotrophic lateral sclerosis/parkinsonism–dementia complex (ALS/PDC) that had an extremely high rate of incidence among the Chamorro people of Guam. The Chamorro call the condition lytico-bodig. In the 1950s, ALS/PDC prevalence ratios and death rates for Chamorro residents of Guam and Rota were 50–100 times that of developed countries, including the United States. No demonstrable heritable or viral factors were found for the disease, and a subsequent decline of ALS/PDC after 1963 on Guam led to the search for responsible environmental agents. The use of flour made from cycad seed (Cycas micronesica) in traditional food items decreased as that plant became rarer and the Chamorro population became more Americanized following World War II. Cycads harbor symbiotic cyanobacteria of the genus Nostoc in specialized roots which push up through the leaf litter into the light; these cyanobacteria produce BMAA. In addition to eating traditional food items from cycad flour directly, BMAA may be ingested by humans through biomagnification. Flying foxes, a Chamorro delicacy, forage on the fleshy seed covering of cycad seeds and concentrate the toxin in their bodies. Twenty-four specimens of flying foxes from museum collections were tested for BMAA, which was found in large concentrations in the flying foxes from Guam. As of 2021 studies continued examining BMAA biomagnification in marine and estuarine systems and its possible impact on human health outside of Guam. Studies on human brain tissue of ALS/PDC, ALS, Alzheimer's disease, Parkinson's disease, Huntington's disease, and neurological controls indicated that BMAA is present in non-genetic progressive neurodegenerative disease, but not in controls or genetic-based Huntington's disease. research into the role of BMAA as an environmental factor in neurodegenerative disease continued. 
Clinical trials Safe and effective ways of treating ALS patients with L-serine, which has been found to protect non-human primates from BMAA-induced neurodegeneration, have been goals of clinical trials conducted by the Phoenix Neurological Associates and the Forbes/Norris ALS/MND clinic and sponsored by the Institute for Ethnomedicine. See also Oxalyldiaminopropionic acid, a related toxin References 2,3-Diaminopropionic acids Neurotoxins Cyanotoxins Toxic amino acids Secondary amino acids
Β-Methylamino-L-alanine
[ "Chemistry" ]
1,688
[ "Neurochemistry", "Neurotoxins" ]
7,067,473
https://en.wikipedia.org/wiki/Industrial%20applications%20of%20nanotechnology
Nanotechnology is impacting the field of consumer goods: several products that incorporate nanomaterials are already available in a variety of items, many of which people do not even realize contain nanoparticles, with novel functions ranging from easy-to-clean to scratch-resistant. Examples include car bumpers that are made lighter, clothing that is more stain-repellent, sunscreen that is more radiation-resistant, synthetic bones that are stronger, cell phone screens that are lighter in weight, glass packaging for drinks that gives a longer shelf-life, and balls for various sports that are made more durable. Using nanotech, modern textiles are expected in the mid-term to become "smart" through embedded "wearable electronics"; such novel products also have promising potential, especially in the field of cosmetics, and nanotechnology has numerous potential applications in heavy industry. Nanotechnology is predicted to be a main driver of technology and business in this century and holds the promise of higher-performance materials, intelligent systems and new production methods with significant impact for all aspects of society. Foods A complex set of engineering and scientific challenges in the food and bioprocessing industry for manufacturing high quality and safe food through efficient and sustainable means can be solved through nanotechnology. Bacteria identification and food quality monitoring using biosensors; intelligent, active, and smart food packaging systems; and nanoencapsulation of bioactive food compounds are a few examples of emerging applications of nanotechnology for the food industry. Nanotechnology can be applied in the production, processing, safety and packaging of food. A nanocomposite coating process could improve food packaging by placing anti-microbial agents directly on the surface of the coated film. Nanocomposites could increase or decrease gas permeability of different fillers as is needed for different products. They can also improve the mechanical and heat-resistance properties and lower the oxygen transmission rate. Research is being performed to apply nanotechnology to the detection of chemical and biological substances in foods. Nano-foods New foods are among the nanotechnology-created consumer products coming onto the market at the rate of 3 to 4 per week, according to the Project on Emerging Nanotechnologies (PEN), based on an inventory it has drawn up of 609 known or claimed nano-products. 
On PEN's list are three foods—a brand of canola cooking oil called Canola Active Oil, a tea called Nanotea and a chocolate diet shake called Nanoceuticals Slim Shake Chocolate. According to company information posted on PEN's Web site, the canola oil, by Shemen Industries of Israel, contains an additive called "nanodrops" designed to carry vitamins, minerals and phytochemicals through the digestive system and urea. The shake, according to U.S. manufacturer RBC Life Sciences Inc., uses cocoa-infused "NanoClusters" to enhance the taste and health benefits of cocoa without the need for extra sugar. Consumer goods Surfaces and coatings The most prominent application of nanotechnology in the household is self-cleaning or "easy-to-clean" surfaces on ceramics or glasses. Nanoceramic particles have improved the smoothness and heat resistance of common household equipment such as the flat iron. The first sunglasses using protective and anti-reflective ultrathin polymer coatings are on the market. For optics, nanotechnology also offers scratch-resistant surface coatings based on nanocomposites. Nano-optics could allow for an increase in precision of pupil repair and other types of laser eye surgery. Textiles The use of engineered nanofibers already makes clothes water- and stain-repellent or wrinkle-free. Textiles with a nanotechnological finish can be washed less frequently and at lower temperatures. Nanotechnology has also been used to integrate a membrane of tiny carbon particles into fabric, guaranteeing full-surface protection from electrostatic charges for the wearer. Many other applications have been developed by research institutions such as the Textiles Nanotechnology Laboratory at Cornell University, and the UK's Dstl and its spin-out company P2i. Sports Nanotechnology may also play a role in sports such as soccer, football, and baseball. Materials for new athletic shoes may be made in order to make the shoe lighter (and the athlete faster). Baseball bats already on the market are made with carbon nanotubes that reinforce the resin, which is said to improve performance by making the bat lighter. Other items that use antimicrobial nanotechnology, such as sport towels, yoga mats and exercise mats, are on the market and are used by players in the National Football League to prevent illnesses caused by bacteria such as methicillin-resistant Staphylococcus aureus (commonly known as MRSA). Aerospace and vehicle manufacturers Lighter and stronger materials will be of immense use to aircraft manufacturers, leading to increased performance. Spacecraft will also benefit, where weight is a major factor. Nanotechnology might thus help to reduce the size of equipment and thereby decrease the fuel consumption required to get it airborne. Hang gliders may be able to halve their weight while increasing their strength and toughness through the use of nanotech materials. Nanotech is lowering the mass of supercapacitors that will increasingly be used to give power to assistive electrical motors for launching hang gliders off flatland to thermal-chasing altitudes. Much like aerospace, lighter and stronger materials would be useful for creating vehicles that are both faster and safer. Combustion engines might also benefit from parts that are more hard-wearing and more heat-resistant. Military Biological sensors Nanotechnology can improve the military's ability to detect biological agents. By using nanotechnology, the military would be able to create sensor systems that could detect biological agents. 
The sensor systems are already well developed and will be one of the first forms of nanotechnology that the military will start to use. Uniform material Nanoparticles can be injected into the material on soldiers’ uniforms to not only make the material more durable, but also to protect soldiers from many different dangers such as high temperatures, impacts and chemicals. The nanoparticles in the material protect soldiers from these dangers by grouping together when something strikes the armor and stiffening the area of impact. This stiffness helps lessen the impact of whatever hit the armor, whether it was extreme heat or a blunt force. By reducing the force of the impact, the nanoparticles protect the soldier wearing the uniform from any injury the impact could have caused. Another way nanotechnology can improve soldiers’ uniforms is by creating a better form of camouflage. Mobile pigment nanoparticles injected into the material can produce a better form of camouflage. These mobile pigment particles would be able to change the color of the uniforms depending upon the area that the soldiers are in. There is still much research being done on this self-changing camouflage. Nanotechnology can improve thermal camouflage. Thermal camouflage helps protect soldiers from people who are using night vision technology. Surfaces of many different military items can be designed in a way that electromagnetic radiation can help lower the infrared signatures of the object that the surface is on. Surfaces of soldiers’ uniforms and surfaces of military vehicle are a few surfaces that can be designed in this way. By lowering the infrared signature of both the soldiers and the military vehicles the soldiers are using, it will provide better protection from infrared guided weapons or infrared surveillance sensors. Communication method There is a way to use nanoparticles to create coated polymer threads that can be woven into soldiers’ uniforms. These polymer threads could be used as a form of communication between the soldiers. The system of threads in the uniforms could be set to different light wavelengths, eliminating the ability for anyone else to listen in. This would lower the risk of having anything intercepted by unwanted listeners. Medical system A medical surveillance system for soldiers to wear can be made using nanotechnology. This system would be able to watch over their health and stress levels. The systems would be able to react to medical situations by releasing drugs or compressing wounds as necessary. This means that if the system detected an injury that was bleeding, it would be able to compress around the wound until further medical treatment could be received. The system would also be able to release drugs into the soldier's body for health reasons, such as pain killers for an injury. The system would be able to inform the medics at base of the soldier's health status at all times that the soldier is wearing the system. The energy needed to communicate this information back to base would be produced through the soldier's body movements. Weapons Nanoweapon is the name given to military technology currently under development which seeks to exploit the power of nanotechnology in the modern battlefield. Risks in military People such as state agencies, criminals and enterprises could use nano-robots to eavesdrop on conversations held in private. Grey goo: an uncontrollable, self-replicating nano-machine or robot. 
Nanoparticles used in different military materials could potentially be a hazard to the soldiers that are wearing the material, if the material is allowed to get worn out. As the uniforms wear down it is possible for nanomaterial to break off and enter the soldiers’ bodies. Having nanoparticles entering the soldiers’ bodies would be very unhealthy and could seriously harm them. There is not a lot of information on what the actual damage to the soldiers would be, but there have been studies on the effect of nanoparticles entering a fish through its skin. The studies showed that the different fish in the study suffered from varying degrees of brain damage. Although brain damage would be a serious negative effect, the studies also say that the results cannot be taken as an accurate example of what would happen to soldiers if nanoparticles entered their bodies. There are very strict regulations on the scientists that manufacture products with nanoparticles. With these strict regulations, they are able to largely decrease the danger of nanoparticles wearing off of materials and entering the soldiers’ systems. Catalysis Chemical catalysis benefits especially from nanoparticles, due to the extremely large surface-to-volume ratio. The application potential of nanoparticles in catalysis ranges from fuel cell to catalytic converters and photocatalytic devices. Catalysis is also important for the production of chemicals. For example, nanoparticles with a distinct chemical surrounding (ligands), or specific optical properties. Platinum nanoparticles are being considered in the next generation of automotive catalytic converters because the very high surface area of nanoparticles could reduce the amount of platinum required. However, some concerns have been raised due to experiments demonstrating that they will spontaneously combust if methane is mixed with the ambient air. Ongoing research at the Centre National de la Recherche Scientifique (CNRS) in France may resolve their true usefulness for catalytic applications. Nanofiltration may come to be an important application, although future research must be careful to investigate possible toxicity. Construction Nanotechnology has the potential to make construction faster, cheaper, safer, and more varied. Automation of nanotechnology construction can allow for the creation of structures from advanced homes to massive skyscrapers much more quickly and at much lower cost. In the near future, Nanotechnology can be used to sense cracks in foundations of architecture and can send nanobots to repair them. Nanotechnology is an active research area that encompasses a number of disciplines such as electronics, bio-mechanics and coatings. These disciplines assist in the areas of civil engineering and construction materials. If nanotechnology is implemented in the construction of homes and infrastructure, such structures will be stronger. If buildings are stronger, then fewer of them will require reconstruction and less waste will be produced. Nanotechnology in construction involves using nanoparticles such as alumina and silica. Manufacturers are also investigating the methods of producing nano-cement. If cement with nano-size particles can be manufactured and processed, it will open up a large number of opportunities in the fields of ceramics, high strength composites and electronic applications. Nanomaterials still have a high cost relative to conventional materials, meaning that they are not likely to feature in high-volume building materials. 
The day when this technology slashes the consumption of structural steel has not yet been contemplated. Cement Much analysis of concrete is being done at the nano-level in order to understand its structure. Such analysis uses various techniques developed for study at that scale such as Atomic Force Microscopy (AFM), Scanning Electron Microscopy (SEM) and Focused Ion Beam (FIB). This has come about as a side benefit of the development of these instruments to study the nanoscale in general, but the understanding of the structure and behavior of concrete at the fundamental level is an important and very appropriate use of nanotechnology. One of the fundamental aspects of nanotechnology is its interdisciplinary nature and there has already been cross over research between the mechanical modeling of bones for medical engineering to that of concrete which has enabled the study of chloride diffusion in concrete (which causes corrosion of reinforcement). Concrete is, after all, a macro-material strongly influenced by its nano-properties and understanding it at this new level is yielding new avenues for improvement of strength, durability and monitoring as outlined in the following paragraphs Silica (SiO2) is present in conventional concrete as part of the normal mix. However, one of the advancements made by the study of concrete at the nanoscale is that particle packing in concrete can be improved by using nano-silica which leads to a densifying of the micro and nanostructure resulting in improved mechanical properties. Nano-silica addition to cement based materials can also control the degradation of the fundamental C-S-H (calcium-silicatehydrate) reaction of concrete caused by calcium leaching in water as well as block water penetration and therefore lead to improvements in durability. Related to improved particle packing, high energy milling of ordinary Portland cement (OPC) clinker and standard sand, produces a greater particle size diminution with respect to conventional OPC and, as a result, the compressive strength of the refined material is also 3 to 6 times higher (at different ages). Steel Steel is a widely available material that has a major role in the construction industry. The use of nanotechnology in steel helps to improve the physical properties of steel. Fatigue, or the structural failure of steel, is due to cyclic loading. Current steel designs are based on the reduction in the allowable stress, service life or regular inspection regime. This has a significant impact on the life-cycle costs of structures and limits the effective use of resources. Stress risers are responsible for initiating cracks from which fatigue failure results. The addition of copper nanoparticles reduces the surface un-evenness of steel, which then limits the number of stress risers and hence fatigue cracking. Advancements in this technology through the use of nanoparticles would lead to increased safety, less need for regular inspection, and more efficient materials free from fatigue issues for construction. Steel cables can be strengthened using carbon nanotubes. Stronger cables reduce the costs and period of construction, especially in suspension bridges, as the cables are run from end to end of the span. The use of vanadium and molybdenum nanoparticles improves the delayed fracture problems associated with high strength bolts. This reduces the effects of hydrogen embrittlement and improves steel micro-structure by reducing the effects of the inter-granular cementite phase. 
Welds and the Heat Affected Zone (HAZ) adjacent to welds can be brittle and fail without warning when subjected to sudden dynamic loading. The addition of nanoparticles such as magnesium and calcium makes the HAZ grains finer in plate steel. This nanoparticle addition leads to an increase in weld strength. The increase in strength results in a smaller resource requirement because less material is required in order to keep stresses within allowable limits. Wood Nanotechnology represents a major opportunity for the wood industry to develop new products, substantially reduce processing costs, and open new markets for biobased materials. Wood is also composed of nanotubes or “nanofibrils”; namely, lignocellulosic (woody tissue) elements which are twice as strong as steel. Harvesting these nanofibrils would lead to a new paradigm in sustainable construction as both the production and use would be part of a renewable cycle. Some developers have speculated that building functionality onto lignocellulosic surfaces at the nanoscale could open new opportunities for such things as self-sterilizing surfaces, internal self-repair, and electronic lignocellulosic devices. These non-obtrusive active or passive nanoscale sensors would provide feedback on product performance and environmental conditions during service by monitoring structural loads, temperatures, moisture content, decay fungi, heat losses or gains, and loss of conditioned air. Currently, however, research in these areas appears limited. Due to its natural origins, wood is leading the way in cross-disciplinary research and modelling techniques. BASF have developed a highly water-repellent coating based on the actions of the lotus leaf as a result of the incorporation of silica and alumina nanoparticles and hydrophobic polymers. Mechanical studies of bones have been adapted to model wood, for instance in the drying process. Glass Research is being carried out on the application of nanotechnology to glass, another important material in construction. Titanium dioxide (TiO2) nanoparticles are used to coat glazing since TiO2 has sterilizing and anti-fouling properties. The particles catalyze powerful reactions that break down organic pollutants, volatile organic compounds and bacterial membranes. TiO2 is hydrophilic (water-attracting), which can attract rain drops that then wash off the dirt particles. The introduction of nanotechnology in the glass industry thus incorporates a self-cleaning property into glass. Fire-protective glass is another application of nanotechnology. This is achieved by using a clear intumescent layer sandwiched between glass panels (an interlayer) formed of silica nanoparticles (SiO2), which turns into a rigid and opaque fire shield when heated. Most of the glass used in construction is on the exterior surface of buildings, so the light and heat entering the building through the glass need to be limited. Nanotechnology can provide a better solution for blocking light and heat coming through windows. Coatings Coatings are an important area in construction; they are extensively used to paint walls, doors, and windows. Coatings should provide a protective layer bound to the base material to produce a surface of the desired protective or functional properties. The coatings should have self-healing capabilities through a process of "self-assembly". Nanotechnology is being applied to paints to obtain coatings with self-healing capabilities and corrosion protection under insulation. 
These coatings are hydrophobic, repel water from metal pipes and can also protect metal from salt-water attack. Nanoparticle-based systems can provide better adhesion and transparency. The TiO2 coating captures and breaks down organic and inorganic air pollutants by a photocatalytic process, which can put road surfaces to good environmental use. Fire protection and detection Fire resistance of steel structures is often provided by a coating produced by a spray-on-cementitious process. Nano-cement has the potential to create a new paradigm in this area of application because the resulting material can be used as a tough, durable, high-temperature coating. It provides a good method of increasing fire resistance and is a cheaper option than conventional insulation. Risks in construction In building construction, nanomaterials are widely used, from self-cleaning windows to flexible solar panels to Wi-Fi-blocking paint. Self-healing concrete, materials that block ultraviolet and infrared radiation, smog-eating coatings, and light-emitting walls and ceilings are among the new nanomaterials in construction. Nanotechnology holds promise for making the "smart home" a reality. Nanotech-enabled sensors can monitor temperature, humidity, and airborne toxins, which requires improved nanotech-based batteries. Building components will be intelligent and interactive; since the sensors use wireless components, they can collect a wide range of data. If nanosensors and nanomaterials become an everyday part of buildings, as with smart homes, what are the consequences of these materials for human beings? Effect of nanoparticles on health and environment: Nanoparticles may also enter the body if building water supplies are filtered through commercially available nanofilters. Airborne and waterborne nanoparticles enter from building ventilation and wastewater systems. Effect of nanoparticles on societal issues: As sensors become commonplace, a loss of privacy and autonomy may result from users interacting with increasingly intelligent building components. References External links Overview of Nanotechnology Applications Project on Emerging Nanotechnologies Nanotechnology
Industrial applications of nanotechnology
[ "Materials_science", "Engineering" ]
4,495
[ "Nanotechnology", "Materials science" ]
7,068,038
https://en.wikipedia.org/wiki/Thermal%20barrier%20coating
Thermal barrier coatings (TBCs) are advanced materials systems usually applied to metallic surfaces on parts operating at elevated temperatures, such as gas turbine combustors and turbines, and in automotive exhaust heat management. These 100 μm to 2 mm thick coatings of thermally insulating materials serve to insulate components from large and prolonged heat loads and can sustain an appreciable temperature difference between the load-bearing alloys and the coating surface. In doing so, these coatings can allow for higher operating temperatures while limiting the thermal exposure of structural components, extending part life by reducing oxidation and thermal fatigue. In conjunction with active film cooling, TBCs permit working fluid temperatures higher than the melting point of the metal airfoil in some turbine applications. Due to increasing demand for more efficient engines running at higher temperatures with better durability/lifetime and thinner coatings to reduce parasitic mass for rotating/moving components, there is significant motivation to develop new and advanced TBCs. The material requirements of TBCs are similar to those of heat shields, although in the latter application emissivity tends to be of greater importance. Structure An effective TBC needs to meet certain requirements to perform well in aggressive thermo-mechanical environments. To deal with thermal expansion stresses during heating and cooling, adequate porosity is needed, as well as appropriate matching of thermal expansion coefficients with the metal surface that the TBC is coating. Phase stability is required to prevent significant volume changes (which occur during phase changes), which would cause the coating to crack or spall. In air-breathing engines, oxidation resistance is necessary, as well as decent mechanical properties for rotating/moving parts or parts in contact. Therefore, general requirements for an effective TBC can be summarized as needing: 1) a high melting point. 2) no phase transformation between room temperature and operating temperature. 3) low thermal conductivity. 4) chemical inertness. 5) similar thermal expansion match with the metallic substrate. 6) good adherence to the substrate. 7) low sintering rate for a porous microstructure. These requirements severely limit the number of materials that can be used, with ceramic materials usually being able to satisfy the required properties. Thermal barrier coatings typically consist of four layers: the metal substrate, metallic bond coat, thermally-grown oxide (TGO), and ceramic topcoat. The ceramic topcoat is typically composed of yttria-stabilized zirconia (YSZ), which has very low conductivity while remaining stable at the nominal operating temperatures typically seen in TBC applications. This ceramic layer creates the largest thermal gradient of the TBC and keeps the lower layers at a lower temperature than the surface. However, above 1200 °C, YSZ suffers from unfavorable phase transformations, changing from t'-tetragonal to tetragonal to cubic to monoclinic. Such phase transformations lead to crack formation within the top coating. Recent efforts to develop an alternative to the YSZ ceramic topcoat have identified many novel ceramics (e.g., rare earth zirconates) exhibiting superior performance at temperatures above 1200 °C, but with inferior fracture toughness compared to that of YSZ. 
In addition, such zirconates may have a high concentration of oxygen-ion vacancies, which may facilitate oxygen transport and exacerbate the formation of the TGO. With a thick enough TGO, spalling of the coating may occur, which is a catastrophic mode of failure for TBCs. The use of such coatings would require additional coatings that are more oxidation resistant, such as alumina or mullite. The bond coat is an oxidation-resistant metallic layer which is deposited directly on top of the metal substrate. It is typically 75-150 μm thick and made of a NiCrAlY or NiCoCrAlY alloy, though other bond coats made of Ni and Pt aluminides also exist. The primary purpose of the bond coat is to protect the metal substrate from oxidation and corrosion, particularly from oxygen and corrosive elements that pass through the porous ceramic top coat. At peak operating conditions found in gas-turbine engines with temperatures in excess of 700 °C, oxidation of the bond-coat leads to the formation of a thermally-grown oxide (TGO) layer. Formation of the TGO layer is inevitable for many high-temperature applications, so thermal barrier coatings are often designed so that the TGO layer grows slowly and uniformly. Such a TGO will have a structure that has a low diffusivity for oxygen, so that further growth is controlled by diffusion of metal from the bond-coat rather than the diffusion of oxygen from the top-coat. The TBC can also be locally modified at the interface between the bond coat and the thermally grown oxide so that it acts as a thermographic phosphor, which allows for remote temperature measurement. Failure mechanisms In general, failure mechanisms of TBCs are very complex and can vary significantly from TBC to TBC and depending on the environment in which the thermal cycling takes place. For this reason, the failure mechanisms are still not yet fully understood. Despite this multitude of failure mechanisms and their complexity, though, three of the most important failure mechanisms have to do with the growth of the thermally-grown oxide (TGO) layer, thermal shock, and sintering of the top coat (TC), discussed below. Additional factors contributing to failure of TBCs include mechanical rumpling of the bond coat during thermal cyclic exposure (especially coatings in aircraft engines), accelerated oxidation at high temperatures, hot corrosion, and molten deposit degradation. TGO layer growth The growth of the thermally-grown oxide (TGO) layer is the most important cause of TBC spallation failure. When the TGO forms as the TBC is heated, it causes a compressive growth stress associated with volume expansion. When it is cooled, a lattice mismatch strain arises between TGO and the top coat (TC) due to differing thermal expansion coefficients. Lattice mismatch strain refers to the strain that comes about when two crystalline lattices at an interface have different lattice constants and must nonetheless match one another where they meet at the interface. These growth stresses and lattice mismatch stresses, which increase with increasing cycling number, lead to plastic deformation, crack nucleation, and crack propagation, ultimately contributing to TBC failure after many cycles of heating and cooling. For this reason, in order to make a TBC that lasts a long time before failure, the thermal expansion coefficients between all layers should match well. Whereas a high BC creep rate increases the tensile stresses present in the TC due to TGO growth, a high TGO creep rate actually decreases these tensile stresses. 
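The role of thermal-expansion mismatch in these stresses can be illustrated with the standard first-order estimate for biaxial thermal stress in a thin coating on a thick substrate, sigma ≈ E·Δα·ΔT/(1 − ν). The sketch below is only an order-of-magnitude illustration under that textbook approximation; the modulus, Poisson's ratio and expansion coefficients are assumed representative values, not figures taken from this article.

```python
# First-order biaxial thermal mismatch stress between a coating and substrate:
#   sigma ~ E_coating * (alpha_substrate - alpha_coating) * dT / (1 - nu)
# All material values below are assumed, representative placeholders.

def mismatch_stress(E_pa, nu, alpha_coating, alpha_substrate, delta_T):
    return E_pa * (alpha_substrate - alpha_coating) * delta_T / (1.0 - nu)

E_ysz = 50e9         # Pa, assumed effective modulus of a porous YSZ topcoat
nu = 0.2             # assumed Poisson's ratio of the topcoat
alpha_ysz = 10e-6    # 1/K, assumed expansion coefficient of the topcoat
alpha_metal = 15e-6  # 1/K, assumed expansion coefficient of a Ni-based substrate
dT = 1000.0          # K, temperature change on cooling from operation

sigma = mismatch_stress(E_ysz, nu, alpha_ysz, alpha_metal, dT)
print(f"mismatch stress on cooling: ~{sigma / 1e6:.0f} MPa")
# A smaller expansion-coefficient difference, or a more compliant porous
# coating, directly reduces this stress, consistent with the requirement that
# the expansion coefficients of all layers be well matched.
```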
Because the TGO is made of Al2O3, and the metallic bond coat (BC) is normally made of an aluminum-containing alloy, TGO formation tends to deplete the Al in the bond coat. If the BC runs out of aluminum to supply to the growing TGO, it's possible for compounds other than Al2O3 to enter the TGO (such as Y2O3, for example), which weakens the TGO, making it easier for the TBC to fail. Thermal shock Because the purpose of TBCs is to insulate metallic substrates such that they can be used for prolonged times at high temperatures, they often undergo thermal shock, which is a stress that arises in a material when it undergoes a rapid temperature change. This thermal shock is a major contributor to the failure of TBCs, since the thermal shock stresses can cause cracking in the TBC if they are sufficiently strong. In fact, the repeated thermal shocks associated with turning the engine on and off many times is a main contributor to failure of TBC-coated turbine blades in airplanes. Over the course of repeated cycles of rapid heating and cooling, thermal shock leads to significant tensile strains perpendicular to the interface between the BC and the TC, reaching a maximum magnitude at the BC/TC interface, as well as a periodic strain field in the direction parallel to the BC/TC interface. Especially after many cycles of heating and cooling, these strains can lead to nucleation and propagation of cracks both parallel and perpendicular to the BC/TC interface. These linked-up horizontal and vertical cracks due to thermal shock ultimately contribute to the failure of the TBC via delamination of the TC. Sintering A third major contributor to TBC failure is sintering of the TC. In TBC applications, YSZ has a columnar structure. These columns start out with a feathery structure, but become smoother with heating due to atomic diffusion at high temperature in order to minimize surface energy. The undulations on adjacent smoother columns eventually touch one another and begin to coalesce. As the YSZ sinters and becomes more dense in this fashion, it shrinks in size, leading to the formation of cracks via a mechanism analogous to the formation of mudcracks, where the top layer shrinks but the bottom layer (the BC in the case of TBCs, or the earth in the case of mud) remains the same size. This mud-cracking effect can be exacerbated if the underlying substrate is rough, or if it roughens upon heating, for the following reason. If the surface under the columns is curvy and if the columns can be modeled as straight rods normal to the surface underneath them, then column density will necessarily be high above valleys in the surface and low above peaks in the surface due to the tilting of the straight rods. This leads to a non-uniform columnar density throughout the TBC and promotes crack development in low-density regions. In addition to this mud-cracking effect, sintering increases the Young's modulus of the TC as the columns become attached to one another. This in turn increases the lattice mismatch strain at the interface between the TC and BC or TGO. The TC's increased Young's modulus makes it more difficult for its lattice to bend to meet that of the substrate under it; this is the origin of the increased lattice mismatch strain. In turn, this increased mismatch strain adds with the other previously mentioned strain fields in the TC to promote crack formation and propagation, leading to failure of the TBC. 
Types YSZ YSZ is the most widely studied and used TBC because it provides excellent performance in applications such as diesel engines and gas turbines. Additionally, it was one of the few refractory oxides that could be deposited as thick films using the then-known technology of plasma spraying. As for properties, it has low thermal conductivity, high thermal expansion coefficient, and low thermal shock resistance. However, it has a fairly low operating limit of 1200 °C due to phase instability, and can corrode due to its oxygen transparency. Mullite Mullite is a compound of alumina and silica, with the formula 3Al2O3-2SiO2. It has a low density, along with good mechanical properties, high thermal stability, low thermal conductivity, and is corrosion and oxidation resistant. However, it suffers from crystallization and volume contraction above 800 °C, which leads to cracking and delamination. Therefore, this material is suitable as a zirconia alternative for applications such as diesel engines, where surface temperatures are relatively low and temperature variations across the coating may be large. Alumina Only α-phase Al2O3 is stable among aluminum oxides. With a high hardness and chemical inertness, but high thermal conductivity and low thermal expansion coefficient, alumina is often used as an addition to an existing TBC coating. By incorporating alumina in YSZ TBC, oxidation and corrosion resistance can be improved, as well as hardness and bond strength without significant change in the elastic modulus or toughness. One challenge with alumina is applying the coating through plasma spraying, which tends to create a variety of unstable phases, such as γ-alumina. When these phases eventually transform into the stable α-phase through thermal cycling, a significant volume change of ~15% (γ to α) follows, which can lead to microcrack formation in the coating. CeO2 + YSZ CeO2 (Ceria) has a higher thermal expansion coefficient and lower thermal conductivity than YSZ. Adding ceria into a YSZ coating can significantly improve the TBC performance, especially in thermal shock resistance. This is most likely due to less bond coat stress due to better insulation and a better net thermal expansion coefficient. Some negative effects of the addition of ceria include the decrease of hardness and accelerated rate of sintering of the coating (less porous). Rare-earth zirconates La2Zr2O7, also referred to as LZ, is an example of a rare-earth zirconate that shows potential for use as a TBC. This material is phase stable up to its melting point and can largely tolerate vacancies on any of its sublattices. Along with the ability for site-substitution with other elements, this means that thermal properties can potentially be tailored. Although it has a very low thermal conductivity compared to YSZ, it also has a low thermal expansion coefficient and low toughness. Rare earth oxides Single and mixed phase materials consisting of rare earth oxides represent a promising low-cost approach towards TBCs. Coatings of rare earth oxides (e.g.: La2O3, Nb2O5, Pr2O3, CeO2 as main phases) have lower thermal conductivity and higher thermal expansion coefficients when compared to YSZ. The main challenge to overcome is the polymorphic nature of most rare earth oxides at elevated temperatures, as phase instability tends to negatively impact thermal shock resistance. 
Another advantage of rare earth oxides as TBCs is their tendency to exhibit intrinsic hydrophobicity, which provides various advantages for systems that undergo intermittent use and may otherwise suffer from moisture adsorption or surface ice formation. Metal-glass composites A powder mixture of metal and normal glass can be plasma-sprayed in vacuum, with a suitable composition resulting in a TBC comparable to YSZ. Additionally, metal-glass composites have superior bond-coat adherence, higher thermal expansion coefficients, and no open porosity, which prevents oxidation of the bond-coat. Uses Automotive Thermal barrier ceramic coatings are becoming more common in automotive applications. They are specifically designed to reduce heat loss from engine exhaust system components including exhaust manifolds, turbocharger casings, exhaust headers, downpipes and tailpipes. This process is also known as "exhaust heat management". When used under-bonnet, these have the positive effect of reducing engine bay temperatures, therefore reducing the intake air temperature. Although most ceramic coatings are applied to metallic parts directly related to the engine exhaust system, technological advances now allow thermal barrier coatings to be applied via plasma spray onto composite materials. It is now commonplace to find ceramic-coated components in modern engines and on high-performance components in race series such as Formula 1. As well as providing thermal protection, these coatings are also used to prevent physical degradation of the composite material due to friction. This is possible because the ceramic material bonds with the composite (instead of merely sticking on the surface with paint), thereby forming a tough coating that doesn't chip or flake easily. Although thermal barrier coatings have been applied to the insides of exhaust system components, problems have been encountered because of the difficulty in preparing the internal surface prior to coating. Aviation Thermal barrier coatings are commonly used to protect nickel-based superalloys from both melting and thermal cycling in aviation turbines. Combined with cool air flow, TBCs increase the allowable gas temperature above that of the superalloy melting point. To avoid the difficulties associated with the melting point of superalloys, many researchers are investigating ceramic-matrix composites (CMCs) as high-temperature alternatives. Generally, these are made from fiber-reinforced SiC. Rotating parts are especially good candidates for the material change due to the enormous fatigue that they endure. Not only do CMCs have better thermal properties, but they are also lighter, meaning that less fuel would be needed to produce the same thrust for the lighter aircraft. The material change is, however, not without consequences. At high temperatures, these CMCs are reactive with water and form gaseous silicon hydroxide compounds that corrode the CMC. SiO2 + H2O = SiO(OH)2 SiO2 + 2H2O = Si(OH)4 2SiO2 + 3H2O = Si2O(OH)6 The thermodynamic data for these reactions has been experimentally determined over many years and shows that Si(OH)4 is generally the dominant vapor species. Even more advanced environmental barrier coatings are required to protect these CMCs from water vapor as well as other environmental degradants. For instance, as the gas temperatures increase towards 1400 K-1500 K, sand particles begin to melt and react with coatings. 
The melted sand is generally a mixture of calcium oxide, magnesium oxide, aluminum oxide, and silicon oxide (commonly referred to as CMAS). Many research groups are investigating the harmful effects of CMAS on turbine coatings and how to prevent damage. CMAS is a major barrier to increasing the combustion temperature of gas turbine engines, and the problem will need to be solved before turbines see a large increase in efficiency from higher operating temperatures. Processing In industry, thermal barrier coatings are produced in a number of ways: Electron beam physical vapor deposition: EBPVD Air plasma spray: APS High velocity oxygen fuel: HVOF Electrostatic spray-assisted vapor deposition: ESAVD Direct vapor deposition Additionally, the development of advanced coatings and processing methods is a field of active research. One such example is the solution precursor plasma spray process, which has been used to create TBCs with some of the lowest reported thermal conductivities without sacrificing thermal cyclic durability. See also Piezospectroscopy Thermal spraying Zircotec References External links Materials science Thin film deposition Thermal protection
Thermal barrier coating
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,719
[ "Applied and interdisciplinary physics", "Thin film deposition", "Coatings", "Thin films", "Materials science", "nan", "Planes (geometry)", "Solid state engineering" ]
7,068,198
https://en.wikipedia.org/wiki/Natural%20lines%20of%20drift
Natural lines of drift are those paths across terrain that are the most likely to be used when going from one place to another. These paths are paths of least resistance: those that offer the greatest ease while taking into account obstacles (e.g. rivers, cliffs, dense unbroken woodland, etc.) and modes of transit (e.g. pedestrian, automobile, horse). Common endpoints or fixed points may include water sources, food sources, and obstacle passages such as fords or bridges. Local paths may be derived from game trails or from artificial paths created by utility lines or political boundaries. Property ownership and land use may also be factors in determining local variation. Improved paths may also be partially defined by the logistics necessary to build roads or railways. See also Footpath Trail Sources Realistic Human Path Planning using Fluid Simulation Human geography Military geography
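As a computational analogue, the idea of a path of least resistance can be sketched as a least-cost route over a terrain grid, where cell costs stand in for obstacles and ease of movement. The example below is only an illustrative sketch (a simple Dijkstra search on a made-up cost grid); it is not drawn from the sources cited in this article.

```python
import heapq

# Illustrative terrain cost grid: higher numbers = harder to cross
# (e.g. a river or dense woodland); the values are invented for the example.
GRID = [
    [1, 1, 4, 9, 1],
    [1, 2, 4, 9, 1],
    [1, 1, 1, 9, 1],
    [9, 9, 1, 1, 1],
    [1, 1, 1, 9, 1],
]

def least_cost_path(grid, start, goal):
    """Dijkstra search returning the cheapest route between two cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + grid[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return None

print(least_cost_path(GRID, (0, 0), (0, 4)))
# The returned route detours around the high-cost column, much as a natural
# line of drift detours around a river or cliff rather than crossing it.
```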
Natural lines of drift
[ "Environmental_science" ]
171
[ "Environmental social science stubs", "Environmental social science", "Human geography" ]
7,068,306
https://en.wikipedia.org/wiki/Cloudbuster
A cloudbuster is a device designed by Austrian psychoanalyst Wilhelm Reich (1897–1957), which Reich claimed could produce rain by manipulating what he called "orgone energy" present in the atmosphere. The cloudbuster was intended to be used in a way similar to a lightning rod: focusing it on a location in the sky and grounding it in some material that was presumed to absorb orgone—such as a body of water—would draw the orgone energy out of the atmosphere, causing the formation of clouds and rain. Reich conducted dozens of experiments with the cloudbuster, calling the research "Cosmic orgone engineering". There have been no verified instances of a cloudbuster actually working and producing noticeable weather change, such as causing rain. Orgone therapy is seen as pseudoscience. A modern reinvention of the cloudbuster is sold under names such as chembuster, orgone cannon or akasha pillar. It is marketed as a countermeasure against chemtrails (a conspiracy theory relating to aircraft condensation trails). Construction A cloudbuster consists of an array of parallel hollow copper tubes which are connected at the rear to a series of flexible copper hoses of equal or slightly smaller diameter than the parallel tubes. Alternatively, the rear ends of the tubes are joined to a single large-diameter pipe and flexible copper hose. The open ends of these hoses are placed in water, which Reich believed to be a natural orgone absorber. The pipes can be aimed into areas of the sky to purportedly draw energy to the ground like a lightning rod. The remains of one of Reich's cloudbusters can be found in Rangeley, Maine at the Orgone Energy Observatory in the Reich Museum. In popular culture Wilhelm Reich's cloudbuster at Orgonon can be seen in Dušan Makavejev's 1971 film W.R.: Mysteries of the Organism. The cloudbuster was the inspiration for the 1985 song "Cloudbusting" by British singer/songwriter Kate Bush. The song describes Reich's arrest and incarceration through the eyes of his son, Peter, who later wrote the memoir A Book of Dreams (1973). A cloudbuster, bearing only a superficial resemblance to the genuine article, was designed and built for the video. The video, intended by Bush to be a short narrative film rather than a traditional music video, was conceived by Terry Gilliam and Kate Bush, and directed by Julian Doyle. The video stars actor Donald Sutherland as Reich and Bush as his son, Peter. Some chemtrail conspiracy theory believers have built cloudbusters filled with crystals and metal filings, which are pointed at the sky in an attempt to clear it of chemtrails. See also Climate engineering Cloud seeding - a process for dispersing substances into existing clouds to affect precipitation patterns Negative air ionization therapy Rainmaking References Orgonomy Pseudoscience Weather modification
Cloudbuster
[ "Engineering" ]
604
[ "Planetary engineering", "Weather modification" ]
7,068,593
https://en.wikipedia.org/wiki/Active%20safety
The term active safety (or primary safety) is used in two distinct ways. The first, mainly in the United States, refers to automobile safety systems that help avoid accidents, such as good steering and brakes. In this context, passive safety refers to features that help reduce the effects of an accident, such as seat belts, airbags and strong body structures. This use is essentially interchangeable with the terms primary and secondary safety that tend to be used worldwide in standard UK English. The correct ISO term is "primary safety" (ISO 12353-1). However, active safety is increasingly being used to describe systems that use an understanding of the state of the vehicle to both avoid and minimise the effects of a crash. These include braking systems, like brake assist, traction control systems and electronic stability control systems, that interpret signals from various sensors to help the driver control the vehicle. Additionally, forward-looking, sensor-based systems such as advanced driver-assistance systems including adaptive cruise control and collision warning/avoidance/mitigation systems are also considered as active safety systems under this definition. These forward-looking technologies are expected to play an increasing role in collision avoidance and mitigation in the future. Most major component suppliers, such as Aptiv, TRW and Bosch, are developing such systems. However, as they become more sophisticated, questions will need to be addressed regarding driver autonomy and at what point these systems should intervene if they believe a crash is likely. In engineering, active safety systems are systems activated in response to a safety problem or abnormal event. Such systems may be activated by a human operator, automatically by a computer driven system, or even mechanically. In nuclear engineering, active safety contrasts with passive safety in that it relies on operator or computer automated intervention, whereas passive safety systems rely on the laws of nature to make the reactor respond to dangerous events in a favourable manner. Examples The computer operated control rods in a nuclear power station provide an active safety system, whereas a fuel that produces less heat at abnormally high temperatures constitutes a passive safety feature Collision avoidance systems in a modern car Many buildings have interconnected fire alarms that can be triggered manually by pushing a button or breaking a glass plate attached to sensors Automotive sector In the automotive sector the term active safety (or primary safety) refers to safety systems that are active prior to an accident. This has traditionally referred to non-complex systems such as good visibility from the vehicle and low interior noise levels. Nowadays, however, this area contains highly advanced systems such as anti-lock braking systems, electronic stability control and collision warning/avoidance through automatic braking. This compares with passive safety (or secondary safety), which covers systems that are active during an accident. To this category belong seat belts, deformation zones and air-bags, etc. Passive safety systems have advanced considerably over the years, and the automotive industry has shifted its attention to active safety, where many areas remain unexplored. Research today focuses primarily on collision avoidance (with other vehicles, pedestrians and wild animals) and vehicle platooning. 
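As a toy illustration of the forward-looking, sensor-based side of active safety, a collision warning function can be reduced to a time-to-collision (TTC) check: the range to the object ahead divided by the closing speed. The sketch below is a simplified, hypothetical example, not the logic of any particular production system; the threshold values and staging are assumed.

```python
# Simplified time-to-collision (TTC) check of the kind used conceptually in
# forward collision warning. Thresholds and behaviour are assumed for the
# example and do not describe any specific production system.

def time_to_collision(range_m, own_speed_mps, lead_speed_mps):
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return float("inf")  # not closing in on the vehicle ahead
    return range_m / closing_speed

def collision_warning(range_m, own_speed_mps, lead_speed_mps,
                      warn_s=2.5, brake_s=1.0):
    ttc = time_to_collision(range_m, own_speed_mps, lead_speed_mps)
    if ttc < brake_s:
        return "automatic braking"   # intervention stage
    if ttc < warn_s:
        return "driver warning"      # warning stage
    return "no action"

# Example: 30 m gap, ego vehicle at 25 m/s, lead vehicle at 15 m/s -> TTC = 3 s
print(collision_warning(30.0, 25.0, 15.0))
```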
Examples of active safety Good visibility from driver's seat, Low noise level in interior, Legibility of instrumentation and warning symbols, Early warning of severe braking ahead, Head up displays, Good chassis balance and handling, Good grip, Anti-lock braking system, Electronic Stability Control, Chassis assist, Intelligent speed adaptation, Brake assist, Traction control, Collision warning/avoidance, Adaptive or autonomous cruise control system, Electronic brakeforce distribution, Front & rear wipers. Examples of passive safety Passenger safety cell, Crumple zones, Seat belts, Loadspace barrier-nets, Air bags, Laminated glass, Correctly positioned fuel tanks, Fuel pump kill switches. See also Passively safe Intelligent Speed Adaptation (ISA) Electronic Stability Control References External links Continental Automotive Systems TRW Cognitive Safety Systems SafelyThere - Continental Automotive Systems Vehicle Safety Equipment "Drive Safer America" Safety engineering Vehicle safety technologies
Active safety
[ "Engineering" ]
793
[ "Safety engineering", "Systems engineering" ]
7,069,425
https://en.wikipedia.org/wiki/Prentice%20Hall%20International%20Series%20in%20Computer%20Science
Prentice Hall International Series in Computer Science was a series of books on computer science published by Prentice Hall. The series' founding editor was Tony Hoare. Richard Bird subsequently took over editing the series. Many of the books in the series have been in the area of formal methods. Selected books The following books were published in the series: R. S. Bird, Introduction to Functional Programming using Haskell, 2nd edition, 1998. . R. S. Bird and O. de Moor, Algebra of Programming, 1996. . (100th volume in the series.) O.-J. Dahl, Verifiable Programming, 1992. . D. M. Gabbay, Elementary Logics: A Procedural Perspective, 1998. . I. J. Hayes (ed.), Specification Case Studies, 2nd edition, 1993. . M. G. Hinchey and J. P. Bowen (eds.), Applications of Formal Methods, 1996. . C. A. R. Hoare, Communicating Sequential Processes, 1985. hardback or paperback. C. A. R. Hoare and M. J. C. Gordon, Mechanized Reasoning and Hardware Design, 1998. . C. A. R. Hoare and He Jifeng, Unifying Theories of Programming, 1998. . INMOS Limited, Occam 2 Reference Manual, 1988. . Cliff Jones, Systematic Software Development Using VDM, 1986. hardback or paperback. M. Joseph (ed.), Real-Time Systems: Specification, Verification and Analysis, 1996. . Bertrand Meyer, Object-Oriented Software Construction (first edition only). Robin Milner, Communication and Concurrency, 1989. (for the paperback). C. C. Morgan, Programming from Specifications, 2nd edition, 1994. . P. N. Nissanke, Realtime Systems, 1997. . B. Potter, J. Sinclair and D. Till, An Introduction to Formal Specification and Z, 2nd edition, 1996. . A. W. Roscoe (ed.), A Classical Mind: Essays in Honour of C. A. R. Hoare, 1994. . A. W. Roscoe, The Theory and Practice of Concurrency, 1997. . J. M. Spivey, The Z Notation: A Reference Manual, 2nd edition, 1992. . J. C. P. Woodcock and J. W. Davies, Using Z: Specification, Refinement and Proof, 1996. . References Year of establishment missing Year of disestablishment missing Book series Computer science books Formal methods publications
Prentice Hall International Series in Computer Science
[ "Technology" ]
524
[ "Computing stubs", "Computer book stubs" ]
7,069,430
https://en.wikipedia.org/wiki/Kurtosis%20risk
In statistics and decision theory, kurtosis risk is the risk that results when a statistical model assumes the normal distribution, but is applied to observations that have a tendency to occasionally be much farther (in terms of number of standard deviations) from the average than is expected for a normal distribution. Overview Kurtosis risk applies to any kurtosis-related quantitative model that assumes the normal distribution for certain of its independent variables when the latter may in fact have kurtosis much greater than does the normal distribution. Kurtosis risk is commonly referred to as "fat tail" risk. The "fat tail" metaphor explicitly describes the situation of having more observations at either extreme than the tails of the normal distribution would suggest; therefore, the tails are "fatter". Ignoring kurtosis risk will cause any model to understate the risk of variables with high kurtosis. For instance, Long-Term Capital Management, a hedge fund cofounded by Myron Scholes, ignored kurtosis risk to its detriment. After four successful years, this hedge fund had to be bailed out by major investment banks in the late 1990s because it understated the kurtosis of many financial securities underlying the fund's own trading positions. Research by Mandelbrot Benoit Mandelbrot, a French mathematician, extensively researched this issue. He felt that the extensive reliance on the normal distribution for much of the body of modern finance and investment theory is a serious flaw of any related models including the Black–Scholes option model developed by Myron Scholes and Fischer Black, and the capital asset pricing model developed by William F. Sharpe. Mandelbrot explained his views and alternative finance theory in his book: The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward published on August 3, 2004. See also Kurtosis Skewness risk Stochastic volatility Holy grail distribution Taleb distribution The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb Notes References Premaratne, G., Bera, A. K. (2000). Modeling Asymmetry and Excess Kurtosis in Stock Return Data. Office of Research Working Paper Number 00-0123, University of Illinois Normal distribution Investment Risk analysis Mathematical finance
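The practical effect of ignoring kurtosis can be illustrated numerically by comparing a normal distribution with a fat-tailed distribution of the same variance. The sketch below uses a Student-t distribution purely as a convenient fat-tailed stand-in; it illustrates the general point and is not a model of any particular security or fund mentioned above.

```python
# Compare tail behaviour of a normal distribution and a fat-tailed Student-t
# distribution rescaled to unit variance. The Student-t is used here only as
# a convenient stand-in for a high-kurtosis return series.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
df = 5  # degrees of freedom; variance of a t distribution is df / (df - 2)

normal = rng.standard_normal(n)
fat_tailed = rng.standard_t(df, n) / np.sqrt(df / (df - 2))  # unit variance

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

for name, sample in [("normal", normal), ("fat-tailed", fat_tailed)]:
    tail = np.mean(np.abs(sample) > 4)  # frequency of 4-sigma moves
    print(f"{name}: excess kurtosis ~{excess_kurtosis(sample):.2f}, "
          f"P(|x| > 4 sd) ~{tail:.2e}")
# The fat-tailed sample produces far more 4-sigma observations than the normal
# model predicts, which is exactly the risk a normality assumption understates.
```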
Kurtosis risk
[ "Mathematics" ]
464
[ "Applied mathematics", "Mathematical finance" ]
7,069,871
https://en.wikipedia.org/wiki/John%20A.%20Pyle
John Adrian Pyle is a British atmospheric scientist, Director of the Centre for Atmospheric Science in Cambridge, England. He is a Professor in the Department of Chemistry at the University of Cambridge, and since 2007 has held the 1920 Chair of Physical Chemistry. He is also a Fellow of the Royal Society and of St Catharine's College, Cambridge. Education Pyle was educated at De La Salle College, Salford, and gained his Bachelor of Science degree in Physics at Durham University and his DPhil from Jesus College, Oxford, in 1978. Research Pyle is known for his extensive work on atmospheric chemistry and its interactions with climate. His early research focused on issues related to stratospheric ozone depletion, but in the following decades his work expanded into a variety of chemistry- and climate-related fields. Pyle was appointed Commander of the Order of the British Empire (CBE) in the 2017 New Year Honours for services to atmospheric chemistry and environmental science. References Living people Fellows of St Catharine's College, Cambridge Fellows of the Royal Society Place of birth missing (living people) Atmospheric chemists 1951 births Alumni of Grey College, Durham Members of the University of Cambridge Department of Chemistry Alumni of the University of Oxford British physical chemists Commanders of the Order of the British Empire Professors of Physical Chemistry (Cambridge)
John A. Pyle
[ "Chemistry" ]
271
[ "Professors of Physical Chemistry (Cambridge)", "Physical chemists" ]
7,070,301
https://en.wikipedia.org/wiki/Telescope
A telescope is a device used to observe distant objects by their emission, absorption, or reflection of electromagnetic radiation. Originally, it was an optical instrument using lenses, curved mirrors, or a combination of both to observe distant objects – an optical telescope. Nowadays, the word "telescope" is defined as a wide range of instruments capable of detecting different regions of the electromagnetic spectrum, and in some cases other types of detectors. The first known practical telescopes were refracting telescopes with glass lenses and were invented in the Netherlands at the beginning of the 17th century. They were used for both terrestrial applications and astronomy. The reflecting telescope, which uses mirrors to collect and focus light, was invented within a few decades of the first refracting telescope. In the 20th century, many new types of telescopes were invented, including radio telescopes in the 1930s and infrared telescopes in the 1960s. Etymology The word telescope was coined in 1611 by the Greek mathematician Giovanni Demisiani for one of Galileo Galilei's instruments presented at a banquet at the Accademia dei Lincei. In the Starry Messenger, Galileo had used the Latin term . The root of the word is from the Ancient Greek τῆλε, tele 'far' and σκοπεῖν, skopein 'to look or see'; τηλεσκόπος, teleskopos 'far-seeing'. History The earliest existing record of a telescope was a 1608 patent submitted to the government in the Netherlands by Middelburg spectacle maker Hans Lipperhey for a refracting telescope. The actual inventor is unknown but word of it spread through Europe. Galileo heard about it and, in 1609, built his own version, and made his telescopic observations of celestial objects. The idea that the objective, or light-gathering element, could be a mirror instead of a lens was being investigated soon after the invention of the refracting telescope. The potential advantages of using parabolic mirrors—reduction of spherical aberration and no chromatic aberration—led to many proposed designs and several attempts to build reflecting telescopes. In 1668, Isaac Newton built the first practical reflecting telescope, of a design which now bears his name, the Newtonian reflector. The invention of the achromatic lens in 1733 partially corrected color aberrations present in the simple lens and enabled the construction of shorter, more functional refracting telescopes. Reflecting telescopes, though not limited by the color problems seen in refractors, were hampered by the use of fast tarnishing speculum metal mirrors employed during the 18th and early 19th century—a problem alleviated by the introduction of silver coated glass mirrors in 1857, and aluminized mirrors in 1932. The maximum physical size limit for refracting telescopes is about , dictating that the vast majority of large optical researching telescopes built since the turn of the 20th century have been reflectors. The largest reflecting telescopes currently have objectives larger than , and work is underway on several 30–40m designs. The 20th century also saw the development of telescopes that worked in a wide range of wavelengths from radio to gamma-rays. The first purpose-built radio telescope went into operation in 1937. Since then, a large variety of complex astronomical instruments have been developed. In space Since the atmosphere is opaque for most of the electromagnetic spectrum, only a few bands can be observed from the Earth's surface. 
These bands are visible – near-infrared and a portion of the radio-wave part of the spectrum. For this reason there are no X-ray or far-infrared ground-based telescopes as these have to be observed from orbit. Even if a wavelength is observable from the ground, it might still be advantageous to place a telescope on a satellite due to issues such as clouds, astronomical seeing and light pollution. The disadvantages of launching a space telescope include cost, size, maintainability and upgradability. Some examples of space telescopes from NASA are the Hubble Space Telescope that detects visible light, ultraviolet, and near-infrared wavelengths, the Spitzer Space Telescope that detects infrared radiation, and the Kepler Space Telescope that discovered thousands of exoplanets. The latest telescope that was launched was the James Webb Space Telescope on December 25, 2021, in Kourou, French Guiana. The Webb telescope detects infrared light. By electromagnetic spectrum The name "telescope" covers a wide range of instruments. Most detect electromagnetic radiation, but there are major differences in how astronomers must go about collecting light (electromagnetic radiation) in different frequency bands. As wavelengths become longer, it becomes easier to use antenna technology to interact with electromagnetic radiation (although it is possible to make very tiny antenna). The near-infrared can be collected much like visible light; however, in the far-infrared and submillimetre range, telescopes can operate more like a radio telescope. For example, the James Clerk Maxwell Telescope observes from wavelengths from 3 μm (0.003 mm) to 2000 μm (2 mm), but uses a parabolic aluminum antenna. On the other hand, the Spitzer Space Telescope, observing from about 3 μm (0.003 mm) to 180 μm (0.18 mm) uses a mirror (reflecting optics). Also using reflecting optics, the Hubble Space Telescope with Wide Field Camera 3 can observe in the frequency range from about 0.2 μm (0.0002 mm) to 1.7 μm (0.0017 mm) (from ultra-violet to infrared light). With photons of the shorter wavelengths, with the higher frequencies, glancing-incident optics, rather than fully reflecting optics are used. Telescopes such as TRACE and SOHO use special mirrors to reflect extreme ultraviolet, producing higher resolution and brighter images than are otherwise possible. A larger aperture does not just mean that more light is collected, it also enables a finer angular resolution. Telescopes may also be classified by location: ground telescope, space telescope, or flying telescope. They may also be classified by whether they are operated by professional astronomers or amateur astronomers. A vehicle or permanent campus containing one or more telescopes or other instruments is called an observatory. Radio and submillimeter Radio telescopes are directional radio antennas that typically employ a large dish to collect radio waves. The dishes are sometimes constructed of a conductive wire mesh whose openings are smaller than the wavelength being observed. Unlike an optical telescope, which produces a magnified image of the patch of sky being observed, a traditional radio telescope dish contains a single receiver and records a single time-varying signal characteristic of the observed region; this signal may be sampled at various frequencies. In some newer radio telescope designs, a single dish contains an array of several receivers; this is known as a focal-plane array. 
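The statement above that a larger aperture enables a finer angular resolution can be made concrete with the diffraction limit, commonly written as theta ≈ 1.22 λ/D (the Rayleigh criterion). The short sketch below simply evaluates that formula for a few aperture and wavelength combinations; the example instruments and numbers are rounded, representative values chosen for illustration, not data from this article.

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0  # convert radians to arcseconds

# Rounded, illustrative cases (wavelength in metres, aperture in metres).
cases = [
    ("10 cm refractor, visible light", 550e-9, 0.10),
    ("2.4 m mirror, visible light",    550e-9, 2.4),
    ("100 m radio dish, 21 cm line",   0.21,   100.0),
]

for name, lam, d in cases:
    print(f"{name}: ~{rayleigh_limit_arcsec(lam, d):.2g} arcsec")
# Doubling the aperture halves the diffraction limit, and long radio
# wavelengths need very large apertures, or interferometer baselines as in
# aperture synthesis, to reach comparable resolution.
```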
By collecting and correlating signals simultaneously received by several dishes, high-resolution images can be computed. Such multi-dish arrays are known as astronomical interferometers and the technique is called aperture synthesis. The 'virtual' apertures of these arrays are similar in size to the distance between the telescopes. As of 2005, the record array size is many times the diameter of the Earth – using space-based very-long-baseline interferometry (VLBI) telescopes such as the Japanese HALCA (Highly Advanced Laboratory for Communications and Astronomy) VSOP (VLBI Space Observatory Program) satellite. Aperture synthesis is now also being applied to optical telescopes using optical interferometers (arrays of optical telescopes) and aperture masking interferometry at single reflecting telescopes. Radio telescopes are also used to collect microwave radiation, which has the advantage of being able to pass through the atmosphere and interstellar gas and dust clouds. Some radio telescopes such as the Allen Telescope Array are used by programs such as SETI and the Arecibo Observatory to search for extraterrestrial life. Infrared Visible light An optical telescope gathers and focuses light mainly from the visible part of the electromagnetic spectrum. Optical telescopes increase the apparent angular size of distant objects as well as their apparent brightness. For the image to be observed, photographed, studied, and sent to a computer, telescopes work by employing one or more curved optical elements, usually made from glass lenses and/or mirrors, to gather light and other electromagnetic radiation to bring that light or radiation to a focal point. Optical telescopes are used for astronomy and in many non-astronomical instruments, including: theodolites (including transits), spotting scopes, monoculars, binoculars, camera lenses, and spyglasses. There are three main optical types: The refracting telescope which uses lenses to form an image. The reflecting telescope which uses an arrangement of mirrors to form an image. The catadioptric telescope which uses mirrors combined with lenses to form an image. A Fresnel imager is a proposed ultra-lightweight design for a space telescope that uses a Fresnel lens to focus light. Beyond these basic optical types there are many sub-types of varying optical design classified by the task they perform such as astrographs, comet seekers and solar telescopes. Ultraviolet Most ultraviolet light is absorbed by the Earth's atmosphere, so observations at these wavelengths must be performed from the upper atmosphere or from space. X-ray X-rays are much harder to collect and focus than electromagnetic radiation of longer wavelengths. X-ray telescopes can use X-ray optics, such as Wolter telescopes composed of ring-shaped 'glancing' mirrors made of heavy metals that are able to reflect the rays just a few degrees. The mirrors are usually a section of a rotated parabola and a hyperbola, or ellipse. In 1952, Hans Wolter outlined 3 ways a telescope could be built using only this kind of mirror. Examples of space observatories using this type of telescope are the Einstein Observatory, ROSAT, and the Chandra X-ray Observatory. In 2012 the NuSTAR X-ray Telescope was launched which uses Wolter telescope design optics at the end of a long deployable mast to enable photon energies of 79 keV. 
Gamma ray Higher energy X-ray and gamma ray telescopes refrain from focusing completely and use coded aperture masks: the patterns of the shadow the mask creates can be reconstructed to form an image. X-ray and Gamma-ray telescopes are usually installed on high-flying balloons or Earth-orbiting satellites since the Earth's atmosphere is opaque to this part of the electromagnetic spectrum. An example of this type of telescope is the Fermi Gamma-ray Space Telescope which was launched in June 2008. The detection of very high energy gamma rays, with shorter wavelength and higher frequency than regular gamma rays, requires further specialization. Such detections can be made either with the Imaging Atmospheric Cherenkov Telescopes (IACTs) or with Water Cherenkov Detectors (WCDs). Examples of IACTs are H.E.S.S. and VERITAS with the next-generation gamma-ray telescope, the Cherenkov Telescope Array (CTA), currently under construction. HAWC and LHAASO are examples of gamma-ray detectors based on the Water Cherenkov Detectors. A discovery in 2012 may allow focusing gamma-ray telescopes. At photon energies greater than 700 keV, the index of refraction starts to increase again. Lists of telescopes List of optical telescopes List of largest optical reflecting telescopes List of largest optical refracting telescopes List of largest optical telescopes historically List of radio telescopes List of solar telescopes List of space observatories List of telescope parts and construction List of telescope types See also Airmass Amateur telescope making Angular resolution ASCOM open standards for computer control of telescopes Bahtinov mask Binoculars Bioptic telescope Carey mask Dew shield Dynameter f-number First light Hartmann mask Keyhole problem Microscope Planetariums Remote Telescope Markup Language Robotic telescope Timeline of telescope technology Timeline of telescopes, observatories, and observing technology References Further reading External links Galileo to Gamma Cephei – The History of the Telescope. The Galileo Project – The Telescope by Al Van Helden "The First Telescopes". Part of an exhibit from Cosmic Journey: A History of Scientific Cosmology. by the American Institute of Physics Outside the Optical: Other Kinds of Telescopes Astronomical imaging Astronomical instruments Dutch inventions
Telescope
[ "Astronomy" ]
2,491
[ "Telescopes", "Astronomical instruments" ]
7,070,302
https://en.wikipedia.org/wiki/Alagoas%20curassow
The Alagoas curassow (Mitu mitu) is a glossy-black, pheasant-like bird. It was formerly found in forests in Northeastern Brazil in what is now the states of Pernambuco and Alagoas, which is the origin of its common name. It is now extinct in the wild; there are about 130 individuals in captivity. German naturalist Georg Marcgrave first identified the Alagoas curassow in 1648 in its native range. Subsequently, the origin and legitimacy of the bird began to be questioned due to the lack of specimens. An adult female curassow was rediscovered in 1951, in the coastal forests of Alagoas. The Mitu mitu was then accepted as a separate species. At that time fewer than 60 birds were left in the wild, in the forests around São Miguel dos Campos. Several authors in the 1970s brought to light the growing destruction of its habitat and the rarity of the species. Even with these concerns, the last large forest remnants which contained native Mitu mitu were cleared for sugarcane agriculture. Description The Alagoas curassow measures approximately in length. Feathers covering its body are black and glossy, with a blue-purple hue. Specimens of Mitu mitu also have a large, bright red beak, flattened at its sides, with a white tip. The same red coloration is found on the legs and feet. The tips of its tail feathers are light brown in color, with chestnut colored feathers under the tail. It has a unique grey-colored, crescent-shaped patch of bare skin covering its ears, a character not found in other curassows. This distinctive coloration separates M. mitu from other curassow species. Sexual dimorphism is not pronounced: females tend to be lighter in color and slightly smaller in size. The birds can live to more than twenty-four years in captivity. Video recordings in captivity show that this cracid sporadically makes a high-pitched chirping sound. Population Since 1977, the entire Mitu mitu population has been in captivity. The population numbered 44 in 2000, and by 2008, there were 130 birds in two aviaries. About 35% of the birds were hybrids with M. tuberosum. Habitat and ecology Mitu mitu's native habitat is subtropical/tropical moist lowland primary forest, where it was known to consume fruit of Phyllanthus, Eugenia and "mangabeira." It has been extirpated from its native range in Alagoas and Pernambuco states, Northeastern Brazil. Breeding habits Because these cracids were little studied before their extinction in the wild, not much is known about their breeding habits outside of captivity. Alagoas curassow females begin reproducing at about 2 years old. In captivity, they produce about 2–3 eggs each year. There has been greater genetic variability among the Alagoas curassow since 1990, when hybrid breeding programs were introduced; Alagoas curassows were bred with closely related razor-billed curassows. Taxonomy The Alagoas curassow was first mentioned by German naturalist Georg Marcgrave in his work Historia Naturalis Brasiliae which was published in 1648. Because of the lack of information and specimens, it was considered conspecific with the common razor-billed curassow, until its rediscovery in 1951 in the Alagoas lowland forests, Brazil. Following the review of Pereira & Baker (2004), they are today believed to be a fairly basal lineage of its genus, related to the crestless curassow, the other Mitu species with brown eumelanin in the tail tip. 
Its lineage has been distinct since the Miocene-Pliocene boundary (approximately 5 million years ago), when it became isolated in refugia in the Atlantic Forest. Conservation efforts As this species is extinct in the wild, the total population of 130 birds only persists in two separate captive populations. A reintroduction plan is being organized, though it faces challenges. Even if the population could be bred to healthy numbers, the species would need to be reintroduced into a large natural geographical area. Human expansion and overpopulation has caused nearly all of the Alagoas curassow's natural habitat to be destroyed. One potential reintroduction site has been proposed. Precautions would have to be taken in order to prevent illegal hunting of the species after reintroduction. Status The Alagoas curassow became extinct in the wild due to deforestation and hunting. The last wild Alagoas curassow was seen and killed in 1984, or possibly 1987 or 1988. The captive population has been extensively hybridized with the razor-billed curassow, and there are several dozen purebred birds left. These are being maintained and bred in two privately owned professional aviaries in Brazil mainly due to lack of official interest owing to the long-standing doubt about the taxon's validity. Diet and interactions The Alagoas curassow is known to consume a diet of fruits and nuts. Although not much information is known about this species' interactions and behavior in the wild, the stomach contents of these birds were found to contain fruits specifically from the castelo tree. It has also been said that they enjoy fruits from the plant Clarisia racemosa. Generally, the female birds weigh less than the males and lay about 2–3 eggs a year. The average lifespan in captivity is about 24 years. The lack of knowledge about their behavior in the wild makes it difficult to know how the birds interact with other species. The impact of their introduction on interactions with other species is difficult to predict. For instance, the Chamek spider monkey also eats Clarisia racemosa, which could lead to competition with the Alagoas curassow. A lack of genetic diversity is another potential concern. Scientists have been controlling the sexual interactions within the species by pairing certain birds together in order to reduce hybridization and maintain the original Alagoas curassow. Future of the species With the objective to preserve the species and to increase genetic variability in the population, the "original" stock had their DNA examined by scientists in order to guide future pairings. Once a captive population has been successfully created, they can start being reintroduced back into the wild. The more ideal locations would be large forest remnants, such as those located at Usina Utinga-Leão and Usina Serra Grande. Footnotes References BirdLife International (2000): Alagoas Curassow. In: Threatened Birds of the World: 132. Lynx Edicions & BirdLife International, Barcelona & Cambridge, UK. Alagoas Curassow (Mitu Mitu). Arkive. Web. 24 October 2013. Kirwan, Guy M. Mitu Mitu. Neotropical Birds Online. Web. 24 October 2013. Further reading External links BirdLife Species Factsheet Video of Alagoas curassow in captivity Mitu (bird) Curassows Birds of the Atlantic Forest Endemic birds of Brazil Birds described in 1766 Species extinct in the wild Taxa named by Carl Linnaeus
Alagoas curassow
[ "Biology" ]
1,492
[ "Species extinct in the wild", "Biota by conservation status" ]
7,070,579
https://en.wikipedia.org/wiki/Genetics%20of%20aggression
The field of psychology has been greatly influenced by the study of genetics. Decades of research have demonstrated that both genetic and environmental factors play a role in a variety of behaviors in humans and animals (e.g. Grigorenko & Sternberg, 2003). The genetic basis of aggression, however, remains poorly understood. Aggression is a multi-dimensional concept, but it can be generally defined as behavior that inflicts pain or harm on another. The genetic-developmental theory states that individual differences in a continuous phenotype result from the action of a large number of genes, each exerting an effect that works with environmental factors to produce the trait. This type of trait is influenced by multiple factors making it more complex and difficult to study than a simple Mendelian trait (one gene for one phenotype). History Past thoughts on genetic factors influencing aggression, specifically in regard to sex chromosomes, tended to seek answers from chromosomal abnormalities. Four decades ago, the XYY genotype was (erroneously) believed by many to be correlated with aggression. In 1965 and 1966, researchers at the MRC Clinical & Population Cytogenetics Research Unit led by Dr. Court Brown at Western General Hospital in Edinburgh reported finding a much higher than expected nine XYY men (2.9%) averaging almost 6 ft. tall in a survey of 314 patients at the State Hospital for Scotland; seven of the nine XYY patients were mentally retarded. In their initial reports published before examining the XYY patients, the researchers suggested they might have been hospitalized because of aggressive behavior. When the XYY patients were examined, the researchers found their assumptions of aggressive behavior were incorrect. Unfortunately, many science and medicine textbooks quickly and uncritically incorporated the initial, incorrect assumptions about XYY and aggression—including psychology textbooks on aggression. The XYY genotype first gained wide notoriety in 1968 when it was raised as a part of a defense in two murder trials in Australia and France. In the United States, five attempts to use the XYY genotype as a defense were unsuccessful—in only one case in 1969 was it allowed to go to a jury—which rejected it. Results from several decades of long-term follow-up of scores of unselected XYY males identified in eight international newborn chromosome screening studies in the 1960s and 1970s have replaced pioneering but biased studies from the 1960s (that used only institutionalized XYY men), as the basis for current understanding of the XYY genotype and established that XYY males are characterized by increased height but are not characterized by aggressive behavior. Though the link currently between genetics and aggression has turned to an aspect of genetics different from chromosomal abnormalities, it is important to understand where the research started and the direction it is moving in today. Heritability As with other topics in behavioral genetics, aggression is studied in three main experimental ways to help identify what role genetics plays in the behavior: Heritability studies – studies focused to determine whether a trait, such as aggression, is heritable and how it is inherited from parent to offspring. These studies make use of genetic linkage maps to identify genes associated with certain behaviors such as aggression. Mechanism experiments – studies to determine the biological mechanisms that lead certain genes to influence types of behavior like aggression. 
Genetic behavior correlation studies – studies that use scientific data and attempt to correlate it with actual human behavior. Examples include twin studies and adoption studies. These three main experimental types are used in animal studies, studies testing heritability and molecular genetics, and gene/environment interaction studies. Recently, important links between aggression and genetics have been studied and the results are allowing scientists to better understand the connections. Selective breeding The heritability of aggression has been observed in many animal strains after noting that some strains of birds, dogs, fish, and mice seem to be more aggressive than other strains. Selective breeding has demonstrated that it is possible to select for genes that lead to more aggressive behavior in animals. Selective breeding examples also allow researchers to understand the importance of developmental timing for genetic influences on aggressive behavior. A study done in 1983 (Cairns) produced both highly aggressive male and female strains of mice dependent on certain developmental periods to have this more aggressive behavior expressed. These mice were not observed to be more aggressive during the early and later stages of their lives, but during certain periods of time (in their middle-age period) were more violent and aggressive in their attacks on other mice. Selective breeding is a quick way to select for specific traits and see those selected traits within a few generations of breeding. These characteristics make selective breeding an important tool in the study of genetics and aggressive behavior. Mouse studies Mice are often used as a model for human genetic behavior since mice and humans have homologous genes coding for homologous proteins that are used for similar functions at some biological levels. Mice aggression studies have led to some interesting insight in human aggression. Using reverse genetics, the DNA of genes for the receptors of many neurotransmitters have been cloned and sequenced, and the role of neurotransmitters in rodent aggression has been investigated using pharmacological manipulations. Serotonin has been identified in the offensive attack by male mice against intruder male mice. Mutants were made by manipulating a receptor for serotonin by deleting a gene for the serotonin receptor. These mutant male mice with the knockout alleles exhibited normal behavior in everyday activities such as eating and exploration, but when prompted, attacked intruders with twice the intensity of normal male mice. In offense aggression in mice, males with the same or similar genotypes were more likely to fight than males that encountered males of other genotypes. Another interesting finding in mice dealt with mice reared alone. These mice showed a strong tendency to attack other male mice upon their first exposure to the other animals. The mice reared alone were not taught to be more aggressive; they simply exhibited the behavior. This implicates the natural tendency related to biological aggression in mice since the mice reared alone lacked a parent to model aggressive behavior. Oxidative stress arises as a result of excess production of reactive oxygen species in relation to defense mechanisms, including the action of antioxidants such as superoxide dismutase 1 (SOD1). Knockout of the Sod1 gene was experimentally introduced in male mice leading to impaired antioxidant defense. These mice were designated (Sod1-/-). 
The Sod1-/- male mice proved to be more aggressive than both heterozygous knockout males (Sod1+/-), which were 50% deficient in SOD1, and wild-type males (Sod1+/+). The basis for the association of oxidative stress with increased aggression has not yet been determined. Biological mechanisms Experiments designed to study biological mechanisms are utilized when exploring how aggression is influenced by genetics. Molecular genetics studies allow many different types of behavioral traits to be examined by manipulating genes and studying the effect(s) of the manipulation. Molecular genetics A number of molecular genetics studies have focused on manipulating candidate aggression genes in mice and other animals to induce effects that may be applicable to humans. Most studies have focused on polymorphisms of serotonin receptors, dopamine receptors, and neurotransmitter-metabolizing enzymes. Results of these studies have led to linkage analyses mapping serotonin-related genes to impulsive aggression and dopamine-related genes to proactive aggression. In particular, serotonin (5-HT) seems to influence inter-male aggression, either directly or through other molecules that use the 5-HT pathway. 5-HT normally dampens aggression in animals and humans. Mice missing specific genes for 5-HT receptors were observed to be more aggressive than normal mice and were more rapid and violent in their attacks. Other studies have focused on neurotransmitters. A mutation in the gene for the neurotransmitter-metabolizing enzyme monoamine oxidase A (MAO-A) has been shown to cause a syndrome that includes violence and impulsivity in humans. Studies of these molecular genetic pathways are guiding the development of pharmaceuticals intended to correct pathway problems and, it is hoped, produce an observable change in aggressive behavior. Human behavior genetics In determining whether a trait is related to genetic factors or environmental factors, twin studies and adoption studies are used. These studies examine correlations based on the similarity of a trait and a person's genetic or environmental factors that could influence the trait. Aggression has been examined via both twin studies and adoption studies. The human genetics of aggression have been studied and candidate genes have been identified. The DAT1 and DRD2 genes are heavily implicated in the genetics of aggression. The DAT1 gene plays a role through its close involvement in the regulation of neurotransmission. The DRD2 gene has been associated with the pursuit of seemingly rewarding behaviors such as drug abuse. Studies suggest that DRD2 is a risk factor for delinquency in children who have experienced family trauma. DAT1 is a gene that regulates dopamine levels in the brain, and studies have revealed that variations in the DAT1 gene are correlated with higher levels of aggression; some people with variations of the DAT1 gene exhibit more aggressive behaviors. DRD2 influences how the brain responds to dopamine. Certain combinations of DAT1 and DRD2 variants have been linked to increases or decreases in aggressive behavior. While the relationship remains unclear, there is a correlation between particular forms of DAT1 and DRD2 and their various combinations. Changes in these genes can cause changes in neurotransmitter levels, and when typical neurotransmitter levels change, other functions are also affected; examples include intelligence, mood, and memory. 
Twin studies Twin studies typically compare identical and fraternal twins. They aim to reveal the importance of environmental and genetic influences for traits, phenotypes, and disorders. Before the advancement of molecular genetics, twin studies were almost the only mode of investigation of genetic influences on personality. Heritability was estimated as twice the difference between the correlation for identical, or monozygotic, twins and that for fraternal, or dizygotic, twins. Early studies indicated that personality was fifty percent genetic. Current thinking holds that each individual picks and chooses from a range of stimuli and events largely on the basis of their genotype, creating a unique set of experiences; in effect, people create their own environments. It has also been proposed that there is a window in childhood during which certain traumatic events can trigger a lifetime of aggressive behavior. Adoption studies Adoption studies are research designs that involve comparing traits between an adopted child and both their biological and adoptive parents. These experiments aim to assess the biological and environmental factors that may contribute to aggression. Adoption studies have shown stronger similarities between adopted children and their biological parents, indicating that there is a genetic component at play. However, children have also shown similarities with their adoptive parents, indicating that there are environmental factors as well. These studies further support the complex nature of aggression by showing that both biological and environmental factors are involved. More research is needed to establish the causes of aggression. Genetics of aggression over time Over time, studies pertaining to the genetics of aggression have improved and become an active research topic. Experiments started in 1963 with Milgram's obedience experiment. The results of this experiment showed that in certain situations people can be coaxed into aggression and violence. The next major experiment in this line of research was the Stanford prison experiment, conducted in 1971. The conclusion drawn from this experiment was that aggression can emerge unexpectedly and is often "elicited" by the situation. It also revealed that aggression is often triggered in situations where someone believes they hold powerful authority over another person. It was concluded from both experiments that social factors play a large role in the way people act aggressively. It was also concluded that every person has the potential for aggressive behavior, but that people differ in the threshold required to trigger it. Male vs female aggression Aggression can manifest in different ways between biological males and females. A study evaluated these differences by using EEG and ECG to monitor neurobiological responses to aggravating stimuli. It was shown that anger and physical aggression were much greater in men than in women. Men also scored higher on a scale regarding reactive aggression. The EEG test also supported the idea that women show weaker responses regarding aggression. It was also shown that men and women follow different pathways in the brain when aggression is invoked, although further studies are needed in order to solidify these findings. 
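The twin-study heritability estimate described above, twice the difference between the monozygotic and dizygotic twin correlations, is often written as Falconer's formula. The short sketch below is illustrative only; the correlation values are hypothetical and are not taken from any study cited in this article.

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer-style heritability estimate: twice the difference between
    the identical (monozygotic) and fraternal (dizygotic) twin correlations."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical correlations chosen only to illustrate the arithmetic:
# if identical twins correlate at 0.70 and fraternal twins at 0.45,
# the estimated heritability is 2 * (0.70 - 0.45) = 0.50, i.e. roughly
# half of the variation attributed to genetic factors.
print(falconer_h2(0.70, 0.45))  # 0.5
```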
Environmental factors Aggression can have many causes, including environmental factors. These include any physical, chemical, or biological factors in the environment that can influence aggression. Studies have shown that neighborhood greenspace can substantially reduce aggressive behaviors in children and adolescents. One proposed explanation for this finding is that greenspace has been shown to reduce stress and depression. Higher stress and depression levels in parents have been shown to increase aggressive behaviors in children. By lowering stress and depression in parents, children are more likely to show a decrease in aggressive behaviors. In addition, greenspace promotes participation in physical activity and social involvement. Another study revealed that low-frequency, high-intensity, and continuous noises were associated with more aggressive behaviors. See also Anthropological criminology Behavioural genetics Notes References Grigorenko, E.L. & Sternberg, R.J. (2003). The nature-nurture issue. In A. Slater & G. Bremner (Eds.), An introduction to developmental psychology. Malden, MA: Blackwell. Pomp, D. (2010). Genomic mapping of social behavior traits in an F2 cross derived from mice selectively bred for high aggression. BMC Genetics, 11:113. doi:10.1186/1471-2156-11-113 Aggression Animal genetics Forensic psychology Criminology Behavioural sciences Behavioral neuroscience Behavioural genetics
Genetics of aggression
[ "Biology" ]
2,904
[ "Behavior", "Behavioral neuroscience", "Behavioural sciences", "Aggression", "Human behavior" ]
7,070,643
https://en.wikipedia.org/wiki/Petroleum%20geologist
A petroleum geologist is an earth scientist who works in the field of petroleum geology, which involves all aspects of oil discovery and production. Petroleum geologists are usually linked to the actual discovery of oil and the identification of possible oil deposits, gas caps, or leads. It can be a very labor-intensive task involving several different fields of science and elaborate equipment. Petroleum geologists look at the structural and sedimentary aspects of the stratum/strata to identify possible oil traps or tight shale plays. Profile Petroleum geologists decide where to drill for petroleum. This is done by locating prospects within a sedimentary basin. Petroleum geologists determine a prospect's viability by looking at seven main aspects in conventional petroleum geology: Source: the presence of an organic-rich source rock capable of generating hydrocarbons during deep burial. Reservoir: the usually porous and permeable rock unit that collects the hydrocarbons expelled from the source rock and holds them inside a trap. Seal: the rock unit that inhibits the oil or gas from escaping from a hydrocarbon-bearing reservoir rock. Trap: structural or stratigraphic feature that captures migrating hydrocarbons into an economically producible accumulation. Timing: geologic events must occur in a certain order, e.g. that the trap formed before migration rather than after. Maturation: the process of alteration of a source rock under heat and pressure, leading to the cracking of its organic matter into oil and gas. Migration: the movement of the (less dense) oil or gas from the source rock into a reservoir rock and then into a trap. These seven key aspects require the petroleum geologist to obtain a 4-dimensional idea of the subsurface (the three spatial dimensions, plus time). Data may be obtained via geophysical methods. Geophysical surveys record seismic data from elastic waves, mainly by seismic reflection. This provides a 3-dimensional look at the trap and source rock. More data may be obtained from the mudlogger, who analyzes the drill cuttings and the rock formation thicknesses. Today, there are also unconventional tight plays. Petroleum geologists for these plays work with petroleum engineers and other specialists to decide where to drill for oil. Data is also obtained via geophysical methods (the same as conventional plays, plus fracture data), but these are now analyzed with various statistical methods. The geological analysis is done by looking at a combination of geological aspects, with completion analogs. The geological aspects are as follows: Source: the presence of an organic-rich source rock. Unlike conventional plays, where the source rock typically underlies the reservoir rock and the oil/gas migrates into the reservoir, tight shale plays can be their own source rock. Reservoir: a usually porous rock unit of lower permeability. This rock could have collected hydrocarbons expelled from a source rock, or be its own source rock. Seal: often, due to the low permeability, oil/gas is unable to migrate out of this rock, but it is common to also have a sealing rock above the reservoir rock that inhibits further migration of oil or gas. Timing: geologic events must occur in a certain order, e.g. a seal to trap gas must be in place before kerogen cracking. Maturation: the process of alteration of a source rock under heat and pressure, leading to the cracking of its organic matter into oil and gas. 
Migration: the movement of the (less dense) oil or gas from the source rock into a reservoir rock and then into a trap. The 'trap' aspect is absent. Tight shale plays, or unconventional plays, do not require a trap to contain hydrocarbons due to the low permeability preventing further migration. See also Petroleum industry Petroleum geology Science occupations
Petroleum geologist
[ "Chemistry" ]
770
[ "Petroleum", "Petroleum geology", "Petroleum stubs" ]
7,071,096
https://en.wikipedia.org/wiki/Engineering%20design%20process
The engineering design process, also known as the engineering method, is a common series of steps that engineers use in creating functional products and processes. The process is highly iterative – parts of the process often need to be repeated many times before another can be entered – though the part(s) that get iterated and the number of such cycles in any given project may vary. It is a decision making process (often iterative) in which the engineering sciences, basic sciences and mathematics are applied to convert resources optimally to meet a stated objective. Among the fundamental elements of the design process are the establishment of objectives and criteria, synthesis, analysis, construction, testing and evaluation. Common stages of the engineering design process It's important to understand that there are various framings/articulations of the engineering design process. Different terminology employed may have varying degrees of overlap, which affects what steps get stated explicitly or deemed "high level" versus subordinate in any given model. This, of course, applies as much to any particular example steps/sequences given here. One example framing of the engineering design process delineates the following stages: research, conceptualization, feasibility assessment, establishing design requirements, preliminary design, detailed design, production planning and tool design, and production. Others, noting that "different authors (in both research literature and in textbooks) define different phases of the design process with varying activities occurring within them," have suggested more simplified/generalized models – such as problem definition, conceptual design, preliminary design, detailed design, and design communication. Another summary of the process, from European engineering design literature, includes clarification of the task, conceptual design, embodiment design, detail design. (NOTE: In these examples, other key aspects – such as concept evaluation and prototyping – are subsets and/or extensions of one or more of the listed steps.) Research Various stages of the design process (and even earlier) can involve a significant amount of time spent on locating information and research. Consideration should be given to the existing applicable literature, problems and successes associated with existing solutions, costs, and marketplace needs. The source of information should be relevant. Reverse engineering can be an effective technique if other solutions are available on the market. Other sources of information include the Internet, local libraries, available government documents, personal organizations, trade journals, vendor catalogs and individual experts available. Design requirements Establishing design requirements and conducting requirement analysis, sometimes termed problem definition (or deemed a related activity), is one of the most important elements in the design process in certain industries, and this task is often performed at the same time as a feasibility analysis. The design requirements control the design of the product or process being developed, throughout the engineering design process. These include basic things like the functions, attributes, and specifications – determined after assessing user needs. Some design requirements include hardware and software parameters, maintainability, availability, and testability. 
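Because the design requirements are meant to control the design throughout the rest of the process, teams often record them in a form that can be traced and checked in later phases. The sketch below is a minimal, hypothetical illustration of that idea; the field names and example entries are assumptions made for illustration and are not drawn from any specific methodology described in this article.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One design requirement, stated so it can be verified later."""
    name: str           # e.g. "availability"
    target: str         # e.g. ">= 99% uptime"
    category: str       # "function", "attribute", or "specification"
    verification: str   # how it will be checked: analysis, inspection, or test

@dataclass
class RequirementSet:
    """A simple container that can travel with the design through later phases."""
    requirements: list = field(default_factory=list)

    def add(self, req: Requirement) -> None:
        self.requirements.append(req)

    def unverifiable(self) -> list:
        # Requirements with no stated verification method flag testability gaps early.
        return [r for r in self.requirements if not r.verification.strip()]

# Hypothetical example entries:
reqs = RequirementSet()
reqs.add(Requirement("availability", ">= 99% uptime", "attribute", "field trial"))
reqs.add(Requirement("maintainability", "module swap under 10 minutes", "attribute", ""))
print([r.name for r in reqs.unverifiable()])  # ['maintainability']
```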
Feasibility In some cases, a feasibility study is carried out, after which schedules, resource plans, and estimates for the next phase are developed. The feasibility study is an evaluation and analysis of the potential of a proposed project to support the process of decision making. It outlines and analyses alternatives or methods of achieving the desired outcome. The feasibility study helps to narrow the scope of the project to identify the best scenario. A feasibility report is generated, following which a post-feasibility review is performed. The purpose of a feasibility assessment is to determine whether the engineer's project can proceed into the design phase. This is based on two criteria: the project needs to be based on an achievable idea, and it needs to be within cost constraints. It is important that engineers with experience and good judgment be involved in this portion of the feasibility study. Concept generation A concept study (conceptualization, conceptual design) is often a phase of project planning that includes producing ideas and taking into account the pros and cons of implementing those ideas. This stage of a project is done to minimize the likelihood of error, manage costs, assess risks, and evaluate the potential success of the intended project. In any event, once an engineering issue or problem is defined, potential solutions must be identified. These solutions can be found by using ideation, the mental process by which ideas are generated. In fact, this step is often termed Ideation or "Concept Generation." The following are widely used techniques: trigger word – a word or phrase associated with the issue at hand is stated, and subsequent words and phrases are evoked. morphological analysis – independent design characteristics are listed in a chart, and different engineering solutions are proposed for each characteristic. Normally, a preliminary sketch and short report accompany the morphological chart. synectics – the engineer imagines him or herself as the item and asks, "What would I do if I were the system?" This unconventional method of thinking may find a solution to the problem at hand. A vital aspect of the conceptualization step is synthesis: the process of taking the elements of the concept and arranging them in the proper way. This creative process is present in every design. brainstorming – this popular method involves thinking of different ideas, typically as part of a small group, and adopting these ideas in some form as a solution to the problem. Various generated ideas must then undergo a concept evaluation step, which utilizes various tools to compare and contrast the relative strengths and weaknesses of possible alternatives. Preliminary design The preliminary design, or high-level design (also called FEED or basic design), often bridges the gap between design conception and detailed design, particularly in cases where the level of conceptualization achieved during ideation is not sufficient for full evaluation. In this task, the overall system configuration is defined, and schematics, diagrams, and layouts of the project may provide early project configuration. (This notably varies a lot by field, industry, and product.) During detailed design and optimization, the parameters of the part being created will change, but the preliminary design focuses on creating the general framework to build the project on. S. Blanchard and J. 
Fabrycky describe it as: “The ‘whats’ initiating conceptual design produce ‘hows’ from the conceptual design evaluation effort applied to feasible conceptual design concepts. Next, the ‘hows’ are taken into preliminary design through the means of allocated requirements. There they become ‘whats’ and drive preliminary design to address ‘hows’ at this lower level.” Detailed design Following FEED is the Detailed Design (Detailed Engineering) phase, which may consist of procurement of materials as well. This phase further elaborates each aspect of the project/product by complete description through solid modeling, drawings as well as specifications. Computer-aided design (CAD) programs have made the detailed design phase more efficient. For example, a CAD program can provide optimization to reduce volume without hindering a part's quality. It can also calculate stress and displacement using the finite element method to determine stresses throughout the part. Production planning The production planning and tool design consists of planning how to mass-produce the product and which tools should be used in the manufacturing process. Tasks to complete in this step include selecting materials, selection of the production processes, determination of the sequence of operations, and selection of tools such as jigs, fixtures, metal cutting and metal or plastics forming tools. This task also involves additional prototype testing iterations to ensure the mass-produced version meets qualification testing standards. Comparison with the scientific method Engineering is formulating a problem that can be solved through design. Science is formulating a question that can be solved through investigation. The engineering design process bears some similarity to the scientific method. Both processes begin with existing knowledge, and gradually become more specific in the search for knowledge (in the case of "pure" or basic science) or a solution (in the case of "applied" science, such as engineering). The key difference between the engineering process and the scientific process is that the engineering process focuses on design, creativity and innovation while the scientific process emphasizes explanation, prediction and discovery (observation). Degree programs Methods are being taught and developed in Universities including: Engineering Design, University of Bristol Faculty of Engineering Dyson School of Design Engineering, Imperial College London TU Delft, Industrial Design Engineering. University of Waterloo, Systems Design Engineering See also Applied science Computer-automated design Design engineer Engineering analysis Engineering optimization New product development Systems engineering process Surrogate model Traditional engineering References External links Ullman, David G. (2009) The Mechanical Design Process, Mc Graw Hill, 4th edition, Eggert, Rudolph J. (2010) Engineering Design, Second Edition, High Peak Press, Meridian, Idaho, Engineering concepts Mechanical engineering Systems engineering
Engineering design process
[ "Physics", "Engineering" ]
1,757
[ "Systems engineering", "nan", "Applied and interdisciplinary physics", "Mechanical engineering" ]
7,071,234
https://en.wikipedia.org/wiki/Rose%20madder
Rose madder (also known as madder) is a red paint made from the pigment madder lake, a traditional lake pigment extracted from the common madder plant Rubia tinctorum. Madder lake contains two organic red dyes: alizarin and purpurin. As a paint, it has been described as a fugitive, transparent, nonstaining, mid valued, moderately dull violet red pigment in tints and medium solutions, darkening to an impermanent, dull magenta red in masstone. History Madder has been cultivated as a dyestuff since antiquity in Central Asia, South Asia, and Egypt, where it was grown as early as 1500 BC. Cloth dyed with madder root dye was found in the tomb of the Pharaoh Tutankhamun and on an Egyptian tomb painting from the Graeco-Roman period, diluted with gypsum to produce a pink color. It was also found in ancient Greece (in Corinth), and in Italy in the Baths of Titus and the ruins of Pompeii. It is referred to in the Talmud as well as mentioned in writings by Dioscorides (who referred to it as ἐρυθρόδανον, "erythródanon"), Hippocrates, and other literary figures, and in artwork where it is referred to as rubio and used in paintings by J. M. W. Turner and as a color for ceramics. In Spain, madder was introduced and then cultivated by the Moors. The production of a lake pigment from madder seems to have been first invented by the ancient Egyptians. Several techniques and recipes developed. Ideal color was said to come from plants 18 to 28 months old that had been grown in calcareous soil, which is full of lime and typically chalky. Most were considered relatively weak and extremely fugitive until 1804, when the English dye maker George Field refined the technique of making a lake from madder by treating it with alum and an alkali. The resulting madder lake had a less fugitive color and could be used more efficaciously, for example by blending it into a paint. Over the following years, other metal salts, including those containing chromium, iron, and tin, were found to be usable in place of alum to give madder-based pigments of various other colors. In 1827, the French chemists Pierre-Jean Robiquet and Colin began producing garancine, the concentrated version of natural madder. They then found that madder lake contained two colorants, the red alizarin and the more rapidly fading purpurin. Purpurin is only present in the natural form of madder and gives a distinctive orange/red generally warmer tone that pure synthetic alizarin does not. Purpurin fluoresces yellow to red under ultraviolet light, while synthetic alizarin slightly shows violet. Alizarin was discovered before purpurin, by heating the ground madder with acid and potash. A yellow vapor crystallized into bright red needles: alizarin. This alizarin concentrate comprises only 1% of the madder root. Natural rose madder supplied half the world with red, until 1868, when its alizarin component became the first natural dye to be synthetically duplicated by Carl Gräbe and Carl Liebermann. Advances in the understanding of chemistry, such as chemical structures, chemical formulas, and elemental formulas, aided these Berlin-based scientists in discovering that alizarin had an anthracene base. However, their recipe was not feasible for large-scale production; it required expensive and volatile substances, specifically bromine. William Perkin, the inventor of mauveine, filed a patent in June 1869 for a new way to produce alizarin without bromine. 
Gräbe, Liebermann, and Heinrich Caro filed a patent for a similar process just one day before Perkin did – yet both patents were granted, as Perkin's had been sealed first. They divided the market in half: Perkin sold to the English market, and the scientists from Berlin to the United States and mainland Europe. Because this synthetic alizarin dye could be produced for a fraction of the cost of the natural madder dye, it quickly replaced all madder-based colorants then in use (in, for instance, British army red coats that had been a shade of madder from the late 17th century to 1870, and French military cloth, often called "Turkey Red"). In turn, alizarin itself has now been largely replaced by the more light-resistant quinacridone pigments originally developed at DuPont in 1958. It is still manufactured in traditional ways to meet the demands of the fine art market. Other names Alizarin's chemical composition: 1,2 dihydroxyanthraquinone (C14H8O4) Alizarin crimson, a paint very similar in color to Rose Madder Genuine but derived from synthetic Alizarin Lacca di robbia, Italian name Laque de garance, French name Natural Red 9 abbreviated NR9, Color Index name Purpurin's chemical composition: 1,2,4 trihydroxyanthraquinone (C14H8O5) Rose madder genuine, sometimes used to specify a paint derived from the root of the madder plant in the traditional manner It is still manufactured and used by some, but is too fugitive for professional artistic use. Rose madder hue, sometimes used to specify a paint made from other pigments but meant to approximate the color of rose madder Rubia tinctorum, the herbaceous perennial from which the rose madder pigment is derived Turkey red Substitutes As all madder-based pigments are fugitive, artists have long sought a more permanent and lightfast replacement for rose madder and alizarin. Alternative pigments include: Anthraquinone red (PR177), a chemical cousin of Alizarin Benzamida carmine (PR176) Perylene maroon (PR179), for mixing dull violets Pyrrole rubine (PR264) Quinacridone magenta (PR122), for a brighter violet Quinacridone pyrrolodone Quinacridone rose (PV19), for a brighter violet Quinacridone violet (PV19), particularly dark and reddish varieties In art, entertainment, and media HMS Surprise is a 1973 novel by Patrick O'Brian which mentions rose madder. Rose Madder is the title of a 1995 novel by Stephen King, in which a woman named Rose Daniels escapes her abusive husband and travels through time by entering a painting of a woman in a gown dyed with rose madder. "Madder Red" is the title of a 2009 song by Yeasayer on the album Odd Blood. Jonathon Keats uses the gradual fading of rose madder oil paint to record a single image over the course of 1000 years in his "millennium camera". Blue Madder is the third album released by Savoy Brown in May 1969 on Decca Records. Yukino in The Garden of Words is described as having 'a madder-red ribbon' in her school uniform. The Maddermarket Theatre in Norwich has connections with the use of madder as a dye in the city. References Further reading Biological pigments Organic pigments
Rose madder
[ "Biology" ]
1,518
[ "Biological pigments", "Pigmentation" ]
7,071,490
https://en.wikipedia.org/wiki/History%20of%20molecular%20theory
In chemistry, the history of molecular theory traces the origins of the concept or idea of the existence of strong chemical bonds between two or more atoms. A modern conceptualization of molecules began to develop in the 19th century along with experimental evidence for pure chemical elements and how individual atoms of different chemical elements such as hydrogen and oxygen can combine to form chemically stable molecules such as water molecules. Ancient world The modern concept of molecules can be traced back towards pre-scientific and Greek philosophers such as Leucippus and Democritus who argued that all the universe is composed of atoms and voids. Circa 450 BC Empedocles imagined fundamental elements (fire (), earth (), air (), and water ()) and "forces" of attraction and repulsion allowing the elements to interact. Prior to this, Heraclitus had claimed that fire or change was fundamental to our existence, created through the combination of opposite properties. In the Timaeus, Plato, following Pythagoras, considered mathematical entities such as number, point, line and triangle as the fundamental building blocks or elements of this ephemeral world, and considered the four elements of fire, air, water and earth as states of substances through which the true mathematical principles or elements would pass. A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies. The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe. Greek atomism The earliest views on the shapes and connectivity of atoms was that proposed by Leucippus, Democritus, and Epicurus who reasoned that the solidness of the material corresponded to the shape of the atoms involved. Thus, iron atoms are solid and strong with hooks that lock them into a solid; water atoms are smooth and slippery; salt atoms, because of their taste, are sharp and pointed; and air atoms are light and whirling, pervading all other materials. It was Democritus that was the main proponent of this view. Using analogies based on the experiences of the senses, he gave a picture or an image of an atom in which atoms were distinguished from each other by their shape, their size, and the arrangement of their parts. Moreover, connections were explained by material links in which single atoms were supplied with attachments: some with hooks and eyes others with balls and sockets (see diagram). 17th century With the rise of scholasticism and the decline of the Roman Empire, the atomic theory was abandoned for many ages in favor of the various four element theories and later alchemical theories. The 17th century, however, saw a resurgence in the atomic theory primarily through the works of Gassendi, and Newton. Among other scientists of that time Gassendi deeply studied ancient history, wrote major works about Epicurus natural philosophy and was a persuasive propagandist of it. He reasoned that to account for the size and shape of atoms moving in a void could account for the properties of matter. Heat was due to small, round atoms; cold, to pyramidal atoms with sharp points, which accounted for the pricking sensation of severe cold; and solids were held together by interlacing hooks. Newton, though he acknowledged the various atom attachment theories in vogue at the time, i.e. 
"hooked atoms", "glued atoms" (bodies at rest), and the "stick together by conspiring motions" theory, rather believed, as famously stated in "Query 31" of his 1704 Opticks, that particles attract one another by some force, which "in immediate contact is extremely strong, at small distances performs the chemical operations, and reaches not far from particles with any sensible effect." In a more concrete manner, however, the concept of aggregates or units of bonded atoms, i.e. "molecules", traces its origins to Robert Boyle's 1661 hypothesis, in his famous treatise The Sceptical Chymist, that matter is composed of clusters of particles and that chemical change results from the rearrangement of the clusters. Boyle argued that matter's basic elements consisted of various sorts and sizes of particles, called "corpuscles", which were capable of arranging themselves into groups. In 1680, using the corpuscular theory as a basis, French chemist Nicolas Lemery stipulated that the acidity of any substance consisted in its pointed particles, while alkalis were endowed with pores of various sizes. A molecule, according to this view, consisted of corpuscles united through a geometric locking of points and pores. 18th century An early precursor to the idea of bonded "combinations of atoms", was the theory of "combination via chemical affinity". For example, in 1718, building on Boyle's conception of combinations of clusters, the French chemist Étienne François Geoffroy developed theories of chemical affinity to explain combinations of particles, reasoning that a certain alchemical "force" draws certain alchemical components together. Geoffroy's name is best known in connection with his tables of "affinities" (tables des rapports), which he presented to the French Academy in 1718 and 1720. These were lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents. These tables retained their vogue for the rest of the century, until displaced by the profounder conceptions introduced by CL Berthollet. In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli positioned the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic. In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds. If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and similarly for the other combinations of ultimate particles: 19th century Similar to these views, in 1803 John Dalton took the atomic weight of hydrogen, the lightest element, as unity, and determined, for example, that the ratio for nitrous anhydride was 2 to 3 which gives the formula N2O3. Dalton incorrectly imagined that atoms "hooked" together to form molecules. 
Later, in 1808, Dalton published his famous diagram of combined "atoms": Amedeo Avogadro created the word "molecule". His 1811 paper "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", he essentially states, i.e. according to Partington's A Short History of Chemistry, that: Note that this quote is not a literal translation. Avogadro uses the name "molecule" for both atoms and molecules. Specifically, he uses the name "elementary molecule" when referring to atoms and to complicate the matter also speaks of "compound molecules" and "composite molecules". During his stay in Vercelli, Avogadro wrote a concise note (memoria) in which he declared the hypothesis of what we now call Avogadro's law: equal volumes of gases, at the same temperature and pressure, contain the same number of molecules. This law implies that the relationship occurring between the weights of same volumes of different gases, at the same temperature and pressure, corresponds to the relationship between respective molecular weights. Hence, relative molecular masses could now be calculated from the masses of gas samples. Avogadro developed this hypothesis to reconcile Joseph Louis Gay-Lussac's 1808 law on volumes and combining gases with Dalton's 1803 atomic theory. The greatest difficulty Avogadro had to resolve was the huge confusion at that time regarding atoms and molecules—one of the most important contributions of Avogadro's work was clearly distinguishing one from the other, admitting that simple particles too could be composed of molecules and that these are composed of atoms. Dalton, by contrast, did not consider this possibility. Curiously, Avogadro considers only molecules containing even numbers of atoms; he does not say why odd numbers are left out. In 1826, building on the work of Avogadro, the French chemist Jean-Baptiste Dumas states: In coordination with these concepts, in 1833 the French chemist Marc Antoine Auguste Gaudin presented a clear account of Avogadro's hypothesis, regarding atomic weights, by making use of "volume diagrams", which clearly show both semi-correct molecular geometries, such as a linear water molecule, and correct molecular formulas, such as H2O: In two papers outlining his "theory of atomicity of the elements" (1857–58), Friedrich August Kekulé was the first to offer a theory of how every atom in an organic molecule was bonded to every other atom. He proposed that carbon atoms were tetravalent, and could bond to themselves to form the carbon skeletons of organic molecules. In 1856, Scottish chemist Archibald Couper began research on the bromination of benzene at the laboratory of Charles Wurtz in Paris. One month after Kekulé's second paper appeared, Couper's independent and largely identical theory of molecular structure was published. He offered a very concrete idea of molecular structure, proposing that atoms joined to each other like modern-day Tinkertoys in specific three-dimensional structures. Couper was the first to use lines between atoms, in conjunction with the older method of using brackets, to represent bonds, and also postulated straight chains of atoms as the structures of some molecules, ring-shaped molecules of others, such as in tartaric acid and cyanuric acid. 
In later publications, Couper's bonds were represented using straight dotted lines (although it is not known if this is the typesetter's preference) such as with alcohol and oxalic acid below: In 1861, an unknown Vienna high-school teacher named Joseph Loschmidt published, at his own expense, a booklet entitled Chemische Studien I, containing pioneering molecular images which showed both "ringed" structures as well as double-bonded structures, such as: Loschmidt also suggested a possible formula for benzene, but left the issue open. The first proposal of the modern structure for benzene was due to Kekulé, in 1865. The cyclic nature of benzene was finally confirmed by the crystallographer Kathleen Lonsdale. Benzene presents a special problem in that, to account for all the bonds, there must be alternating double carbon bonds: In 1865, German chemist August Wilhelm von Hofmann was the first to make stick-and-ball molecular models, which he used in lecture at the Royal Institution of Great Britain, such as methane shown below: The basis of this model followed the earlier 1855 suggestion by his colleague William Odling that carbon is tetravalent. Hofmann's color scheme, to note, is still used to this day: carbon = black, nitrogen = blue, oxygen = red, chlorine = green, sulfur = yellow, hydrogen = white. The deficiencies in Hofmann's model were essentially geometric: carbon bonding was shown as planar, rather than tetrahedral, and the atoms were out of proportion, e.g. carbon was smaller in size than the hydrogen. In 1864, Scottish organic chemist Alexander Crum Brown began to draw pictures of molecules, in which he enclosed the symbols for atoms in circles, and used broken lines to connect the atoms together in a way that satisfied each atom's valence. The year 1873, by many accounts, was a seminal point in the history of the development of the concept of the "molecule". In this year, the renowned Scottish physicist James Clerk Maxwell published his famous thirteen page article 'Molecules' in the September issue of Nature. In the opening section to this article, Maxwell clearly states: After speaking about the atomic theory of Democritus, Maxwell goes on to tell us that the word 'molecule' is a modern word. He states, "it does not occur in Johnson's Dictionary. The ideas it embodies are those belonging to modern chemistry." We are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases. At this point, however, Maxwell notes that no one has ever seen or handled a molecule. In 1874, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel independently proposed that the phenomenon of optical activity could be explained by assuming that the chemical bonds between carbon atoms and their neighbors were directed towards the corners of a regular tetrahedron. This led to a better understanding of the three-dimensional nature of molecules. Emil Fischer developed the Fischer projection technique for viewing 3-D molecules on a 2-D sheet of paper: In 1898, Ludwig Boltzmann, in his Lectures on Gas Theory, used the theory of valence to explain the phenomenon of gas phase molecular dissociation, and in doing so drew one of the first rudimentary yet detailed atomic orbital overlap drawings. 
Noting first the known fact that molecular iodine vapor dissociates into atoms at higher temperatures, Boltzmann states that we must explain the existence of molecules composed of two atoms, the "double atom" as Boltzmann calls it, by an attractive force acting between the two atoms. Boltzmann states that this chemical attraction, owing to certain facts of chemical valence, must be associated with a relatively small region on the surface of the atom called the sensitive region. Boltzmann states that this "sensitive region" will lie on the surface of the atom, or may partially lie inside the atom, and will firmly be connected to it. Specifically, he states "only when two atoms are situated so that their sensitive regions are in contact, or partly overlap, will there be a chemical attraction between them. We then say that they are chemically bound to each other." This picture is detailed below, showing the α-sensitive region of atom-A overlapping with the β-sensitive region of atom-B: 20th century In the early 20th century, the American chemist Gilbert N. Lewis began to use dots in lecture, while teaching undergraduates at Harvard, to represent the electrons around atoms. His students favored these drawings, which stimulated him in this direction. From these lectures, Lewis noted that elements with a certain number of electrons seemed to have a special stability. This phenomenon was pointed out by the German chemist Richard Abegg in 1904, to which Lewis referred to as "Abegg's law of valence" (now generally known as Abegg's rule). To Lewis it appeared that once a core of eight electrons has formed around a nucleus, the layer is filled, and a new layer is started. Lewis also noted that various ions with eight electrons also seemed to have a special stability. On these views, he proposed the rule of eight or octet rule: Ions or atoms with a filled layer of eight electrons have a special stability. Moreover, noting that a cube has eight corners Lewis envisioned an atom as having eight sides available for electrons, like the corner of a cube. Subsequently, in 1902 he devised a conception in which cubic atoms can bond on their sides to form cubic-structured molecules. In other words, electron-pair bonds are formed when two atoms share an edge, as in structure C below. This results in the sharing of two electrons. Similarly, charged ionic-bonds are formed by the transfer of an electron from one cube to another, without sharing an edge A. An intermediate state B where only one corner is shared was also postulated by Lewis. Hence, double bonds are formed by sharing a face between two cubic atoms. This results in the sharing of four electrons. In 1913, while working as the chair of the department of chemistry at the University of California, Berkeley, Lewis read a preliminary outline of paper by an English graduate student, Alfred Lauck Parson, who was visiting Berkeley for a year. In this paper, Parson suggested that the electron is not merely an electric charge but is also a small magnet (or "magneton" as he called it) and furthermore that a chemical bond results from two electrons being shared between two atoms. This, according to Lewis, meant that bonding occurred when two electrons formed a shared edge between two complete cubes. On these views, in his famous 1916 article The Atom and the Molecule, Lewis introduced the "Lewis structure" to represent atoms and molecules, where dots represent electrons and lines represent covalent bonds. 
In this article, he developed the concept of the electron-pair bond, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond. In Lewis' own words: Moreover, he proposed that an atom tended to form an ion by gaining or losing the number of electrons needed to complete a cube. Thus, Lewis structures show each atom in the structure of the molecule using its chemical symbol. Lines are drawn between atoms that are bonded to one another; occasionally, pairs of dots are used instead of lines. Excess electrons that form lone pairs are represented as pair of dots, and are placed next to the atoms on which they reside: To summarize his views on his new bonding model, Lewis states: The following year, in 1917, an unknown American undergraduate chemical engineer named Linus Pauling was learning the Dalton hook-and-eye bonding method at the Oregon Agricultural College, which was the vogue description of bonds between atoms at the time. Each atom had a certain number of hooks that allowed it to attach to other atoms, and a certain number of eyes that allowed other atoms to attach to it. A chemical bond resulted when a hook and eye connected. Pauling, however, wasn't satisfied with this archaic method and looked to the newly emerging field of quantum physics for a new method. In 1927, the physicists Fritz London and Walter Heitler applied the new quantum mechanics to the deal with the saturable, nondynamic forces of attraction and repulsion, i.e., exchange forces, of the hydrogen molecule. Their valence bond treatment of this problem, in their joint paper, was a landmark in that it brought chemistry under quantum mechanics. Their work was an influence on Pauling, who had just received his doctorate and visited Heitler and London in Zürich on a Guggenheim Fellowship. Subsequently, in 1931, building on the work of Heitler and London and on theories found in Lewis' famous article, Pauling published his ground-breaking article "The Nature of the Chemical Bond" (see: manuscript) in which he used quantum mechanics to calculate properties and structures of molecules, such as angles between bonds and rotation about bonds. On these concepts, Pauling developed hybridization theory to account for bonds in molecules such as CH4, in which four sp³ hybridised orbitals are overlapped by hydrogen's 1s orbital, yielding four sigma (σ) bonds. The four bonds are of the same length and strength, which yields a molecular structure as shown below: Owing to these exceptional theories, Pauling won the 1954 Nobel Prize in Chemistry. Notably he has been the only person to ever win two unshared Nobel Prizes, winning the Nobel Peace Prize in 1963. In 1926, French physicist Jean Perrin received the Nobel Prize in physics for proving, conclusively, the existence of molecules. He did this by calculating the Avogadro number using three different methods, all involving liquid phase systems. First, he used a gamboge soap-like emulsion, second by doing experimental work on Brownian motion, and third by confirming Einstein's theory of particle rotation in the liquid phase. In 1937, chemist K.L. Wolf introduced the concept of supermolecules (Übermoleküle) to describe hydrogen bonding in acetic acid dimers. This would eventually lead to the area of supermolecular chemistry, which is the study of non-covalent bonding. In 1951, physicist Erwin Wilhelm Müller invents the field ion microscope and is the first to see atoms, e.g. 
bonded atomic arrangements at the tip of a metal point. In 1968-1970, Leroy Cooper of the University of California at Davis completed his PhD thesis, which showed what molecules looked like. He used X-ray deflection off crystals and a complex computer program written by Bill Pentz of the UC Davis Computer Center. This program took the mapped deflections and used them to calculate the basic shapes of crystal molecules. His work showed that the actual molecular shapes in quartz and other tested crystals looked similar to the long-envisioned picture of merged soap bubbles of various sizes, except that instead of being merged spheres of different sizes, the actual shapes were rigid mergers of more teardrop-like shapes that stayed fixed in orientation. This work indicated for the first time that crystal molecules are actually linked or stacked constructions of merged teardrop shapes. In 1999, researchers from the University of Vienna reported results from experiments on wave-particle duality for C60 molecules. The data published by Anton Zeilinger et al. were consistent with Louis de Broglie's matter waves. This experiment was noted for extending the applicability of wave–particle duality by about one order of magnitude in the macroscopic direction. In 2009, researchers from IBM managed to take the first picture of a real molecule. Using an atomic force microscope, every single atom and bond of a pentacene molecule could be imaged. See also History of chemistry History of quantum mechanics History of thermodynamics History of molecular biology Kinetic theory of gases Atomic theory References Further reading External links Geometric Structures of Molecules - Middlebury College Atoms and Molecules - McMaster University 3D Molecule Viewer - The Wileys Family Molecule of the Month - School of Chemistry, University of Bristol - Eric Scerri's history & philosophy of chemistry website Types Antibody Molecule - The National Health Museum 15 Types of Molecules - IUPAC Definitions Definitions Molecule Definition - Frostburg State University (Department of Chemistry) Definition of Molecule - IUPAC Articles Molecules Used to Make Nano-sized Containers - TRN Newswire Molecular Computer Processors - HP Labs History of chemistry Molecules General chemistry
History of molecular theory
[ "Physics", "Chemistry" ]
4,654
[ "Molecular physics", "Molecules", "Physical objects", "nan", "Atoms", "Matter" ]
7,072,506
https://en.wikipedia.org/wiki/Ignition%20timing
In a spark ignition internal combustion engine, ignition timing is the timing, relative to the current piston position and crankshaft angle, of the release of a spark in the combustion chamber near the end of the compression stroke. The need for advancing (or retarding) the timing of the spark is because fuel does not completely burn the instant the spark fires. The combustion gases take a period of time to expand and the angular or rotational speed of the engine can lengthen or shorten the time frame in which the burning and expansion should occur. In a vast majority of cases, the angle will be described as a certain angle advanced before top dead center (BTDC). Advancing the spark BTDC means that the spark is energized prior to the point where the combustion chamber reaches its minimum size, since the purpose of the power stroke in the engine is to force the combustion chamber to expand. Sparks occurring after top dead center (ATDC) are usually counter-productive (producing wasted spark, back-fire, engine knock, etc.) unless there is need for a supplemental or continuing spark prior to the exhaust stroke. Setting the correct ignition timing is crucial in the performance of an engine. Sparks occurring too soon or too late in the engine cycle are often responsible for excessive vibrations and even engine damage. The ignition timing affects many variables including engine longevity, fuel economy, and engine power. Many variables also affect what the "best" timing is. Modern engines that are controlled in real time by an engine control unit use a computer to control the timing throughout the engine's RPM and load range. Older engines that use mechanical distributors rely on inertia (by using rotating weights and springs) and manifold vacuum in order to set the ignition timing throughout the engine's RPM and load range. Early cars required the driver to adjust timing via controls according to driving conditions, but this is now automated. There are many factors that influence proper ignition timing for a given engine. These include the timing of the intake valve(s) or fuel injector(s), the type of ignition system used, the type and condition of the spark plugs, the contents and impurities of the fuel, fuel temperature and pressure, engine speed and load, air and engine temperature, turbo boost pressure or intake air pressure, the components used in the ignition system, and the settings of the ignition system components. Usually, any major engine changes or upgrades will require a change to the ignition timing settings of the engine. Background The spark ignition system of mechanically controlled gasoline internal combustion engines consists of a mechanical device, known as a distributor, that triggers and distributes ignition spark to each cylinder relative to piston position—in crankshaft degrees relative to top dead centre (TDC). Spark timing, relative to piston position, is based on static (initial or base) timing without mechanical advance. The distributor's centrifugal timing advance mechanism makes the spark occur sooner as engine speed increases. Many of these engines will also use a vacuum advance that advances timing during light loads and deceleration, independent of the centrifugal advance. This typically applies to automotive use; marine gasoline engines generally use a similar system but without vacuum advance. In mid-1963, Ford offered transistorized ignition on their new 427 FE V8. 
This system only passed a very low current through the ignition points, using a PNP transistor to perform high-voltage switching of the ignition current, allowing for a higher voltage ignition spark, as well as reducing variations in ignition timing due to arc-wear of the breaker points. Engines so equipped carried special stickers on their valve covers reading “427-T.” AC Delco’s Delcotron Transistor Control Magnetic Pulse Ignition System became optional on a number of General Motors vehicles beginning in 1964. The Delco system eliminated the mechanical points completely, using magnetic flux variation for current switching, virtually eliminating point wear concerns. In 1967, Ferrari and Fiat Dinos came equipped with Magneti Marelli Dinoplex electronic ignition, and all Porsche 911s had electronic ignition beginning with the B-Series 1969 models. In 1972, Chrysler introduced a magnetically-triggered pointless electronic ignition system as standard equipment on some production cars, and included it as standard across the board by 1973. Electronic control of ignition timing was introduced a few years later in 1975-'76 with the introduction of Chrysler's computer-controlled "Lean-Burn" electronic spark advance system. By 1979 with the Bosch Motronic engine management system, technology had advanced to include simultaneous control of both the ignition timing and fuel delivery. These systems form the basis of modern engine management systems. Setting the ignition timing "Timing advance" refers to the number of degrees before top dead center (BTDC) that the sparkplug will fire to ignite the air-fuel mixture in the combustion chamber before the end of the compression stroke. Retarded timing can be defined as changing the timing so that fuel ignition happens later than the manufacturer's specified time. For example, if the timing specified by the manufacturer was set at 12 degrees BTDC initially and adjusted to 11 degrees BTDC, it would be referred to as retarded. In a classic ignition system with breaker points, the basic timing can be set statically using a test light or dynamically using the timing marks and a timing light. Timing advance is required because it takes time to burn the air-fuel mixture. Igniting the mixture before the piston reaches TDC will allow the mixture to fully burn soon after the piston reaches TDC. If the mixture is ignited at the correct time, maximum pressure in the cylinder will occur sometime after the piston reaches TDC allowing the ignited mixture to push the piston down the cylinder with the greatest force. Ideally, the time at which the mixture should be fully burnt is about 20 degrees ATDC. This will maximize the engine's power producing potential. If the ignition spark occurs at a position that is too advanced relative to piston position, the rapidly combusting mixture can actually push against the piston still moving up in its compression stroke, causing knocking (pinking or pinging) and possible engine damage, this usually occurs at low RPM and is known as pre-ignition or in severe cases detonation. If the spark occurs too retarded relative to the piston position, maximum cylinder pressure will occur after the piston is already too far down in the cylinder on its power stroke. This results in lost power, overheating tendencies, high emissions, and unburned fuel. The ignition timing will need to become increasingly advanced (relative to TDC) as the engine speed increases so that the air-fuel mixture has the correct amount of time to fully burn. 
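As a back-of-the-envelope illustration of why the advance must grow with engine speed, the sketch below assumes a fixed burn-to-peak time of about 2 ms (an illustrative assumption, not a figure from the text) and the roughly 20 degrees ATDC peak-pressure target mentioned above, then computes how many crank degrees are swept during that burn time at different speeds. It is a toy model only; real burn times vary with load, mixture and in-cylinder turbulence, so production timing maps look different.

```python
def required_advance_btdc(rpm, burn_time_ms=2.0, target_peak_atdc_deg=20.0):
    """Crank degrees before TDC at which the spark must fire so that a fixed
    burn duration ends at the target angle after TDC.

    Degrees of crank rotation per millisecond = rpm * 360 / 60000.
    """
    degrees_swept = rpm * 360.0 / 60000.0 * burn_time_ms
    return degrees_swept - target_peak_atdc_deg

# At higher speeds the same burn time spans more crank angle, so the spark
# must fire earlier (more advance) to keep peak pressure near 20 deg ATDC.
for rpm in (2000, 3000, 4000, 5000):
    print(f"{rpm:5d} rpm -> spark about {required_advance_btdc(rpm):5.1f} deg BTDC")
```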
As the engine speed (RPM) increases, the time available to burn the mixture decreases but the burning itself proceeds at the same speed, it needs to be started increasingly earlier to complete in time. Poor volumetric efficiency at higher engine speeds also requires increased advancement of ignition timing. The correct timing advance for a given engine speed will allow for maximum cylinder pressure to be achieved at the correct crankshaft angular position. When setting the timing for an automobile engine, the factory timing setting can usually be found on a sticker in the engine bay. The ignition timing is also dependent on the load of the engine with more load (larger throttle opening and therefore air:fuel ratio) requiring less advance (the mixture burns faster). Also it is dependent on the temperature of the engine with lower temperature allowing for more advance. The speed with which the mixture burns depends on the type of fuel, the amount of turbulence in the airflow (which is tied to the design the cylinder head and valvetrain system) and on the air-fuel ratio. It is a common myth that burn speed is linked with octane rating. Dynamometer tuning Setting the ignition timing while monitoring engine power output with a dynamometer is one way to correctly set the ignition timing. After advancing or retarding the timing, a corresponding change in power output will usually occur. A load type dynamometer is the best way to accomplish this as the engine can be held at a steady speed and load while the timing is adjusted for maximum output. Using a knock sensor to find the correct timing is one method used to tune an engine. In this method, the timing is advanced until knock occurs. The timing is then retarded one or two degrees and set there. This method is inferior to tuning with a dynamometer since it often leads to ignition timing which is excessively advanced particularly on modern engines which do not require as much advance to deliver peak torque. With excessive advance, the engine will be prone to pinging and detonation when conditions change (fuel quality, temperature, sensor issues, etc). After achieving the desired power characteristics for a given engine load/rpm, the spark plugs should be inspected for signs of engine detonation. If there are any such signs, the ignition timing should be retarded until there are none. The best way to set ignition timing on a load type dynamometer is to slowly advance the timing until peak torque output is reached. Some engines (particularly turbo or supercharged) will not reach peak torque at a given engine speed before they begin to knock (pinging or minor detonation). In this case, engine timing should be retarded slightly below this timing value (known as the "knock limit"). Engine combustion efficiency and volumetric efficiency will change as ignition timing is varied, which means fuel quantity must also be changed as the ignition is varied. After each change in ignition timing, fuel is adjusted also to deliver peak torque. Mechanical ignition systems Mechanical ignition systems use a mechanical spark distributor to distribute a high voltage current to the correct spark plug at the correct time. In order to set an initial timing advance or timing retard for an engine, the engine is allowed to idle and the distributor is adjusted to achieve the best ignition timing for the engine at idle speed. This process is called "setting the base advance". There are two methods of increasing timing advance past the base advance. 
The advances achieved by these methods are added to the base advance number in order to achieve a total timing advance number. Mechanical timing advance An increasing mechanical advancement of the timing takes place with increasing engine speed. This is possible by using the law of inertia. Weights and springs inside the distributor rotate and affect the timing advance according to engine speed by altering the angular position of the timing sensor shaft with respect to the actual engine position. This type of timing advance is also referred to as centrifugal timing advance. The amount of mechanical advance is dependent solely on the speed at which the distributor is rotating. In a 2-stroke engine, this is the same as engine RPM. In a 4-stroke engine, this is half the engine RPM. The relationship between advance in degrees and distributor RPM can be drawn as a simple 2-dimensional graph. Lighter weights or heavier springs can be used to reduce the timing advance at lower engine RPM. Heavier weights or lighter springs can be used to advance the timing at lower engine RPM. Usually, at some point in the engine's RPM range, these weights contact their travel limits, and the amount of centrifugal ignition advance is then fixed above that rpm. Vacuum timing advance The second method used to advance (or retard) the ignition timing is called vacuum timing advance. This method is almost always used in addition to mechanical timing advance. It generally increases fuel economy and driveability, particularly at lean mixtures. It also increases engine life through more complete combustion, leaving less unburned fuel to wash away the cylinder wall lubrication (piston ring wear), and less lubricating oil dilution (bearings, camshaft life, etc.). Vacuum advance works by using a manifold vacuum source to advance the timing at low to mid engine load conditions by rotating the position sensor (contact points, hall effect or optical sensor, reluctor stator, etc.) mounting plate in the distributor with respect to the distributor shaft. Vacuum advance is diminished at wide open throttle (WOT), causing the timing advance to return to the base advance in addition to the mechanical advance. One source for vacuum advance is a small opening located in the wall of the throttle body or carburetor adjacent to but slightly upstream of the edge of the throttle plate. This is called a ported vacuum. The effect of having the opening here is that there is little or no vacuum at idle, hence little or no advance. Other vehicles use vacuum directly from the intake manifold. This provides full engine vacuum (and hence, full vacuum advance) at idle. Some vacuum advance units have two vacuum connections, one at each side of the actuator membrane, connected to both manifold vacuum and ported vacuum. These units will both advance and retard the ignition timing. On some vehicles, a temperature sensing switch will apply manifold vacuum to the vacuum advance system when the engine is hot or cold, and ported vacuum at normal operating temperature. This is a version of emissions control; the ported vacuum allowed carburetor adjustments for a leaner idle mixture. At high engine temperature, the increased advance raised engine speed to allow the cooling system to operate more efficiently. At low temperature the advance allowed the enriched warm-up mixture to burn more completely, providing better cold-engine running. Electrical or mechanical switches may be used to prevent or alter vacuum advance under certain conditions. 
Early emissions electronics would engage some in relation to oxygen sensor signals or activation of emissions-related equipment. It was also common to prevent some or all of the vacuum advance in certain gears to prevent detonation due to lean-burning engines. Computer-controlled ignition systems Newer engines typically use computerized ignition systems. The computer has a timing map (lookup table) with spark advance values for all combinations of engine speed and engine load. The computer will send a signal to the ignition coil at the indicated time in the timing map in order to fire the spark plug. Most computers from original equipment manufacturers (OEM) cannot be modified so changing the timing advance curve is not possible. Overall timing changes are still possible, depending on the engine design. Aftermarket engine control units allow the tuner to make changes to the timing map. This allows the timing to be advanced or retarded based on various engine applications. A knock sensor may be used by the ignition system to allow for fuel quality variation. Bibliography Hartman, J. (2004). How to Tune and Modify Engine Management Systems. Motorbooks See also Electronic fuel injection (EFI) Firing order Valve timing References External links Setting Ignition Timing Curves Getting the Ignition Timing Right Ignition systems Synchronization
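The timing map described above is, in code, essentially a two-dimensional lookup table with interpolation between calibrated break-points. The following sketch is a generic illustration only: the map values and axis break-points are invented, and no particular ECU's data format is implied.

```python
from bisect import bisect_right

RPM_AXIS = [1000, 2000, 3000, 4000, 5000]     # engine speed break-points
LOAD_AXIS = [20, 40, 60, 80, 100]             # engine load break-points (% of full load)
# ADVANCE[i][j] = spark advance in degrees BTDC at RPM_AXIS[i], LOAD_AXIS[j].
# Advance rises with speed and falls with load, as discussed in the article.
ADVANCE = [
    [12, 11, 10,  9,  8],
    [20, 18, 16, 14, 12],
    [28, 25, 22, 19, 16],
    [32, 29, 26, 22, 18],
    [34, 31, 27, 23, 19],
]

def _interp(axis, value):
    """Return (lower index, fraction) for linear interpolation along one axis."""
    value = min(max(value, axis[0]), axis[-1])          # clamp to the table range
    i = min(bisect_right(axis, value) - 1, len(axis) - 2)
    frac = (value - axis[i]) / (axis[i + 1] - axis[i])
    return i, frac

def spark_advance(rpm, load):
    """Bilinear interpolation of the timing map."""
    i, fx = _interp(RPM_AXIS, rpm)
    j, fy = _interp(LOAD_AXIS, load)
    return (ADVANCE[i][j]         * (1 - fx) * (1 - fy)
            + ADVANCE[i + 1][j]   * fx       * (1 - fy)
            + ADVANCE[i][j + 1]   * (1 - fx) * fy
            + ADVANCE[i + 1][j + 1] * fx     * fy)

print(spark_advance(2500, 50))   # about 20.25 deg BTDC for these made-up values
```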
Ignition timing
[ "Engineering" ]
2,979
[ "Telecommunications engineering", "Synchronization" ]
7,073,120
https://en.wikipedia.org/wiki/Nuclear%20criticality%20safety
Nuclear criticality safety is a field of nuclear engineering dedicated to the prevention of nuclear and radiation accidents resulting from an inadvertent, self-sustaining nuclear chain reaction. Nuclear criticality safety is concerned with mitigating the consequences of a nuclear criticality accident. A nuclear criticality accident occurs from operations that involve fissile material and results in a sudden and potentially lethal release of radiation. Nuclear criticality safety practitioners attempt to prevent nuclear criticality accidents by analyzing normal and credible abnormal conditions in fissile material operations and designing safe arrangements for the processing of fissile materials. A common practice is to apply a double contingency analysis to the operation in which two or more independent, concurrent and unlikely changes in process conditions must occur before a nuclear criticality accident can occur. For example, the first change in conditions may be complete or partial flooding and the second change a re-arrangement of the fissile material. Controls (requirements) on process parameters (e.g., fissile material mass, equipment) result from this analysis. These controls, either passive (physical), active (mechanical), or administrative (human), are implemented by inherently safe or fault-tolerant plant designs, or, if such designs are not practicable, by administrative controls such as operating procedures, job instructions and other means to minimize the potential for significant process changes that could lead to a nuclear criticality accident. Principles As a simplistic analysis, a system will be exactly critical if the rate of neutron production from fission is exactly balanced by the rate at which neutrons are either absorbed or lost from the system due to leakage. Safely subcritical systems can be designed by ensuring that the potential combined rate of absorption and leakage always exceeds the potential rate of neutron production. The parameters affecting the criticality of the system may be remembered using the mnemonic MAGICMERV. Some these parameters are not independent from one another; for example, changing mass will result in a change of volume, among others. Mass: The probability of fission increases as the total number of fissile nuclei increases. The relationship is not linear. If a fissile body has a given size and shape but varying density and mass, there is a threshold below which criticality cannot occur. This threshold is called the critical mass. Absorption: Absorption removes neutrons from the system. Large amounts of absorbers are used to control or reduce the probability of a criticality. Good absorbers are boron, cadmium, gadolinium, silver, and indium. Geometry/shape: The shape of the fissile system affects how easily neutrons can escape (leak out) from it, in which case they are not available to cause fission events in the fissile material. Therefore, the shape of the fissile material affects the probability of occurrence of fission events. A shape with a large surface area, such as a thin slab, favors leakage and is safer than the same amount of fissile material in a small, compact shape such as a cube or sphere. Interaction of units: Neutrons leaking from one unit can enter another. Two units, which by themselves are sub-critical, could interact with each other to form a critical system. The distance separating the units and any material between them influences the effect. 
Concentration/Density: Neutron reactions leading to scattering, capture or fission reactions are more likely to occur in dense materials; conversely, neutrons are more likely to escape (leak) from low density materials. Moderation: Neutrons resulting from fission are typically fast (high energy). These fast neutrons do not cause fission as readily as slower (less energetic) ones. Neutrons are slowed down (moderated) by collision with atomic nuclei. The most effective moderating nuclei are hydrogen, deuterium, beryllium and carbon. Hence hydrogenous materials including oil, polyethylene, water, wood, paraffin, and the human body are good moderators. Note that moderation comes from collisions; therefore most moderators are also good reflectors. Enrichment: The probability of a neutron reacting with a fissile nucleus is influenced by the relative numbers of fissile and non-fissile nuclei in a system. The process of increasing the relative number of fissile nuclei in a system is called enrichment. Typically, low enrichment means less likelihood of a criticality and high enrichment means a greater likelihood. Reflection: When neutrons collide with other atomic particles (primarily nuclei) and are not absorbed, they are scattered (i.e. they change direction). If the change in direction is large enough, neutrons that have just escaped from a fissile body may be deflected back into it, increasing the likelihood of fission. This is called 'reflection'. Good reflectors include hydrogen, beryllium, carbon, lead, uranium, water, polyethylene, concrete, Tungsten carbide and steel. Volume: For a body of fissile material in any given shape, increasing the size of the body increases the average distance that neutrons must travel before they can reach the surface and escape. Hence, increasing the size of the body increases the likelihood of fission and decreases the likelihood of leakage. Hence, for any given shape (and reflection conditions - see below) there will be a size that gives an exact balance between the rate of neutron production and the combined rate of absorption and leakage. This is the critical size. Other parameters include: Temperature: This particular parameter is less commonly considered by criticality safety practitioners, as variations in temperature in a typical operating environment are often minimal or unlikely to adversely affect the criticality of the system. Often, it is assumed the actual temperature of the system being analyzed is close to room temperature. Notable exceptions to this assumption include high-temperature reactors and low-temperature cryogenic experiments. Heterogeneity: Blending fissile powders into solution, milling of powders or scraps, or other processes that affect the small-scale structure of fissile materials is important. While normally referred to as heterogeneity control, generally the concern is maintaining homogeneity because the homogeneous case is usually less reactive. Particularly, at lower enrichment, a system may be more reactive in a heterogeneous configuration compared to a homogeneous configuration. Physicochemical Form: Consists of controlling the physical state (i.e., solid, liquid, or gas) and form (e.g., solution, powder, green or sintered pellets, or metal) and/or chemical composition (e.g., uranium hexafluoride, uranyl fluoride, plutonium nitrate, or mixed oxide) of a particular fissile material. The physicochemical form could indirectly affect other parameters, such as density, moderation, and neutron absorption. 
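As a rough numerical illustration of how geometry, volume and leakage interact (not part of the source text), the sketch below uses a one-group diffusion approximation for a bare sphere, in which the non-leakage probability falls as the sphere gets smaller. The material constants are illustrative assumptions, not data for any real fissile system.

```python
import math

def k_effective(radius_cm, k_inf=1.6, diffusion_area_cm2=30.0):
    """Toy one-group, bare-sphere estimate: k_eff = k_inf / (1 + L^2 * B^2).

    k_inf              -- infinite-medium multiplication factor (assumed value)
    diffusion_area_cm2 -- L^2, the one-group diffusion area (assumed value)
    B^2 is the geometric buckling of a bare sphere, (pi / R)^2.
    """
    buckling = (math.pi / radius_cm) ** 2
    return k_inf / (1.0 + diffusion_area_cm2 * buckling)

# Smaller spheres leak a larger fraction of their neutrons, so k_eff drops
# well below 1 (safely subcritical); above the critical size, k_eff exceeds 1.
for r in (5, 10, 15, 20, 30):
    print(f"R = {r:2d} cm  ->  k_eff = {k_effective(r):.2f}")
```

In this toy model the critical size is simply the radius at which k_eff reaches exactly 1; real criticality safety evaluations rely on the validated deterministic and Monte Carlo codes discussed in the following section rather than hand formulas.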
Calculations and analyses To determine if any given system containing fissile material is safe, its neutron balance must be calculated. In all but very simple cases, this usually requires the use of computer programs to model the system geometry and its material properties. The analyst describes the geometry of the system and the materials, usually with conservative or pessimistic assumptions. The density and size of any neutron absorbers is minimised while the amount of fissile material is maximised. As some moderators are also absorbers, the analyst must be careful when modelling these to be pessimistic. Computer codes allow analysts to describe a three-dimensional system with boundary conditions. These boundary conditions can represent real boundaries such as concrete walls or the surface of a pond, or can be used to represent an artificial infinite system using a periodic boundary condition. These are useful when representing a large system consisting of many repeated units. Computer codes used for criticality safety analyses include OPENMC (MIT), COG (US), MONK (UK), SCALE/KENO (US), MCNP (US), and CRISTAL (France). Burnup credit Traditional criticality analyses assume that the fissile material is in its most reactive condition, which is usually at maximum enrichment, with no irradiation. For spent nuclear fuel storage and transport, burnup credit may be used to allow fuel to be more closely packed, reducing space and allowing more fuel to be handled safely. In order to implement burnup credit, fuel is modeled as irradiated using pessimistic conditions which produce an isotopic composition representative of all irradiated fuel. Fuel irradiation produces actinides consisting of both neutron absorbers and fissionable isotopes as well as fission products which absorb neutrons. In fuel storage pools using burnup credit, separate regions are designed for storage of fresh and irradiated fuel. In order to store fuel in the irradiated fuel store it must satisfy a loading curve which is dependent on initial enrichment and irradiation. See also Critical mass Criticality accident Nuclear and radiation accidents and incidents World Association of Nuclear Operators References Nuclear safety and security Nuclear technology
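A burnup-credit loading curve of the kind described above can be thought of as a minimum-required-burnup lookup keyed on initial enrichment. The sketch below is an illustration only: the curve points are invented and do not come from any licensed analysis.

```python
# Hypothetical loading curve: (initial enrichment in wt% U-235,
# minimum assembly burnup in GWd/tU required for the irradiated-fuel region).
LOADING_CURVE = [(2.0, 0.0), (3.0, 10.0), (4.0, 25.0), (5.0, 40.0)]

def minimum_burnup(enrichment_pct):
    """Linearly interpolate the minimum required burnup for a given enrichment."""
    points = sorted(LOADING_CURVE)
    if enrichment_pct <= points[0][0]:
        return points[0][1]
    if enrichment_pct >= points[-1][0]:
        return points[-1][1]
    for (e0, b0), (e1, b1) in zip(points, points[1:]):
        if e0 <= enrichment_pct <= e1:
            frac = (enrichment_pct - e0) / (e1 - e0)
            return b0 + frac * (b1 - b0)

def acceptable_for_irradiated_region(enrichment_pct, burnup_gwd_per_tu):
    """An assembly qualifies only if its burnup meets or exceeds the curve."""
    return burnup_gwd_per_tu >= minimum_burnup(enrichment_pct)

print(acceptable_for_irradiated_region(4.5, 30.0))  # False: these points require >= 32.5
print(acceptable_for_irradiated_region(3.5, 20.0))  # True: these points require >= 17.5
```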
Nuclear criticality safety
[ "Physics" ]
1,846
[ "Nuclear technology", "Nuclear physics" ]
7,073,138
https://en.wikipedia.org/wiki/Fin%20%28extended%20surface%29
In the study of heat transfer, fins are surfaces that extend from an object to increase the rate of heat transfer to or from the environment by increasing convection. The amount of conduction, convection, or radiation of an object determines the amount of heat it transfers. Increasing the temperature gradient between the object and the environment, increasing the convection heat transfer coefficient, or increasing the surface area of the object increases the heat transfer. Sometimes it is not feasible or economical to change the first two options. Thus, adding a fin to an object increases the surface area and can sometimes be an economical solution to heat transfer problems. One-piece finned heat sinks are produced by extrusion, casting, skiving, or milling. General case To create a tractable equation for the heat transfer of a fin, many assumptions need to be made: Steady state Constant material properties (independent of temperature) No internal heat generation One-dimensional conduction Uniform cross-sectional area Uniform convection across the surface area With these assumptions, conservation of energy can be used to create an energy balance for a differential cross section of the fin: $q_x = q_{x+dx} + dq_{conv}$. Fourier's law states that $q_x = -kA_c \frac{dT}{dx}$, where $A_c$ is the cross-sectional area of the differential element. Furthermore, the convective heat flux can be determined via the definition of the heat transfer coefficient h, $dq_{conv} = h\,dA_s\,(T - T_\infty)$, where $T_\infty$ is the temperature of the surroundings. The differential convective heat flux can then be determined from the perimeter of the fin cross-section P, $dq_{conv} = hP\,(T - T_\infty)\,dx$. The equation of energy conservation can now be expressed in terms of temperature, $-kA_c \frac{dT}{dx}\Big|_x = -kA_c \frac{dT}{dx}\Big|_{x+dx} + hP\,(T - T_\infty)\,dx$. Rearranging this equation and using the definition of the derivative yields the following differential equation for temperature, $\frac{d}{dx}\!\left(A_c \frac{dT}{dx}\right) - \frac{hP}{k}(T - T_\infty) = 0$; the derivative on the left can be expanded to the most general form of the fin equation, $\frac{d^2T}{dx^2} + \frac{1}{A_c}\frac{dA_c}{dx}\frac{dT}{dx} - \frac{hP}{kA_c}(T - T_\infty) = 0$. The cross-sectional area, perimeter, and temperature can all be functions of x. Uniform cross-sectional area If the fin has a constant cross-section along its length, the area and perimeter are constant and the differential equation for temperature is greatly simplified to $\frac{d^2\theta}{dx^2} = m^2\theta$, where $\theta(x) = T(x) - T_\infty$ and $m^2 = \frac{hP}{kA_c}$. The general solution is $\theta(x) = C_1 e^{mx} + C_2 e^{-mx}$. The constants $C_1$ and $C_2$ can now be found by applying the proper boundary conditions. Solutions The base of the fin is typically set to a constant reference temperature, $\theta(0) = \theta_b = T_b - T_\infty$. There are four common fin tip ($x = L$) conditions, however: the tip can be exposed to convective heat transfer, insulated, held at a constant temperature, or so far away from the base as to reach the ambient temperature. For the first case, the second boundary condition is that there is free convection at the tip. Therefore, $hA_c\,\theta(L) = -kA_c \frac{d\theta}{dx}\Big|_{x=L}$, which simplifies to $h\,\theta(L) = -k \frac{d\theta}{dx}\Big|_{x=L}$. The two boundary conditions can now be combined to produce $C_1 + C_2 = \theta_b$ and $h\left(C_1 e^{mL} + C_2 e^{-mL}\right) = km\left(C_2 e^{-mL} - C_1 e^{mL}\right)$. These equations can be solved for the constants $C_1$ and $C_2$ to find the temperature distribution, which is in the table below. A similar approach can be used to find the constants of integration for the remaining cases. For the second case, the tip is assumed to be insulated, or in other words to have a heat flux of zero. Therefore, $\frac{d\theta}{dx}\Big|_{x=L} = 0$. For the third case, the temperature at the tip is held constant. Therefore, the boundary condition is: $\theta(L) = \theta_L$. For the fourth and final case, the fin is assumed to be infinitely long. Therefore, the boundary condition is: $\theta(L) \to 0$ as $L \to \infty$. Finally, we can use the temperature distribution and Fourier's law at the base of the fin to determine the overall rate of heat transfer, $q_f = -kA_c \frac{dT}{dx}\Big|_{x=0}$. The results of the solution process are summarized in the table below. Performance Fin performance can be described in three different ways. 
The first is fin effectiveness. It is the ratio of the fin heat transfer rate ($q_f$) to the heat transfer rate of the object if it had no fin. The formula for this is: $\epsilon_f = \frac{q_f}{hA_{c,b}\,\theta_b}$, where $A_{c,b}$ is the fin cross-sectional area at the base. Fin performance can also be characterized by fin efficiency. This is the ratio of the fin heat transfer rate to the heat transfer rate of the fin if the entire fin were at the base temperature, $\eta_f = \frac{q_f}{hA_f\,\theta_b}$; in this equation $A_f$ is equal to the surface area of the fin. The fin efficiency will always be less than one, as assuming the temperature throughout the fin is at the base temperature would increase the heat transfer rate. The third way fin performance can be described is with overall surface efficiency, $\eta_o = \frac{q_t}{hA_t\,\theta_b}$, where $A_t$ is the total area and $q_t$ is the sum of the heat transfer from the unfinned base area and all of the fins. This is the efficiency for an array of fins. Inverted fins (cavities) Open cavities are defined as the regions formed between adjacent fins and stand for the essential promoters of nucleate boiling or condensation. These cavities are usually utilized to extract heat from a variety of heat generating bodies. From 2004 until now, many researchers have been motivated to search for the optimal design of cavities. Uses Fins are most commonly used in heat exchanging devices such as radiators in cars, computer CPU heatsinks, and heat exchangers in power plants. They are also used in newer technology such as hydrogen fuel cells. Nature has also taken advantage of the phenomena of fins; the ears of jackrabbits and fennec foxes act as fins to release heat from the blood that flows through them. References Heat transfer
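To make the uniform-cross-section results above concrete, here is a minimal numerical sketch for a straight rectangular fin with an adiabatic tip, using the standard textbook results $q_f = \sqrt{hPkA_c}\,\theta_b\tanh(mL)$ and $\eta_f = \tanh(mL)/(mL)$. The material and geometry values are illustrative assumptions, not figures from the article.

```python
import math

# Illustrative values (assumptions, not from the article):
k = 200.0        # thermal conductivity of the fin material, W/(m*K)
h = 25.0         # convection coefficient, W/(m^2*K)
t = 0.002        # fin thickness, m
w = 0.05         # fin width, m
L = 0.04         # fin length, m
T_base, T_inf = 80.0, 20.0        # base and ambient temperatures, deg C

A_c = t * w                       # cross-sectional area
P = 2 * (t + w)                   # perimeter of the cross-section
theta_b = T_base - T_inf          # base excess temperature

m = math.sqrt(h * P / (k * A_c))
q_fin = math.sqrt(h * P * k * A_c) * theta_b * math.tanh(m * L)   # adiabatic-tip heat rate, W
q_no_fin = h * A_c * theta_b                                      # heat rate from the bare base area

efficiency = math.tanh(m * L) / (m * L)       # eta_f, always less than 1
effectiveness = q_fin / q_no_fin              # epsilon_f

print(f"m*L = {m * L:.3f}")
print(f"fin heat rate = {q_fin:.2f} W (bare base area alone: {q_no_fin:.2f} W)")
print(f"efficiency    = {efficiency:.2f}")
print(f"effectiveness = {effectiveness:.1f}")
```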
Fin (extended surface)
[ "Physics", "Chemistry" ]
1,019
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
7,073,526
https://en.wikipedia.org/wiki/Management%20of%20Pacific%20Northwest%20riparian%20forests
Management of Pacific Northwest riparian forests is necessary because many of these forests have been dramatically changed from their original makeup. The primary interest in riparian forest and aquatic ecosystems under the Northwest Forest Plan (NWFP) is the need to restore stream habitat for fish populations, particularly anadromous salmonids. Some of these forests have been grazed by cattle or other livestock. The heavy hooves of these animals compact the soil. This compaction does not allow the water to be absorbed into the ground, so the water runs off into the stream carrying topsoil along the way. The simplification of the stream itself has also had negative effects. The large woody debris in the streams has been removed to allow for easy access to the stream and for better travel in the streams themselves. But the faster moving current erodes the stream banks, filling the stream with more sediment. The removal of trees on the stream banks also leads to erosion and stream degradation. Another effect of the removal of trees is an increase in stream temperatures because of the lack of shade. These changes to riparian forests can be fixed through three steps; Creation of riparian reserves Restoration of channel complexity Silviculture practices These steps will help restore riparian forest ecosystems which will directly help the salmon populations. Riparian forest restoration The following steps used to help restore and maintain healthy riparian forests came from the Bureau of Land Management’s best management practices (BMPs) in the Roseburg District. The first step towards riparian forest restoration should be the establishment of riparian reserves. The second step is to restore channel complexity. The third step is to apply silvicultural treatments to restore large conifers. The large conifer species would be western red cedar Thuja plicata and western hemlock Tsuga heterophylla. These three steps will help direct the ecosystem back to its pre-disturbed state. Riparian reserves The riparian reserve is the designated width from the stream where restrictions on what can be done are placed in order to protect the functions of the land and water in that reserved area. There are three different riparian reserve widths: Fish bearing stream widths are on each side of the stream. Permanently flowing non-fish bearing stream widths are . Seasonally following or intermittent stream widths are . Some activities that are restricted or limited in the riparian reserve include: Cattle grazing. Mineral lease operations. Chemical loading operations or similar toxic activities. Disturbance of unstable banks and headwalls. Operation of tracked equipment on slopes greater than 30% Chemical applications Timber harvest or fuel wood cutting ( except for salvage operations & management of stands) Road construction. Channel complexity restoration The placement of large woody debris (LWD) in streams creates pools and side channels. The pools provide habitat for aquatic organisms while the side channels help alleviate flooding. The LWD also controls the routing of sedimentation. The source of the LWD should be outside of the riparian reserve whenever possible so as not to promote erosion in the riparian reserve. However, if usable trees are generated during management, then they can be used to add LWD. Any trees that naturally fall in the stream are an advantage and should be left. 
Silviculture techniques There are three silvicultural techniques that will help restore large conifers (western red cedar & western hemlock) in riparian forests. Since silviculture is a cyclical process, the numbering of the techniques doesn't denote the order in which these operations should begin or the importance of the step. Site preparation Seeding Single tree selection Site preparation The role of site preparation is to modify current growing vegetative conditions making the site suitable for the desired seedlings. Western red cedar and western hemlock are the desired seedlings. The goals of site preparation in this case are: Control competing ground vegetation Erosion control Nutrient balancing Promote decomposition of surface litter layer Expose mineral soil. Mechanical site preparation will be difficult Because of the heavy equipment's size, and inability to maneuver in the small spaces left by single tree selection. Prescribed burning is another method of site preparation, but will not work because the shallow roots of western hemlock would get damaged, hurting the seed sources. Prescribed burning would also damage the thin bark of both western hemlock and red cedar girdling the trees. Chemical applications are restricted in the riparian reserves because of the danger of runoff or leaching of chemicals into the stream. So the methods that will be used for site preparation are: Passive site preparation Manual site preparation. The passive site preparation will entail keeping the debris created by naturally falling trees where they land. Keeping the slash and smaller trees that are generated by tree selections on the ground is another passive site preparation that will work well within the riparian forest. This will supply a good rotting seed bed for both Western Redcedar and Western Hemlock. Both species also can use disturbed mineral seed beds for regeneration from seed. To obtain disturbed mineral soil in the small areas that single tree selection creates manually turning up the soil with hand tools or small tillers is the manual site preparation option. Seeding Seeding is done following site preparation. Seeding is one way to ensure the survival of the desired species on a site. With seeding, foresters have control of genetic makeup of the species and the source of the seed. Natural regeneration may be obtained because of the high numbers of annual seed crops (100,000–1 million/acre). Where annual seed production is low western red cedar can be direct seeded in the fall if the soil moisture is adequate. High numbers of seeds will be needed to reach the desired stocking level. Containerized stocking also works well. In the coastal ranges, 2-year-old bare-root stock seems to be most efficient. Containerized stock plantings in the spring perform better than bare-root stock in the interior. Western hemlock has a good rate of survival in a wide range of conditions. This will allow for natural regeneration on sites that have good organic or mineral soil. If the site is not suitable for natural regeneration then the use of container-grown stock should be used. Hemlock doesn't survive well with the bare-root stock method. Both western red cedar and western hemlock are able to reproduce by some form of vegetative reproduction. Western red cedar reproduces in three ways of vegetative form; layering, rooting of fallen branches, and branch development on fallen trees. In some areas of the Cascades, this form of regeneration is the most successful. 
Another option for the establishment of red cedar is the use of stem cuttings. Western hemlock also has vegetative reproduction capabilities. Hemlock can be propagated by layering and from cuttings. Single-tree selection The single-tree selection harvest method works best within the riparian ecosystem. Single-tree selection is a good method to keep the western hemlock and western red cedar on the site. If the stand were left alone and the forest naturally created gaps for succession, then other species that are less tolerant than the desired western red cedar and western hemlock could overtake the created gaps. Single-tree selection will contribute minimally to erosion, still provide habitat for wildlife, be aesthetically pleasing to the eye, and follow the best management practices (BMPs) that are associated with riparian forests. Single-tree selection gives the forest a great vertical distribution of foliage. The last reason for doing single-tree selection is that it spreads out income over a longer period of time. This could help pay for any costs associated with the stand's management. Single-tree selection replicates the natural process called gap-phase. Gap-phase is an event that happens in a forest when a tree in the upper canopy of the forest falls down, usually from a strong wind. The gap formed in the upper canopy allows enough sunlight to come through the opening and reach saplings at the forest floor. These saplings can grow and eventually penetrate the canopy. The natural gap-phase process may only open the total stand by 1 percent. Single-tree selection is different from the natural process because the openings are created more often. Since both species are tolerant, an opening of the stand by 10 percent each cutting cycle would be enough for a stand to do well. An uneven-aged forest is a result of periodically opening the canopy. Since both Western Hemlock and Western Redcedar are shade tolerant species, a basal area of is recommended and would be the maximum basal area the stand could support. The q-factor for these species is 1.2 because of their tolerance. So the number of trees in the size class would be 7 trees. A cutting cycle of 20 years is recommended to ensure the stand is following the ideal stocking curve for western red cedar and western hemlock. Western Redcedar can tolerate mixed-species conditions in the understory and is often overtopped by species such as Douglas-fir, Western White Pine and Western Hemlock (Minore, 1990). Western Hemlock responds well to release after long periods of suppression. After 50 to 60 years, the advanced regeneration will grow vigorously after overstory removal (Packee, 1990). Single-tree selection works well with these growth characteristics of both species. See also Riparian zone Riparian-zone restoration Fir and spruce forests References Barnes, Burton V., et al. Forest Ecology. New York. John Wiley & Sons Inc. Ch5 pp. 113–114. 1998 Conway, Flaxen D. L. "Timber in Oregon: History and Projected Trends" Oregon State University Extension Service. February 2005. [Online] URL: http://cesc.orst.edu/agcomwebfile/edmat/html/em/em8544/em8544po4.htm Dwire, Kate. "Riparian Resources." USDA Forest Service, Rocky Mountain Research Station. Forest Service Handbook 2509.25 page 4 of 23; section 12, page 13–19 section; 13 Centennial, Wyoming. September 23, 2004. Gray, A. N. 2000. Adaptive ecosystem management in the Pacific Northwest: a case study from coastal Oregon. Conservation Ecology 4(2): 6. 
[Online] URL: http://www.consecol.org/vol4/iss2/art6/htm Nyland, Ralph D. Silviculture. Boston. McGraw-Hill. Ch. 5 pp: 88–106; ch. 8 pp: 177–180; ch. 11 pp. 237–243; ch. 22 pp: 507–511, 518. 2002. Packee, E.C. "Silvics of North America vol1. Conifers" United States Department of Agriculture Forest Service Agriculture Hdbk 654 western hemlock. 11 February 2005 [Online] http://www.na.fs.fed.us/spfo/pubs/silvics_manual/Volume_1/tsuga/heterophylla.htm Southerland, Doug. "Washington Forest Health issue in 2002" Washington State Department of Natural Resources. February 2005. [Online] URL: https://web.archive.org/web/20060924203631/http://www.dnr.wa.gov/htdocs/rp/forhealth/issues/2002issues.htm Steiner, Linda. "Trout & Salmon" Pennsylvania Fishes Chapter 16. April 2005 [Online] URL: http://sites.state.pa.us/PA_Exec/Fish_Boat/pafish/fishhtms/chapter16.htm Zielinski, Elaine. "Record of Decision of the Roseburg District Resource Management Plan" Bureau of Land Management. February 2005. [Online] URL: http://www.or.blm.gov/roseburg/rod_rmp/rod.htm Riparian zone Forestry in the United States Forestry in Canada Flora of the Northwestern United States Land management in the United States Northwestern United States
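The q-factor prescription in the single-tree selection section above can be made concrete with a short calculation. The sketch below builds the reverse-J diameter distribution implied by q = 1.2 and scales it to a residual basal area; the diameter-class range and the 150 ft²/acre residual basal area are illustrative assumptions, since the recommended value is not given in the text.

```python
import math

Q = 1.2                          # stems in a class relative to the next larger class
DIAMETER_CLASSES_IN = list(range(4, 32, 2))   # 4" to 30" DBH classes (assumed range)
TARGET_BASAL_AREA = 150.0        # residual ft^2/acre -- an assumed value, not from the text

def basal_area_ft2(dbh_in):
    """Basal area of a single stem, in ft^2, from diameter at breast height in inches."""
    return math.pi * (dbh_in / 2.0) ** 2 / 144.0   # about 0.005454 * dbh^2

n = len(DIAMETER_CLASSES_IN)
relative = [Q ** (n - 1 - i) for i in range(n)]    # reverse-J shape: many small, few large stems
unit_basal_area = sum(r * basal_area_ft2(d) for r, d in zip(relative, DIAMETER_CLASSES_IN))
scale = TARGET_BASAL_AREA / unit_basal_area        # trees/acre in the largest class

for d, r in zip(DIAMETER_CLASSES_IN, relative):
    print(f"{d:2d} in class: {scale * r:5.1f} trees/acre")
```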
Management of Pacific Northwest riparian forests
[ "Environmental_science" ]
2,477
[ "Riparian zone", "Hydrology" ]
7,073,994
https://en.wikipedia.org/wiki/National%20Organization%20for%20the%20Professional%20Advancement%20of%20Black%20Chemists%20and%20Chemical%20Engineers
The National Organization for the Professional Advancement of Black Chemists and Chemical Engineers or NOBCChE (pronounced No-be-shay) is a nonprofit, professional organization. NOBCChE's goal is to increase the number of minorities in science, technology, and engineering fields. The organization accomplishes this by creating bonds with professionals working at science-related companies and faculty at local school districts in order to get more minorities to pursue a career in science and engineering fields. NOBCChE focuses on establishing diversity programs for the professional development of young kids and to spread knowledge in science and engineering. NOBCChE chapters can be found nationwide. History NOBCChE was co-founded in 1972 by a group of chemists and chemical engineers. Initially, the organization was financially aided by the Haas Community Fund and Drexel University. After receiving positive feedback and interest from other black chemists and chemical engineers, the founders decided to expand on their idea and set up a structured idea of what they wanted the society to emphasize. Two years later, the first national meeting was held in New Orleans. At the conference, black chemists and chemical engineers found that they could discuss career-related issues with others who were in similar fields. Today, the national conference features various workshops, research presentations, and high school science bowls. NOBCChE also presents the Percy L. Julian Award, given to African-American scientists who have made significant contributions to the areas of pure or applied research in science or engineering. Founders of NOBCChE Joseph N. Cannon, Chemical engineer and professor - Howard University Lloyd Ferguson, Chemist and professor - California State University William M. Jackson, Chemist and professor - Howard University William Guillory, Chemist and professor - Drexel University Henry C. McBay, Chemist and professor - Morehouse College Charles Merideth, Chemist and chancellor of the Atlanta University Center, Inc. James Porter, Chemical engineer and professor - MIT Presidents The President has the overall responsibility for affecting the objectives of NOBCChE, oversees the day-to-day activities of the organization, and is the official representative of the organization. For over 45 years, professionals from industry, academia, and government have volunteered their time to lead the organization in the mission of encouraging education and careers in STEM for people of color. Each NOBCChE President develops his or her own set of goals with corresponding initiatives and events. *Affiliation at the time of election References External links NOBCChE: Welcome to NOBCChE NOBCChE Midwest Regional Homepage National Society of Black Engineers (NSBE) Chemistry societies African-American professional organizations
National Organization for the Professional Advancement of Black Chemists and Chemical Engineers
[ "Chemistry" ]
522
[ "Chemistry societies", "nan" ]
7,074,070
https://en.wikipedia.org/wiki/Suillus%20granulatus
Suillus granulatus is a pored mushroom of the genus Suillus in the family Suillaceae. It is similar to the related S. luteus, but can be distinguished by its ringless stalk. Like S. luteus, it is an edible mushroom that often grows in a symbiosis (mycorrhiza) with pine. It has been commonly known as the weeping bolete, or the granulated bolete. Previously thought to exist in North America, that species has now been confirmed to be the rediscovered Suillus weaverae. Taxonomy Suillus granulatus was first described by Carl Linnaeus in 1753 as a species of Boletus. It was given its current name by French naturalist Henri François Anne de Roussel when he transferred it to Suillus in 1796. Suillus is an ancient term for fungi, and is derived from the word "swine". Granulatus means "grainy" and refers to the granular dots on the upper part of the stem. However, in some specimens the granular dots may be inconspicuous and not darkening with age; thus the name S. lactifluus, "oozing milk" was formerly applied to this form as it is not notably characterized by granular dots. Description The orange-brown, to brown-yellow cap is viscid (sticky) when wet, and shiny when dry, and is usually 4 to 12 cm in diameter. The stem is pale yellow, of uniform thickness, with tiny brownish granules at the apex, and about 4–8 tall, 1–2 cm wide. It is without a ring. The tubes and pores are small, pale yellow, and exude pale milky droplets when young. The flesh is also pale yellow. Suillus granulatus is often confused with Suillus luteus, which is another common and widely distributed species occurring in the same habitat. S. luteus has conspicuous a partial veil and ring, and lacks the milky droplets on the pores. Also similar is Suillus brevipes, which has a short stipe in relation to the cap, and which does not ooze droplets from the pore surface. Suillus pungens is also similar. Bioleaching Bioleaching is the industrial process of using living organisms to extract metals from ores, typically where there is only a trace amount of the metal to be extracted. It has been found that Suillus granulatus can extract trace elements (titanium, calcium, potassium, magnesium and lead) from wood ash and apatite. Distribution and habitat Grows with Pinus (pine trees) on both calcareous and acid soils, and sometimes occurs in large numbers. Suillus granulatus is the most widespread pine-associating Suillus species in warm climates. It is common in Britain, Europe. It is associated with Japanese red pine (Pinus densiflora) in South Korea. A native to the Northern Hemisphere, the fungus has been introduced into Australia under Pinus radiata. It is also found in Africa, New Zealand, Hawaii, Argentina and southern Chile. Edibility Suillus granulatus is edible and variously considered to be of either good or poor quality. The gelatinous pileipellis should be removed first, and like all Suillus species, the tubes are best removed before cooking. It has been reported to cause gastric upset in some cases. It is sometimes included in commercially produced mushroom preserves. The fruit bodies—low in fat, high in fiber and carbohydrates, and a source of nutraceutical compounds—can be considered a functional food. Toxicity Suillus granulatus sometimes causes contact dermatitis to those who handle it. References R.Phillips-Mushrooms 2006 Marcel Bonn-Mushrooms and Toadstools of Britain and North West Europe. 
granulatus Edible fungi Fungi described in 1753 Fungi of Europe Fungi of Africa Fungi of Australia Fungi native to Australia Fungi of New Zealand Fungi of North America Fungi of South America Taxa named by Carl Linnaeus Fungi of Oceania Fungi without expected TNC conservation status Fungus species
Suillus granulatus
[ "Biology" ]
838
[ "Fungi", "Fungus species" ]
7,074,436
https://en.wikipedia.org/wiki/List%20of%20graphical%20user%20interface%20elements
Graphical user interface elements are those elements used by graphical user interfaces (GUIs) to offer a consistent visual language to represent information stored in computers. These make it easier for people with few computer skills to work with and use computer software. This article explains the most common elements of visual language interfaces found in the WIMP ("window, icon, menu, pointer") paradigm, although many are also used at other graphical post-WIMP interfaces. These elements are usually embodied in an interface using a widget toolkit or desktop environment. Structural elements Graphical user interfaces use visual conventions to represent the generic information shown. Some conventions are used to build the structure of the static elements on which the user can interact, and define the appearance of the interface. Window A window is an area on the screen that displays information, with its contents being displayed independently from the rest of the screen. An example of a window is what appears on the screen when the "My Documents" icon is clicked in Microsoft Windows. It is easy for a user to manipulate a window: it can be shown and hidden by clicking on an icon or application, and it can be moved to any area by dragging it (that is, by clicking in a certain area of the window – usually the title bar along the top – and keeping the pointing device's button pressed, then moving the pointing device). A window can be placed in front or behind another window, its size can be adjusted, and scrollbars can be used to navigate the sections within it. Multiple windows can also be open at one time, in which case each window can display a different application or file – this is very useful when working in a multitasking environment. The system memory is the only limitation to the number of windows that can be open at once. There are also many types of specialized windows. A container window encloses other windows or controls. When it is moved or resized, the enclosed items move, resize, reorient, or are clipped by the container window. A browser window allows the user to view and navigate through a collection of items, such as files or web pages. Web browsers are an example of these types of windows. Text terminal windows present a character-based, command-driven text user interfaces within the overall graphical interface. MS-DOS and Unix consoles are examples of these types of windows. Terminal windows often conform to the hotkey and display conventions of CRT-based terminals that predate GUIs, such as the VT-100. A child window opens automatically or as a result of a user activity in a parent window. Pop-up windows on the Internet can be child windows. A message window, or dialog box, is a type of child window. These are usually small and basic windows that are opened by a program to display information to the user and/or get information from the user. They almost always have one or more buttons, which allow the user to dismiss the dialog with an affirmative, negative, or neutral response. Menu Menus allow the user to execute commands by selecting from a list of choices. Options are selected with a mouse or other pointing device within a GUI. A keyboard may also be used. Menus are convenient because they show what commands are available within the software. This limits the amount of documentation the user reads to understand the software. A menu bar is displayed horizontally across the top of the screen and/or along the tops of some or all windows. 
A pull-down menu is commonly associated with this menu type. When a user clicks on a menu option, the pull-down menu will appear. A menu has a visible title within the menu bar. Its contents are only revealed when the user selects it with a pointer. The user is then able to select the items within the pull-down menu. When the user clicks elsewhere, the content of the menu will disappear. A context menu is invisible until the user performs a specific mouse action, like pressing the right mouse button. When the software-specific mouse action occurs, the menu will appear under the cursor. Menu extras are individual items within or at the side of a menu. Icons An icon is a small picture that represents objects such as a file, program, web page, or command. They are a quick way to execute commands, open documents, and run programs. Icons are also very useful when searching for an object in a browser list, because in many operating systems all documents using the same extension will have the same icon. Controls (or widgets) Interface elements known as graphical control elements, controls or widgets are software components that a computer user interacts with through direct manipulation to read or edit information about an application. Each widget facilitates a specific user-computer interaction. Structuring a user interface with widget toolkits allows developers to reuse code for similar tasks and provides users with a common language for interaction, maintaining consistency throughout the whole information system. Common uses for widgets involve the display of collections of related items (such as with various list and canvas controls), initiation of actions and processes within the interface (buttons and menus), navigation within the space of the information system (links, tabs and scrollbars), and representing and manipulating data values (such as labels, check boxes, radio buttons, sliders, and spinners). Tabs A tab is typically a small rectangular box which usually contains a text label or graphical icon associated with a view pane. When activated, the view pane, or window, displays widgets associated with that tab; groups of tabs allow the user to switch quickly between different widgets. This is used in all modern web browsers. With these browsers, you can have multiple web pages open at once in one window, and quickly navigate between them by clicking on the tabs associated with the pages. Tabs are usually placed in groups at the top of a window, but may also be grouped on the side or bottom of a window. Tabs are also present in the settings panes of many applications. Microsoft Windows, for example, uses tabs in most of its control panel dialogues. Interaction elements Some common idioms for interaction have evolved in the visual language used in GUIs. Interaction elements are interface objects that represent the state of an ongoing operation or transformation, either as visual reminders of the user's intent (such as the pointer), or as affordances showing places where the user may interact. Cursor A cursor is an indicator used to show the position on a computer monitor or other display device that will respond to input from a text input or pointing device. Pointer The pointer echoes movements of the pointing device, commonly a mouse or touchpad. The pointer is the place where actions take place that are initiated through direct manipulation gestures such as click, touch and drag. 
Insertion point The caret, text cursor or insertion point represents the point of the user interface where the focus is located. It represents the object that will be used as the default subject of user-initiated commands such as writing text, starting a selection or a copy-paste operation through the keyboard. Selection A selection is a list of items on which user operations will take place. The user typically adds items to the list manually, although the computer may create a selection automatically. Adjustment handle A handle is an indicator of a starting point for a drag and drop operation. Usually the pointer shape changes when placed on the handle, showing an icon that represents the supported drag operation. See also Interaction technique Geometric primitive References
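To ground the structural elements described above (a window, a menu bar with a pull-down menu, and simple controls such as a label and a button), here is a minimal sketch using Python's standard tkinter widget toolkit. It is one illustrative toolkit among many, not a toolkit the article itself prescribes.

```python
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()                      # the top-level window
root.title("Widget demo")

# Menu bar with one pull-down menu.
menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=False)
file_menu.add_command(label="About",
                      command=lambda: messagebox.showinfo("About", "A minimal GUI"))
file_menu.add_separator()
file_menu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

# A label and a button: two simple controls (widgets).
label = tk.Label(root, text="Click the button")
label.pack(padx=20, pady=10)

def on_click():
    label.config(text="Button clicked")

tk.Button(root, text="Click me", command=on_click).pack(pady=10)

root.mainloop()                     # hand control to the toolkit's event loop
```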
List of graphical user interface elements
[ "Technology" ]
1,530
[ "Components", "Graphical user interface elements" ]
7,074,545
https://en.wikipedia.org/wiki/Dawn%20simulation
Dawn simulation is a technique that involves timing a light, often called a wake-up light, sunrise alarm clock, or natural light alarm clock, in the bedroom to come on gradually, over a period of 30 minutes to 2 hours, before awakening to simulate dawn. History The concept of dawn simulation was first patented in 1890 as "mechanical sunrise". Modern electronic units were patented in 1973. Variations and improvements seem to get patented every few years. Clinical trials were conducted by David Avery, MD, in the 1980s at Columbia University following a long line of basic laboratory research that showed animals' circadian rhythms to be exquisitely sensitive to the dim, gradually rising dawn signal at the end of the night. The first modern commercial product was created by Outside In Cambridge UK (now known as Lumie) in 1993. https://www.lumie.com/30-years-of-light-therapy/5/ Clinical use There are two types of dawn that have been used effectively in a clinical setting: a naturalistic dawn mimicking a springtime sunrise (but used in mid-winter when it is still dark outside), and a sigmoidal-shaped dawn (30 minutes to 2 hours). When used successfully, patients are able to sleep through the dawn and wake up easily at the simulated sunrise, after which the day's treatment is over. The theory behind dawn simulation is based on the fact that early morning light signals are much more effective at advancing the biological clock than are light signals given at other times of day (see Phase response curve). Comparison with bright light therapy Dawn simulation generally uses light sources that range in illuminance from 100 to 300 lux, while bright light boxes are usually in the 10,000-lux range. Approximately 19% of patients discontinue post-awakening bright light therapy due to inconvenience. Because the entire treatment is complete before awakening, dawn simulation may be a more convenient alternative to post-awakening bright light therapy. In terms of efficacy, some studies have shown dawn simulation to be more effective than standard bright light therapy while others have shown no difference or shown that bright light therapy is superior. Some patients with seasonal affective disorder use both dawn simulation and bright light therapy to provide maximum effect at the start of the day. Other uses In an elaboration of the method, patients have also been presented with a dim dusk signal at bedtime, with indications that it eases sleep onset. In addition, the technique has been used clinically with patients who suffer from delayed sleep phase syndrome, helping them to awaken earlier in gradual steps, as the simulated dawn is moved earlier. Non-clinical sleep and wake-up uses A dawn simulator can be used as an alarm clock. Light enters through the eyelids triggering the body to begin its wake-up cycle, including the release of cortisol, so that by the time the light is at full brightness, sleepers wake up on their own, without the need for an alarm. Most commercial alarm clocks include a "dusk" mode as well for bedtime. References Further reading Circadian rhythm
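Since the article describes both a naturalistic dawn and a sigmoidal-shaped ramp, a small sketch may help show what a sigmoidal brightness schedule looks like. The 30-minute ramp length and 250 lux ceiling are illustrative choices within the ranges quoted above, not prescriptions.

```python
import math

RAMP_MINUTES = 30       # ramp length; the article quotes 30 minutes to 2 hours
MAX_LUX = 250           # final illuminance; dawn simulators typically reach 100-300 lux

def sigmoid_dawn_lux(minutes_since_start, ramp=RAMP_MINUTES, peak=MAX_LUX, steepness=8.0):
    """Illuminance at a given time into the ramp, following a logistic (sigmoid)
    curve centred on the midpoint of the ramp."""
    x = (minutes_since_start - ramp / 2.0) / (ramp / steepness)
    return peak / (1.0 + math.exp(-x))

# Print the schedule every 5 minutes: dim at first, steepest in the middle,
# and levelling off at the peak just before the wake-up time.
for t in range(0, RAMP_MINUTES + 1, 5):
    print(f"t = {t:2d} min  ->  {sigmoid_dawn_lux(t):6.1f} lux")
```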
Dawn simulation
[ "Biology" ]
631
[ "Behavior", "Sleep", "Circadian rhythm" ]
7,074,591
https://en.wikipedia.org/wiki/Burra%20Charter
The Burra Charter is a document published by the Australian ICOMOS which defines the basic principles and procedures to be followed in the conservation of Australian heritage places. The Charter was first endorsed in 1979 as an Australian adaptation of the Venice Charter, but with the introduction of a new analytical conservation model of heritage assessment that recognised forms of cultural heritage beyond tangible and physical forms. The Charter was the first national heritage document to replace the Venice Charter as the basis of national heritage practice. The Charter has been revised on four occasions since 1979, and has been internationally influential in providing standard guidelines for heritage conservation practice. History and development In 1979, the Australia ICOMOS Charter for the Conservation of Places of Cultural Significance was adopted at a meeting of Australia ICOMOS (International Council on Monuments and Sites) at the historic mining town of Burra, South Australia. It was given the short title of The Burra Charter. The Charter accepted the philosophy and concepts of the ICOMOS Venice Charter, but recast them in a form which would be practical and useful in Australia. The Charter is periodically revised and updated, and the 2004 publication The Illustrated Burra Charter elaborates and explains the principles of the 1999 version in an easy-to-understand form. In 2013 the Charter was again revised and updated. The Burra Charter has been adopted by the Australian Heritage Council (December 2004), the Heritage Council of New South Wales (December 2004), the Queensland Heritage Council (January 2005) and the Heritage Council of Victoria (July 2010). It is also recommended by the Heritage Council of Western Australia and the Tasmanian Heritage Council. Contents The Burra Charter begins with a series of definitions, such as: Cultural significance means aesthetic, historic, scientific, social or spiritual value for past, present or future generations. Conservation means all the processes of looking after a place so as to retain its cultural significance. The types of actions that might be taken in the Conservation of a heritage place are defined as: Preservation: Maintaining a place in its existing state and preventing further deterioration. Restoration: Returning a place to a known earlier state by removing accretions or by reassembling existing elements without the introduction of new material. Reconstruction: Returning a place to a known earlier state, where there is sufficient evidence, and is distinguished from restoration by the introduction of new material. References International cultural heritage documents Architectural history Urban planning Historic preservation Historic preservation in Australia Conservation and restoration of cultural heritage
Burra Charter
[ "Engineering" ]
488
[ "Urban planning", "Architectural history", "Architecture" ]
7,074,727
https://en.wikipedia.org/wiki/Mesoporous%20silicate
Mesoporous silicates are silicates with an ordered porous morphology, with pore sizes in the mesopore range (roughly 2–50 nm). Background Porous inorganic solids have found great utility as catalysts and sorption media because of their large internal surface area, i.e. the presence of voids of controllable dimensions at the atomic, molecular, and nanometer scales. With increasing environmental concerns worldwide, nanoporous materials have become more important and useful for the separation of polluting species and the recovery of useful ones. In recent years there has been great progress in applying environmentally friendly zeolites in heterogeneous reaction catalysis. Their success is related to their ability to convert molecules having a kinetic diameter below 1 nm, but they become inadequate when reactants with sizes above the dimensions of the pores have to be processed. Research efforts to synthesize zeolites with larger pore diameter, high structural stability and catalytic activity have not given the expected results yet. Characteristics The discovery of a new family of mesoporous molecular sieves in the early 1990s by Kuroda et al., known as KSW-1 and FSM-16, and by researchers at Mobil (now ExxonMobil), called M41S, opened new possibilities to prepare catalysts for reactions of relatively large molecules. The silicate wall of the pores is amorphous. Mesoporous silicates, such as MCM-41 and SBA-15 (the most common mesoporous silicates), are porous silicates with very large surface areas (normally ≥1000 m2/g), large pore sizes (2 nm ≤ size ≤ 20 nm) and ordered arrays of cylindrical mesopores with very regular pore morphology. The large surface areas of these solids increase the probability that a reactant molecule in solution will come into contact with the catalyst surface and react. The large pore size and ordered pore morphology allow even relatively large reactant molecules to diffuse into the pores. See also High-performance liquid chromatography Gas-liquid chromatography Silica gel Nanotechnology References Silicates Catalysts Silicate
Mesoporous silicate
[ "Chemistry", "Materials_science", "Engineering" ]
439
[ "Catalysis", "Catalysts", "Materials science stubs", "Porous media", "Mesoporous material", "Materials science", "Chemical kinetics" ]
7,074,871
https://en.wikipedia.org/wiki/Lighting%20ratio
Lighting ratio in photography refers to the comparison of key light (the main source of light from which shadows fall) to the total fill light (the light that fills in the shadow areas). The higher the lighting ratio, the higher the contrast of the image; the lower the ratio, the lower the contrast. The lighting ratio is the ratio of the light levels on the brightest-lit to the least-lit parts of the subject; the brightest-lit areas are lit by both key (K) and fill (F). The American Society of Cinematographers (ASC) defines lighting ratio as (key+fill):fill, or (key+Σfill):Σfill, where Σfill is the sum of all fill lights. Light can be measured in footcandles. A key light of 200 footcandles and a fill light of 100 footcandles have a 3:1 ratio (a ratio of three to one) — (200 + 100):100. A key light of 800 footcandles and a fill light of 200 footcandles have a ratio of 5:1 according to the lighting ratio formula — (800 + 200):200 = 1000/200 = 5:1. The ratio can also be estimated in relation to f-stops, since each increase of one f-stop is equal to double the amount of light: 2 raised to the power of the difference in f-stops between the key and fill readings gives the first factor of the simple key-to-fill ratio. For example, a difference of two f-stops between key and fill is 2 squared, or a 4:1 key-to-fill ratio; a difference of 3 stops is 2 cubed, or an 8:1 ratio; no difference is equal to 2 to the power of 0, for a 1:1 ratio. Note that this rule gives the ratio of key to fill alone; under the ASC (key+fill):fill convention, a key metering two stops above the fill corresponds to (4+1):1 = 5:1, as in the 800:200 footcandle example above. See also High-key lighting Low-key lighting Silhouette References Science of photography Engineering ratios
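The arithmetic above can be summarised in a short Python sketch; the function names are illustrative, not from any standard library, and the metered-footcandle case uses the ASC (key+fill):fill convention.

```python
def asc_lighting_ratio(key_fc: float, fill_fc: float) -> float:
    """ASC convention: (key + fill) : fill, returned as the first factor of an n:1 ratio."""
    return (key_fc + fill_fc) / fill_fc

def key_to_fill_factor(stop_difference: float) -> float:
    """Simple key:fill factor implied by a difference in f-stops (each stop doubles the light)."""
    return 2 ** stop_difference

# Examples from the text above:
print(asc_lighting_ratio(200, 100))   # 3.0 -> 3:1
print(asc_lighting_ratio(800, 200))   # 5.0 -> 5:1
print(key_to_fill_factor(2))          # 4.0 -> key alone is four times the fill (4:1 key:fill)
```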
Lighting ratio
[ "Mathematics", "Engineering" ]
373
[ "Quantity", "Metrics", "Engineering ratios" ]
7,075,005
https://en.wikipedia.org/wiki/Lithium%20tetrafluoroborate
Lithium tetrafluoroborate is an inorganic compound with the formula LiBF4. It is a white crystalline powder. It has been extensively tested for use in commercial secondary batteries, an application that exploits its high solubility in nonpolar solvents. Applications Although BF4− has high ionic mobility, solutions of its Li+ salt are less conductive than those of other, less associated salts. As an electrolyte in lithium-ion batteries, LiBF4 offers some advantages relative to the more common LiPF6. It exhibits greater thermal stability and moisture tolerance. For example, LiBF4 can tolerate a moisture content up to 620 ppm at room temperature whereas LiPF6 readily hydrolyzes into toxic POF3 and HF gases, often destroying the battery's electrode materials. Disadvantages of the electrolyte include a relatively low conductivity and difficulties forming a stable solid electrolyte interface with graphite electrodes. Thermal stability Because LiBF4 and other alkali-metal tetrafluoroborates thermally decompose to evolve boron trifluoride, the salt is commonly used as a convenient laboratory-scale source of BF3: LiBF4 → LiF + BF3 Production LiBF4 is a byproduct in the industrial synthesis of diborane: 8 BF3 + 6 LiH → B2H6 + 6 LiBF4 LiBF4 can also be synthesized from LiF and BF3 in an appropriate solvent that is resistant to fluorination by BF3 (e.g. HF, BrF3, or liquefied SO2): LiF + BF3 → LiBF4 References Tetrafluoroborates Lithium salts Electrolytes
Lithium tetrafluoroborate
[ "Chemistry" ]
350
[ "Inorganic compounds", "Electrolytes", "Lithium salts", "Inorganic compound stubs", "Salts", "Electrochemistry", "Electrochemistry stubs", "Physical chemistry stubs" ]
7,075,186
https://en.wikipedia.org/wiki/Mere%20%28lake%29
A mere is a shallow lake, pond, or wetland, particularly in Great Britain and other parts of western Europe. Derivation of the word Etymology The word mere is recorded in Old English as mere ″sea, lake″, corresponding to Old Saxon meri, Old Low Franconian *meri (Dutch meer ″lake, pool″, Picard mer ″pool, lake″, Northern French toponymic element -mer), Old High German mari / meri (German Meer ″sea″, but also Maar ″circular lake″), Goth. mari-, marei, Old Norse marr ″sea″ (Norwegian mar ″sea″, Shetland Norn mar ″mer, deep water fishing area″, Faroese marrur ″mud, sludge″, Swedish place name element mar-, French mare ″pool, pond″). They derive from reconstituted Proto-Germanic *mari, itself from Indo-European *mori, the same root as marsh and moor. The Indo-European root *mori also gave rise to similar words in other European languages: Latin mare, ″sea″ (Italian mare, Spanish mar, French mer); Old Celtic *mori, ″sea″ (Gaulish mori-, more, Irish muir, Welsh môr, Breton mor); and Old Slavic morje. Signification The word once included the sea or an arm of the sea in its range of meaning but this marine usage is now obsolete (OED). It is a poetical or dialect word meaning a sheet of standing water, a lake or a pond (OED). The OED fourth definition ("A marsh, a fen.") includes wetland such as fen amongst usages of the word, which is reflected in the lexicographers' recording of it. In a quotation from the year 598, mere is contrasted against moss (bog) and field against fen. The OED quotation from 1609 does not say what a mere is, except that it looks black. In 1629 mere and marsh were becoming interchangeable but in 1876 mere was "heard, at times, applied to ground permanently under water": in other words, a very shallow lake. The quoted examples in the online edition of the OED relate to: the sea: Old English to 1530: 7 quotations standing water: Old English to 1998: 22 quotations arm of the sea: 1573 to 1676: 4 quotations marsh or fen: 1609 to 1995: 7 quotations Characteristics Where land similar to that of Martin Mere, gently undulating glacial till, becomes flooded and develops fen and bog, the remnants of the original mere remain until the whole is filled with peat. This can be delayed where the mere is fed by lime-rich water from chalk or limestone upland and a significant proportion of the outflow from the mere takes the form of evaporation. In these circumstances, the lime (typically calcium carbonate) is deposited on the peaty bed and inhibits plant growth and, therefore, peat formation. A typical feature of these meres is that they are alongside a river rather than having the river flowing through them. In this way, the mere is replenished by seepage from the bed of the lime-rich river, through the river's natural levée, or by winter floods. The water of the mere is then static through the summer, when the concentration of the calcium carbonate rises until it is precipitated on the bed of the mere. Even quite shallow lake water can develop a thermocline in the short term but where there is a moderately windy climate, the circulation caused by wind drift is sufficient to break this up. (The surface is blown down-wind in a seiche and a return current passes either near the bottom or just above the thermocline if that is present at a sufficient depth.) This means that the bed of the shallow mere is aerated and bottom-feeding fish and wildfowl can survive, providing a livelihood for people around. Expressed more technically, the mere consists entirely of the epilimnion.
This is quite unlike Windermere where in summer, there is a sharp thermocline at a depth of 9 to 15 metres, well above the maximum depth of 60 metres or so. (M&W p36) At first sight, the defining feature of a mere is its breadth in relation to its shallow depth. This means that it has a large surface in proportion to the volume of water it contains. However, there is a limiting depth beyond which a lake does not behave as a mere since the sun does not warm the deeper water and the wind does not mix it. Here, a thermocline develops but where the limiting dimensions lie is influenced by the sunniness and windiness of the site and the murkiness of the water. This last usually depends on how eutrophic (rich in plant nutrients) the water is. Nonetheless, in general, with the enlargement of the extent of a mere, the depth has to become proportionately less if it is to behave as a mere. English meres Aqualate Mere, Staffordshire Cop Mere, Staffordshire Bomere Pool, Shropshire Buttermere, Cumbria (Lake District) Diss Mere, Norfolk Brooke Mere, Norfolk Fowlmere, Cambridgeshire Grasmere, Cumbria (Lake District) Hornsea Mere, East Riding of Yorkshire Horsey Mere, Norfolk Martin Mere, Lancashire The Meres, south and east of Ellesmere, Shropshire (see below) Orton Mere, Cambridgeshire Quidenham Mere, Norfolk Raby Mere, Merseyside Scarborough Mere, North Yorkshire Scoulton Mere, Norfolk Sea Mere, Norfolk Thirlmere, Cumbria (Lake District) Thorpeness Meare (Suffolk) Windermere, Cumbria (Lake District) Marton Mere, Blackpool (Lancashire) There are many examples in Cheshire, including: Alsager Mere Budworth Mere Comber Mere Hatch Mere Mere Oak Mere Pick Mere Radnor Mere Redes Mere Rostherne Mere Shakerley Mere Tatton Mere Many examples also occur in north Shropshire, especially around the town of Ellesmere, which is sometimes known as 'the Shropshire lake district', such as: Blakemere Colemere Crosemere Ellesmere (The Mere) Kettlemere Newtonmere Sweatmere Whitemere Fenland The Fens of eastern England, as well as fen, lowland moor (bog) and other habitats, included a number of meres. As at Martin Mere in Lancashire, when the fens were being drained to convert the land to pasture and arable agriculture, the meres went too but some are easily traced owing to the characteristic soil. For the reasons given above, it is rich in both calcium carbonate and humus. On the ground, its paleness stands out against the surrounding black, humic soils and on the soil map, the former meres show as patches of the Willingham soil association, code number 372 (Soil Map). Apart from those drained in the medieval period, they are shown in Saxton's map of the counties (as they were in his time) of Cambridgeshire and Huntingdonshire. The following is a list of known meres of the eastern English Fenland with their grid references. Saxton's meres are named as: Trundle Mere TL2091 Whittlesey Mere TL2291 Stretham Mere TL5272 Soham Mere. TL5773 Ug Mere TL2487 Ramsey Mere TL3189 In Jonas Moor's "map of the Great Levell of the Fenns" of 1720, though Trundle Mere is not named, the above are all named but one, included with the addition of: Benwick Mere TL3489 In the interval, Stretham Mere had gone and the main features of the modern drainage pattern had appeared. Ugg, Ramsey and Benwick meres do not show in the soil map. 
Others which do but which appear to have been drained before Saxton's mapping in 1576 are at: TL630875 TL6884 TL5375 TL5898 The last appears to be the "mare 'Wide' vocatum" of Robert of Swaffham's version of the Hereward story (Chapter XXVI). If it is, it will have been in existence in the 1070s, when the events of the story took place. Meres in Wales Hanmer Mere, Clwyd Marloes Mere, Pembrokeshire Meres in the Netherlands Meres similar to those of the English Fens but more numerous and extensive used to exist in the Netherlands, particularly in Holland. See Haarlemmermeer, for example. However, the Dutch word meer is used more generally than the English mere. It means "lake", as also seen in the names of lakes containing meer in Northern Germany, e.g. Steinhuder Meer. When the Zuiderzee was enclosed by a dam and its saltwater became fresh, it changed its status from a sea (zee) to being known as the IJsselmeer, the lake into which the River IJssel flows. Australian meres Beachmere, Queensland Austinmer, New South Wales Citations General sources Crossley-Holland, K. (1987). The Poetry of Legend: Classics of the Medieval World Beowulf. (C-H) Macan, T. T. and Worthington, E. B. (1972). Life in Lakes and Rivers Fontana. (M&W) Moor, J. (c1980s). A Map of the Great Levell of the Fenns Extending into ye Countyes of Norfolk, Suffolke, Northampton, Lincoln, Cambridge, Huntingdon and the Isle of Ely facsimile edition by Cambridgeshire Library Service Ordnance Survey 1:50,000 Sheets 142 & 143 Oxford English Dictionary (OED) Saxton, C. (1992)[1576]. Christopher Saxton's 16th Century Maps. The counties of England & Wales. With Introduction by William Ravenhill. . Cambridgeshire map. Soils of England and Wales, Sheet 4 Eastern England. Soil Survey of England and Wales (1983). (Soil Map) Swaffham, R. (1895-7)[c. 1260]. Gesta Herwardi. Transcribed by S. H. Miller and translated by W. D. Sweeting. External links Lakes Limnology
Mere (lake)
[ "Environmental_science" ]
2,148
[ "Lakes", "Hydrology" ]
7,075,191
https://en.wikipedia.org/wiki/Azrieli%20Center
Azrieli Center (; Merkaz Azrieli) is a complex of three skyscrapers in Tel Aviv. At the base of the complex lies a large shopping mall. The complex was designed by Israeli-American architect Eli Attia. After Attia and the developer of the complex David Azrieli (after whom it is named) fell out, completion of the project was passed on to the Tel Aviv firm of Moore Yaski Sivan Architects. Site The Azrieli Center is located on a site in Tel Aviv, Israel, which was previously used as Tel Aviv's dumpster-truck parking garage. The tower cost $420 million to build. Circular Tower The Azrieli Center Circular Tower is the tallest of the three towers, measuring in height. Construction of this tower began in 1996 and was completed in 1999. The tower has 49 floors, making it at the time of its construction the tallest building in Tel Aviv, only to be surpassed by the Moshe Aviv Tower in Ramat Gan in 2001. The top floor has an indoor observation deck and a high-end restaurant, and the 48th floor is home to Mr. Azrieli's personal office. Each floor of the Circular Tower has 84 windows, giving the tower more than 4,000 windows. The tower's perimeter is ; its diameter is . Each floor covers . On October 31, 2003, the first annual Azrieli Circular Tower run-up competition was held, in which the participants ran up the 1,144 stairs to the tower's roof. Winners of the contest had the chance to participate in the following year's Empire State Building run-up competition in New York City. Triangular Tower The Azrieli Center Triangular Tower has a height of . Construction of this tower, like the circular tower, began in 1996 and was completed in 1999. It has 46 floors and its main occupant is Bezeq, Israel's largest telecommunications company; Bezeq occupies 13 floors of the tower. The tower's cross-section is an equilateral triangle. Square Tower The Azrieli Center Square Tower was completed in June 2007. The tower has 42 floors, and is high. It is the shortest of the three towers in the Azrieli complex. Construction of the third tower was stopped in 1998 due to urban planning disagreements and was resumed in 2006. The lower 13 floors house Africa Israel's Crowne Plaza business hotel. The upper floors are used as office space. Shopping center The Azrieli Center Mall is one of the largest in Israel. There are about 30 restaurants, fast-food counters, cafes and food stands in the mall. The top floor of the mall is a popular hangout spot for teens, and many online message boards arrange get-togethers there during national holidays. There are over 300 stores in the mall. Due to high, constant terrorism threats, the Azrieli towers are guarded to deter terrorist action, like many buildings in Israel. Other features The large complex boasted an 8-screen cinema until 2010, when H&M took over the space. Azrieli also features a large fitness club, night schools, a small kid-focused amusement park and a pedestrian bridge leading to Tel Aviv HaShalom Railway Station. A second pedestrian bridge, completed in March 2003, connects the Azrieli Center with the other side of Begin Road, the Shaul HaMelech light rail station and HaKirya. It is expected that a connection between Kaplan underpass and the project's underground carpark, which is one of the largest ever built in the region, will be constructed. When completed, the plot which the Center occupies will offer a 400-seat, open air auditorium. 
Access The Azrieli Center is bordered by the Ayalon Highway, which crosses Tel Aviv from north to south, Begin Road and Giv'at HaTahmoshet Street (a short section that connects Kaplan Street with HaShalom Road). It is situated next to the HaShalom Interchange on the Ayalon Highway. The center can be easily accessed from most parts of Israel by train to the Tel Aviv HaShalom Railway Station, which is connected to the center by an enclosed pedestrian bridge, or by one of the many buses that stop on Begin Road. In addition, the Tel Aviv Arlozorov Bus Terminal is located north of the complex. Sha'ul HaMelekh LRT Station is located a five-minute walk from the complex. Spiral Tower The Spiral Tower is an under-construction 91-floor, 350-meter-high skyscraper which will be incorporated into the Azrieli Center complex. If completed, it will be the second-tallest building in Israel (after the Bein Arim tower), surpassing the current tallest skyscraper, Azrieli Sarona Tower, and ToHa Tower 2. The building's floor area ratio will be 20. Gallery See also List of skyscrapers in Israel List of tallest buildings in Tel Aviv List of shopping malls in Israel YOO Towers References External links Catch up for Tower 3 - World Architecture News Office buildings completed in 1999 Skyscrapers in Tel Aviv Postmodern architecture Shopping malls in Israel Tourist attractions in Tel Aviv Skyscraper hotels Skyscraper office buildings in Israel Skyscrapers in Israel Residential skyscrapers in Israel 1999 establishments in Israel 20th-century architecture in Israel
Azrieli Center
[ "Engineering" ]
1,074
[ "Postmodern architecture", "Architecture" ]
7,075,448
https://en.wikipedia.org/wiki/Amphiphysin
Amphiphysin is a protein that in humans is encoded by the AMPH gene. Function This gene encodes a protein associated with the cytoplasmic surface of synaptic vesicles. A subset of patients with stiff person syndrome who were also affected by breast cancer are positive for autoantibodies against this protein. Alternate splicing of this gene results in two transcript variants encoding different isoforms. Additional splice variants have been described, but their full length sequences have not been determined. Amphiphysin is a brain-enriched protein with an N-terminal lipid interaction, dimerisation and membrane bending BAR domain, a middle clathrin and adaptor binding domain and a C-terminal SH3 domain. In the brain, its primary function is thought to be the recruitment of dynamin to sites of clathrin-mediated endocytosis. There are 2 mammalian amphiphysins with similar overall structure. A ubiquitous splice form of amphiphysin-2 (BIN1) that does not contain clathrin or adaptor interactions is highly expressed in muscle tissue and is involved in the formation and stabilization of the T-tubule network. In other tissues amphiphysin is likely involved in other membrane bending and curvature stabilization events. Interactions Amphiphysin has been shown to interact with DNM1, Phospholipase D1, CDK5R1, PLD2, CABIN1 and SH3GL2. See also AP180 Epsin References Further reading Review. External links Bringing your curves to the bar, amphiphysin home page Proteins
Amphiphysin
[ "Chemistry" ]
334
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,075,607
https://en.wikipedia.org/wiki/Sequence%20%28biology%29
A sequence in biology is the one-dimensional ordering of monomers, covalently linked within a biopolymer; it is also referred to as the primary structure of a biological macromolecule. While it can refer to many different molecules, the term sequence is most often used to refer to a DNA sequence or a protein sequence. See also Dot plot (bioinformatics) Sequence analysis References Molecular biology
Sequence (biology)
[ "Chemistry", "Biology" ]
86
[ "Biochemistry", "Molecular biology stubs", "Molecular biology" ]
7,075,619
https://en.wikipedia.org/wiki/National%20oil%20company
A national oil company (NOC) is a petroleum company that is fully or partly owned by the government of a sovereign nation. NOCs produce about half the world’s oil and gas. Due to their increasing dominance over global reserves, the importance of NOCs has risen dramatically in recent decades relative to International Oil Companies (IOCs), such as BP, ExxonMobil or Shell plc. NOCs are also increasingly investing outside their national borders. See also List of petroleum companies Nationalization of oil supplies State-owned enterprise References External links National Oil Company Database Petroleum economics
National oil company
[ "Chemistry" ]
118
[ "Petroleum", "Petroleum stubs" ]
7,075,678
https://en.wikipedia.org/wiki/Difference%20density%20map
In X-ray crystallography, a difference density map or Fo–Fc map shows the spatial distribution of the difference between the measured electron density of the crystal and the electron density explained by the current model. A way to compute this map has been formulated for cryo-EM. Display Conventionally, they are displayed as isosurfaces with positive density—electron density where there is nothing in the model, usually corresponding to some constituent of the crystal that has not been modelled, for example a ligand or a crystallisation adjuvant—in green, and negative density—parts of the model not backed up by electron density, indicating either that an atom has been disordered by radiation damage or that it is modelled in the wrong place—in red. The typical contouring (display threshold) is set at 3σ. Calculation Difference density maps are usually calculated using Fourier coefficients which are the differences between the observed structure factor amplitudes from the X-ray diffraction experiment and the calculated structure factor amplitudes from the current model, using the phase from the model for both terms (since no phases are available for the observed data). The two sets of structure factors must be on the same scale. It is now normal to also include maximum-likelihood weighting terms which take into account the estimated errors in the current model, giving coefficients of the form mFo – DFc, where m is a figure of merit which is an estimate of the cosine of the error in the phase, and D is a "σA" scale factor. These coefficients are derived from the gradient of the likelihood function of the observed structure factors on the basis of the current model. A difference map built with m and D is known as an mFo – DFc map. The use of ML weighting reduces model bias (due to using the model's phase) in the 2 Fo–Fc map, which is the main estimate of the true density. However, it does not fully eliminate such bias. References Further reading Electron density maps on Proteopedia Crystallography
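A minimal Python sketch of how the weighted coefficients described above could be assembled is given below; it assumes NumPy, and the function name, array names and per-reflection inputs (amplitudes on a common scale, model phases, and the m and D estimates) are illustrative assumptions rather than the interface of any crystallography package.

```python
import numpy as np

def difference_map_coefficients(f_obs, f_calc_amp, phi_calc, m, D):
    """Return mFo-DFc and 2mFo-DFc Fourier coefficients, using the model phase for both terms.

    f_obs, f_calc_amp : observed and calculated structure-factor amplitudes (same scale)
    phi_calc          : model phases in radians
    m, D              : per-reflection figure of merit and sigmaA-style scale factor
    """
    phase = np.exp(1j * phi_calc)
    diff_coeff  = (m * f_obs - D * f_calc_amp) * phase      # mFo - DFc (difference map)
    two_fo_fc   = (2 * m * f_obs - D * f_calc_amp) * phase  # 2mFo - DFc (main density estimate)
    return diff_coeff, two_fo_fc
```

An inverse Fourier transform of these coefficients over the unit cell would then yield the difference density and the model-phased density estimate, respectively.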
Difference density map
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
412
[ "Crystallography", "Condensed matter physics", "Materials science" ]
7,075,785
https://en.wikipedia.org/wiki/Spaghetti%20plot
A spaghetti plot (also known as a spaghetti chart, spaghetti diagram, or spaghetti model) is a method of viewing data to visualize possible flows through systems. Flows depicted in this manner appear like noodles, hence the coining of this term. The technique was first used to track routing through factories. Visualizing flow in this manner can reduce inefficiency within the flow of a system. For animal populations and for weather buoys drifting through the ocean, spaghetti plots are drawn to study distribution and migration patterns. Within meteorology, these diagrams can help determine confidence in a specific weather forecast, as well as positions and intensities of high and low pressure systems. They are composed of deterministic forecasts from atmospheric models or their various ensemble members. Within medicine, they can illustrate the effects of drugs on patients during drug trials. Applications Biology Spaghetti diagrams have been used to study why butterflies are found where they are, and to see how topographic features (such as mountain ranges) limit their migration and range. For mammal distributions across central North America, these plots have correlated the edges of species' ranges with regions which were glaciated within the previous ice age, as well as with certain types of vegetation. Meteorology Within meteorology, spaghetti diagrams are normally drawn from ensemble forecasts. A meteorological variable, e.g. pressure, temperature, or precipitation amount, is drawn on a chart for a number of slightly different model runs from an ensemble. The model can then be stepped forward in time and the results compared and used to gauge the amount of uncertainty in the forecast. If there is good agreement and the contours follow a recognizable pattern through the sequence, then the confidence in the forecast can be high. Conversely, if the pattern is chaotic, i.e., resembling a plate of spaghetti, then confidence will be low. Ensemble members will generally diverge over time and spaghetti plots are a quick way to see when this happens. Spaghetti plots can be a more favorable choice compared to the mean-spread ensemble in determining the intensity of a coming cyclone, anticyclone, or upper-level ridge or trough. Because ensemble forecasts naturally diverge as the days progress, the projected locations of meteorological features will spread further apart. A mean-spread diagram will take a mean of the calculated pressure from each spot on the map as calculated by each permutation in the ensemble, thus effectively smoothing out the projected low and making it appear broader in size but weaker in intensity than the ensemble's permutations had actually indicated. It can also depict two features instead of one if the ensemble clustering is around two different solutions. Various forecast models within tropical cyclone track forecasting can be plotted on a spaghetti diagram to show confidence in five-day track forecasts. When track models diverge late in the forecast period, the plot takes on the shape of a squashed spider, and can be referred to as such in National Hurricane Center discussions. Within the field of climatology and paleotempestology, spaghetti plots have been used to correlate ground temperature information derived from boreholes across central and eastern Canada. As in other disciplines, spaghetti diagrams can be used to show the motion of objects, such as drifting weather buoys over time. Business Spaghetti diagrams were first used to track routing through a factory.
Spaghetti plots are a simple tool to visualize movement and transportation. Analyzing flows through systems can determine where time and energy are wasted, and identify where streamlining would be beneficial. This is true not only of physical travel through a physical place, but also of more abstract processes such as the processing of a mortgage loan application. Medicine Spaghetti plots can be used to track the results of drug trials amongst a number of patients on one individual graph to determine their benefit. They have also been used to correlate progesterone levels to early pregnancy loss. The half-life of drugs in blood plasma, as well as differences in effect between different populations, can be assessed quickly via these diagrams. References External links TIGGE Project at NCAR Graphic software in meteorology Climate and weather statistics Statistical charts and diagrams Metaphors referring to spaghetti
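A minimal Python sketch of the ensemble-style spaghetti plot described in the meteorology discussion above is shown below; the data are synthetic, and the ensemble size, pressure values and variable names are illustrative assumptions (NumPy and matplotlib are assumed to be available).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hours = np.arange(0, 121, 6)   # forecast lead time in hours

# Synthetic ensemble: members agree early and diverge with lead time.
members = [1000 + np.cumsum(rng.normal(0, 0.8, hours.size)) for _ in range(20)]

for member in members:
    plt.plot(hours, member, color="grey", linewidth=0.8, alpha=0.7)
plt.xlabel("Forecast lead time (hours)")
plt.ylabel("Mean sea-level pressure (hPa)")
plt.title("Spaghetti plot of ensemble members")
plt.show()
```

The visual spread of the lines at a given lead time is the quick indication of forecast uncertainty that the text describes: a tight bundle suggests high confidence, a plate-of-spaghetti pattern suggests low confidence.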
Spaghetti plot
[ "Physics" ]
837
[ "Weather", "Physical phenomena", "Climate and weather statistics" ]
7,075,895
https://en.wikipedia.org/wiki/Mount%20Oread
Mount Oread is a hill in Lawrence, Kansas, upon which the University of Kansas, and parts of the city of Lawrence, Kansas, are located. It sits on the water divide between the Kansas River and the Wakarusa River. It was named after the long-defunct Oread Institute in Worcester, Massachusetts, from which many of the settlers of Lawrence moved prior to the American Civil War. The hill was originally called Hogback Ridge by many Lawrence residents until the Oread name was adopted in 1864, two years after the university was founded. For settlers going westward by wagon train on the Oregon Trail, "The Hill", as Mount Oread is now commonly referred to by Kansans, was the next big topographical challenge after crossing the Wakarusa River at Blue Jacket's Crossing, which is today located southeast of the city of Lawrence. James Lane and Governor Charles Robinson erected a fort on the hill in the 1850s, during the Bleeding Kansas conflict, in order to protect Lawrence. An 1857 Harper's Weekly report deemed the fort to be valueless as a military work. Governor Robinson's first home in Lawrence was built on the hill; it was burned down on May 21, 1855, by pro-slavery border ruffians. According to the United States Geological Survey, Mount Oread is located approximately above sea level. By way of comparison, downtown Lawrence is about above sea level. Mount Oread is perhaps best known for being the staging area of William Quantrill's raid into Lawrence on August 21, 1863, during the American Civil War. Presently, the campus of the University of Kansas (KU) rests on Mount Oread. Mount Oread is the type locality for the Oread Limestone, and so gives its name to the Oread Escarpment rising in this region of Kansas. Oread Limestone was quarried from the hilltop and used in the earliest of the campus buildings of KU, including Spooner Hall and Dyche Hall. See also Oread Mount Oread Civil War posts References External links Lawrence Kansas looking northeast from Mt. Oread (1859) Mt. Oread campus buildings at the University of Kansas. TopoQuest map of Mt. Oread Mount Oread trail route marker for KU GIS Day 2006 social geocoding pioneers Bleeding Kansas Lawrence, Kansas Hills of Kansas Kansas in the American Civil War Hydrology Oregon Trail Drainage divides Landforms of Douglas County, Kansas
Mount Oread
[ "Chemistry", "Engineering", "Environmental_science" ]
492
[ "Hydrology", "Environmental engineering" ]
7,075,936
https://en.wikipedia.org/wiki/London%20Centre%20for%20Nanotechnology
The London Centre for Nanotechnology is a multidisciplinary research centre in physical and biomedical nanotechnology in London, United Kingdom. It brings together three institutions that are leaders in nanotechnology: University College London, Imperial College London and King's College London. It was conceived from the outset with a management structure allowing for a clear focus on exploitation and commercialisation. Although based at UCL's campus in Bloomsbury, the LCN includes research in departments of Imperial's South Kensington campus and in King's Strand campus. The LCN's work requires it to draw on the combined skills of multiple departments, including medicine, chemistry, physics, electrical and electronic engineering, biochemical engineering, materials and earth sciences, and two leading business centres. History The London Centre for Nanotechnology was established as a joint venture between UCL and Imperial College London in 2003 following the award of a £13.65m higher education grant under the Science Research Infrastructure Fund. In October 2006 the LCN installed the first monochromated electron microscope in the UK at its site on the Imperial College London campus. In October 2008 the LCN published research about the possibility of using microscopic "nanoprobes" to discover new drugs to combat antibiotic resistance. In October 2009 a team at the Science and Technology Facilities Council's ISIS facility led by Steven Bramwell of the LCN published research showing that single magnetic charges can be made to behave and interact like electrical ones through the use of the magnetic monopoles that exist in spin ice. King's College London joined the LCN in 2018. Research areas LCN's research is organised around three themes, which it characterizes as follows: • Information Technology: Computing and communications needs continue to grow and underpin all other human endeavours. Current technologies are limited and new nanotechnology-driven paradigms such as quantum computing and spintronics are needed. • Health care: Under development are specialised sensors and novel cancer-diagnosis systems, as well as new insights into cellular biophysics and nanotechnology-based instrumentation. • Planet Care: The LCN uses its expertise, ranging from biology to chemistry and materials science, to conduct research in areas including novel photovoltaics, new approaches to exploring current energy supplies, low-power lighting and computing, new materials, instrumentation for the nuclear industry, and storing hydrogen efficiently at room temperature. Facilities LCN has access to a range of facilities that include: • Nano-CAD: techniques to simulate, visualize and design nano-scale structures and devices in the biological and non-biological areas; first principles atomic/molecular level theory, systems modelling and other powerful computational tools. • Nano-characterisation: the full range of optical, electron, ion and scan-probe based technologies required to image and understand nanostructures in both the biological and non-biological areas - measuring nano-electrical, structural, mechanical, rheological, acoustic, thermal and magnetic properties. • Nano-fabrication: large clean-room space with the ability to produce nano-materials and devices using various biological and non-biological materials; silicon, III-V fabrication and unconventional fabrication – for example, of organics and diamond.
• Systems: the range of techniques required to translate nanotechnology into workable products for industry; hybridisation and integration techniques, error handling and re-routing algorithms, methods to connect bio- and non-bio systems. References Nanotechnology institutions Research institutes in London University College London Research institutes established in 2003 2003 establishments in England
London Centre for Nanotechnology
[ "Materials_science" ]
719
[ "Nanotechnology", "Nanotechnology institutions" ]
7,076,369
https://en.wikipedia.org/wiki/Sonido%2013
Sonido 13 is a theory of microtonal music created by the Mexican composer Julián Carrillo around 1900 and described by Nicolas Slonimsky as "the field of sounds smaller than the twelve semitones of the tempered scale." Carrillo developed this theory in 1895 while he was experimenting with his violin. Though he became internationally recognized for his system of notation, it was never widely applied. His first composition in demonstration of his theories was Preludio a Colón (1922). The Western musical convention up to this day divides an octave into twelve different pitches that can be arranged or tempered in different intervals. Carrillo termed his new system Sonido 13, which is Spanish for "Thirteenth Sound" or Sound 13, because it enabled musicians to go beyond the twelve notes that comprise an octave in conventional Western music. Julián Carrillo wrote: "The thirteenth sound will be the beginning of the end and the point of departure of a new musical generation which will transform everything." History Early life Carrillo attended the National Conservatory of Music in Mexico City, where he studied violin, composition, physics, acoustics, and mathematics. The laws that define music intervals instantly amazed Carrillo, which led him to conduct experiments on his violin. He began analyzing the way the pitch of a string changed depending on the finger position, concluding that there had to be a way to split the string into an infinite number of parts. One day, Carrillo was able to divide the fourth string of his violin with a razor into 16 parts in the interval between the notes G and A, thus creating 16 unique sounds. This event was the beginning of Sonido 13 that led Carrillo to study more about physics and the nature of intervals. Professional life Carrillo was, "closely associated with the Díaz regime," and preferred neo-classicism to nationalism. References Further reading Mena, María Cristina (1914). "Julian Carrillo: The Herald of a Musical Monroe Doctrine", The Century Illustrated Monthly Magazine, vol. 89. Josiah Gilbert Holland and Richard Watson Gilder, eds. Digitized 2008. External links Equal temperaments Musical notation Microtonality
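As a rough numerical illustration of dividing the octave more finely than the conventional twelve steps, the following Python sketch lists the frequencies of the first few steps of an equal division of the octave; the 440 Hz reference pitch and the 96-division (sixteenth-tone) example are illustrative assumptions, not values taken from Carrillo's own tuning tables.

```python
def equal_division_frequencies(divisions_per_octave: int, base_hz: float = 440.0, steps: int = 8):
    """Frequencies of the first few steps when the octave is split into equal parts."""
    ratio = 2 ** (1 / divisions_per_octave)   # frequency ratio of one step
    return [base_hz * ratio ** n for n in range(steps)]

print(equal_division_frequencies(12))   # conventional semitones
print(equal_division_frequencies(96))   # sixteenth tones: 96 equal steps per octave
```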
Sonido 13
[ "Physics" ]
430
[ "Physical quantities", "Musical symmetry", "Logarithmic scales of measurement", "Equal temperaments", "Symmetry" ]
7,076,617
https://en.wikipedia.org/wiki/Million%20standard%20cubic%20feet%20per%20day
Million standard cubic feet per day is a unit of measurement for gases that is predominantly used in the United States. It is frequently abbreviated MMSCFD. MMSCFD is commonly used as a measure of natural gas, liquefied petroleum gas, compressed natural gas and other gases that are extracted, processed or transported in large quantities. A related measure is "mega standard cubic metres per day" (MSm3/d), equal to 10^6 Sm3/d, which is used in many countries outside the United States. One MMSCFD equals 1177.6 Sm3/h. When converting to mass flowrate, the density of the gas at standard temperature and pressure should be used. See also SCFM Standard cubic foot External links checalc.com Gas Volume Conversion onlineflow.de Online calculator for conversion of volume, mass and molar flows (SCFM, MMSCFD, Nm3/hr, kg/s, kmol/hr and more) References Units of flow
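A minimal Python sketch of the conversions mentioned above: the 1 MMSCFD ≈ 1177.6 Sm3/h factor is the figure quoted in the text, while the example gas density is an illustrative assumption that depends on the gas and on the chosen standard conditions.

```python
SM3_PER_HOUR_PER_MMSCFD = 1177.6  # conversion factor quoted above

def mmscfd_to_sm3_per_hour(mmscfd: float) -> float:
    """Convert MMSCFD to standard cubic metres per hour."""
    return mmscfd * SM3_PER_HOUR_PER_MMSCFD

def mmscfd_to_mass_flow_kg_per_hour(mmscfd: float, density_kg_per_sm3: float) -> float:
    """Mass flow from volumetric flow, using the gas density at standard conditions."""
    return mmscfd_to_sm3_per_hour(mmscfd) * density_kg_per_sm3

print(mmscfd_to_sm3_per_hour(1.0))                 # ~1177.6 Sm3/h
print(mmscfd_to_mass_flow_kg_per_hour(5.0, 0.68))  # density 0.68 kg/Sm3 is an assumed example value
```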
Million standard cubic feet per day
[ "Mathematics" ]
208
[ "Units of flow", "Quantity", "Units of measurement" ]