Dataset columns (name: type, observed range):
id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
12,757,598
https://en.wikipedia.org/wiki/Altai%20falcon
The Altai falcon has been identified as a color morph of the Central Asian saker falcon (Falco cherrug milvipes), according to the latest genetic research (Zinevich et al. 2023). Previously, it was variously classified as a morph, a subspecies (Falco cherrug altaicus), and even a separate species (Falco altaicus). It historically had a high reputation among Central Asian falconers. Distribution and taxonomy The Altai falcon breeds in a relatively small area of Central Asia across the Altai and Sayan Mountains. This area overlaps with the much larger breeding area of the saker falcon (Falco cherrug). Previously, it was believed that Altai falcons were either natural hybrids between sakers and gyrfalcons (Falco rusticolus), or rather the descendants of such rare hybrids backcrossing into the large population of sakers. However, the most recent research has demonstrated that Altai falcons are genetically intermingled with the broader Asian saker population and do not constitute a distinct cluster, indicating that they do not represent a separate taxonomic entity. Literature Almásy Gy 1903. Vándor-utam Ázsia szívébe. (My Travels to the Heart of Asia – in Hungarian) Budapest, Természettudományi Könyvkiadó-vállalat. Eastham CP, Nicholls MK, Fox NC 2002. Morphological variation of the saker (Falco cherrug) and the implications for conservation. Biodiversity and Conservation, 11, 305–325. Ellis DH 1995. What is Falco altaicus Menzbier? Journal of Raptor Research, 29, 15–25. Zinevich L, Prommer M, Laczkó L, Rozhkova D, Sorokin A, Karyakin I, Bagyura J, Cserkész T, Sramkó 2023. Phylogenomic insights into the polyphyletic nature of Altai falcons within eastern sakers (Falco cherrug) and the origins of gyrfalcons (Falco rusticolus). Scientific Reports, 13:17800. Menzbier MA 1891. (1888–1893). Ornithologie du Turkestan et des pays adjacents (Partie No. -O. de la Mongolie, steppes Kirghiz, contree Aralo-Caspienne, partie superieure du bassin d'Oxus, Pamir). Vol. 12. Publiee par l'Auteur, Moscow, Russia. Nittinger F, Gamauf A, Pinsker W, Wink M, Haring E 2007. Phylogeography and population structure of the saker falcon (Falco cherrug) and the influence of hybridization: mitochondrial and microsatellite data. Molecular Ecology, 16, 1497–1517. Orta J 1994. 57. Saker Falcon. In: del Hoyo J, Elliott A, Sargatal J (eds.): Handbook of Birds of the World, Volume 2: New World Vultures to Guineafowl: 273–274, plate 28. Lynx Edicions, Barcelona. Pfander 2011. Semispecies and Unidentified Hidden Hybrids (for Example of Birds of Prey). Raptors Conservation 23: 74–105. Potapov E, Sale R 2005. The Gyrfalcon. Poyser Species Monographs. A & C Black Publishers, London. Sushkin PP 1938. Birds of the Soviet Altai and adjacent parts of north-western Mongolia. Vol. 1. [In Russian.] Academy of Science of USSR Press, Moscow, Russia. External links to rare photos Altai falcon, Western Mongolia Altai falcon, Kazakhstan Altai falcon Falconry Birds of Mongolia Controversial bird taxa Bird hybrids Altai falcon
Altai falcon
[ "Biology" ]
811
[ "Biological hypotheses", "Controversial bird taxa", "Controversial taxa" ]
12,759,394
https://en.wikipedia.org/wiki/Lung%20counter
A lung counter is a system consisting of a radiation detector, or detectors, and associated electronics that is used to measure radiation emitted from radioactive material that has been inhaled by a person and is sufficiently insoluble as to remain in the lung for weeks, months, or years. They are frequently used in occupations where workers may be exposed to radiation. The lung counter may be placed on or near the body. These systems are also often housed in a low background counting chamber. Such a chamber may have thick walls made of low-background steel (~20–25 cm thick) and lined with lead, cadmium, tin, or polypropylene, with a final layer of copper. The purpose of the lead, cadmium (or tin), and copper is to reduce the background in the low energy region of a gamma spectrum (typically less than 200 keV). Calibration As a lung counter is primarily measuring radioactive materials that emit low energy gamma rays or x-rays, the phantom used to calibrate the system must be anthropometric. An example of such a phantom is the Lawrence Livermore National Laboratory Torso Phantom. See also Bomab References Medical equipment Radiobiology
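The shielding rationale above can be made concrete with a Beer–Lambert attenuation estimate. The sketch below is illustrative only: the linear attenuation coefficients and the thin-liner thicknesses are assumed round numbers for photons around 100 keV, not measured values for any real counting chamber.

```python
import math

# Beer-Lambert attenuation I/I0 = exp(-mu * x) through a layered chamber wall.
# Thicknesses and attenuation coefficients are assumed, illustrative values.
layers = [
    ("steel",  20.0, 2.9),   # (name, thickness in cm, mu in 1/cm at ~100 keV)
    ("lead",    0.3, 60.0),
    ("copper",  0.1, 4.0),
]

transmission = 1.0
for _name, thickness_cm, mu in layers:
    transmission *= math.exp(-mu * thickness_cm)

print(f"Fraction of ~100 keV background photons transmitted: {transmission:.2e}")
```

Even with rough coefficients the point survives: the thick steel bulk alone attenuates the external background by dozens of e-foldings, while the thin inner liners absorb the low-energy fluorescence re-emitted by the outer layers.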
Lung counter
[ "Chemistry", "Biology" ]
245
[ "Radiobiology", "Medical equipment", "Radioactivity", "Medical technology" ]
12,760,610
https://en.wikipedia.org/wiki/RuBee
RuBee (IEEE standard 1902.1) is a two-way active wireless protocol designed for harsh environments and high-security asset visibility applications. RuBee utilizes longwave signals to send and receive short (128 byte) data packets in a local regional network. The protocol is similar to the IEEE 802 protocols in that RuBee is networked by using on-demand, peer-to-peer and active radiating transceivers. RuBee is different in that it uses a low frequency (131 kHz) carrier. The IEEE 1902.1 protocol details 1902.1 is the "physical layer" workgroup with 17 corporate members. The work group was formed in late 2006. The final specification was issued as an IEEE standard in March 2009. The standard includes such things as packet encoding and addressing specifications. The protocol has already been in commercial use by several companies, in asset visibility systems and networks. However, IEEE 1902.1 will be used in many sensor network applications, which require this physical layer standard in order to establish interoperability between manufacturers. A second standard, 1902.2, has been drafted for higher-level data functions required in visibility networks. Visibility networks provide the real-time status, pedigree, and location of people, livestock, medical supplies or other high-value assets within a local network. The second standard will address the data-link layers based on existing uses of the RuBee protocol. This standard, which will be essential for the widespread use of RuBee in visibility applications, will support the interoperability of RuBee tags, RuBee chips, RuBee network routers, and other RuBee equipment at the data-link layer. RuBee tag details A RuBee tag has a 4-bit CPU, 1 to 5 kB of SRAM, a crystal, and a lithium battery with an expected life of five years. It can optionally have sensors, displays, and buttons. The RuBee protocol is bidirectional, on-demand, and peer-to-peer. It can operate at other frequencies (e.g. 450 kHz) but 131 kHz is the most widely used one. The RuBee protocol uses an IP address (Internet Protocol address). A tag may hold data in its own memory (instead of, or in addition to, having data stored on a server). RuBee functions successfully in harsh environments (where one or both ends of the communication are near steel or water), with networks consisting of many thousands of tags, and has a range of 1 to 30 m (3 to 100 ft) depending on the antenna configuration. This allows RuBee radio tags to function in environments where other radio tags and RFID may have problems. RuBee networks are in use in many visibility applications, including exit-entry detection in high-security government facilities, weapons and small arms in high-security armories, mission-critical specialized tools, smart shelves and racks for high-value assets, and smart entry/exit portals. RuBee disadvantages and advantages The major disadvantages RuBee has relative to other protocols are speed and packet size. The RuBee protocol is limited to 1,200 baud in existing applications; IEEE 1902.1 specifies 1,200 baud. The protocol could go to 9,600 baud with some loss of range. However, most visibility applications work well at 1,200 baud. Packet size is limited to tens to hundreds of bytes. RuBee's design forgoes high bandwidth and high-speed communication because most visibility applications do not require them. The use of LW magnetic energy brings about a number of advantages: Long battery life – Because of the use of low frequencies and data rates, the chips and detectors can run at low speeds. 
Using (lowest-cost) 4-micrometer CMOS chip technology leads to extremely low power consumption. LW magnetic wave tag systems can achieve 5 to 25 year lives using low-cost lithium batteries. This is also the expected battery shelf life. Tag data travels with the asset – Because data is stored in the tag, IT (Information Technology) costs are reduced. This means that with a low-cost handheld reader, one can read a RuBee tag and learn about the asset (manufacturing date, expiry date, lot number, etc.) without having to go to an IT system to look it up. In addition, the distance between the reader and the asset is not critical. RuBee can also write to a tag at the same range as it can read it. RFID, on the other hand, uses EEPROM memory, and writing to the tag is awkward. (In the case of RFID, the range is limited, more power is required and write times are long.) Human-safe – A RuBee base station produces only nanowatts of radio energy. RuBee's LW magnetic waves are not absorbed by biological tissues and are not regulated by OSHA. In fact, RuBee produces less power and lower field strengths than the metal detectors in airports and the anti-theft detectors in retail stores operating at similar frequencies, by a factor of about 10 to 100. Recently published studies show that RuBee has no effect on pacemakers or other implantable devices (Hayes et al., 2007). Intrinsically safe – A RuBee base station and tag produce a low level of magnetic energy not capable of heating explosives or creating a spark. In independent studies carried out by the Department of Energy, RuBee was given a Safe Separation Distance (SSD) of zero, and is the only wireless technology to have that rating. That means tags and base stations can be placed directly on high explosives with no risk of accidental ignition or any heating. High security and privacy – RuBee tags have many unique advantages in high-security applications. The eavesdropping range (the range at which a person with unlimited funds can listen to tag conversations) is the same as the tag range. That means that if someone is listening, they must be close enough for you to be able to see them. This is not true for RFID or 802 protocols. That means no one can secretly listen to tag/base station conversations. In addition, since RuBee tags have a battery, a crystal, and SRAM memory, they can use strong encryption with nearly uncrackable one-time keys, or totally uncrackable one-time pads. RuBee is in use today in many high-security applications for these reasons. RuBee is the only wireless technology approved for use in secure US government sites. Controlled volumetric range – RuBee has a maximum volumetric range of approximately 10,000 square feet (900 m²), using volumetric loop antennas. From even a small volumetric antenna of 1 sq ft (900 cm²), RuBee can read a tag within an egg-shaped (ellipsoid) volume of about 10 x 10 x 15 ft (3 x 3 x 5 m). A special feature of IEEE P1902.1 known as Clip makes it possible to place many adjacent loop antennas in an antenna farm, and read from tens to hundreds of base stations simultaneously. Cost effective – With RuBee, relatively simple base stations and routers can be employed, which means receivers and card readers can be reasonably priced as compared to higher frequency transceivers. In addition, the tags often include a single chip, a battery, a crystal, and an antenna, and can be priced competitively with respect to active RFID tags (those including a battery). 
Less noise – Because ambient noise in a region falls off as 1/r³, RuBee exhibits reduced susceptibility to extraneous noise. The major limit to antenna size is deep-space noise. Comparison with NFC and Qi inductive power transfer This protocol is similar at the physical level to NFC (13.56 MHz carrier, basically an air-core transformer pair) and also to Qi's inductive energy transfer (100–300 kHz carrier). Both modulate the receiver's coil load to communicate with the sender. Some NFC tags can support simple processors and a small amount of storage, like this protocol. NFC also shares the physical security properties of "magnetic" communications like RuBee; however, NFC signals can be detected miles from the source, whereas RuBee signals are detectable at a maximum distance comparable to the tag read range (see above). References North American Supply Chain Visibility Solutions Technology Innovation of the Year Award, Prithvi Raj, Frost & Sullivan, 2007. "IEEE Begins Wireless, Long-Wave Standard for Healthcare, Retail and Livestock Visibility Networks; IEEE P1902.1 Standard to Offer Local Network Protocol for Thousands of Low-Cost Radio Tags Having a Long Battery Life," Business Wire, June 8, 2006. Visible Assets Promotes RuBee Tags for Tough-to-Track Goods, Mary Catherine O'Connor, RF Journal, June 19, 2006. Charles Capps, "Near Field or Far Field", EDN, August 16, 2001, pp. 95–102. Hayes DL, Eisinger G, Hyberger L, Stevens JK. Electromagnetic interference (EMI) and electromagnetic compatibility (EMC) of an active kHz radio tag (Rubee, IEEE P1901.1) with pacemakers (PM) and ICDs. Heart Rhythm 2007;4:S398 (Supplement - Abs). Martin Roche MD, Cindy Waters RN, Eileen Walsh RN, Visibility Systems in Delivery of Orthopedic Care Enable Unprecedented Savings and Efficiencies, U.S. Orthopedic Product News, May/June 2007. Radio-frequency identification
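The bandwidth limitation described above is easy to quantify. A minimal sketch, assuming "1,200 baud" corresponds to roughly 1,200 raw bits per second on air and ignoring framing overhead (both assumptions, since the standard's exact framing is not given here):

```python
# Airtime for a maximum-size RuBee packet at the IEEE 1902.1 signalling rate.
# Assumes 1,200 baud ~ 1,200 bits/s and ignores framing/protocol overhead.
PACKET_BYTES = 128   # maximum packet size cited above
BIT_RATE = 1_200     # bits per second (assumed equal to the baud rate)

airtime = PACKET_BYTES * 8 / BIT_RATE
print(f"One 128-byte packet needs about {airtime:.2f} s of airtime")  # ~0.85 s
```

Under these assumptions a full packet occupies the channel for the better part of a second, which is why RuBee targets slow-changing visibility data rather than bulk transfer.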
RuBee
[ "Engineering" ]
1,943
[ "Radio-frequency identification", "Radio electronics" ]
3,492,508
https://en.wikipedia.org/wiki/Sosrobahu
Sosrobahu is a road construction technique that allows long stretches of flyovers to be constructed above existing major roads with minimum disruption to traffic. The technique was designed by Indonesian engineer Tjokorda Raka Sukawati and involves the construction of the horizontal supports for the highway beside the existing road, which are then lifted and turned 90 degrees before being placed on the top of the vertical supports to form the flyover pylons. The term Sosrobahu was derived from Old Javanese; it means "thousand shoulders". History Creation One day, Tjokorda was working on his 1974 Mercedes-Benz, which he had jacked up so that the back two wheels were supported on the slippery floor of the garage where some oil had been accidentally spilled. When the car was pushed, it pivoted with the jack as the axis. He noted that, when there is no friction, it is easy to move the heaviest of objects. This event made him realize that a hydraulic pump could be used to lift heavy objects. Tjokorda conducted trials with cylinders 20 cm in diameter converted into a hydraulic lift and loaded with 80 tonnes of concrete. The weight was successfully lifted and turned slightly, but could not then be lowered as the position of the hydraulic jack had shifted. Later, Tjokorda revised the design; after testing, the hydraulic jack stayed stable even with the full weight of the concrete above it. After the trials, Tjokorda finalised his design, called the LBPH (Landasan Putar Bebas Hambatan; Free Moving Platform), which consisted of two concrete discs with a diameter of 80 cm enclosed in a container. Although only 5 cm thick, the discs are capable of supporting a weight of 625 tonnes each. Lubricating oil is pumped between the two discs. A rubber seal around the edges of the discs protects against the oil escaping under the high forces experienced during the lift. The oil in the container was connected to a hydraulic pump through a small pipe. Usage In the 1980s, a construction company, PT Hutama Karya, was granted a contract to build a highway above Jalan A. Yani and, in 1987, to build a flyover between Cawang and Tanjung Priok. Tjokorda had the idea of initially erecting the concrete pier shafts and then building the poured concrete pier heads, which weigh approximately 480 tonnes each, in the centre lane, parallel to the existing roadway, and then raising and turning the pier heads 90 degrees into place. On 27 July 1988, at 22:00 GMT+7, a 440-tonne concrete pier head was moved using a hydraulic pump that was pressurized to 78 kgf/cm² (7.6 MPa). The pier head, despite a lack of iron supports, was lifted and placed on top of the pier shaft. The longest stretch of overpass built using this technique is in Metro Manila, Philippines, at the Metro Manila Skyway. In the Philippines, 298 supports have been erected, while in Kuala Lumpur, the figure is 135. Naming and patent In November 1989, President Soeharto of Indonesia gave it the name Sosrobahu. The name was taken from a character in the Ramayana, and derives from Old Javanese for 'thousand shoulders'. The Indonesian patent was granted in 1995, while the Japanese patent was granted in 1992. The technology has also been exported to the Philippines, Malaysia, Thailand, and Singapore. 
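The numbers quoted above allow a rough plausibility check of the LBPH bearing. The sketch below assumes, for simplicity, that the oil pressure acts over the full face of one 80 cm disc; the real pressure distribution in the bearing will differ, so this is an order-of-magnitude check only.

```python
import math

# Order-of-magnitude check: oil at 78 kgf/cm^2 over an 80 cm diameter disc.
# Assumes the full disc face is pressurized (an idealization of the LBPH).
pressure_kgf_per_cm2 = 78.0
disc_diameter_cm = 80.0

area_cm2 = math.pi * (disc_diameter_cm / 2.0) ** 2   # ~5,027 cm^2
load_kgf = pressure_kgf_per_cm2 * area_cm2

print(f"disc area     : {area_cm2:,.0f} cm^2")
print(f"supported load: {load_kgf / 1000:,.0f} tonnes")  # ~392 t
```

The result, roughly 392 tonnes, is the same order as the 440-tonne pier head, consistent with the account of a pump working near its limit during the first lift.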
References Further reading "Sosrobahu Bertumpu di Atas Piring", Gatra, 21 August 2004; requests payment (in Indonesian) External links About Tjokorda Raka Sukawati and Sosrobahu; in Indonesian Hydraulic tools Road construction Science and technology in Indonesia Indonesian inventions
Sosrobahu
[ "Physics", "Engineering" ]
766
[ "Physical systems", "Road construction", "Construction", "Hydraulics", "Hydraulic tools" ]
3,492,663
https://en.wikipedia.org/wiki/%CE%91-Halo%20ketone
In organic chemistry, an α-halo ketone is a functional group consisting of a ketone group or more generally a carbonyl group with an α-halogen substituent. α-Halo ketones are alkylating agents. Prominent α-halo ketones include phenacyl bromide and chloroacetone. Structure The general structure is RR′C(X)C(=O)R where R is an alkyl or aryl residue and X any one of the halogens. The preferred conformation of a halo ketone is cisoid, with the halogen and carbonyl sharing the same plane, as the steric hindrance with the carbonyl alkyl group is generally larger. Halo ketone synthesis Halo ketones and halo carbonyl compounds in general are synthesized by reaction of carbonyl compounds with sources of X+ (X = halogen), which is provided using halogens: RC(O)CH3 + X2 → RC(O)CH2X + HX Specialized sources of electrophilic halogenating agents include N-bromosuccinimide and 1,3-dibromo-5,5-dimethylhydantoin (DBDMH). In the Nierenstein reaction, an acyl chloride reacts with diazomethane to give the corresponding α-chloromethyl ketone. Asymmetric synthesis Efforts have been reported in the asymmetric synthesis of halo carbonyls through organocatalysis. In one study an acid chloride is converted into an α-halo ester with a strong base (sodium hydride), a bromine donor and an organocatalyst based on proline and quinine. In the proposed reaction mechanism the base first converts the acid chloride to the ketene; the organocatalyst then introduces chirality through its quinonoid tertiary amine, forming a ketene adduct. Reactions Illustrative of their alkylating activity are reactions with potassium iodide in acetone: chloroacetone reacts faster than 1-chloropropane by a factor of 36,000. Halo ketones react with phosphites in the Perkow reaction. The halo group can be removed in reductive dehalogenation of halo ketones. α-Halo ketones can also be converted to alkenes by treatment with hydrazine. Due to the presence of two electron-withdrawing groups (carbonyl and halide), the α-hydrogen is acidic. This property is exploited in the Favorskii rearrangement, where base abstracts first an acidic α-hydrogen and the resulting carbanion then displaces the halogen. In crossed aldol reactions between halo ketones and aldehydes, the initial reaction product is a halohydrin, which can subsequently form an oxirane in the presence of base. α-Halo ketones can react with amines to form an α-halo imine, which can be converted back to the parent halo ketone by hydrolysis, so that halo imines may be used as masked versions of halo ketones. This allows some chemical transformations to be achieved that are not possible with the parent halo ketones directly. Precursors to heterocycles Halo ketones take part in several reaction types, especially since they are bifunctional, with two electrophilic sites (α-carbon and carbonyl carbon). In one manifestation of this duality, they are precursors to heterocycles. Thiazoles arise from reaction of chloroacetone with thioamides. 2-Aminothiazoles are similarly produced by reaction of 2-chloroketones with thioureas. Pyrroles may be synthesized by reaction of halo ketones with dicarbonyls and ammonia in the Hantzsch pyrrole synthesis. References Functional groups
Α-Halo ketone
[ "Chemistry" ]
789
[ "Functional groups" ]
3,493,610
https://en.wikipedia.org/wiki/Trans%20effect
In inorganic chemistry, the trans effect is the increased lability of ligands that are trans to certain other ligands, which can thus be regarded as trans-directing ligands. It is attributed to electronic effects and it is most notable in square planar complexes, although it can also be observed for octahedral complexes. The analogous cis effect is most often observed in octahedral transition metal complexes. In addition to this kinetic trans effect, trans ligands also have an influence on the ground state of the molecule, the most notable ones being bond lengths and stability. Some authors prefer the term trans influence to distinguish it from the kinetic effect, while others use more specific terms such as structural trans effect or thermodynamic trans effect. The discovery of the trans effect is attributed to Ilya Ilich Chernyaev, who recognized it and gave it a name in 1926. Kinetic trans effect The intensity of the trans effect (as measured by the increase in rate of substitution of the trans ligand) follows this sequence: F−, H2O, OH− < NH3 < py < Cl− < Br− < I−, SCN−, NO2−, SC(NH2)2, Ph− < SO32− < PR3, AsR3, SR2, CH3− < H−, NO, CO, CN−, C2H4 One classic example of the trans effect is the synthesis of cisplatin and its trans isomer. The complex PtCl42− reacts with ammonia to give [PtCl3NH3]−. A second substitution by ammonia gives cis-[PtCl2(NH3)2], showing that Cl− has a greater trans effect than NH3. The procedure is, however, complicated by the production of Magnus's green salt. As a result, cisplatin is produced commercially via [PtI4]2−, as first reported by Dhara in 1970. If, on the other hand, one starts from Pt(NH3)42+, the trans product is obtained instead. The trans effect in square complexes can be explained in terms of an addition/elimination mechanism that goes through a trigonal bipyramidal intermediate. Ligands with a high trans effect are in general those with high π acidity (as in the case of phosphines) or low ligand lone-pair–dπ repulsions (as in the case of hydride), which prefer the more π-basic equatorial sites in the intermediate. The second equatorial position is occupied by the incoming ligand; due to the principle of microscopic reversibility, the departing ligand must also leave from an equatorial position. The third and final equatorial site is occupied by the trans ligand, so the net result is that the kinetically favored product is the one in which the ligand trans to the one with the largest trans effect is eliminated. Structural trans effect The structural trans effect can be measured experimentally using X-ray crystallography, and is observed as a stretching of the bonds between the metal and the ligand trans to a trans-influencing ligand. Stretching by as much as 0.2 Å occurs with strong trans-influencing ligands such as hydride. A cis influence can also be observed, but is smaller than the trans influence. The relative importance of the cis and trans influences depends on the formal electron configuration of the metal center, and explanations have been proposed based on the involvement of the atomic orbitals. References Further reading Coordination chemistry
Trans effect
[ "Chemistry" ]
712
[ "Coordination chemistry" ]
15,588,021
https://en.wikipedia.org/wiki/Coastal%20engineering
Coastal engineering is a branch of civil engineering concerned with the specific demands posed by constructing at or near the coast, as well as the development of the coast itself. The hydrodynamic impact of waves, tides, storm surges and tsunamis in particular, and (often) the harsh environment of salt seawater, are typical challenges for the coastal engineer, as are the morphodynamic changes of the coastal topography, caused both by the autonomous development of the system and human-made changes. The areas of interest in coastal engineering include the coasts of the oceans, seas, marginal seas, estuaries and big lakes. Besides the design, building and maintenance of coastal structures, coastal engineers are often involved in interdisciplinary integrated coastal zone management, also because of their specific knowledge of the hydro- and morphodynamics of the coastal system. This may include providing input and technology for e.g. environmental impact assessment, port development, strategies for coastal defense, land reclamation, offshore wind farms and other energy-production facilities, etc. Specific challenges The coastal environment produces challenges specific to this branch of engineering: waves, storm surges, tides, tsunamis, sea level changes, sea water and the marine ecosystem. Most often, in coastal engineering projects there is a need for metocean conditions: local wind and wave climate, as well as statistics for and information on other hydrodynamic quantities of interest. Also, bathymetry and morphological changes are of direct interest. In the case of studies of sediment transport and morphological changes, relevant properties of the sea bed sediments, water and ecosystem properties are needed. Long and short waves Wave phenomena – like sea waves, swell, tides and tsunamis – require engineering knowledge of their physics, as well as models: both numerical models and physical models. The practices in present-day coastal engineering are increasingly based on models verified and validated by experimental data. Apart from the transformations of waves coming from deep water into the shallow coastal waters and surf zone, the effects of the waves themselves are important. These effects include: the wave loading on coastal structures like breakwaters, groynes, jetties, sea walls and dikes wave-induced currents, like the longshore current in the surf zone, rip currents and Stokes drift, affecting sediment transport and morphodynamics wave agitation in harbors, which may result in harbor downtime wave overtopping over seawalls and dikes, which may e.g. threaten the stability of a dike Underwater construction Coastal engineering takes place at or near the interface between land and water. Consequently, a significant part of coastal engineering involves underwater construction, particularly for foundations. Breakwaters, sea walls, harbour structures like jetties, wharves and docks, bridges, tunnels, outfalls and causeways usually involve underwater work. Sustainability and soft engineering In recent decades, coastal engineers have favored non-structural solutions, which avoid the adverse impacts that are typically caused by structures such as sea walls, bulkheads, jetties, etc. These solutions include beach nourishment, marsh restoration/creation, and habitat restoration. More recently, coastal engineers have turned to beneficial use of dredged material, which utilizes material dredged for navigation maintenance to nourish beaches and restore wetlands. 
Beneficial use is also employed to increase the elevation of marsh platforms in an attempt to adapt to sea level rise. Regional sediment management has also become a focus strategy for coastal practitioners. This essentially uses nearshore sediment sources and knowledge of coastal morphology to identify which accretional features can be harvested to bolster erosional areas, with the understanding that the harvested material will continue to accumulate. A common regional sediment management option is to dredge ebb and flood shoals to nourish beaches. Both beneficial use and regional sediment management recognize the scarcity of material resources offshore and upland. See also Coastal engineering (CERF) Notes References External links – Proceedings of the International Conference on Coastal Engineering (ICCE), held since 1950 (biennially since 1960). Civil engineering Coastal construction
Coastal engineering
[ "Engineering" ]
828
[ "Construction", "Coastal engineering", "Coastal construction", "Civil engineering" ]
15,593,113
https://en.wikipedia.org/wiki/Barber%E2%80%93Layden%E2%80%93Power%20effect
The Barber–Layden–Power effect (BLP effect or colloquially Bleep) is a blast wave phenomenon observed in the immediate aftermath of the successful functioning of air-delivered high-drag ordnance at the target. In common with a typical blast wave, the flow field can be approximated as a lead shock wave, followed by a 'self-similar' subsonic flow field. The phenomenon appears to adhere to the basic principles of the Sedov solution. History The phenomenon is named after the lead researchers from a joint team drawn from NASA Ames Research Center, the Field Artillery Training Center at Fort Sill, Oklahoma and instructors from the USAF Air Weapons School at Nellis AFB, in response to a formal request for assistance from United States Central Command, MacDill AFB, Tampa, Florida, framed following events during Operation Anaconda. Instructors from the Royal School of Artillery's Gunnery Training Team also assisted. Application The effect is caused by extremely localised fluctuations in surface pressure and humidity, which cause the initial shock wave to distort momentarily and refocus on itself, leading to a double shock wave, each of markedly reduced effect. This has distinct utility in the employment of air-delivered ordnance close to key urban structures as part of an ongoing influence campaign. The energy of the blast is so great that the pressure and temperature of the gas outside the shock front is negligible compared to the pressure and temperature inside. This substantially reduces the number of parameters available in the problem, leaving only the energy E of the blast, the resting density ρ of the external gas, and the time t since the explosion. With only these three dimensional parameters, it is possible to form other quantities with unique functional dependences. In particular, the only length scale in the problem is R(t) ∝ (E t²/ρ)^(1/5). The constant of proportionality will depend on the equation of state of the gas. R can be effectively treated as a constant due to the nature of blasting weapons versus heat/blast ordnance. Future developments Work is ongoing into capturing the exact environmental conditions in which the effect can be reliably repeated. This work is part of the 'Grays Study' and will report in late 2008. References Fluid dynamics
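The dimensional argument sketched above is the standard Sedov–Taylor one, and it can be checked numerically. The snippet below uses the reconstructed scaling R(t) ∝ (E t²/ρ)^(1/5), with the dimensionless prefactor set to 1 as a placeholder for the equation-of-state-dependent constant mentioned in the text.

```python
# Sedov-Taylor length scale: the only length constructible from E, rho and t.
# The prefactor xi is O(1) and depends on the gas equation of state;
# xi = 1.0 here is a placeholder assumption.
def blast_radius(E: float, rho: float, t: float, xi: float = 1.0) -> float:
    """Blast-wave radius in metres for energy E (J), density rho (kg/m^3), time t (s)."""
    return xi * (E * t**2 / rho) ** 0.2

for t in (1e-3, 1e-2, 1e-1):  # a 1 GJ release into sea-level air
    print(f"t = {t:.3f} s  ->  R ~ {blast_radius(1e9, 1.2, t):5.1f} m")
```

Note the weak one-fifth-power dependence: a hundredfold increase in time (or energy) moves the front out by less than a factor of ten.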
Barber–Layden–Power effect
[ "Chemistry", "Engineering" ]
438
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
23,573,013
https://en.wikipedia.org/wiki/Urban%20stream
An urban stream is a formerly natural waterway that flows through a heavily populated area. Often, urban streams are the low-lying points in the landscape that characterize catchment urbanization. Urban streams are often polluted by urban runoff and combined sewer outflows. Water scarcity makes flow management in the rehabilitation of urban streams problematic. Description Governments may alter the flow or course of an urban stream to prevent localized flooding by river engineering: lining stream beds with concrete or other hardscape materials, diverting the stream into culverts and storm sewers, or other means. Some urban streams, such as the subterranean rivers of London, run completely underground. These modifications have often reduced habitat for fish and other species, caused downstream flooding due to alterations of flood plains, and worsened water quality. Stressors Toxicants, ionic concentrations, available nutrients, temperature (and light), and dissolved oxygen are key stressors to urban streams. Restoration efforts Some communities have begun stream restoration projects in an attempt to correct the problems caused by alteration, using techniques such as daylighting and fixing stream bank erosion caused by heavy stormwater runoff. Streamflow augmentation to restore habitat and aesthetics is also an option, and recycled water can be used for this purpose. Urban stream syndrome Urban stream syndrome (USS) is a consistently observed ecological degradation of streams caused by urbanization. This kind of stream degradation is commonly found in areas near or in urban areas. USS also considers hydrogeomorphic changes, which are characterized by a deeper, wider channel, reduced living space for biota, and altered sediment transport rates. The status of water quality is difficult to assess in urban areas because of the complexity of the pollution sources. These can include mining and deforestation, but the main cause can be attributed to urban and suburban development, because such land use has a domino effect that can be felt tens of kilometers away. Consistent decreases in the ecological health of streams can stem from many things, but most can be directly or indirectly attributed to human infrastructure and action. Urban streams tend to be "flashier", meaning they have more frequent and larger high flow events. Urban streams also suffer from chemical alterations due to pollutants and waste being dumped, often untreated, back into rivers and lakes. An example of this is Onondaga Lake. Historically one of the most polluted freshwater lakes in the world, its salinity and toxic constituents like mercury rose to unsafe levels as large corporations began to set up operations around the lake. High levels of salinity would be disastrous for any native freshwater life, and pollutants like mercury are dangerous to most organisms. Higher levels of urbanization typically mean a greater presence of urban stream syndrome. Hydrology plays a key role in urban stream syndrome. As urbanization of these catchments continues, there is in turn a decrease in the perviousness of the catchment to precipitation, which leads to a decrease in infiltration and an increase in surface runoff. This can cause problems during flood discharges. For example, flood discharges were at least 250% higher in urban catchments than in forested catchments in New York and Texas during similar storms. 
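The imperviousness argument above can be illustrated with the rational method, Q = C·i·A, a standard first-order peak-runoff estimate. The runoff coefficients below are typical textbook ranges used here as assumptions, not site measurements.

```python
# Rational method Q = C * i * A: peak runoff for the same storm over an
# urban vs. a forested catchment. C values are assumed textbook figures.
def peak_runoff(C: float, intensity_mm_per_hr: float, area_km2: float) -> float:
    """Peak discharge in m^3/s."""
    i = intensity_mm_per_hr / 1000.0 / 3600.0   # rainfall intensity, m/s
    return C * i * (area_km2 * 1e6)             # catchment area in m^2

storm_mm_hr, area_km2 = 25.0, 2.0
urban  = peak_runoff(0.85, storm_mm_hr, area_km2)   # dense urban cover (assumed C)
forest = peak_runoff(0.15, storm_mm_hr, area_km2)   # forested cover (assumed C)
print(f"urban {urban:.1f} m^3/s vs forest {forest:.1f} m^3/s ({urban/forest:.1f}x)")
```

With these coefficients the urban peak is several times the forested one, in line with the 250%-plus differences reported for the New York and Texas catchments.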
Treatment Many water managers treat USS by directly addressing the symptoms, most commonly through channel reconfiguration that includes reshaping rock to address altered hydrology and sediment regimes. In spite of having ecological objectives, this approach has been criticized for addressing physical failures in the system without improving ecological conditions. See also Nationwide Urban Runoff Program (NURP) – US research program Nonpoint source pollution Subterranean river References Bibliography External links Urban Waters Program – U.S. Environmental Protection Agency (EPA) Ecosystem Effects of Urban Stream Restoration – EPA Suspended Sediment and Discharge in a West London River Hydrology and urban planning Water pollution Environmental engineering Water streams Rivers Hydrology Fluvial landforms
Urban stream
[ "Chemistry", "Engineering", "Environmental_science" ]
779
[ "Hydrology", "Chemical engineering", "Water pollution", "Civil engineering", "Hydrology and urban planning", "Environmental engineering" ]
23,573,423
https://en.wikipedia.org/wiki/C17H22N2O
The molecular formula C17H22N2O may refer to: 4,4'-Bis(dimethylamino)benzhydrol Doxylamine, a sedative antihistamine 5-MeO-DALT, or N,N-diallyl-5-methoxytryptamine Molecular formulas
C17H22N2O
[ "Physics", "Chemistry" ]
87
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,573,695
https://en.wikipedia.org/wiki/C22H42O2
The molecular formula C22H42O2, 338.57 g/mol, may refer to: Erucic acid Butyl oleate Molecular formulas
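Pages like this one index isomers sharing a molecular formula; the quoted molecular weight can be recomputed from standard atomic weights. A quick check for C22H42O2:

```python
# Recompute the molecular weight of C22H42O2 from standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molecular_weight(counts):
    return sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())

mw = molecular_weight({"C": 22, "H": 42, "O": 2})
print(f"{mw:.2f} g/mol")  # 338.58, matching the quoted 338.57 to rounding
```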
C22H42O2
[ "Physics", "Chemistry" ]
34
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,574,024
https://en.wikipedia.org/wiki/Eurocarbdb
EuroCarbDB was an EU-funded initiative for the creation of software and standards for the systematic collection of carbohydrate structures and their experimental data; it was discontinued in 2010 due to lack of funding. The project included a database of known carbohydrate structures and experimental data, specifically mass spectrometry, HPLC and NMR data, accessed via a web interface that provided for browsing, searching and contribution of structures and data to the database. The project also produced a number of associated bioinformatics tools for carbohydrate researchers: GlycanBuilder, a Java applet for drawing glycan structures GlycoWorkbench, a standalone Java application for semi-automated analysis and annotation of glycan mass spectra GlycoPeakfinder, a webapp for calculating glycan compositions from mass data The canonical online version of EuroCarbDB was hosted by the European Bioinformatics Institute at www.ebi.ac.uk up to 2012, and then at relax.organ.su.se. EuroCarb code has since been incorporated into and extended by UniCarb-DB, which also includes the work of the defunct GlycoSuite database. References External links an online version of EuroCarbDB Eurocarbdb googlecode project initial publication of the EuroCarb project Official site for eurocarbdb reports and recommendations (no longer active) Bioinformatics software Biological databases Carbohydrates Science and technology in Cambridgeshire South Cambridgeshire District
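GlycoPeakfinder's core task, computing glycan compositions from mass data, can be sketched as a small search. The residue masses below are standard monoisotopic values for common monosaccharide residues; the target mass, tolerance, and count limit are illustrative assumptions, and real tools also handle adducts, derivatization, and charge states.

```python
from itertools import product

# Enumerate monosaccharide compositions whose residue masses (plus one
# water for the free reducing end) match a target mass within a tolerance.
RESIDUES = {"Hex": 162.0528, "HexNAc": 203.0794, "dHex": 146.0579, "NeuAc": 291.0954}
WATER = 18.0106

def find_compositions(target, tol=0.05, max_count=6):
    names = list(RESIDUES)
    for counts in product(range(max_count + 1), repeat=len(names)):
        mass = WATER + sum(RESIDUES[n] * c for n, c in zip(names, counts))
        if any(counts) and abs(mass - target) <= tol:
            yield dict(zip(names, counts)), mass

# Example: the Man5 N-glycan (Hex5HexNAc2) has monoisotopic mass ~1234.43 Da.
for composition, mass in find_compositions(1234.4334):
    print(composition, f"{mass:.4f} Da")
```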
Eurocarbdb
[ "Chemistry", "Biology" ]
317
[ "Biomolecules by chemical classification", "Carbohydrates", "Bioinformatics software", "Organic compounds", "Bioinformatics", "Carbohydrate chemistry", "Biological databases" ]
17,308,643
https://en.wikipedia.org/wiki/V%20speeds
In aviation, V-speeds are standard terms used to define airspeeds important or useful to the operation of all aircraft. These speeds are derived from data obtained by aircraft designers and manufacturers during flight testing for aircraft type-certification. Using them is considered a best practice to maximize aviation safety, aircraft performance, or both. The actual speeds represented by these designators are specific to a particular model of aircraft. They are expressed by the aircraft's indicated airspeed (and not by, for example, the ground speed), so that pilots may use them directly, without having to apply correction factors, as aircraft instruments also show indicated airspeed. In general aviation aircraft, the most commonly used and most safety-critical airspeeds are displayed as color-coded arcs and lines located on the face of an aircraft's airspeed indicator. The lower ends of the white arc and the green arc are the stalling speed with wing flaps in landing configuration, and stalling speed with wing flaps retracted, respectively. These are the stalling speeds for the aircraft at its maximum weight. The yellow band is the range in which the aircraft may be operated in smooth air, and then only with caution to avoid abrupt control movement. The red line is the VNE, the never-exceed speed. Proper display of V-speeds is an airworthiness requirement for type-certificated aircraft in most countries. Regulations The most common V-speeds are often defined by a particular government's aviation regulations. In the United States, these are defined in title 14 of the United States Code of Federal Regulations, known as the Federal Aviation Regulations (FARs). In Canada, the regulatory body, Transport Canada, defines 26 commonly used V-speeds in their Aeronautical Information Manual. V-speed definitions in FAR 23, 25 and equivalent are for designing and certification of airplanes, not for their operational use. The descriptions below are for use by pilots. Regulatory V-speeds These V-speeds are defined by regulations. They are typically defined with constraints such as weight, configuration, or phases of flight. Some of these constraints have been omitted to simplify the description. Other V-speeds Some of these V-speeds are specific to particular types of aircraft and are not defined by regulations. Mach numbers Whenever a limiting speed is expressed by a Mach number, it is expressed relative to the local speed of sound, e.g. VMO: Maximum operating speed, MMO: Maximum operating Mach number. V1 definitions V1 is the critical engine failure recognition speed or takeoff decision speed. It is the speed above which the takeoff will continue even if an engine fails or another problem occurs, such as a blown tire. The speed will vary among aircraft types and varies according to factors such as aircraft weight, runway length, wing flap setting, engine thrust used and runway surface contamination; thus, it must be determined by the pilot before takeoff. Aborting a takeoff after V1 is strongly discouraged because the aircraft may not be able to stop before the end of the runway, thus suffering a runway overrun. V1 is defined differently in different jurisdictions, and definitions change over time as aircraft regulations are amended. 
The US Federal Aviation Administration and the European Union Aviation Safety Agency define it as: "the maximum speed in the takeoff at which the pilot must take the first action (e.g., apply brakes, reduce thrust, deploy speed brakes) to stop the airplane within the accelerate-stop distance. V1 also means the minimum speed in the takeoff, following a failure of the critical engine at VEF, at which the pilot can continue the takeoff and achieve the required height above the takeoff surface within the takeoff distance." V1 thus includes reaction time. In addition to this reaction time, a safety margin equivalent to 2 seconds at V1 is added to the accelerate-stop distance. Transport Canada defines it as: "Critical engine failure recognition speed" and adds: "This definition is not restrictive. An operator may adopt any other definition outlined in the aircraft flight manual (AFM) of TC type-approved aircraft as long as such definition does not compromise operational safety of the aircraft." See also ICAO recommendations on use of the International System of Units Balanced field takeoff Notes References Further reading Airspeed Aircraft performance
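The arc scheme described earlier maps naturally onto a small classifier. The V-speeds below are hypothetical values for an unspecified light aircraft, purely to illustrate how the colored ranges partition the indicated-airspeed scale; note the white and green arcs overlap between VS1 and VFE, and this sketch reports the green arc there.

```python
# Hypothetical V-speeds (knots indicated airspeed), for illustration only.
VS0, VS1, VFE, VNO, VNE = 40, 48, 85, 129, 163

def arc(ias: float) -> str:
    """Classify an indicated airspeed against the color-coded arcs."""
    if ias >= VNE:
        return "red line: never-exceed speed"
    if ias > VNO:
        return "yellow arc: smooth air only, avoid abrupt control movement"
    if ias >= VS1:
        return "green arc: normal operating range"
    if ias >= VS0:
        return "white arc only: flap operating range"
    return "below VS0: beneath the marked ranges"

for ias in (35, 44, 70, 140, 170):
    print(f"{ias:3d} kt -> {arc(ias)}")
```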
V speeds
[ "Physics" ]
860
[ "Wikipedia categories named after physical quantities", "Airspeed", "Physical quantities" ]
17,314,163
https://en.wikipedia.org/wiki/Spin%20column-based%20nucleic%20acid%20purification
Spin column-based nucleic acid purification is a solid phase extraction method to quickly purify nucleic acids. This method relies on the fact that nucleic acid will bind to the solid phase of silica under certain conditions. Procedure The different stages of the method are lyse, bind, wash, and elute. More specifically, this entails the lysis of target cells to release nucleic acids, selective binding of nucleic acid to a silica membrane, washing away particulates and inhibitors that are not bound to the silica membrane, and elution of the nucleic acid, with the end result being purified nucleic acid in an aqueous solution. For lysis, the cells (blood, tissue, etc.) of the sample must undergo a treatment to break the cell membrane and free the nucleic acid. Depending on the target material, this can include the use of detergent or other buffers, proteinases or other enzymes, heating to various times/temperatures, or mechanical disruption such as cutting with a knife or homogenizer, using a mortar and pestle, or bead-beating with a bead mill. For binding, a buffer solution is then added to the lysed sample along with ethanol or isopropanol. The sample in binding solution is then transferred to a spin column, and the column is put either in a centrifuge or attached to a vacuum. The centrifuge/vacuum forces the solution through a silica membrane that is inside the spin column, where, under the right ionic conditions, nucleic acids bind to the silica membrane as the rest of the solution passes through. With the target material bound, the flow-through can be removed. To wash, a new buffer is added onto the column, then centrifuged/vacuumed through the membrane. This buffer is intended to maintain binding conditions while removing the binding salts and other remaining contaminants. Generally, it takes several washes, often with increasing percentages of ethanol/isopropanol, until the nucleic acid on the silica membrane is free of contaminants. The last 'wash' is often a dry step to allow the alcohol to evaporate, leaving only purified nucleic acids bound to the column. Finally, elution is the process of adding an aqueous solution to the column, allowing the hydrophilic nucleic acid to leave the column and return to solution. Elution may be improved with adjustments to salt, pH, time, or heat. To capture the eluate, the column is transferred into a clean microtube prior to a last centrifugation step. Related methods Even prior to the nucleic acid methods employed today, it was known that in the presence of chaotropic agents, such as sodium iodide or sodium perchlorate, DNA binds to silica, glass particles or to unicellular algae called diatoms which shield their cell walls with silica. This property was used to purify nucleic acid using glass powder or silica beads under alkaline conditions. This was later improved using guanidinium thiocyanate or guanidinium hydrochloride as the chaotropic agent. For ease of handling, the use of glass beads was later changed to silica columns. To enable the use of automated extraction instruments, silica-coated paramagnetic beads were developed, more commonly referred to as "magnetic bead" extraction. See also DNA separation by silica adsorption Guanidinium thiocyanate-phenol-chloroform extraction Ethanol precipitation SCODA DNA purification Plasmid preparation References Biochemistry methods Molecular biology
Spin column-based nucleic acid purification
[ "Chemistry", "Biology" ]
772
[ "Biochemistry methods", "Biochemistry", "Molecular biology" ]
17,314,993
https://en.wikipedia.org/wiki/Hosford%20yield%20criterion
The Hosford yield criterion is a function that is used to determine whether a material has undergone plastic yielding under the action of stress. Hosford yield criterion for isotropic plasticity The Hosford yield criterion for isotropic materials is a generalization of the von Mises yield criterion. It has the form (1/2)|σ2 − σ3|^n + (1/2)|σ3 − σ1|^n + (1/2)|σ1 − σ2|^n = σy^n, where σi, i = 1,2,3 are the principal stresses, n is a material-dependent exponent and σy is the yield stress in uniaxial tension/compression. Alternatively, the yield criterion may be written as σy = ((1/2)(|σ2 − σ3|^n + |σ3 − σ1|^n + |σ1 − σ2|^n))^(1/n). This expression has the form of an Lp norm, which is defined as ||x||p = (|x1|^p + |x2|^p + ... + |xm|^p)^(1/p). When p → ∞, we get the L∞ norm, ||x||∞ = max(|x1|, ..., |xm|). Comparing this with the Hosford criterion indicates that if n = ∞, we have max(|σ2 − σ3|, |σ3 − σ1|, |σ1 − σ2|) = σy. This is identical to the Tresca yield criterion. Therefore, when n = 1 or n goes to infinity the Hosford criterion reduces to the Tresca yield criterion. When n = 2 the Hosford criterion reduces to the von Mises yield criterion. Note that the exponent n does not need to be an integer. Hosford yield criterion for plane stress For the practically important situation of plane stress (σ3 = 0), the Hosford yield criterion takes the form (1/2)(|σ1|^n + |σ2|^n) + (1/2)|σ1 − σ2|^n = σy^n. A plot of the yield locus in plane stress for various values of the exponent n is shown in the adjacent figure. Logan-Hosford yield criterion for anisotropic plasticity The Logan-Hosford yield criterion for anisotropic plasticity is similar to Hill's generalized yield criterion and has the form F|σ2 − σ3|^n + G|σ3 − σ1|^n + H|σ1 − σ2|^n = 1, where F, G, H are constants, σi are the principal stresses, and the exponent n depends on the type of crystal (bcc, fcc, hcp, etc.) and has a value much greater than 2. Accepted values of n are 6 for bcc materials and 8 for fcc materials. Though the form is similar to Hill's generalized yield criterion, the exponent n is independent of the R-value, unlike Hill's criterion. Logan-Hosford criterion in plane stress Under plane stress conditions, the Logan-Hosford criterion can be expressed as (1/(1 + R))(|σ1|^n + |σ2|^n) + (R/(1 + R))|σ1 − σ2|^n = σy^n, where R is the R-value and σy is the yield stress in uniaxial tension/compression. For a derivation of this relation see Hill's yield criteria for plane stress. A plot of the yield locus for the anisotropic Hosford criterion is shown in the adjacent figure. For values of n that are less than 2, the yield locus exhibits corners and such values are not recommended. References See also Yield surface Yield (engineering) Plasticity (physics) Stress (physics) Plasticity (physics) Solid mechanics Mechanics Yield criteria
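The limiting behaviour claimed above (Tresca at n = 1 and n → ∞, von Mises at n = 2) is easy to verify numerically from the reconstructed isotropic form. A minimal sketch:

```python
# Hosford equivalent stress: (0.5*(|s2-s3|^n + |s3-s1|^n + |s1-s2|^n))^(1/n).
def hosford(s1: float, s2: float, s3: float, n: float) -> float:
    return (0.5 * (abs(s2 - s3)**n + abs(s3 - s1)**n + abs(s1 - s2)**n)) ** (1.0 / n)

s = (300.0, 100.0, -50.0)   # arbitrary principal stresses, MPa
for n in (1, 2, 6, 8, 100):
    print(f"n = {n:3d}: {hosford(*s, n):6.1f} MPa")
# n = 1 gives exactly the Tresca value (350 MPa here); n = 2 gives von Mises
# (~304 MPa); large n approaches Tresca again, as argued via the L-infinity norm.
```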
Hosford yield criterion
[ "Physics", "Materials_science", "Engineering" ]
522
[ "Deformation (mechanics)", "Mechanics", "Mechanical engineering", "Plasticity (physics)" ]
17,315,193
https://en.wikipedia.org/wiki/Lankford%20coefficient
The Lankford coefficient (also called Lankford value, R-value, or plastic strain ratio) is a measure of the plastic anisotropy of a rolled sheet metal. This scalar quantity is used extensively as an indicator of the formability of recrystallized low-carbon steel sheets. Definition If x and y are the coordinate directions in the plane of rolling and z is the thickness direction, then the R-value is given by R = εy/εz, where εy is the in-plane plastic strain, transverse to the loading direction, and εz is the plastic strain through-the-thickness. More recent studies have shown that the R-value of a material can depend strongly on the strain even at small strains. In practice, the value is usually measured at 20% elongation in a tensile test. For sheet metals, the values are usually determined for three different directions of loading in-plane (0°, 45° and 90° to the rolling direction) and the normal R-value is taken to be the average R̄ = (R0 + 2R45 + R90)/4. The planar anisotropy coefficient or planar R-value is a measure of the variation of R with angle from the rolling direction. This quantity is defined as ΔR = (R0 − 2R45 + R90)/2. Anisotropy of steel sheets Generally, the Lankford value of a cold rolled steel sheet, which governs deep-drawability, is strongly orientation dependent, and good deep-drawability is characterized by a high R̄. However, in actual press-working, the deep-drawability of steel sheets cannot be determined only by the value of R̄, and the measure of planar anisotropy, ΔR, is more appropriate. In an ordinary cold rolled steel, R90 is the highest, and R45 is the lowest. Experience shows that even if R45 is close to 1, R0 and R90 can be quite high, leading to a high average value of R̄. In such cases, any press-forming process design on the basis of R̄ does not lead to an improvement in deep-drawability. See also Yield surface References Plasticity (physics) Solid mechanics Metal forming
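In practice the three directional R-values come from tensile tests on coupons cut at 0°, 45° and 90° to the rolling direction. The sketch below computes a single R-value from made-up width and thickness gauge measurements, then the averaged quantities defined above.

```python
import math

def r_value(w0: float, w: float, t0: float, t: float) -> float:
    """R = (transverse plastic strain) / (through-thickness plastic strain)."""
    return math.log(w / w0) / math.log(t / t0)

# Made-up gauge measurements before/after ~20% elongation:
r0  = r_value(12.50, 11.54, 1.00, 0.950)
r45 = r_value(12.50, 11.80, 1.00, 0.945)
r90 = r_value(12.50, 11.45, 1.00, 0.956)

r_bar   = (r0 + 2 * r45 + r90) / 4   # normal anisotropy, R-bar
delta_r = (r0 - 2 * r45 + r90) / 2   # planar anisotropy, delta-R
print(f"R0={r0:.2f} R45={r45:.2f} R90={r90:.2f}  R-bar={r_bar:.2f}  dR={delta_r:.2f}")
```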
Lankford coefficient
[ "Physics", "Materials_science" ]
388
[ "Deformation (mechanics)", "Solid mechanics", "Mechanics", "Plasticity (physics)" ]
17,318,563
https://en.wikipedia.org/wiki/Deflection%20%28engineering%29
In structural engineering, deflection is the degree to which a part of a long structural element (such as a beam) is deformed laterally (in the direction transverse to its longitudinal axis) under a load. It may be quantified in terms of an angle (angular displacement) or a distance (linear displacement). A longitudinal deformation (in the direction of the axis) is called elongation. The deflection distance of a member under a load can be calculated by integrating the function that mathematically describes the slope of the deflected shape of the member under that load. Standard formulas exist for the deflection of common beam configurations and load cases at discrete locations. Otherwise methods such as virtual work, direct integration, Castigliano's method, Macaulay's method or the direct stiffness method are used. The deflection of beam elements is usually calculated on the basis of the Euler–Bernoulli beam equation while that of a plate or shell element is calculated using plate or shell theory. An example of the use of deflection in this context is in building construction. Architects and engineers select materials for various applications. Beam deflection for various loads and supports Beams can vary greatly in their geometry and composition. For instance, a beam may be straight or curved. It may be of constant cross section, or it may taper. It may be made entirely of the same material (homogeneous), or it may be composed of different materials (composite). Some of these things make analysis difficult, but many engineering applications involve cases that are not so complicated. Analysis is simplified if: The beam is originally straight, and any taper is slight The beam experiences only linear elastic deformation The beam is slender (its length to height ratio is greater than 10) Only small deflections are considered (max deflection less than 1/10 of the span). In this case, the equation governing the beam's deflection w can be approximated as: d²w(x)/dx² = M(x)/(E I), where the second derivative of its deflected shape with respect to x (x being the horizontal position along the length of the beam) is interpreted as its curvature, E is the Young's modulus, I is the area moment of inertia of the cross-section, and M is the internal bending moment in the beam. If, in addition, the beam is not tapered and is homogeneous, and is acted upon by a distributed load q, the above expression can be written as: E I d⁴w(x)/dx⁴ = q(x). This equation can be solved for a variety of loading and boundary conditions. A number of simple examples are shown below. The formulas expressed are approximations developed for long, slender, homogeneous, prismatic beams with small deflections, and linear elastic properties. Under these restrictions, the approximations should give results within 5% of the actual deflection. Cantilever beams Cantilever beams have one end fixed, so that the slope and deflection at that end must be zero. End-loaded cantilever beams For a (weightless) cantilever beam with an end load F, the elastic deflection δ_B and angle of deflection φ_B (in radians) at the free end B are: δ_B = F L³/(3 E I) and φ_B = F L²/(2 E I), where F is the force acting on the tip of the beam, L is the length of the beam (span), E is the modulus of elasticity, and I is the area moment of inertia of the beam's cross section. Note that if the span doubles, the deflection increases eightfold. The deflection at any point x along the span of an end-loaded cantilevered beam can be calculated using: δ_x = F x²(3L − x)/(6 E I) and φ_x = F x(2L − x)/(2 E I). Note: at x = L (the end of the beam), the δ_x and φ_x equations are identical to the δ_B and φ_B equations above. 
Uniformly loaded cantilever beams The deflection, at the free end B, of a cantilevered beam under a uniform load q is given by: δ_B = q L⁴/(8 E I), where q is the uniform load on the beam (force per unit length), L is the length of the beam, E is the modulus of elasticity, and I is the area moment of inertia. The deflection at any point x along the span of a uniformly loaded cantilevered beam can be calculated using: δ_x = q x²(6L² − 4L x + x²)/(24 E I). Simply supported beams Simply supported beams have supports under their ends which allow rotation, but not deflection. Center-loaded simple beams The deflection at any point x along the span of a center-loaded simply supported beam can be calculated using: δ_x = F x(3L² − 4x²)/(48 E I) for 0 ≤ x ≤ L/2. The special case of elastic deflection at the midpoint C of a beam, loaded at its center and supported by two simple supports, is then given by: δ_C = F L³/(48 E I), where F is the force acting on the center of the beam, L is the length of the beam between the supports, E is the modulus of elasticity, and I is the area moment of inertia. Off-center-loaded simple beams The maximum elastic deflection on a beam supported by two simple supports, loaded at a distance a from the closest support, is given by: δ_max = F a(L² − a²)^(3/2)/(9√3 L E I), where F is the force acting on the beam, L is the length of the beam between the supports, and a is the distance from the load to the closest support. This maximum deflection occurs at a distance x = √((L² − a²)/3) from the farthest support, i.e. within the longer segment of the beam. Uniformly loaded simple beams The elastic deflection (at the midpoint C) of a beam supported by two simple supports, under a uniform load q, is given by: δ_C = 5 q L⁴/(384 E I), where q is the uniform load (force per unit length), L is the length of the beam, E is the modulus of elasticity, and I is the area moment of inertia. The deflection at any point x along the span of a uniformly loaded simply supported beam can be calculated using: δ_x = q x(L³ − 2L x² + x³)/(24 E I). Combined loads The deflection of beams with a combination of simple loads can be calculated using the superposition principle. Change in length The change in length ΔL of the beam, projected along the line of the unloaded beam, can be calculated by integrating the square of the slope function, if the deflection function w(x) is known for all x: ΔL ≈ −(1/2)∫₀^L (dw/dx)² dx, the amount by which the projected span is shorter than the arc length of the deflected beam. If the beam is uniform and the deflection at any point is known, this can be calculated without knowing other properties of the beam. Units The formulas supplied above require the use of a consistent set of units. Most calculations will be made in the International System of Units (SI) or US customary units, although there are many other systems of units. International system (SI) Force: newtons (N) Length: metres (m) Modulus of elasticity: Pa = N/m² Moment of inertia: m⁴ US customary units (US) Force: pounds force (lbf) Length: inches (in) Modulus of elasticity: psi = lbf/in² Moment of inertia: in⁴ Others Other units may be used as well, as long as they are self-consistent. For example, sometimes the kilogram-force (kgf) unit is used to measure loads. In such a case, the modulus of elasticity must be converted to the matching unit, e.g. kgf/m² with lengths in metres. Structural deflection Building codes determine the maximum deflection, usually as a fraction of the span e.g. 1/400 or 1/600. Either the strength limit state (allowable stress) or the serviceability limit state (deflection considerations among others) may govern the minimum dimensions of the member required. The deflection must be considered for the purpose of the structure. When designing a steel frame to hold a glazed panel, one allows only minimal deflection to prevent fracture of the glass. The deflected shape of a beam can be represented by the moment diagram, integrated (twice, rotated and translated to enforce support conditions). See also Slope deflection method References External links Deflection of beams Beam Deflections Calculation tools for Deflection & slope of beams Engineering mechanics Structural analysis
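The closed-form results above translate directly into code. A minimal sketch, using an assumed steel section and loads chosen only for illustration:

```python
# Tip/midspan deflections from the standard formulas above.
E = 200e9      # Young's modulus of steel, Pa
I = 8.0e-6     # area moment of inertia, m^4 (assumed section)
L = 3.0        # span, m
F = 10_000.0   # point load, N
q = 5_000.0    # uniform load, N/m

cases = {
    "end-loaded cantilever, tip":        F * L**3 / (3 * E * I),
    "uniformly loaded cantilever, tip":  q * L**4 / (8 * E * I),
    "center-loaded simple beam, mid":    F * L**3 / (48 * E * I),
    "uniformly loaded simple beam, mid": 5 * q * L**4 / (384 * E * I),
}
for name, d in cases.items():
    print(f"{name:35s} {d * 1000:7.3f} mm")
```

Doubling the cantilever span to 6 m multiplies the first result by eight, the cubic dependence noted above.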
Deflection (engineering)
[ "Engineering" ]
1,398
[ "Structural engineering", "Structural analysis", "Civil engineering", "Mechanical engineering", "Aerospace engineering", "Engineering mechanics" ]
17,318,637
https://en.wikipedia.org/wiki/Coupling%20from%20the%20past
Among Markov chain Monte Carlo (MCMC) algorithms, coupling from the past is a method for sampling from the stationary distribution of a Markov chain. Contrary to many MCMC algorithms, coupling from the past gives in principle a perfect sample from the stationary distribution. It was invented by James Propp and David Wilson in 1996. The basic idea Consider a finite state irreducible aperiodic Markov chain M with state space S and (unique) stationary distribution π (π is a probability vector). Suppose that we come up with a probability distribution μ on the set of maps f: S → S with the property that for every fixed s ∈ S, its image f(s) is distributed according to the transition probability of M from state s. An example of such a probability distribution is the one where f(s) is independent from f(s') whenever s ≠ s', but it is often worthwhile to consider other distributions. Now let f_j for j ∈ Z be independent samples from μ. Suppose that x is chosen randomly according to π and is independent from the sequence (f_j). (We do not worry for now where this x is coming from.) Then f_{-1}(x) is also distributed according to π, because π is M-stationary and by our assumption on the law of f_{-1}. Define F_n := f_{-1} ∘ f_{-2} ∘ ⋯ ∘ f_{-n}. Then it follows by induction that F_n(x) is also distributed according to π for every n. However, it may happen that for some n the image of the map F_n is a single element of S. In other words, F_n(x) = F_n(y) for each y ∈ S. Therefore, we do not need to have access to x in order to compute F_n(x). The algorithm then involves finding some n such that the image of F_n is a singleton, and outputting the element of that singleton. The design of a good distribution μ for which the task of finding such an n and computing F_n is not too costly is not always obvious, but has been accomplished successfully in several important instances. The monotone case There is a special class of Markov chains in which there are particularly good choices for μ and a tool for determining if |F_n(S)| = 1. (Here |·| denotes cardinality.) Suppose that S is a partially ordered set with order ≤, which has a unique minimal element s₀ and a unique maximal element s₁; that is, every s ∈ S satisfies s₀ ≤ s ≤ s₁. Also, suppose that μ may be chosen to be supported on the set of monotone maps f: S → S. Then it is easy to see that |F_n(S)| = 1 if and only if F_n(s₀) = F_n(s₁), since F_n is monotone. Thus, checking this becomes rather easy. The algorithm can proceed by choosing n = n₀ for some constant n₀, sampling the maps f_{-1}, …, f_{-n}, and outputting F_n(s₀) if F_n(s₀) = F_n(s₁). If F_n(s₀) ≠ F_n(s₁), the algorithm proceeds by doubling n and repeating as necessary until an output is obtained. (But the algorithm does not resample the maps which were already sampled; it uses the previously sampled maps when needed.) References Monte Carlo methods Markov chain Monte Carlo
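A minimal sketch of the monotone case follows, assuming as an illustration a lazy random walk on the totally ordered state space {0, 1, …, N}. The chain and its update rule are invented for the example, but the reuse of previously sampled randomness and the doubling of n are exactly as described above.

```python
import random

N = 10  # maximal state s1; the minimal state s0 is 0 (illustrative chain)

def update(state, u):
    # One monotone random map f: the same uniform draw u is applied to every
    # state, so state <= state' implies f(state) <= f(state').
    if u < 1/3:
        return max(state - 1, 0)
    elif u < 2/3:
        return min(state + 1, N)
    return state  # lazy step

def cftp(seed=None):
    rng = random.Random(seed)
    draws = []  # draws[j] drives the map f_{-(j+1)}; it is never resampled
    n = 1
    while True:
        while len(draws) < n:
            draws.append(rng.random())
        lo, hi = 0, N  # run the two extreme trajectories from time -n
        for j in reversed(range(n)):  # apply f_{-n}, ..., f_{-1} in order
            lo = update(lo, draws[j])
            hi = update(hi, draws[j])
        if lo == hi:       # coalescence: F_n maps all of S to one state
            return lo      # a perfect sample from the stationary distribution
        n *= 2             # go further into the past and try again

print([cftp(seed=k) for k in range(10)])
```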
Coupling from the past
[ "Physics" ]
507
[ "Monte Carlo methods", "Computational physics" ]
20,720,013
https://en.wikipedia.org/wiki/MOWSE
MOWSE (for Molecular Weight Search) is a method to identify proteins from the molecular weight of peptides created by proteolytic digestion and measured with mass spectrometry. Development The MOWSE algorithm was developed by Darryl Pappin at the Imperial Cancer Research Fund and Alan Bleasby at the SERC Daresbury Laboratory. The probability-based MOWSE score formed the basis of development of Mascot, a proprietary software for protein identification from mass spectrometry data. See also Peptide mass fingerprinting Mascot (software) Genome-based peptide fingerprint scanning References Bioinformatics Mass spectrometry software Proteomics Science and technology in Cheshire
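For orientation, the sketch below shows the mass-matching step that underlies peptide mass fingerprinting. It is not the MOWSE scoring scheme itself (MOWSE weights each match by how frequently that peptide mass occurs among proteins in a given molecular-weight range); the database entries and measured masses below are invented for the example.

```python
# Naive peptide-mass matching: count measured masses that match the
# theoretical tryptic peptide masses of each database protein.
protein_db = {
    "protein_A": [842.5, 1045.6, 1479.8, 2211.1],  # masses in daltons (invented)
    "protein_B": [902.4, 1045.6, 1833.9],
}
measured = [842.51, 1479.79, 2211.08]  # masses observed by mass spectrometry
tolerance = 0.1                        # matching window, Da

def count_matches(theoretical, observed, tol):
    # A measured mass counts if it falls within tol of any theoretical mass.
    return sum(any(abs(m - t) <= tol for t in theoretical) for m in observed)

scores = {name: count_matches(peps, measured, tolerance)
          for name, peps in protein_db.items()}
best = max(scores, key=scores.get)
print(best, scores)  # best-matching candidate protein and the raw counts
```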
MOWSE
[ "Physics", "Chemistry", "Engineering", "Biology" ]
145
[ "Biological engineering", "Spectrum (physical sciences)", "Chemistry software", "Bioinformatics stubs", "Biotechnology stubs", "Biochemistry stubs", "Bioinformatics", "Mass spectrometry software", "Mass spectrometry" ]
20,722,742
https://en.wikipedia.org/wiki/Pancratistatin
Pancratistatin (PST) is a natural compound initially extracted from spider lily, a Hawaiian native plant of the family Amaryllidaceae (AMD). Occurrence Pancratistatin occurs naturally in Hawaiian spider lily, a flowering plant within the family Amaryllidaceae. Pancratistatin is mostly found in the bulb tissues of spider lilies. It has been shown that the enrichment of atmospheric CO2 can enhance the production of antiviral secondary metabolites, including pancratistatin, in these plants. Pancratistatin can be isolated from the tropical bulbs of Hymenocallis littoralis in the order of 100 to 150 mg/kg when bulbs are obtained from the wild type in Hawaii. However, the compound has to be commercially extracted from field- and greenhouse-grown bulbs or from tissue cultures cultivated, for example, in Arizona, which generate lower levels of pancratistatin (a maximum of 22 mg/kg) even in the peak month of October. After October, when the bulb becomes dormant, levels of pancratistatin drop, down to only 4 mg/kg by May. Field-grown bulbs, which show monthly changes in pancratistatin content, generate somewhat smaller amounts (2–5 mg/kg) compared to those grown in greenhouses cultivated over the same period. There are about 40 different spider lily species worldwide and they are mainly native to the Andes of South America. Pharmaceutical research Pancratistatin is thought to have potential as a basis for the development of new pharmaceuticals, particularly in the field of cancer treatment. Biosynthesis Although the biosynthesis of pancratistatin has not been precisely elucidated, proposals exist for the biosynthesis of narciclasine and lycoricidine, which are structurally very similar to pancratistatin. The biosynthesis is accomplished via synthesis from O-methylnorbelladine by para-para phenol coupling to obtain vittatine as an intermediate. Subsequent elimination of two carbon atoms and hydroxylations of vittatine then lead to narciclasine. Total synthesis The first total synthesis of racemic (±)-pancratistatin, reported by Samuel Danishefsky and Joung Yon Lee, involved a very complex and long (40-step) route. According to Danishefsky and Lee, there were several weak steps in this synthesis that gave rise to a disappointingly low synthetic yield. Among the most challenging issues, the Moffatt transposition and the orthoamide problem, which required a blocking maneuver to regiospecifically distinguish the C-ring hydroxyl group for rearrangement, were considered the most severe. However, Danishefsky and Lee stated that their approach towards the PST total synthesis was not without merit, and believed that their work would encourage other medicinal chemists to construct a much more practical and efficient route for PST total synthesis. The work of Danishefsky and Lee provided the foundation for another total synthesis of PST, put forward by M. Li in 2006. This method employed a more sophisticated approach, starting from pinitol, whose stereocenters are exactly the same as the ones in the C-ring of pancratistatin. Protection of the diol functions of compound 30 gave compound 31. The free hydroxyl of this was subsequently substituted by an azide to give 32. After removal of the silyl function, a cyclic sulfate was installed to obtain product 33. The Staudinger reaction gave the free amine 34 from azide 33. The coupling reaction between 34 and 35 gave compound 36 with a moderate yield.
Methoxymethyl protection of both the amide and the free phenol gave compound 37. Treatment of this latter product with t-BuLi followed by addition of cerium chloride gave compound 38. Full deprotection of 38 by BBr3 and methanol afforded pancratistatin 3 in 12 steps from commercially available pinitol, with an overall yield of 2.3%. The most recent and shortest synthesis of pancratistatin was accomplished by David Sarlah and co-workers, completing the asymmetric synthesis of (+)-pancratistatin and (+)-7-deoxypancratistatin in 7 and 6 steps respectively. The key step of this synthesis was the nickel-catalyzed dearomatization of benzene, which directly installed the amine and the catechol ring in 98:2 er. Epoxidation and then dihydroxylation of the resulting diene installed the four hydroxyl groups. The synthesis was completed by deprotection of the amine and a cobalt-catalyzed CO insertion to furnish the lactam. (+)-7-Deoxypancratistatin can then be directly oxidized in 62% yield to give (+)-pancratistatin. This synthesis yielded multiple grams of the final product, which may be essential for the biological evaluation of pancratistatin and analogues. A very recent approach to a stereocontrolled pancratistatin synthesis was accomplished by Sanghee Kim of Seoul National University, in which a Claisen rearrangement of a dihydropyranylethylene and a cyclic sulfate elimination reaction were employed. The B ring of the phenanthridone (the nitrogen-containing heterocyclic ring of the tricyclic system) is formed using the Bischler-Napieralski reaction. The precursor 3, with its stereocenters in the C ring, is stereoselectively synthesized from the cis-disubstituted cyclohexene 4. The presence of an unsaturated carbonyl in compound 4 suggested the use of a Claisen rearrangement of 3,4-dihydro-2H-pyranylethylene. The synthesis starts with the treatment of 6 with excess trimethyl phosphate. This reaction provides phosphate 7 in 97% yield. The Horner-Wadsworth-Emmons reaction between 7 and acrolein dimer 8 in the presence of LHMDS in THF forms the (E)-olefin 5 with very high stereoselectivity in 60% yield. Less than 1% of the (Z)-olefin was detected in the final product. The Claisen rearrangement of the dihydropyranylethylene forms the cis-disubstituted cyclohexene as a single isomer in 78% yield. The next step of the synthesis involves the oxidation of the aldehyde of compound 4 using NaClO2 to the corresponding carboxylic acid 9 in 90% yield. Iodolactonization of 9 and subsequent treatment with DBU in refluxing benzene give rise to the bicyclic lactone in 78% yield. Methanolysis of lactone 10 with NaOMe forms a mixture of hydroxyl ester 11 and its C-4a epimer (pancratistatin numbering). Saponification of the methyl ester 11 with LiOH was followed by a Curtius rearrangement of the resulting acid 12 with diphenylphosphoryl azide in refluxing toluene to afford an isocyanate intermediate, treatment of which with NaOMe/MeOH forms the corresponding carbamate 13 in 82% yield. The next steps of the synthesis involve the regioselective elimination of the C-3 hydroxyl group and the subsequent unsaturation achieved by cyclic sulfate elimination. Diol 16 is treated with thionyl chloride, and further oxidation with RuCl3 provides the cyclic sulfate 17 in 83% yield. Treatment of the cyclic sulfate with DBU yields the desired allylic alcohol 18 (67% yield). Reaction with OsO4 forms the single isomer 19 in 88% yield.
Peracetylation of 19 (77% yield), followed by Banwell's modified Bischler-Napieralski reaction, forms compound 20 together with a small amount of isomer 21 (7:1 regioselectivity). Removal of the protecting groups with NaOMe/MeOH forms pancratistatin in 83% yield. See also Plant sources of anti-cancer agents References Isoquinoline alkaloids Quinoline alkaloids Total synthesis
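A side note on the yield figures quoted in this section: the overall yield of a linear synthetic route is the product of the per-step yields, which is why routes built from individually good steps can still end in the low single digits. A minimal sketch, using a hypothetical sequence of step yields rather than the actual yields of any route above:

```python
# Overall yield of a linear route = product of the per-step yields.
step_yields = [0.97, 0.60, 0.78, 0.90, 0.78, 0.82, 0.83, 0.67, 0.88, 0.77, 0.83]  # hypothetical

overall = 1.0
for y in step_yields:
    overall *= y  # each step retains only fraction y of the material

print(f"overall yield over {len(step_yields)} steps: {overall:.1%}")
```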
Pancratistatin
[ "Chemistry" ]
1,721
[ "Quinoline alkaloids", "Alkaloids by chemical classification", "Tetrahydroisoquinoline alkaloids", "Chemical synthesis", "Total synthesis" ]
20,723,498
https://en.wikipedia.org/wiki/Reverse%20transfection
Reverse transfection is a technique for the transfer of genetic material into cells. Because the DNA is printed on a glass slide before the adherent cells are added, the order of addition of DNA and adherent cells in this transfection process (the deliberate introduction of nucleic acids into cells) is the reverse of that in conventional transfection; hence the word "reverse". Process Transfection-mix preparation for slide printing A DNA-gelatin mixture may be used for printing onto a slide. Gelatin powder is first dissolved in sterile Milli-Q water to form a 0.2% gelatin solution. Purified DNA plasmid is then mixed with the gelatin solution, and the final gelatin concentration is kept greater than 0.17%. Besides gelatin, atelocollagen and fibronectin are also successful transfection vectors for introducing foreign DNA into the cell nucleus. Slide printing of DNA-gelatin mixture After the DNA-gelatin mixture preparation, the mixture is pipetted onto a slide surface and the slide is placed in a covered petri dish. A desiccant is added to the dish to dry up the solution. Finally, cultured cells are poured into the dish for plasmid uptake. However, with the invention of different types of microarray printing systems, hundreds of transfection mixes (containing different DNA of interest) may be printed on the same slide for cell uptake of plasmids. There are two major types of microarray printing systems manufactured by different companies: contact and non-contact printing systems. An example of a non-contact printing system is the Piezorray Flexible Non-contact Microarraying System. It uses pressure control and a piezoelectric collar to squeeze out consistent drops of approximately 333 pL in volume. The PiezoTip dispensers do not contact the surface to which the sample is dispensed; thus, contamination potential is reduced and the risk of disrupting the target surface is eliminated. An example of a contact printing system is the SpotArray 72 (Perkin Elmer Life Sciences) contact-spotting system. Its printhead can accommodate up to 48 pins, and creates compact arrays by selectively raising and lowering subsets of pins during printing. After printing, the pins are washed with a powerful pressure-jet pin washer and vacuum-dried, eliminating carryover. Another example of a contact printing system is the Qarray system (Genetix). It has three types of printing systems: QArray Mini, QArray 2 and QArray Max. After printing, the solution is allowed to dry up and the DNA-gelatin is fixed tightly in position on the array. HybriWell in reverse transfection First, the adhesive from the HybriWell is peeled off and the HybriWell is attached over the area of the slide printed with the gelatin-DNA solution. Second, 200 µl of transfection mix is pipetted into one of the HybriWell ports; the mixture will distribute evenly over the array. The array is then incubated, with temperature and time dependent on the cell types used. Third, the transfection mix is pipetted away and the HybriWell removed with thin-tipped forceps. Fourth, the printed slide treated with transfection reagent is placed into a square dish with the printed side facing up. Fifth, the harvested cells are gently poured onto the slides (not on the printed areas). Finally, the dish is placed in a 37°C, 5% CO2 humidified incubator and incubated overnight. Other reverse-transfection reagents Effectene Reagent is used in conjunction with the enhancer and the DNA condensation buffer (Buffer EC) to achieve high transfection efficiency.
In the first step of Effectene–DNA complex formation, the DNA is condensed by interaction with the enhancer in a defined-buffer system. Effectene Reagent is then added to the condensed DNA to produce condensed Effectene–DNA complexes. The Effectene–DNA complexes are mixed with the medium and directly added to the cells. Effectene Reagent spontaneously forms micelle structures exhibiting no size or batch variation (as may be found with pre-formulated liposome reagents). This feature ensures reproducibility of transfection-complex formation. The process of highly condensing DNA molecules and then coating them with Effectene Reagent is an effective way to transfer DNA into eukaryotic cells. Advantages and disadvantages The advantages of reverse transfection (over conventional transfection) are: The addition and attachment of target cells to the DNA-loaded surface can lead to a higher probability of cell-DNA contact, potentially leading to higher transfection efficiency. Labour-saving materials (less DNA is required) High-throughput screening; hundreds of genes may be expressed in cells on a single microarray for studying gene expression and regulation. Parallel cell seeding in a single chamber for 384 experiments, with no physical separation between experiments, increases screening data quality. Well-to-well variations occur in experiments performed in multi-well dishes. Exact-replicate arrays may be produced, since the same sample source plate may be dried and printed on different slides for at least 15 months' storage without apparent loss of transfection efficiency. The disadvantages of reverse transfection are: Reverse transfection is more expensive because a highly accurate and efficient microarray printing system is needed to print the DNA-gelatin solution onto the slides. Applications with different cell lines have (so far) required protocol variations to manufacture siRNA or plasmid arrays, which involve considerable development and testing. Increased possibility of array-spot cross-contamination as spot density increases; therefore, optimization of the array layout is important. References Molecular biology techniques Gene expression
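The dilution bookkeeping in the transfection-mix preparation described above (a 0.2% gelatin stock whose final concentration must stay above 0.17%) fixes how much DNA solution can be added per volume of stock. A minimal sketch, with an illustrative stock volume:

```python
# Maximum DNA-solution volume addable to a gelatin stock while keeping
# the final gelatin concentration at or above the stated floor.
gelatin_stock = 0.20   # % w/v of the gelatin stock
gelatin_min = 0.17     # % w/v floor kept during printing
v_gelatin = 100.0      # µl of gelatin stock (illustrative volume)

# Solve gelatin_stock * v_gelatin / (v_gelatin + v_dna) >= gelatin_min:
v_dna_max = v_gelatin * (gelatin_stock / gelatin_min - 1)
print(f"up to {v_dna_max:.1f} µl of DNA solution per {v_gelatin:.0f} µl of stock")
```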
Reverse transfection
[ "Chemistry", "Biology" ]
1,222
[ "Gene expression", "Molecular biology techniques", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
8,108,613
https://en.wikipedia.org/wiki/Pinch%20valve
A pinch valve is a full bore or fully ported type of control valve which uses a pinching effect to obstruct fluid flow. Operating principle Pinch valves employ an elastic tubing (sleeve/hose) and a device that directly contacts the tubing (body). Forcing the tubing together creates a seal that is equivalent to the tubing's permeability. Air-operated pinch valves consist of an elasticised reinforced rubber hose, a type of housing, and two socket end covers (or flanges). In air-operated pinch valves, the rubber hoses are usually press-fitted and centered into the housing ends by the socket covers. There is no additional actuator, the valve closes as soon as there is a pressurized air supply into the body. When the air supply becomes interrupted and the volume of air exhausts, the elastic rubber hose starts to open due to the force of the process flow. Applications Pinch valves are typically used in applications where the flowing media needs to be completely isolated from any internal valve parts. The sleeve will contain the flow media and isolate it from the environment, hence reducing contamination. They are commonly applied to medical instruments, clinical or chemical analyzers, and a wide range of laboratory equipment. They are used in some water pistols, notably the original Super Soaker 50. Material selection The sleeve material is selected among suitable synthetic polymer based upon the corrosiveness and abrasiveness of the flow media. A key selection criterion is the operation temperature, which needs to be within the limit of the polymer. Several rubber qualities are available for pinch valves such as natural rubber, EPDM, nitrile, viton, neoprene and butyl. Different housings and end covers/flange materials such as aluminium, plastics and stainless steel are also available. References Valves
Pinch valve
[ "Physics", "Chemistry" ]
372
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
8,108,692
https://en.wikipedia.org/wiki/Dan%20Walls
Daniel Frank Walls FRS (13 September 1942 – 12 May 1999) was a New Zealand theoretical physicist specialising in quantum optics. Education Walls gained a BSc in physics and mathematics and a first class honours MSc in physics at the University of Auckland. He then went to Harvard University as a Fulbright Scholar, obtaining his PhD in 1969. He was supervised by Roy J. Glauber, who was later awarded a Nobel prize in 2005. Career and research After holding postdoctoral research positions in Auckland and Stuttgart, Walls became a senior lecturer in physics at the University of Waikato in 1972, where he became professor in 1980. Together with his colleague Crispin Gardiner, during the next 25 years he established a major research centre for theoretical quantum optics in New Zealand and built active and productive collaborations with groups throughout the world. In 1987 he moved to the University of Auckland as professor of theoretical physics. His major research interests centred on the interaction and similarities between light and atoms. He was notable for his wide-ranging expertise in relating theory to experiment, and was involved in all major efforts to understand non-classical light. A seminal paper by Walls with his first graduate student, Howard Carmichael, showed how to create antibunched light, in which photons arrive at regular intervals, rather than randomly. Walls was a pioneer in the study of ways that the particle-like nature of light (photons) could be controlled to make optical systems less susceptible to unwanted fluctuations, in particular by the use of squeezed light, a concept formulated by Carlton Caves. In squeezed light, some fluctuations can be made very small provided other fluctuations are correspondingly large. He made major contributions to the theory of quantum measurement, such as those involving Albert Einstein's "which-path" experiment and the quantum nondemolition measurement. Walls also used a simple field theoretical approach to explain and corroborate Dirac's description of photon interference and in particular Dirac's statement "that a photon interferes only with itself." In the later stages of his career he focused his research efforts on the theoretical aspects of the newly created state of matter, the Bose–Einstein condensate (BEC). Some of his contributions in the field include the prediction of the interference signature of quantized vortices, and the collapses and revivals of Josephson-coupled BECs. Awards and honours Walls was elected a Fellow of the Royal Society (FRS) in 1992. Walls was also elected Fellow of the American Physical Society (1981) and the Royal Society of New Zealand (FRSNZ). In 1995 he was awarded the Dirac Medal by the Institute of Physics for theoretical physics. The Dodd-Walls Centre for Photonic and Quantum Technologies, a New Zealand Centre of Research Excellence based in the University of Otago, was named after Jack Dodd and Dan Walls in recognition of their pioneering roles in establishing New Zealand's internationally recognised standing in photonics, quantum optics and ultra-cold atoms. Personal life Dan Walls had two younger siblings, a sister and a brother. He married Fari Khoy in 1968, with whom he had one son, Mark, born in 1980. This marriage ended in 1986. His partner in later years was Pamela King. Walls died of cancer in hospital in Auckland, aged 57. Legacy In 2008 the New Zealand Institute of Physics named a biennial award in honour of Walls.
The Dan Walls Medal is awarded to "the physicist working in New Zealand who is deemed to have made the greatest impact nationally and/or internationally in their field through predominantly New Zealand-based research". Winners have included Paul Callaghan, David Parry, Jeff Tallon, Matt Visser, Howard Carmichael, Peter Schwerdtfeger, and Jenni Adams. References 1942 births 1999 deaths People from Napier, New Zealand Optical physicists Quantum physicists Theoretical physicists University of Auckland alumni New Zealand fellows of the Royal Society Fellows of Optica (society) Harvard University alumni Academic staff of the University of Auckland Academic staff of the University of Waikato 20th-century New Zealand physicists 20th-century New Zealand scientists Fellows of the Royal Society of New Zealand Fellows of the American Physical Society
Dan Walls
[ "Physics" ]
834
[ "Theoretical physics", "Quantum physicists", "Theoretical physicists", "Quantum mechanics" ]
8,110,214
https://en.wikipedia.org/wiki/Hammer%20%28firearms%29
The hammer is a part of a firearm that is used to strike the percussion cap/primer, or a separate firing pin, to ignite the propellant and fire the projectile. It is so called because it resembles a hammer in both form and function. The hammer itself is a metal piece that forcefully rotates about a pivot point. The term tumbler can refer to a part of the hammer or a part mechanically attached to the pivot-point of the hammer, depending on the particular firearm under discussion (see half-cock). According to one source the term tumbler is synonymous with hammer. Evolution In the development of firearms, the flintlock used flint striking steel to produce sparks and initiate firing by igniting the gunpowder used as a propellant. The flint was fixed to a swinging arm called the cock. Prior to firing, the cock was held rearward under spring tension. Pulling the trigger allowed the cock to rotate forward at a speed sufficient to produce sparks when it struck the steel frizzen. This ignited a small priming charge in the external flash pan, which in turn ignited the propellent charge in the breech through a connecting vent hole. The identification of percussion sensitive fulminates provided an alternative to spark ignition of the propellant. The percussion lock (also caplock) was adapted from the flintlock firing mechanism, with the cock being modified to strike a small cup-like cap containing percussive material. The cap was placed over an external nipple, which acts as an anvil and conduit to ignite the main propellant charge within the breech. In this use, the cock has come to be termed a hammer. Samuel Colt's Colt Paterson revolver of 1836 used percussion caps. The hammer and other components of the firing mechanism are mounted between the sides that form the frame. While not unique, percussion and flint-locks more typically use a side-lock firing mechanism, with the components mounted either side of the mounting plate. The caplock was in wide use for almost five decades until the widespread introduction of the self-contained cartridge which contained the projectile, gunpowder, and percussion cap all in a single shell that could be easily loaded from the breech of a firearm. The introduction of such a technology led to the implementation of the firing pin and hammer system that is even now still used in certain designs. Whereas the percussion cap in the caplock mechanism was external, the percussion cap in a self-contained cartridge is inside the breech. It is therefore necessary to use a firing pin (a thin rod) to strike the primer through a small penetration in the breech and cause firing. An external hammer is one that can be accessed by the operator during use. This allows the hammer to be manually cocked or eased (uncocked) without firing. The hammer is designed with a spur (extension) to facilitate manual operation. An internal hammer cannot be accessed manually during operation. Pistols and shotguns in particular, which have an internal hammer may be referred to as being hammerless. A striker is a type of firing pin operated by the direct action of a spring rather than by a hammer striking the firing pin. Striker-operated firearms lack a hammer. Drawbacks There are some notable drawbacks to the external hammer system compared to other modern, internal designs. In single-action revolvers, specifically, there is an ever-present danger of accidentally discharging the weapon if the hammer is struck with a cartridge loaded in the chamber. 
In some models, there is nothing to prevent the hammer from contacting the firing pin, and thus the cartridge, so the gun may be discharged unintentionally. Other models do have an internal safety mechanism that prevents contact between the hammer and the firing pin unless the trigger is actually pulled. Even so, many single-action revolver owners choose to carry their revolver with the hammer resting on an empty chamber to minimize the risk of accidental discharge. Additionally, for those who carry their firearm as a personal defense weapon, there is the ever-present worry that an external hammer may catch on a loose article of clothing in an emergency situation: because the hammer protrudes at an angle from the rear of the weapon, as the owner moves to quickly draw their weapon the hammer may snag on clothing and cause the loss of seconds in a dangerous situation. Paul B. Weston, an authority on police weapons, called the external hammer a "fish hook" that tended to snag clothing during a fast draw. Linear hammer A linear hammer is similar to a striker but differs in that the hammer is a separate component from the firing pin. When released, a linear hammer, under spring pressure, slides along the bore axis rather than pivoting around a pin placed perpendicular to the bore, as with the more common rotating hammer. The hammer then impacts the rear of the firing pin. Designs such as the Czech vz. 58 and the Chinese QBZ-95 utilize a linear hammer. See also Hammerless References Firearm components
Hammer (firearms)
[ "Technology" ]
1,015
[ "Firearm components", "Components" ]
8,111,079
https://en.wikipedia.org/wiki/Gravitational%20wave
Gravitational waves are transient displacements in a gravitational field, generated by the relative motion of gravitating masses, that radiate outward from their source at the speed of light. They were proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves. In 1916, Albert Einstein demonstrated that gravitational waves result from his general theory of relativity as ripples in spacetime. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, instead asserting that gravity has instantaneous effect everywhere. Gravitational waves therefore stand as an important relativistic phenomenon that is absent from Newtonian physics. In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang. The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell Alan Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery. The first direct observation of gravitational waves was made in September 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves. Introduction In Albert Einstein's general theory of relativity, gravity is treated as a phenomenon resulting from the curvature of spacetime. This curvature is caused by the presence of mass. If the masses move, the curvature of spacetime changes. If the motion is not spherically symmetric, the motion can cause gravitational waves which propagate away at the speed of light. As a gravitational wave passes an observer, that observer will find spacetime distorted by the effects of strain. Distances between objects increase and decrease rhythmically as the wave passes, at a frequency equal to that of the wave. The magnitude of this effect is inversely proportional to the distance (not distance squared) from the source. Inspiraling binary neutron stars are predicted to be a powerful source of gravitational waves as they coalesce, due to the very large acceleration of their masses as they orbit close to one another. However, due to the astronomical distances to these sources, the effects when measured on Earth are predicted to be very small, having strains of less than 1 part in 10²⁰. Scientists demonstrate the existence of these waves with highly-sensitive detectors at multiple observation sites. The LIGO and VIRGO observatories are among the most sensitive detectors, operating at resolutions of about one part in 5×10²². The Japanese detector KAGRA was completed in 2019; its first joint detection with LIGO and VIRGO was reported in 2021. Another European ground-based detector, the Einstein Telescope, is under development.
A space-based observatory, the Laser Interferometer Space Antenna (LISA), is also being developed by the European Space Agency. Gravitational waves do not strongly interact with matter in the way that electromagnetic radiation does. This allows for the observation of events involving exotic objects in the distant universe that cannot be observed with more traditional means such as optical telescopes or radio telescopes; accordingly, gravitational wave astronomy gives new insights into the workings of the universe. In particular, gravitational waves could be of interest to cosmologists as they offer a possible way of observing the very early universe. This is not possible with conventional astronomy, since before recombination the universe was opaque to electromagnetic radiation. Precise measurements of gravitational waves will also allow scientists to test more thoroughly the general theory of relativity. In principle, gravitational waves can exist at any frequency. Very low frequency waves can be detected using pulsar timing arrays. In this technique, the timing of approximately 100 pulsars spread widely across our galaxy is monitored over the course of years. Detectable changes in the arrival time of their signals can result from passing gravitational waves generated by merging supermassive black holes with wavelengths measured in light-years. These timing changes can be used to locate the source of the waves. Using this technique, astronomers have discovered the 'hum' of various SMBH mergers occurring in the universe. Stephen Hawking and Werner Israel list different frequency bands for gravitational waves that could plausibly be detected, ranging from 10⁻⁷ Hz up to 10¹¹ Hz. Speed of gravity The speed of gravitational waves in the general theory of relativity is equal to the speed of light in vacuum, c. Within the theory of special relativity, the constant c is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity. Thus, the speed of "light" is also the speed of gravitational waves, and, further, the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if any exist, requires an as-yet unavailable theory of quantum gravity). In August 2017, the LIGO and Virgo detectors received gravitational wave signals at nearly the same time as gamma ray satellites and optical telescopes saw signals from a source located about 130 million light-years away. History The possibility of gravitational waves and that those might travel at the speed of light was discussed in 1893 by Oliver Heaviside, using the analogy between the inverse-square law of gravitation and the electrostatic force. In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves.
In 1915 Einstein published his general theory of relativity, a complete relativistic theory of gravitation. He conjectured, like Poincaré, that the equation would produce gravitational waves, but, as he mentions in a letter to Schwarzschild in February 1916, these could not be similar to electromagnetic waves. Electromagnetic waves can be produced by dipole motion, requiring both a positive and a negative charge. Gravitation has no equivalent to negative charge. Einstein continued to work through the complexity of the equations of general relativity to find an alternative wave model. The result was published in June 1916, and there he came to the conclusion that the gravitational wave must propagate with the speed of light, and there must, in fact, be three types of gravitational waves, dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl. However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they "propagate at the speed of thought". This also cast doubt on the physicality of the third (transverse–transverse) type, which Eddington showed always propagates at the speed of light regardless of coordinate system. In 1936, Einstein and Nathan Rosen submitted a paper to Physical Review in which they claimed gravitational waves could not exist in the full general theory of relativity because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson, who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in Physical Review again. Nonetheless, his assistant Leopold Infeld, who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere. In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor. At the time, Pirani's work was overshadowed by the community's focus on a different question: whether gravitational waves could transmit energy. This matter was settled by a thought experiment proposed by Richard Feynman during the first "GR" conference at Chapel Hill in 1957. In short, his argument known as the "sticky bead argument" notes that if one takes a rod with beads then the effect of a passing gravitational wave would be to move the beads along the rod; friction would then produce heat, implying that the passing wave had done work. Shortly after, Hermann Bondi published a detailed version of the "sticky bead argument". This later led to a series of articles (1959 to 1989) by Bondi and Pirani that established the existence of plane wave solutions for gravitational waves. Paul Dirac further postulated the existence of gravitational waves, declaring them to have "physical significance" in his 1959 lecture at the Lindau Meetings. Further, it was Dirac who predicted gravitational waves with a well-defined energy density in 1964.
After the Chapel Hill conference, Joseph Weber started designing and building the first gravitational wave detectors now known as Weber bars. In 1969, Weber claimed to have detected the first gravitational waves, and by 1970 he was "detecting" signals regularly from the Galactic Center; however, the frequency of detection soon raised doubts on the validity of his observations as the implied rate of energy loss of the Milky Way would drain our galaxy of energy on a timescale much shorter than its inferred age. These doubts were strengthened when, by the mid-1970s, repeated experiments from other groups building their own Weber bars across the globe failed to find any signals, and by the late 1970s consensus was that Weber's results were spurious. In the same period, the first indirect evidence of gravitational waves was discovered. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar, which earned them the 1993 Nobel Prize in Physics. Pulsar timing observations over the next decade showed a gradual decay of the orbital period of the Hulse–Taylor pulsar that matched the loss of energy and angular momentum in gravitational radiation predicted by general relativity. This indirect detection of gravitational waves motivated further searches, despite Weber's discredited result. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. The idea of using a laser interferometer for this seems to have been floated independently by various people, including M.E. Gertsenshtein and V. I. Pustovoit in 1962, and Vladimir B. Braginskiĭ in 1966. The first prototypes were developed in the 1970s by Robert L. Forward and Rainer Weiss. In the decades that followed, ever more sensitive instruments were constructed, culminating in the construction of GEO600, LIGO, and Virgo. After years of producing null results, improved detectors became operational in 2015. On 11 February 2016, the LIGO-Virgo collaborations announced the first observation of gravitational waves, from a signal (dubbed GW150914) detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The confidence level of this being an observation of gravitational waves was 99.99994%. A year earlier, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background. However, they were later forced to retract this result. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. In 2023, NANOGrav, EPTA, PPTA, and IPTA announced that they found evidence of a universal gravitational wave background. 
The North American Nanohertz Observatory for Gravitational Waves states that they were created over cosmological time scales by supermassive black holes, identifying the distinctive Hellings-Downs curve in 15 years of radio observations of 25 pulsars. Similar results are published by the European Pulsar Timing Array, who claimed a 3σ significance. They expect that a 5σ significance will be achieved by 2025 by combining the measurements of several collaborations. Effects of passing Gravitational waves are constantly passing Earth; however, even the strongest have a minuscule effect and their sources are generally at a great distance. For example, the waves given off by the cataclysmic final merger of GW150914 reached Earth after travelling over a billion light-years, as a ripple in spacetime that changed the length of a 4 km LIGO arm by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. This tiny effect from even extreme gravitational waves makes them observable on Earth only with the most sophisticated detectors. The effects of a passing gravitational wave, in an extremely exaggerated form, can be visualized by imagining a perfectly flat region of spacetime with a group of motionless test particles lying in a plane, e.g., the surface of a computer screen. As a gravitational wave passes through the particles along a line perpendicular to the plane of the particles, i.e., following the observer's line of vision into the screen, the particles will follow the distortion in spacetime, oscillating in a "cruciform" manner, as shown in the animations. The area enclosed by the test particles does not change and there is no motion along the direction of propagation. The oscillations depicted in the animation are exaggerated for the purpose of discussion; in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula. As with other waves, there are a number of characteristics used to describe a gravitational wave: Amplitude: Usually denoted h, this is the size of the wave, i.e. the fraction of stretching or squeezing in the animation. The amplitude shown here is roughly h = 0.5 (or 50%). Gravitational waves passing through the Earth are many sextillion times weaker than this, with h ≈ 10⁻²⁰. Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes) Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze. Speed: This is the speed at which a point on the wave (for example, a point of maximum stretch or squeeze) travels. For gravitational waves with small amplitudes, this wave speed is equal to the speed of light (c). The speed, wavelength, and frequency of a gravitational wave are related by the equation c = λf, just like the equation for a light wave.
For example, the animations shown here oscillate roughly once every two seconds. This would correspond to a frequency of 0.5 Hz, and a wavelength of about 600 000 km, or 47 times the diameter of the Earth. In the above example, it is assumed that the wave is linearly polarized with a "plus" polarization, written h+. Polarization of a gravitational wave is just like polarization of a light wave except that the polarizations of a gravitational wave are 45 degrees apart, as opposed to 90 degrees. In particular, in a "cross"-polarized gravitational wave, h×, the effect on the test particles would be basically the same, but rotated by 45 degrees, as shown in the second animation. Just as with light polarization, the polarizations of gravitational waves may also be expressed in terms of circularly polarized waves. Gravitational waves are polarized because of the nature of their source. Sources In general terms, gravitational waves are radiated by large, coherent motions of immense mass, especially in regions where gravity is so strong that Newtonian gravity begins to fail. The effect does not occur in a purely spherically symmetric system. A simple example of this principle is a spinning dumbbell. If the dumbbell spins around its axis of symmetry, it will not radiate gravitational waves; if it tumbles end over end, as in the case of two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. In an extreme case, such as when the two weights of the dumbbell are massive stars like neutron stars or black holes orbiting each other quickly, significant amounts of gravitational radiation would be given off. Some more detailed examples: Two objects orbiting each other, as a planet would orbit the Sun, will radiate. A spinning non-axisymmetric planetoid, say with a large bump or dimple on the equator, will radiate. A supernova will radiate except in the unlikely event that the explosion is perfectly symmetric. An isolated non-spinning solid object moving at a constant velocity will not radiate. This can be regarded as a consequence of the principle of conservation of linear momentum. A spinning disk will not radiate. This can be regarded as a consequence of the principle of conservation of angular momentum. However, it will show gravitomagnetic effects. A spherically pulsating spherical star (non-zero monopole moment or mass, but zero quadrupole moment) will not radiate, in agreement with Birkhoff's theorem. More technically, the second time derivative of the quadrupole moment (or the l-th time derivative of the l-th multipole moment) of an isolated system's stress–energy tensor must be non-zero in order for it to emit gravitational radiation. This is analogous to the changing dipole moment of charge or current that is necessary for the emission of electromagnetic radiation. Binaries Gravitational waves carry energy away from their sources and, in the case of orbiting bodies, this is associated with an in-spiral or decrease in orbit. Imagine for example a simple system of two masses such as the Earth–Sun system moving slowly compared to the speed of light in circular orbits. Assume that these two masses orbit each other in a circular orbit in the x–y plane. To a good approximation, the masses follow simple Keplerian orbits. However, such an orbit represents a changing quadrupole moment. That is, the system will give off gravitational waves.
In theory, the loss of energy through gravitational radiation could eventually drop the Earth into the Sun. However, the total energy of the Earth orbiting the Sun (kinetic energy + gravitational potential energy) is about 1.14×10³⁶ joules, of which only 200 watts (joules per second) is lost through gravitational radiation, leading to a decay in the orbit by about 1×10⁻¹⁵ meters per day, or roughly the diameter of a proton. At this rate, it would take the Earth approximately 3×10¹³ times more than the current age of the universe to spiral onto the Sun. This estimate overlooks the decrease in r over time, but the radius varies only slowly for most of the time and plunges at later stages, as r(t) = r₀·(1 − t/t_coalesce)^(1/4), with r₀ the initial radius and t_coalesce the total time needed to fully coalesce. More generally, the rate of orbital decay can be approximated by dr/dt = −(64/5)·G³·m₁m₂(m₁ + m₂)/(c⁵·r³) where r is the separation between the bodies, t time, G the gravitational constant, c the speed of light, and m₁ and m₂ the masses of the bodies. This leads to an expected time to merger of t = (5/256)·c⁵·r⁴/(G³·m₁m₂(m₁ + m₂)). Compact binaries Compact stars like white dwarfs and neutron stars can be constituents of binaries. For example, a pair of solar mass neutron stars in a circular orbit at a separation of 1.89×10⁸ m (189,000 km) has an orbital period of 1,000 seconds, and an expected lifetime of 1.30×10¹³ seconds or about 414,000 years. Such a system could be observed by LISA if it were not too far away. A far greater number of white dwarf binaries exist with orbital periods in this range. White dwarf binaries have masses in the order of the Sun, and diameters in the order of the Earth. They cannot get much closer together than 10,000 km before they will merge and explode in a supernova which would also end the emission of gravitational waves. Until then, their gravitational radiation would be comparable to that of a neutron star binary. When the orbit of a neutron star binary has decayed to 1.89×10⁶ m (1890 km), its remaining lifetime is about 130,000 seconds or 36 hours. The orbital frequency will vary from 1 orbit per second at the start, to 918 orbits per second when the orbit has shrunk to 20 km at merger. The majority of gravitational radiation emitted will be at twice the orbital frequency. Just before merger, the inspiral could be observed by LIGO if such a binary were close enough. LIGO has only a few minutes to observe this merger out of a total orbital lifetime that may have been billions of years. In August 2017, LIGO and Virgo observed the first binary neutron star inspiral in GW170817, and 70 observatories collaborated to detect the electromagnetic counterpart, a kilonova in the galaxy NGC 4993, 40 megaparsecs away, emitting a short gamma ray burst (GRB 170817A) seconds after the merger, followed by a longer optical transient (AT 2017gfo) powered by r-process nuclei. Advanced LIGO detectors should be able to detect such events up to 200 megaparsecs away; at this range, around 40 detections per year would be expected. Black hole binaries Black hole binaries emit gravitational waves during their in-spiral, merger, and ring-down phases. Hence, in the early 1990s the physics community rallied around a concerted effort to predict the waveforms of gravitational waves from these systems with the Binary Black Hole Grand Challenge Alliance. The largest amplitude of emission occurs during the merger phase, which can be modeled with the techniques of numerical relativity. The first direct detection of gravitational waves, GW150914, came from the merger of two black holes.
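The time-to-merger expression above is easy to evaluate numerically; the sketch below reproduces the neutron-star figure quoted in this section (physical constants are rounded, and the binary parameters are those given in the text):

```python
# Time to merger t = (5/256) c^5 r^4 / (G^3 m1 m2 (m1 + m2)) for a
# circular binary losing energy to gravitational radiation.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def time_to_merger(r, m1, m2):
    return (5 / 256) * c**5 * r**4 / (G**3 * m1 * m2 * (m1 + m2))

# Two solar-mass neutron stars separated by 1.89e8 m (189,000 km):
t = time_to_merger(1.89e8, M_sun, M_sun)
print(f"{t:.2e} s, i.e. about {t / 3.156e7:.0f} years")
# on the order of the quoted 1.30e13 s, or roughly 414,000 years
```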
Supernova A supernova is a transient astronomical event that occurs during the last stellar evolutionary stages of a massive star's life, whose dramatic and catastrophic destruction is marked by one final titanic explosion. This explosion can happen in one of many ways, but in all of them a significant proportion of the matter in the star is blown away into the surrounding space at extremely high velocities (up to 10% of the speed of light). Unless there is perfect spherical symmetry in these explosions (i.e., unless matter is spewed out evenly in all directions), there will be gravitational radiation from the explosion. This is because gravitational waves are generated by a changing quadrupole moment, which can happen only when there is asymmetrical movement of masses. Since the exact mechanism by which supernovae take place is not fully understood, it is not easy to model the gravitational radiation emitted by them. Spinning neutron stars As noted above, a mass distribution will emit gravitational radiation only when there is spherically asymmetric motion among the masses. A spinning neutron star will generally emit no gravitational radiation because neutron stars are highly dense objects with a strong gravitational field that keeps them almost perfectly spherical. In some cases, however, there might be slight deformities on the surface called "mountains", which are bumps extending no more than 10 centimeters (4 inches) above the surface, that make the spinning spherically asymmetric. This gives the star a quadrupole moment that changes with time, and it will emit gravitational waves until the deformities are smoothed out. Inflation Many models of the Universe suggest that there was an inflationary epoch in the early history of the Universe when space expanded by a large factor in a very short amount of time. If this expansion was not symmetric in all directions, it may have emitted gravitational radiation detectable today as a gravitational wave background. This background signal is too weak for any currently operational gravitational wave detector to observe, and it is thought it may be decades before such an observation can be made. Properties and behaviour Energy, momentum, and angular momentum Water waves, sound waves, and electromagnetic waves are able to carry energy, momentum, and angular momentum and by doing so they carry those away from the source. Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other; the angular momentum is radiated away by gravitational waves. The waves can also carry off linear momentum, a possibility that has some interesting implications for astrophysics. After two supermassive black holes coalesce, emission of linear momentum can produce a "kick" with amplitude as large as 4000 km/s. This is fast enough to eject the coalesced black hole completely from its host galaxy. Even if the kick is too small to eject the black hole completely, it can remove it temporarily from the nucleus of the galaxy, after which it will oscillate about the center, eventually coming to rest. A kicked black hole can also carry a star cluster with it, forming a hyper-compact stellar system. Or it may carry gas, allowing the recoiling black hole to appear temporarily as a "naked quasar". The quasar SDSS J092712.65+294344.0 is thought to contain a recoiling supermassive black hole.
Redshifting Like electromagnetic waves, gravitational waves should exhibit shifting of wavelength and frequency due to the relative velocities of the source and observer (the Doppler effect), but also due to distortions of spacetime, such as cosmic expansion. Redshifting of gravitational waves is different from redshifting due to gravity (gravitational redshift). Quantum gravity, wave-particle aspects, and graviton In the framework of quantum field theory, the graviton is the name given to a hypothetical elementary particle speculated to be the force carrier that mediates gravity. However, the graviton is not yet proven to exist, and no scientific model yet exists that successfully reconciles general relativity, which describes gravity, and the Standard Model, which describes all other fundamental forces. Attempts to construct such a model, grouped under the heading of quantum gravity, have been made, but none is yet accepted. If such a particle exists, it is expected to be massless (because the gravitational force appears to have unlimited range) and must be a spin-2 boson. It can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field must couple to (interact with) the stress-energy tensor in the same way that the gravitational field does; therefore if a massless spin-2 particle were ever discovered, it would be likely to be the graviton without further distinction from other massless spin-2 particles. Such a discovery would unite quantum theory with gravity. Significance for study of the early universe Due to the weakness of the coupling of gravity to matter, gravitational waves experience very little absorption or scattering, even as they travel over astronomical distances. In particular, gravitational waves are expected to be unaffected by the opacity of the very early universe. In these early phases, space had not yet become "transparent", so observations based upon light, radio waves, and other electromagnetic radiation that far back into time are limited or unavailable. Therefore, gravitational waves are expected in principle to have the potential to provide a wealth of observational data about the very early universe. Determining direction of travel The difficulty in directly detecting gravitational waves means it is also difficult for a single detector to identify by itself the direction of a source. Therefore, multiple detectors are used, both to distinguish signals from other "noise" by confirming the signal is not of earthly origin, and also to determine direction by means of triangulation. This technique uses the fact that the waves travel at the speed of light and will reach different detectors at different times depending on their source direction. Although the differences in arrival time may be just a few milliseconds, this is sufficient to identify the direction of the origin of the wave with considerable precision. In the case of GW170814, three detectors were operating at the time of the event, so the direction could be precisely defined. The detection by all three instruments led to a very accurate estimate of the position of the source, with a 90% credible region of just 60 deg², a factor of 20 more accurate than before. Gravitational wave astronomy During the past century, astronomy has been revolutionized by the use of new methods for observing the universe. Astronomical observations were initially made using visible light. Galileo Galilei pioneered the use of telescopes to enhance these observations.
However, visible light is only a small portion of the electromagnetic spectrum, and not all objects in the distant universe shine strongly in this particular band. More information may be found, for example, in radio wavelengths. Using radio telescopes, astronomers have discovered pulsars and quasars, for example. Observations in the microwave band led to the detection of faint imprints of the Big Bang, a discovery Stephen Hawking called the "greatest discovery of the century, if not all time". Similar advances in observations using gamma rays, x-rays, ultraviolet light, and infrared light have also brought new insights to astronomy. As each of these regions of the spectrum has opened, new discoveries have been made that could not have been made otherwise. The astronomy community hopes that the same holds true of gravitational waves. Gravitational waves have two important and unique properties. First, there is no need for any type of matter to be present nearby in order for the waves to be generated by a binary system of uncharged black holes, which would emit no electromagnetic radiation. Second, gravitational waves can pass through any intervening matter without being scattered significantly. Whereas light from distant stars may be blocked out by interstellar dust, for example, gravitational waves will pass through essentially unimpeded. These two features allow gravitational waves to carry information about astronomical phenomena heretofore never observed by humans. The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10⁻⁷ to 10⁵ Hz). An astrophysical source at the high-frequency end of the gravitational-wave spectrum (above 10⁵ Hz and probably up to 10¹⁰ Hz) generates relic gravitational waves that are theorized to be faint imprints of the Big Bang like the cosmic microwave background. At these high frequencies it is potentially possible that the sources may be "man made", that is, gravitational waves generated and detected in the laboratory. A supermassive black hole, created from the merger of the black holes at the center of two merging galaxies detected by the Hubble Space Telescope, is theorized to have been ejected from the merger center by gravitational waves. Detection Indirect detection Although the waves from the Earth–Sun system are minuscule, astronomers can point to other sources for which the radiation should be substantial. One important example is the Hulse–Taylor binary: a pair of stars, one of which is a pulsar. The characteristics of their orbit can be deduced from the Doppler shifting of radio signals given off by the pulsar. Each of the stars is about 1.4 solar masses, and the size of their orbits is about 1/75 of the Earth–Sun orbit, just a few times larger than the diameter of our own Sun. The combination of greater masses and smaller separation means that the energy given off by the Hulse–Taylor binary will be far greater than the energy given off by the Earth–Sun system: roughly 10²² times as much. The information about the orbit can be used to predict how much energy (and angular momentum) would be radiated in the form of gravitational waves. As the binary system loses energy, the stars gradually draw closer to each other, and the orbital period decreases. The resulting trajectory of each star is an inspiral, a spiral with decreasing radius.
General relativity precisely describes these trajectories; in particular, the energy radiated in gravitational waves determines the rate of decrease in the period, defined as the time interval between successive periastrons (points of closest approach of the two stars). For the Hulse–Taylor pulsar, the predicted current change in radius is about 3 mm per orbit, and the 7.75 hr orbital period decreases by about 76 microseconds per year; the resulting cumulative shift in the time of periastron currently grows by roughly 2 seconds per year. Following a preliminary observation showing an orbital energy loss consistent with gravitational waves, careful timing observations by Taylor and Joel Weisberg dramatically confirmed the predicted period decrease to within 10%. With the improved statistics of more than 30 years of timing data since the pulsar's discovery, the observed change in the orbital period currently matches the prediction from gravitational radiation assumed by general relativity to within 0.2 percent. In 1993, spurred in part by this indirect detection of gravitational waves, the Nobel Committee awarded the Nobel Prize in Physics to Hulse and Taylor for "the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation." The lifetime of this binary system, from the present to merger, is estimated to be a few hundred million years. Inspirals are very important sources of gravitational waves. Any time two compact objects (white dwarfs, neutron stars, or black holes) are in close orbits, they send out intense gravitational waves. As they spiral closer to each other, these waves become more intense. At some point they should become so intense that direct detection by their effect on objects on Earth or in space is possible. This direct detection is the goal of several large-scale experiments. The only difficulty is that most systems like the Hulse–Taylor binary are so far away. The amplitude of waves given off by the Hulse–Taylor binary at Earth would be roughly h ≈ 10⁻²⁶. There are some sources, however, that astrophysicists expect to find that produce much greater amplitudes of h ≈ 10⁻²⁰. At least eight other binary pulsars have been discovered. Difficulties Gravitational waves are not easily detectable. When they reach the Earth, they have a small amplitude with strain approximately 10⁻²¹, meaning that an extremely sensitive detector is needed, and that other sources of noise can overwhelm the signal. Gravitational waves are expected to have frequencies 10⁻¹⁶ Hz < f < 10⁴ Hz. Ground-based detectors Though the Hulse–Taylor observations were very important, they give only indirect evidence for gravitational waves. A more conclusive observation would be a direct measurement of the effect of a passing gravitational wave, which could also provide more information about the system that generated it. Any such direct detection is complicated by the extraordinarily small effect the waves would produce on a detector. The amplitude of a spherical wave will fall off as the inverse of the distance from the source (the 1/R term in the formulas for h above). Thus, even waves from extreme systems like merging binary black holes die out to very small amplitudes by the time they reach the Earth. Astrophysicists expect that some gravitational waves passing the Earth may be as large as h ≈ 10⁻²⁰, but generally no bigger. Resonant antennas A simple device theorised to detect the expected wave motion is called a Weber bar: a large, solid bar of metal isolated from outside vibrations. This type of instrument was the first type of gravitational wave detector.
Strains in space due to an incident gravitational wave excite the bar's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. With this instrument, Joseph Weber claimed to have detected daily signals of gravitational waves. His results, however, were contested in 1974 by physicists Richard Garwin and David Douglass. Modern forms of the Weber bar are still operated, cryogenically cooled, with superconducting quantum interference devices to detect vibration. Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves. MiniGRAIL is a spherical gravitational wave antenna using this principle. It is based at Leiden University, consisting of an exactingly machined 1,150 kg sphere cryogenically cooled to 20 millikelvins. The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers. There are currently two detectors focused on the higher end of the gravitational wave spectrum (above 10⁵ Hz): one at University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Both detectors are expected to be sensitive to periodic spacetime strains of h ~ 2×10⁻¹³/√Hz, given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of h ~ 2×10⁻¹⁷/√Hz, with an expectation to reach a sensitivity of h ~ 2×10⁻²⁰/√Hz. The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters ≈10¹¹ Hz (100 GHz) and h ≈ 10⁻³⁰ to 10⁻³². Interferometers A more sensitive class of detector uses a laser Michelson interferometer to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). After years of development, ground-based interferometers made the first detection of gravitational waves in 2015. Currently, the most sensitive is LIGO, the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana, one at the Hanford site in Richland, Washington, and a third (formerly installed as a second detector at Hanford) that is planned to be moved to India. Each observatory has two light storage arms that are 4 kilometers in length. These are at 90 degree angles to each other, with the light passing through 1 m diameter vacuum tubes running the entire 4 kilometers. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is the motion to which an interferometer is most sensitive.
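To get a feel for the scale of that motion, the arm-length change for a given strain can be estimated directly. This minimal Python sketch assumes the idealised relation ΔL ≈ h·L/2 for a favourably oriented wave; the numbers are illustrative, taken from the strain and arm-length figures quoted in this article.

```python
# Minimal sketch: length change of an interferometer arm under a passing wave,
# assuming the idealised relation dL = h * L / 2 for optimal orientation.
h = 1e-21     # typical strain amplitude at Earth (see above)
L = 4000.0    # LIGO arm length, metres

dL = h * L / 2
print(f"arm length change: {dL:.1e} m")  # ~2e-18 m, far smaller than a proton
```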
Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10⁻¹⁸ m. LIGO should be able to detect gravitational waves as small as h ~ 5×10⁻²². Upgrades to LIGO and Virgo should increase the sensitivity still further. Another highly sensitive interferometer, KAGRA, located in the Kamioka Observatory in Japan, has been in operation since February 2020. A key point is that a tenfold increase in sensitivity (radius of 'reach') increases the volume of space accessible to the instrument by one thousand times. This increases the rate at which detectable signals might be seen from one per tens of years of observation, to tens per year. Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly; one analogy is to rainfall: the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals of low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these 'stationary' (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other 'non-stationary' noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All of these must be taken into account and excluded by analysis before detection may be considered a true gravitational wave event. Einstein@Home The simplest gravitational waves are those with constant frequency. The waves given off by a spinning, non-axisymmetric neutron star would be approximately monochromatic: a pure tone in acoustics. Unlike signals from supernovae or binary black holes, these signals evolve little in amplitude or frequency over the period they would be observed by ground-based detectors. However, there would be some change in the measured signal, because of Doppler shifting caused by the motion of the Earth. Despite the signals being simple, detection is extremely computationally expensive, because of the long stretches of data that must be analysed. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise. Space-based interferometers Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being 2.5 million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to heat, shot noise, and artifacts caused by cosmic rays and solar wind. Using pulsar timing arrays Pulsars are rapidly rotating neutron stars.
A pulsar emits beams of radio waves that, like lighthouse beams, sweep through the sky as the pulsar rotates. The signal from a pulsar can be detected by radio telescopes as a series of regularly spaced pulses, essentially like the ticks of a clock. GWs affect the time it takes the pulses to travel from the pulsar to a telescope on Earth. A pulsar timing array uses millisecond pulsars to seek out perturbations due to GWs in measurements of the time of arrival of pulses to a telescope, in other words, to look for deviations in the clock ticks. To detect GWs, pulsar timing arrays search for a distinct quadrupolar pattern of correlation and anti-correlation between the time of arrival of pulses from different pulsar pairs as a function of their angular separation in the sky. Although pulsar pulses travel through space for hundreds or thousands of years to reach us, pulsar timing arrays are sensitive to perturbations in their travel time of much less than a millionth of a second. The most likely sources of GWs to which pulsar timing arrays are sensitive are supermassive black hole binaries, which form from the collision of galaxies. In addition to individual binary systems, pulsar timing arrays are sensitive to a stochastic background of GWs made from the sum of GWs from many galaxy mergers. Other potential signal sources include cosmic strings and the primordial background of GWs from cosmic inflation. Globally there are three active pulsar timing array projects. The North American Nanohertz Observatory for Gravitational Waves uses data collected by the Arecibo Radio Telescope and Green Bank Telescope. The Australian Parkes Pulsar Timing Array uses data from the Parkes radio telescope. The European Pulsar Timing Array uses data from the four largest telescopes in Europe: the Lovell Telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg Telescope and the Nancay Radio Telescope. These three groups also collaborate under the title of the International Pulsar Timing Array project. In June 2023, NANOGrav published the 15-year data release, which contained the first evidence for a stochastic gravitational wave background. In particular, it included the first measurement of the Hellings-Downs curve, the tell-tale sign of the gravitational wave origin of the observed background. Primordial gravitational wave Primordial gravitational waves are gravitational waves observed in the cosmic microwave background. They were allegedly detected by the BICEP2 instrument, an announcement made on 17 March 2014, which was withdrawn on 30 January 2015 ("the signal can be entirely attributed to dust in the Milky Way"). LIGO and Virgo observations On 11 February 2016, the LIGO collaboration announced the first observation of gravitational waves, from a signal detected at 09:50:45 GMT on 14 September 2015, of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source.
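That 7 millisecond delay illustrates the triangulation idea described earlier: for a single pair of detectors the arrival-time difference constrains only the angle between the source direction and the detector baseline. The Python sketch below is a deliberately simplified illustration; the baseline length is approximate and the relation Δt = (d/c)·cos θ ignores detector antenna patterns.

```python
import math

# Minimal sketch of two-detector triangulation: the arrival-time difference dt
# fixes the angle theta between the source direction and the detector baseline
# via dt = (d / c) * cos(theta).
c = 2.998e8    # speed of light, m/s
d = 3.0e6      # approximate Hanford-Livingston separation, m (~3000 km)
dt = 7.0e-3    # arrival-time difference reported for GW150914, s

theta = math.degrees(math.acos(c * dt / d))
print(f"source lies on a cone about {theta:.0f} degrees from the baseline")
```

A single detector pair therefore localises the source only to a ring on the sky; adding a third detector, as with Virgo for GW170814, shrinks this to a small patch.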
The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The gravitational waves were observed with a statistical significance of more than 5 sigma (in other words, with a probability of roughly 99.99997% that the signal is not a chance fluctuation), the level conventionally required to claim a discovery in experimental physics. Since then LIGO and Virgo have reported more gravitational wave observations from merging black hole binaries. On 16 October 2017, the LIGO and Virgo collaborations announced the first-ever detection of gravitational waves originating from the coalescence of a binary neutron star system. The observation of the GW170817 transient, which occurred on 17 August 2017, allowed the masses of the neutron stars involved to be constrained to between 0.86 and 2.26 solar masses. Further analysis allowed a greater restriction of the mass values to the interval 1.17–1.60 solar masses, with the total system mass measured to be 2.73–2.78 solar masses. The inclusion of the Virgo detector in the observation effort allowed for an improvement of the localization of the source by a factor of 10. This in turn facilitated the electromagnetic follow-up of the event. In contrast to the case of binary black hole mergers, binary neutron star mergers were expected to yield an electromagnetic counterpart, that is, a light signal associated with the event. A gamma-ray burst (GRB 170817A) was detected by the Fermi Gamma-ray Space Telescope, occurring 1.7 seconds after the gravitational wave transient. The signal, originating near the galaxy NGC 4993, was associated with the neutron star merger. This was corroborated by the electromagnetic follow-up of the event (AT 2017gfo), involving 70 telescopes and observatories and yielding observations over a large region of the electromagnetic spectrum which further confirmed the neutron star nature of the merged objects and the associated kilonova. In 2021, the detection of the first two neutron star-black hole binaries by the LIGO and Virgo detectors was published in the Astrophysical Journal Letters, making it possible for the first time to set bounds on the abundance of such systems. No neutron star-black hole binary had ever been observed using conventional means before the gravitational observation. Microscopic sources In 1964, L. Halpern and B. Laurent showed theoretically that gravitational spin-2 electron transitions are possible in atoms. Compared to electric and magnetic transitions, the emission probability is extremely low. Stimulated emission was discussed as a way of increasing the efficiency of the process. Due to the lack of mirrors or resonators for gravitational waves, they determined that a single-pass GASER (a kind of laser emitting gravitational waves) is practically unfeasible. In 1998, the possibility of a different implementation of the above theoretical analysis was proposed by Giorgio Fontana. The required coherence for a practical GASER could be obtained by Cooper pairs in superconductors, which are characterized by a macroscopic collective wave-function. Cuprate high temperature superconductors are characterized by the presence of s-wave and d-wave Cooper pairs. Transitions between s-wave and d-wave states are gravitational spin-2 transitions. Out-of-equilibrium conditions can be induced by injecting s-wave Cooper pairs from a low temperature superconductor, for instance lead or niobium, which is pure s-wave, by means of a Josephson junction with high critical current.
The amplification mechanism can be described as the effect of superradiance, and 10 cubic centimeters of cuprate high temperature superconductor seem sufficient for the mechanism to work properly. A detailed description of the approach can be found in the chapter "High Temperature Superconductors as Quantum Sources of Gravitational Waves: The HTSC GASER". In fiction An episode of the 1962 Russian science-fiction novel Space Apprentice by Arkady and Boris Strugatsky shows an experiment monitoring the propagation of gravitational waves at the expense of annihilating a chunk of asteroid 15 Eunomia the size of Mount Everest. In Stanislaw Lem's 1986 novel Fiasco, a "gravity gun" or "gracer" (gravity amplification by collimated emission of resonance) is used to reshape a collapsar, so that the protagonists can exploit the extreme relativistic effects and make an interstellar journey. In Greg Egan's 1997 novel Diaspora, the analysis of a gravitational wave signal from the inspiral of a nearby binary neutron star reveals that its collision and merger is imminent, implying a large gamma-ray burst is going to impact the Earth. In Liu Cixin's 2006 Remembrance of Earth's Past series, gravitational waves are used as an interstellar broadcast signal, which serves as a central plot point in the conflict between civilizations within the galaxy. See also 2017 Nobel Prize in Physics, which was awarded to three individual physicists for their role in the discovery of and testing for the waves Anti-gravity Artificial gravity First observation of gravitational waves Gravitational plane wave Gravitational field Gravitational-wave astronomy Gravitational wave background Gravitational-wave observatory Gravitomagnetism Graviton Hawking radiation, for gravitationally induced electromagnetic radiation from black holes HM Cancri LISA, DECIGO and BBO – proposed space-based detectors LIGO, Virgo interferometer, GEO600, KAGRA, and TAMA 300 – Ground-based gravitational-wave detectors Linearized gravity Peres metric pp-wave spacetime, for an important class of exact solutions modelling gravitational radiation PSR B1913+16, the first binary pulsar discovered and the first experimental evidence for the existence of gravitational waves. Spin-flip, a consequence of gravitational wave emission from binary supermassive black holes Sticky bead argument, for a physical way to see that gravitational radiation should carry energy Tidal force References Further reading Bartusiak, Marcia. Einstein's Unfinished Symphony. Washington, DC: Joseph Henry Press, 2000. Landau, L.D. and Lifshitz, E.M., The Classical Theory of Fields (Pergamon Press, 1987). Bibliography Berry, Michael, Principles of Cosmology and Gravitation (Adam Hilger, Philadelphia, 1989). Collins, Harry, Gravity's Shadow: The Search for Gravitational Waves (University of Chicago Press, 2004). Collins, Harry, Gravity's Kiss: The Detection of Gravitational Waves (The MIT Press, Cambridge, MA, 2017). Davies, P.C.W., The Search for Gravity Waves (Cambridge University Press, 1980). Grote, Hartmut, Gravitational Waves: A History of Discovery (CRC Press, Taylor & Francis Group, Boca Raton/London/New York, 2020). Peebles, P. J. E., Principles of Physical Cosmology (Princeton University Press, Princeton, 1993). Wheeler, John Archibald and Ciufolini, Ignazio, Gravitation and Inertia (Princeton University Press, Princeton, 1995). Woolf, Harry, ed., Some Strangeness in the Proportion (Addison–Wesley, Reading, MA, 1980).
External links Laser Interferometer Gravitational Wave Observatory. LIGO Laboratory, operated by the California Institute of Technology and the Massachusetts Institute of Technology Gravitational Waves – Collected articles at Nature Journal Gravitational Waves – Collected articles at Scientific American Video (94:34) – Scientific Talk on Discovery, Barry Barish, CERN (11 February 2016) Binary stars Black holes Effects of gravity Concepts in astronomy Unsolved problems in physics
Gravitational wave
[ "Physics", "Astronomy" ]
11,392
[ "Physical phenomena", "Black holes", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Astrophysics", "Waves", "Density", "Stellar phenomena", "Gravitational waves", "Astronomical objects" ]
8,112,674
https://en.wikipedia.org/wiki/Mebeverine
Mebeverine is a drug used to alleviate some of the symptoms of irritable bowel syndrome. It works by relaxing the muscles in and around the gut. Medical use Mebeverine is used to alleviate some of the symptoms of irritable bowel syndrome (IBS) and related conditions; specifically stomach pain and cramps, persistent diarrhoea, and flatulence. Historically, data from controlled clinical trials did not find a difference from placebo or statistically significant results in the global improvement of IBS. However, more recent systematic reviews have found mebeverine to be an effective treatment option in IBS, with a good safety profile and a low frequency of adverse effects. It has not been tested in pregnant women or in pregnant animals, so pregnant women should not take it. It is expressed at low levels in breast milk; although no adverse effects have been reported in infants, breastfeeding women should not take this drug either. Adverse effects Adverse effects include hypersensitivity and allergic reactions, immune system disorders, and skin disorders including hives, oedema and widespread rashes. Additionally, the following adverse effects have been reported: heartburn, indigestion, tiredness, diarrhoea, constipation, loss of appetite, general malaise, dizziness, insomnia, headache, and decreased pulse rate. It does not have systemic anticholinergic side effects. Mebeverine can, in very rare cases, cause drug-induced acute angle closure glaucoma. In a urine drug-screening test, mebeverine can produce a false positive result for amphetamines. Mechanism of action Mebeverine is an anticholinergic but its mechanism of action is not known; it appears to work directly on smooth muscle within the gastrointestinal tract and may have an anaesthetic effect, may affect calcium channels, and may affect muscarinic receptors. It is metabolized almost completely, mostly by esterases. The metabolites are excreted in urine. Mebeverine exists in two enantiomeric forms. The commercially available product is a racemic mixture of the two. A study in rats indicates that the two enantiomers have different pharmacokinetic profiles. History It is a second-generation papaverine analog, and was first synthesized around the same time as verapamil. It was first registered in 1965. Availability Mebeverine is a generic drug and is available internationally under many brand names, such as Duspatalin (sold by Abbott) and Mave and Mave SR. References 4-Methoxyphenyl compounds Amines Benzoate esters Catechol ethers M1 receptor antagonists M2 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists Motility stimulants
Mebeverine
[ "Chemistry" ]
583
[ "Amines", "Bases (chemistry)", "Functional groups" ]
8,112,701
https://en.wikipedia.org/wiki/Seliwanoff%27s%20test
Seliwanoff's test is a chemical test which distinguishes between aldose and ketose sugars. If the sugar contains a ketone group, it is a ketose; if it contains an aldehyde group, it is an aldose. The test relies on the principle that, when heated, ketoses are dehydrated more rapidly than aldoses. It is named after Theodor Seliwanoff, the chemist who devised the test. When the reagent is added to a solution containing ketoses, a red color forms rapidly, indicating a positive test. When added to a solution containing aldoses, a light pink color forms more slowly instead. The reagent consists of resorcinol and concentrated hydrochloric acid. The acid hydrolysis of polysaccharide and oligosaccharide ketoses yields simpler sugars, followed by dehydration to furfural. The dehydrated ketose then reacts with two equivalents of resorcinol in a series of condensation reactions to produce a molecule with a deep cherry red color. Aldoses may react slightly to produce a faint pink color. Fructose and sucrose are two common sugars which give a positive test. Sucrose gives a positive test because it is a disaccharide consisting of fructose and glucose. Generally, 6 M HCl is used to run this test. Ketoses are dehydrated faster and give stronger colors; aldoses react very slowly and give faint colors. References Biochemistry detection methods Carbohydrate methods Reagents for organic chemistry
Seliwanoff's test
[ "Chemistry", "Biology" ]
328
[ "Biochemistry methods", "Biochemistry detection methods", "Chemical tests", "Carbohydrate methods", "Carbohydrate chemistry", "Reagents for organic chemistry" ]
8,113,126
https://en.wikipedia.org/wiki/Globally%20hyperbolic%20manifold
In mathematical physics, global hyperbolicity is a certain condition on the causal structure of a spacetime manifold (that is, a Lorentzian manifold). It is called hyperbolic in analogy with the linear theory of wave propagation, where the future state of a system is specified by initial conditions. (In turn, the leading symbol of the wave operator is that of a hyperboloid.) This is relevant to Albert Einstein's theory of general relativity, and potentially to other metric gravitational theories. Definitions There are several equivalent definitions of global hyperbolicity. Let M be a smooth connected Lorentzian manifold without boundary. We make the following preliminary definitions: M is non-totally vicious if there is at least one point such that no closed timelike curve passes through it. M is causal if it has no closed causal curves. M is non-total imprisoning if no inextendible causal curve is contained in a compact set. This property implies causality. M is strongly causal if for every point p and any neighborhood U of p there is a causally convex neighborhood V of p contained in U, where causal convexity means that any causal curve with endpoints in V is entirely contained in V. This property implies non-total imprisonment. Given any point p in M, J⁺(p) [resp. J⁻(p)] is the collection of points which can be reached by a future-directed [resp. past-directed] continuous causal curve starting from p. Given a subset S of M, the domain of dependence of S is the set of all points p in M such that every inextendible causal curve through p intersects S. A subset S of M is achronal if no timelike curve intersects S more than once. A Cauchy surface for M is a closed achronal set whose domain of dependence is M. The following conditions are equivalent: The spacetime is causal, and for every pair of points p and q in M, the space of continuous future-directed causal curves from p to q is compact in the C⁰ topology. The spacetime has a Cauchy surface. The spacetime is causal, and for every pair of points p and q in M, the subset J⁺(p) ∩ J⁻(q) is compact. The spacetime is non-total imprisoning, and for every pair of points p and q in M, the subset J⁺(p) ∩ J⁻(q) is contained in a compact set (that is, its closure is compact). If any of these conditions are satisfied, we say M is globally hyperbolic. If M is a smooth connected Lorentzian manifold with boundary, we say it is globally hyperbolic if its interior is globally hyperbolic. Other equivalent characterizations of global hyperbolicity make use of the notion of Lorentzian distance d(p, q) = sup_γ L(γ), where the supremum is taken over all the causal curves γ connecting the points and L(γ) denotes the Lorentzian length of γ (by convention d(p, q) = 0 if there is no such curve). They are: A strongly causal spacetime for which d is finite valued. A non-total imprisoning spacetime such that d is continuous for every metric choice in the conformal class of the original metric. Remarks Global hyperbolicity, in the first form given above, was introduced by Leray in order to consider well-posedness of the Cauchy problem for the wave equation on the manifold. In 1970 Geroch proved the equivalence of definitions 1 and 2. Definition 3 under the assumption of strong causality and its equivalence to the first two was given by Hawking and Ellis. As mentioned, in older literature, the condition of causality in the first and third definitions of global hyperbolicity given above is replaced by the stronger condition of strong causality. In 2007, Bernal and Sánchez showed that the condition of strong causality can be replaced by causality.
In particular, any globally hyperbolic manifold as defined in 3 is strongly causal. Later Hounnonkpe and Minguzzi proved that for quite reasonable spacetimes, more precisely those of dimension larger than three which are non-compact or non-totally vicious, the 'causal' condition can be dropped from definition 3. In definition 3 the requirement that J⁺(p) ∩ J⁻(q) be compact, hence closed, seems a strong condition (in fact, closedness of the sets J⁺(p) and J⁻(p) characterizes causal simplicity, the level of the causal hierarchy of spacetimes which stays just below global hyperbolicity). It is possible to remedy this problem by strengthening the causality condition as in definition 4, proposed by Minguzzi in 2009. This version clarifies that global hyperbolicity sets a compatibility condition between the causal relation and the notion of compactness: every causal diamond is contained in a compact set and every inextendible causal curve escapes compact sets. Observe that the larger the family of compact sets, the easier it is for causal diamonds to be contained in some compact set, but the harder it is for causal curves to escape compact sets. Thus global hyperbolicity sets a balance on the abundance of compact sets in relation to the causal structure. Since finer topologies have fewer compact sets, we can also say that the balance is on the number of open sets given the causal relation. Definition 4 is also robust under perturbations of the metric (which in principle could introduce closed causal curves). In fact using this version it has been shown that global hyperbolicity is stable under metric perturbations. In 2003, Bernal and Sánchez showed that any globally hyperbolic manifold M has a smooth embedded three-dimensional Cauchy surface, and furthermore that any two Cauchy surfaces for M are diffeomorphic. In particular, M is diffeomorphic to the product of a Cauchy surface with ℝ. It was previously well known that any Cauchy surface of a globally hyperbolic manifold is an embedded three-dimensional submanifold, any two of which are homeomorphic, and such that the manifold splits topologically as the product of the Cauchy surface and ℝ. In particular, a globally hyperbolic manifold is foliated by Cauchy surfaces. In view of the initial value formulation for Einstein's equations, global hyperbolicity is seen to be a very natural condition in the context of general relativity, in the sense that given arbitrary initial data, there is a unique maximal globally hyperbolic solution of Einstein's equations. See also Causality conditions Causal structure Light cone References General relativity Mathematical methods in general relativity
Globally hyperbolic manifold
[ "Physics" ]
1,274
[ "General relativity", "Theory of relativity" ]
8,113,640
https://en.wikipedia.org/wiki/Tup%C3%AD
Tupí, also known as formatge de tupí, is a fermented cheese of a certain area of the Pyrenees and Pre-Pyrenees made from cow's or sheep's milk. It is a cheese traditionally prepared in the mountainous Pallars region, as well as in the Cerdanya and the Alt Urgell. Together with the llenguat, another fermented cheese of the same area, it is one of the few varieties of cheese of true Catalan origin. Description Tupí cheese was traditionally made at home in rural households according to old custom. It is quite soft and creamy, containing a high proportion of fat. Owing to its strong taste it is usually eaten with farmer-style bread along with strong wine. It can also be used as an ingredient for the preparation of sauces. Its preparation includes sheep's or cow's milk and aiguardent or another similarly strong liquor. The fresh cheese is pressed by hand until it takes a ball shape and all liquid is drained from it. Then it is put inside a glazed ceramic jar known as a tupí, and the liquor is added. The mixture is then stirred from time to time during the first four or five days after preparation. The jar is then covered and kept in a cool and dry place for a minimum of two months, during which the cheese ferments and reaches the desired consistency. Some households add olive oil to the cheese in the jar after fermentation. See also Fermentation in food processing References External links Formatges catalans Pallars Jussa – Productes gastronomics Productes Típics Cerdanya Catalan cuisine Spanish cheeses Cow's-milk cheeses Sheep's-milk cheeses Fermented foods
Tupí
[ "Biology" ]
347
[ "Fermented foods", "Biotechnology products" ]
17,322,852
https://en.wikipedia.org/wiki/Quasi%20Fermi%20level
A quasi Fermi level is a term used in quantum mechanics and especially in solid state physics for the Fermi level (chemical potential of electrons) that describes the population of electrons separately in the conduction band and valence band, when their populations are displaced from equilibrium. This displacement could be caused by the application of an external voltage, or by exposure to light with photon energy exceeding the band gap, which alters the populations of electrons in the conduction band and valence band. Since the recombination rate (the rate of equilibration between bands) tends to be much slower than the energy relaxation rate within each band, the conduction band and valence band can each have an individual population that is internally in equilibrium, even though the bands are not in equilibrium with respect to exchange of electrons. The displacement from equilibrium is such that the carrier populations can no longer be described by a single Fermi level; however, it is possible to describe them using the concept of separate quasi-Fermi levels for each band. Definition When a semiconductor is in thermal equilibrium, the distribution function of the electrons at the energy level E is given by a Fermi–Dirac distribution function. In this case the Fermi level is defined as the level at which the probability of occupation by an electron at that energy is 1/2. In thermal equilibrium, there is no need to distinguish between the conduction band quasi-Fermi level and the valence band quasi-Fermi level, as they are simply equal to the Fermi level. When a disturbance from a thermal equilibrium situation occurs, the populations of the electrons in the conduction band and valence band change. If the disturbance is not too great or not changing too quickly, the bands each relax to a state of quasi thermal equilibrium. Because the relaxation time for electrons within the conduction band is much lower than across the band gap, we can consider that the electrons are in thermal equilibrium in the conduction band. This is also applicable for electrons in the valence band (often understood in terms of holes). We can define a quasi Fermi level E_Fc and quasi temperature T_c due to thermal equilibrium of electrons in the conduction band, and a quasi Fermi level E_Fv and quasi temperature T_v for the valence band similarly. We can state the general Fermi function for electrons in the conduction band as f_c(k, r) = 1 / (exp[(E_c(k) − E_Fc(r)) / (k_B T_c)] + 1), and for electrons in the valence band as f_v(k, r) = 1 / (exp[(E_v(k) − E_Fv(r)) / (k_B T_v)] + 1), where: E_Fc(r) is the conduction band quasi-Fermi level at location r, E_Fv(r) is the valence band quasi-Fermi level at location r, T_c is the conduction band temperature, T_v is the valence band temperature, f_c(k, r) is the probability that a particular conduction-band state, with wavevector k and position r, is occupied by an electron, f_v(k, r) is the probability that a particular valence-band state, with wavevector k and position r, is occupied by an electron (i.e. not occupied by a hole), E_c(k) or E_v(k) is the energy of the conduction- or valence-band state in question, and k_B is the Boltzmann constant; each expression has the form of the Fermi–Dirac distribution function. p–n junction In a band diagram of a p–n junction, the conduction band and valence band are conventionally drawn as solid lines, with the quasi Fermi levels as dashed lines. When there is no external voltage (bias) applied to a p–n junction, the quasi Fermi levels for electrons and holes overlap with one another. As the bias increases, the valence band of the p-side gets pulled down, and so does the hole quasi Fermi level. As a result, the separation of the hole and electron quasi Fermi levels increases.
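In the Boltzmann regime this separation is tied to the carrier densities through the standard relation np = n_i² exp[(E_Fc − E_Fv)/(k_B T)], and under forward bias it is approximately the applied voltage times the electron charge. The short Python sketch below is illustrative only; the silicon-like numbers are assumptions, not values taken from this article.

```python
import math

# Minimal sketch: quasi-Fermi level separation from carrier densities in the
# Boltzmann approximation, using np = ni^2 * exp(dEF / kT).
kT = 0.02585   # thermal energy at 300 K, eV
ni = 1.0e10    # intrinsic carrier density (silicon-like), cm^-3

def quasi_fermi_splitting(n, p):
    """Separation E_Fc - E_Fv in eV for electron/hole densities in cm^-3."""
    return kT * math.log(n * p / ni**2)

# An illuminated sample with excess carriers n = p = 1e15 cm^-3:
print(f"{quasi_fermi_splitting(1e15, 1e15):.2f} eV")  # ~0.6 eV
```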
Application This simplification will help us in many areas. For example, we can use the same equation for electron and hole densities used in thermal equilibrium, but substituting the quasi-Fermi levels and temperature. That is, if we let n be the spatial density of conduction band electrons and p be the spatial density of holes in a material, and if the Boltzmann approximation holds, i.e. assuming the electron and hole densities are not too high, then n = n₀(E_Fc) and p = p₀(E_Fv), where n₀(E_Fc) is the spatial density of conduction band electrons that would be present in thermal equilibrium if the Fermi level were at E_Fc, and p₀(E_Fv) is the spatial density of holes that would be present in thermal equilibrium if the Fermi level were at E_Fv. A current (due to the combined effects of drift and diffusion) will only appear if there is a variation in the Fermi or quasi Fermi level. The current density for electron flow can be shown to be proportional to the gradient in the electron quasi Fermi level. For if we let μ_n be the electron mobility, and E_Fc(r) be the quasi Fermi energy at the spatial point r, then we have J_n(r) = μ_n n(r) ∇E_Fc(r). Similarly, for holes, we have J_p(r) = μ_p p(r) ∇E_Fv(r), with μ_p the hole mobility. Electronic band structures Fermi–Dirac statistics
Quasi Fermi level
[ "Physics", "Chemistry", "Materials_science" ]
966
[ "Electron", "Electronic band structures", "Condensed matter physics" ]
17,323,921
https://en.wikipedia.org/wiki/Lagrangian%20analysis
Lagrangian analysis is the use of Lagrangian coordinates to analyze various problems in continuum mechanics. Lagrangian analysis may be used to analyze currents and flows of various materials by analyzing data collected from gauges or sensors embedded in the material, which move freely with the motion of the material. A common application is the study of ocean currents in oceanography, where the movable gauges in question are called Lagrangian drifters. Recently, with the development of high speed cameras and particle-tracking algorithms, there have also been applications to measuring turbulence. References Fluid dynamics
Lagrangian analysis
[ "Chemistry", "Engineering" ]
118
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
10,473,148
https://en.wikipedia.org/wiki/Dust%20explosion
A dust explosion is the rapid combustion of fine particles suspended in the air within an enclosed location. Dust explosions can occur where any dispersed powdered combustible material is present in high-enough concentrations in the atmosphere or other oxidizing gaseous medium, such as pure oxygen. In cases when fuel plays the role of a combustible material, the explosion is known as a fuel-air explosion. Dust explosions are a frequent hazard in coal mines, grain elevators and silos, and other industrial environments. They are also commonly used by special effects artists, filmmakers, and pyrotechnicians, given their spectacular appearance and ability to be safely contained under certain carefully controlled conditions. Thermobaric weapons exploit this principle by rapidly saturating an area with an easily combustible material and then igniting it to produce explosive force. These weapons are the most powerful non-nuclear weapons in existence. Terminology If rapid combustion occurs in a confined space, enormous overpressures can build up, causing major structural damage and flying debris. The sudden release of energy from a "detonation" can produce a shockwave, either in open air or in a confined space. If the spread of flame is at subsonic speed, the phenomenon is sometimes called a "deflagration", although looser usage calls both phenomena "explosions". Dust explosions may be classified as being either "primary" or "secondary" in nature. Primary dust explosions may occur inside process equipment or similar enclosures, and are generally controlled by pressure relief through purpose-built ducting to the external atmosphere. Secondary dust explosions are the result of dust accumulation inside a building being disturbed and ignited by the primary explosion, resulting in a much more dangerous uncontrolled explosion that can affect the entire structure. Historically, fatalities from dust explosions have largely been the result of secondary dust explosions. Conditions required There are five necessary conditions for a dust explosion: A combustible dust The dust is dispersed in the air within certain flammability limits There is an oxidant (typically atmospheric oxygen) There is an ignition source The area is confineda building can be an enclosure Sources of dust Many common materials which are known to burn can generate a dust explosion, such as coal dust and sawdust. In addition, many otherwise mundane organic materials can also be dispersed into a dangerous dust cloud, such as grain, flour, starch, sugar, powdered milk, cocoa, coffee, and pollen. Powdered metals (such as aluminum, magnesium, and titanium) can form explosive suspensions in air, if finely divided. A gigantic explosion of flour dust destroyed a mill in Minnesota on May 2, 1878, killing 14 workers at the Washburn A Mill and another four in adjacent buildings. A similar problem occurs in sawmills and other places dedicated to woodworking. Since the advent of industrial production–scale metal powder–based additive manufacturing (AM) in the 2010s, there is growing need for more information and experience with preventing dust explosions and fires from the traces of excess metal powder sometimes left over after laser sintering or other fusion methods. For example, in machining operations downstream of the AM build, excess powder liberated from porosities in the support structures can be exposed to sparks from the cutting interface. 
Efforts are underway not only to build this knowledgebase within the industry but also to share it with local fire departments, who do periodic fire-safety inspections of businesses in their districts and who can expect to answer alarms at shops or plants where AM is now part of the production mix. Although not strictly a dust, paper particles emitted during processing – especially rolling, unrolling, calendering/slitting, and sheet-cutting – are also known to pose an explosion hazard. Enclosed paper mill areas subject to such dangers commonly maintain very high air humidities to reduce the chance of airborne paper dust explosions. In special effects pyrotechnics, lycopodium powder and non-dairy creamer are two common means of producing safe, controlled fire effects. To support rapid combustion, the dust must consist of very small particles with a high surface area to volume ratio, thereby making the collective or combined surface area of all the particles very large in comparison to a dust of larger particles. Dust is defined as powders with particles less than about 500 micrometres in diameter, but finer dust will present a much greater hazard than coarse particles by virtue of the larger total surface area of all the particles. Concentration Below a certain value, the lower explosive limit (LEL), there is insufficient dust to support the combustion at the rate required for an explosion. A combustible concentration at or below 25% of the LEL is considered safe. Similarly, if the fuel to air ratio increases above the upper explosive limit (UEL), there is insufficient oxidant to permit combustion to continue at the necessary rate. Determining the minimum explosive concentration or maximum explosive concentration of dusts in air is difficult, and consulting different sources can lead to quite different results. Typical explosive ranges in air run from a few tens of grams/m3 for the minimum limit to a few kg/m3 for the maximum limit. For example, the LEL for sawdust has been determined to be between 40 and 50 grams/m3. The limits depend on many factors, including the type of material used. Oxidant Typically, normal atmospheric oxygen can be sufficient to support a dust explosion if the other necessary conditions are also present. High-oxygen or pure oxygen environments are considered to be especially hazardous, as are strong oxidizing gases such as chlorine and fluorine. Also, particulate suspensions of compounds with a high oxidative potential, such as peroxides, chlorates, nitrates, perchlorates, and dichromates, can increase risk of an explosion if combustible materials are also present. Sources of ignition There are many sources of ignition, and a naked flame need not be the only one: over one half of the dust explosions in Germany in 2005 were from non-flame sources. Common sources of ignition include: electrostatic discharge (e.g. an improperly installed conveyor belt, which can act like a Van de Graaff generator) friction electrical arcing from machinery or other equipment hot surfaces (e.g. overheated bearings) fire self-ignition However, it is often difficult to determine the exact source of ignition when investigating after an explosion. When a source cannot be found, ignition will often be attributed to static electricity. Static charges can be generated by external sources, or can be internally generated by friction at the surfaces of particles themselves as they collide or move past one another. Mechanism Dusts have a very large surface area compared to their mass.
Since burning can only occur at the surface of a solid or liquid, where it can react with oxygen, this causes dusts to be much more flammable than bulk materials. For example, a 1 kg sphere of a combustible material with a density of 1 g/cm3 would be about 12.4 cm in diameter, and have a surface area of only 0.048 m2. However, if it were broken up into spherical dust particles 50 μm in diameter (about the size of flour particles) it would have a surface area of 120 m2. This greatly increased surface area allows the material to burn much faster, and the extremely small mass of each particle allows them to catch on fire with much less energy than the bulk material, as there is no heat loss to conduction within the material. When this mixture of fuel and air is ignited, especially in a confined space such as a warehouse or silo, a significant increase in pressure is created, often more than sufficient to demolish the structure. Even materials that are traditionally thought of as nonflammable (such as aluminum), or slow burning (such as wood), can produce a powerful explosion when finely divided, and can be ignited by even a small spark. Effects A dust explosion can cause major damage to structures, equipment, and personnel from violent overpressure or shockwave effects. Flying objects and debris can cause further damage. Intense radiant heat from a fireball can ignite the surroundings, or cause severe skin burns in unprotected persons. In a tightly enclosed space, the sudden depletion of oxygen can cause asphyxiation. Where the dust is carbon based (such as in a coal mine), incomplete combustion may cause large amounts of carbon monoxide (the miners' after-damp) to be created. This can cause more deaths than the original explosion as well as hindering rescue attempts. Protection and mitigation Much research has been carried out in Europe and elsewhere to understand how to control these dangers, but dust explosions still occur. The alternatives for making processes and plants safer depend on the industry. In the coal mining industry, a methane explosion can initiate a coal dust explosion, which can then engulf an entire mine pit. As a precaution, incombustible stone dust may be spread along mine roadways, or stored in trays hanging from the roof, to dilute the coal dust stirred up by a shockwave to the point where it cannot burn. Mines may also be sprayed with water to inhibit ignition. Some industries exclude oxygen from dust-raising processes, a precaution known as "inerting". Typically this uses nitrogen, carbon dioxide, or argon, which are incombustible gases which can displace oxygen. The same method is also used in large storage tanks where flammable vapors can accumulate. However, use of oxygen-free gases brings a risk of asphyxiation of the workers. Workers who need illumination in enclosed spaces where a dust explosion is a high risk often use lamps designed for underwater divers, as they have no risk of producing an open spark due to their sealed waterproof design. Good housekeeping practices, such as eliminating build-up of combustible dust deposits that could be disturbed and lead to a secondary explosion, also help mitigate the problem.
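As a numerical cross-check of the sphere example given under Mechanism above, the surface areas can be computed directly; this minimal Python sketch uses the same 1 kg mass assumed in that example.

```python
import math

# Minimal sketch: surface area of a 1 kg bulk sphere at 1 g/cm^3 versus the
# same mass divided into 50 um spherical particles.
rho = 1000.0             # density, kg/m^3 (= 1 g/cm^3)
mass = 1.0               # kg, as assumed in the worked example
V = mass / rho           # total volume, m^3

r_bulk = (3 * V / (4 * math.pi)) ** (1 / 3)
A_bulk = 4 * math.pi * r_bulk**2
A_dust = 6 * V / 50e-6   # total area of spheres of diameter d is 6*V/d
print(f"bulk: {2 * r_bulk * 100:.1f} cm diameter, {A_bulk:.3f} m^2; "
      f"dust: {A_dust:.0f} m^2")  # ~12.4 cm, ~0.048 m^2 vs ~120 m^2
```

Dividing the material into dust thus multiplies the burning surface by a factor of roughly 2,500 in this example.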
Effects A dust explosion can cause major damage to structures, equipment, and personnel from violent overpressure or shockwave effects. Flying objects and debris can cause further damage. Intense radiant heat from a fireball can ignite the surroundings, or cause severe skin burns in unprotected persons. In a tightly enclosed space, the sudden depletion of oxygen can cause asphyxiation. Where the dust is carbon based (such as in a coal mine), incomplete combustion may cause large amounts of carbon monoxide (the miners' after-damp) to be created. This can cause more deaths than the original explosion as well as hindering rescue attempts. Protection and mitigation Much research has been carried out in Europe and elsewhere to understand how to control these dangers, but dust explosions still occur. The alternatives for making processes and plants safer depend on the industry. In the coal mining industry, a methane explosion can initiate a coal dust explosion, which can then engulf an entire mine pit. As a precaution, incombustible stone dust may be spread along mine roadways, or stored in trays hanging from the roof, to dilute the coal dust stirred up by a shockwave to the point where it cannot burn. Mines may also be sprayed with water to inhibit ignition. Some industries exclude oxygen from dust-raising processes, a precaution known as "inerting". Typically this uses nitrogen, carbon dioxide, or argon, which are incombustible gases that can displace oxygen. The same method is also used in large storage tanks where flammable vapors can accumulate. However, the use of oxygen-free gases brings a risk of asphyxiation for workers. Workers who need illumination in enclosed spaces where a dust explosion is a high risk often use lamps designed for underwater divers, as their sealed waterproof design carries no risk of producing an open spark. Good housekeeping practices, such as eliminating build-up of combustible dust deposits that could be disturbed and lead to a secondary explosion, also help mitigate the problem. The best engineering control measures, which can be found in the National Fire Protection Association (NFPA) combustible dust standards, include: Wetting Oxidant concentration reduction Deflagration venting Deflagration pressure containment Deflagration suppression Deflagration venting through a dust retention and flame-arresting device Notable incidents Dust clouds are a common source of explosions, causing an estimated 2,000 explosions annually in Europe, and notable incidents have occurred worldwide. See also Air to fuel ratio Power tool References External links Incidents in France and the US: Combustible dust explosion investigation products from the Chemical Safety Board Combustible Dust Policy Institute-ATEX OSHA case studies of dust explosions Protecting process plant, grain handling facilities, etc. from the risk of dust hazard explosions: Hazard Monitoring Equipment – Selection, Installation and Maintenance Seminars for Combustible Dust Safety HSE (UK) advice on safe handling of combustible dust Combustible Dust, CCOHS Chemical processes Occupational safety and health Particulates
Dust explosion
[ "Chemistry" ]
2,232
[ "Dust explosions", "Particulates", "Chemical processes", "nan", "Explosions", "Chemical process engineering", "Particle technology" ]
10,473,995
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Sustainable%20Materials
The Max Planck Institute for Sustainable Materials is a research institute of the Max Planck Society located in Düsseldorf. Since 1971, it has been legally independent and organized in the form of a GmbH, which was formerly supported and financed in equal parts by the Max Planck Society and the Steel Institute VDEh. The Max Planck Society has been the sole shareholder since 2020. History The institute was founded in 1917 as the Kaiser Wilhelm Institute for Iron Research in Aachen, with Fritz Wüst as the founding director. It moved to Düsseldorf in 1921 and relocated from the "Rheinische Metallwaarenfabrik" to its current location in 1935. In 1943, it moved temporarily to Clausthal and in 1946 back to Düsseldorf. The long-term institutional co-sponsorship by the Steel Institute VDEh made the institute a unique example of a public–private partnership for both the Max Planck Society and European industry, and was intended to ensure a close link between knowledge-oriented and pre-competitive basic research on the one hand and commercial relevance on the other. After the VDEh had been reducing its annual subsidies since 2016 due to structural problems in the steel industry, and had completely terminated the financing agreement in October 2019 with effect from 31 December 2021, it transferred all shares to the Max Planck Society in March 2020, making it the sole shareholder. In April 2024, the Max Planck Institute for Iron Research was renamed the Max Planck Institute for Sustainable Materials. Fields of research The institute plays a central role in enabling progress in the fields of mobility (e.g. steels and soft magnets for light-weight hybrid vehicles and Ni-base alloys for plane turbines), energy (e.g. efficiency of thermal power conversion through better high temperature alloys and nanostructured solar cells), infrastructure (e.g. steels for large infrastructures, e.g. wind turbines and chemical plants), and safety (e.g. nanostructured bainitic steels for gas pipelines). The institute, with its international team of about 300 employees, is organized into four departments: Computational Materials Design (Prof. Jörg Neugebauer) Interface Chemistry and Surface Engineering (Prof. Jörg Neugebauer, temporarily while Prof. Martin Stratmann is on leave) Microstructure Physics and Alloy Design (Prof. Dierk Raabe) Structure and Nano-/Micromechanics of Materials (Prof. Gerhard Dehm) In addition to departmental research, certain research activities are of common interest within the MPIE. These central research areas are highly interdisciplinary and combine the experimental and theoretical expertise available in the different departments. The six main cross-disciplinary topics are: Sustainability and decarburization Artificial intelligence and digitalisation Materials under harsh conditions Innovative materials Microstructure and properties Advanced methods In many of these areas the institute holds a position of international scientific leadership, particularly in multiscale materials modeling, surface science, metallurgical alloy design, and advanced structure characterization from atomic to macroscopic scales of complex engineering and functional materials. Literature Adolf von Harnack: Rede zur Weihe des Kaiser-Wilhelm-Instituts für Eisenforschung. 1921. In: Adolf von Harnack. Wissenschaftspolitische Reden und Aufsätze, zusammengestellt und herausgegeben von Bernhard Fabian, Olms-Weidmann, Hildesheim-Zürich-New York 2001, (German).
Max-Planck-Gesellschaft (Hrsg.): Max-Planck-Institut für Eisenforschung, Reihe: Berichte und Mitteilungen der Max-Planck-Gesellschaft 1993/5, ISSN 0341-7778 (German). External links Official institute home page Homepage of the International Max Planck Research School (IMPRS) for Surface and Interface Engineering in Advanced Materials DAMASK — the Düsseldorf Advanced Material Simulation Kit developed at MPIE Sustainable Materials Materials science organizations Research institutes in Düsseldorf
Max Planck Institute for Sustainable Materials
[ "Materials_science", "Engineering" ]
797
[ "Materials science organizations", "Materials science" ]
10,475,148
https://en.wikipedia.org/wiki/PCGamerBike
The PCGamerBike is an exercise bike that can interact with computer games. It uses magnets to produce resistance, which makes the bike relatively quiet in operation, and comes with software that automatically logs calories burned, distance, and speed to a daily graph. Types There are two versions of the PCGamerBike: the PCGamerBike Mini and the PCGamerBike Recumbent. The PCGamerBike Mini is a compact exercise bike, and the PCGamerBike Recumbent is a full-sized recumbent exercise bike. Use The PCGamerBike is configurable and as a result can interact with a broad range of PC games. They are typically used to control a character in a game, or a character's vehicle, such as a car, bike, or boat, by pedaling forward or backward to move the character in those directions. Side-to-side controls require the use of a keyboard or mouse, which can be used alongside the bike. When used with driving and racing games, character speed is proportional to pedal speed. The PCGamerBike Mini can be used with any game that supports a keyboard, as it is connected via a USB port as a game controller. The resistance of the pedals on the PCGamerBike Recumbent can be adjusted to the player's preference and will also vary depending on certain in-game situations, for example, when the character is going up or down hill. Awards The PCGamerBike received the 2007 International CES Innovations Design and Engineering Award. References External links Exercise equipment Fitness games Video game accessories
PCGamerBike
[ "Technology" ]
333
[ "Video game accessories", "Components" ]
10,477,221
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi%20model
In the mathematical field of graph theory, the Erdős–Rényi model refers to one of two closely related models for generating random graphs or the evolution of a random network. These models are named after Hungarian mathematicians Paul Erdős and Alfréd Rényi, who introduced one of the models in 1959. Edgar Gilbert introduced the other model contemporaneously with and independently of Erdős and Rényi. In the model of Erdős and Rényi, all graphs on a fixed vertex set with a fixed number of edges are equally likely. In the model introduced by Gilbert, also called the Erdős–Rényi–Gilbert model, each edge has a fixed probability of being present or absent, independently of the other edges. These models can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs. Definition There are two closely related variants of the Erdős–Rényi random graph model. In the G(n, M) model, a graph is chosen uniformly at random from the collection of all graphs which have n nodes and M edges. The nodes are considered to be labeled, meaning that graphs obtained from each other by permuting the vertices are considered to be distinct. For example, in the G(3, 2) model, there are three two-edge graphs on three labeled vertices (one for each choice of the middle vertex in a two-edge path), and each of these three graphs is included with probability 1/3. In the G(n, p) model, a graph is constructed by connecting n labeled nodes randomly. Each edge is included in the graph with probability p, independently from every other edge. Equivalently, the probability for generating each graph that has n nodes and M edges is p^M (1 − p)^(C(n,2) − M), where C(n,2) = n(n − 1)/2 is the number of possible edges. The parameter p in this model can be thought of as a weighting function; as p increases from 0 to 1, the model becomes more and more likely to include graphs with more edges and less and less likely to include graphs with fewer edges. In particular, the case p = 1/2 corresponds to the case where all 2^C(n,2) graphs on n vertices are chosen with equal probability. The behavior of random graphs is often studied in the case where n, the number of vertices, tends to infinity. Although p and M can be fixed in this case, they can also be functions depending on n. For example, the statement that almost every graph in G(n, 2 ln(n)/n) is connected means that, as n tends to infinity, the probability that a graph on n vertices with edge probability 2 ln(n)/n is connected tends to 1. Comparison between the two models The expected number of edges in G(n, p) is C(n,2) p, and by the law of large numbers any graph in G(n, p) will almost surely have approximately this many edges (provided the expected number of edges tends to infinity). Therefore, a rough heuristic is that if pn^2 → ∞, then G(n, p) should behave similarly to G(n, M) with M = C(n,2) p as n increases. For many graph properties, this is the case. If P is any graph property which is monotone with respect to the subgraph ordering (meaning that if A is a subgraph of B and B satisfies P, then A will satisfy P as well), then the statements "P holds for almost all graphs in G(n, p)" and "P holds for almost all graphs in G(n, C(n,2) p)" are equivalent (provided pn^2 → ∞). For example, this holds if P is the property of being connected, or if P is the property of containing a Hamiltonian cycle. However, this will not necessarily hold for non-monotone properties (e.g. the property of having an even number of edges). In practice, the G(n, p) model is the one more commonly used today, in part due to the ease of analysis allowed by the independence of the edges.
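The two sampling procedures just defined translate directly into code. The following Python sketch draws one graph from each model using only the standard library; representing a graph as a set of 2-tuples (i, j) with i < j is an implementation choice made here for brevity, not part of the model definitions:

    # Minimal sketch: sample from G(n, M) and G(n, p).
    import itertools
    import random

    def sample_gnm(n, m):
        """Uniformly random graph with n labeled nodes and exactly m edges."""
        all_edges = list(itertools.combinations(range(n), 2))
        return set(random.sample(all_edges, m))

    def sample_gnp(n, p):
        """Each of the C(n, 2) possible edges appears independently with prob. p."""
        return {e for e in itertools.combinations(range(n), 2) if random.random() < p}

    n, p = 100, 0.05
    g = sample_gnp(n, p)
    print(f"G(n={n}, p={p}): {len(g)} edges (expected {p * n * (n - 1) / 2:.0f})")
    print(f"G(n=100, M=250): {len(sample_gnm(100, 250))} edges")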
Properties of G(n, p) With the notation above, a graph in G(n, p) has on average C(n,2) p edges. The distribution of the degree of any particular vertex is binomial: P(deg(v) = k) = C(n−1, k) p^k (1 − p)^(n−1−k), where n is the total number of vertices in the graph. Since P(deg(v) = k) → (np)^k e^(−np) / k! as n → ∞ with np held constant, this distribution is Poisson for large n and np = const. In a 1960 paper, Erdős and Rényi described the behavior of G(n, p) very precisely for various values of p. Their results included that: If np < 1, then a graph in G(n, p) will almost surely have no connected components of size larger than O(log(n)). If np = 1, then a graph in G(n, p) will almost surely have a largest component whose size is of order n^(2/3). If np → c > 1, where c is a constant, then a graph in G(n, p) will almost surely have a unique giant component containing a positive fraction of the vertices. No other component will contain more than O(log(n)) vertices. If p < (1 − ε) ln(n)/n, then a graph in G(n, p) will almost surely contain isolated vertices, and thus be disconnected. If p > (1 + ε) ln(n)/n, then a graph in G(n, p) will almost surely be connected. Thus ln(n)/n is a sharp threshold for the connectedness of G(n, p). Further properties of the graph can be described almost precisely as n tends to infinity. For example, there is a k(n) (approximately equal to 2 log2(n)) such that the largest clique in G(n, 0.5) has almost surely either size k(n) or k(n) + 1. Thus, even though finding the size of the largest clique in a graph is NP-complete, the size of the largest clique in a "typical" graph (according to this model) is very well understood. Edge-dual graphs of Erdős–Rényi graphs are graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. Relation to percolation In percolation theory one examines a finite or infinite graph and removes edges (or links) randomly. Thus the Erdős–Rényi process is in fact unweighted link percolation on the complete graph. (One refers to percolation in which nodes and/or links are removed with heterogeneous weights as weighted percolation). As percolation theory has much of its roots in physics, much of the research done was on the lattices in Euclidean spaces. The transition at np = 1 from giant component to small component has analogs for these graphs, but for lattices the transition point is difficult to determine. Physicists often refer to study of the complete graph as a mean field theory. Thus the Erdős–Rényi process is the mean-field case of percolation. Some significant work was also done on percolation on random graphs. From a physicist's point of view this would still be a mean-field model, so the justification of the research is often formulated in terms of the robustness of the graph, viewed as a communication network. Given a random graph of n ≫ 1 nodes with an average degree ⟨k⟩, randomly remove a fraction 1 − p′ of the nodes, leaving only a fraction p′ of the network. There exists a critical percolation threshold p′_c = 1/⟨k⟩ below which the network becomes fragmented, while above p′_c a giant connected component of order n exists. The relative size of the giant component, P∞, is given by P∞ = p′[1 − exp(−⟨k⟩ P∞)]. Caveats Both of the two major assumptions of the G(n, p) model (that edges are independent and that each edge is equally likely) may be inappropriate for modeling certain real-life phenomena. Erdős–Rényi graphs have low clustering, unlike many social networks. Some modeling alternatives include Barabási–Albert model and Watts and Strogatz model. These alternative models are not percolation processes, but instead represent a growth and rewiring model, respectively.
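The giant-component equation above can be solved numerically by fixed-point iteration. The Python sketch below does this for a few retained fractions p′ at average degree ⟨k⟩ = 4; both parameter values are arbitrary illustrative choices:

    # Minimal sketch: solve P_inf = p' * (1 - exp(-<k> * P_inf)) by
    # fixed-point iteration. <k> = 4 and the p' values are arbitrary choices.
    import math

    def giant_component_fraction(p_prime, k_avg, iters=1000):
        """Relative size of the giant component after keeping a fraction p'."""
        p_inf = 1.0  # start from the fully-connected guess
        for _ in range(iters):
            p_inf = p_prime * (1.0 - math.exp(-k_avg * p_inf))
        return p_inf

    k_avg = 4.0
    print(f"critical threshold p'_c = 1/<k> = {1.0 / k_avg}")
    for p_prime in (0.1, 0.25, 0.5, 1.0):
        frac = giant_component_fraction(p_prime, k_avg)
        print(f"p' = {p_prime:4.2f} -> giant component fraction {frac:.3f}")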
Another alternative family of random graph models, capable of reproducing many real-life phenomena, are exponential random graph models. History The G(n, p) model was first introduced by Edgar Gilbert in a 1959 paper studying the connectivity threshold mentioned above. The G(n, M) model was introduced by Erdős and Rényi in their 1959 paper. As with Gilbert, their first investigations were as to the connectivity of G(n, M), with the more detailed analysis following in 1960. Continuum limit representation of critical G(n, p) A continuum limit of the graph was obtained when p is of order 1/n. Specifically, consider the sequence of graphs G_n = G(n, 1/n + λn^(−4/3)) for λ ∈ ℝ. The limit object can be constructed as follows: First, generate a diffusion W^λ(t) = W(t) + λt − t²/2, where W(t) is a standard Brownian motion. From this process, we define the reflected process R^λ(t) = W^λ(t) − min over 0 ≤ s ≤ t of W^λ(s). This process can be seen as containing many successive excursions (not quite Brownian excursions). Because the drift of W^λ(t) is dominated by −t²/2, these excursions become shorter and shorter as t → ∞. In particular, they can be sorted in order of decreasing lengths: we can partition [0, ∞) into intervals of decreasing lengths such that R^λ restricted to each interval is an excursion. Now, consider such an excursion e. Construct a random graph as follows: Construct a real tree T_e (see Brownian tree). Consider a Poisson point process with unit intensity on the region under the excursion, {(t, y) : 0 ≤ y ≤ e(t)}. To each point of the process corresponds an internal node and a leaf of the tree T_e. Identifying the two vertices in each such pair, the tree T_e becomes a graph G_e. Applying this procedure to each excursion, one obtains a sequence of random graphs of decreasing sizes, which corresponds in a certain sense to the limit object of G_n as n → +∞. See also Rado graph, the graph formed by extending the G(n, p) model to graphs with a countably infinite number of vertices. Unlike in the finite case, the result of this infinite process is (with probability 1) the same graph, up to isomorphism. Dual-phase evolution describes ways in which properties associated with the Erdős–Rényi model contribute to the emergence of order in systems. Exponential random graph models describe a general probability distribution of graphs on "n" nodes given a set of network statistics and various parameters associated with them. Stochastic block model, a generalization of the Erdős–Rényi model for graphs with latent community structure References Literature External links Video: Erdos-Renyi Random Graph Random graphs Renyi model
Erdős–Rényi model
[ "Mathematics" ]
2,061
[ "Mathematical relations", "Graph theory", "Random graphs" ]
10,478,387
https://en.wikipedia.org/wiki/Mimeoscope
In 1914–16, the A.B. Dick Company patented the mimeoscope. A mimeoscope, which is basically a light table, had an electrically illuminated glass top on which the operator traced drawings onto mimeograph stencils. The stencil took the place of tracing paper. The electric light was needed because the stencils were heavier and less transparent than tracing paper. Mimeoscopes were also widely used for illustration and promotional work. Designs, maps, and plans could be easily drawn and copied for quick production and distribution. Customers could add these visuals to their instructions or announcements. Those who did not have time to read an entire document would still be able to look at it and quickly grasp what it was about. References External links Shannon Johnson's Mimeoscope Page American inventions Printing devices
Mimeoscope
[ "Physics", "Technology" ]
171
[ "Physical systems", "Machines", "Printing devices" ]
10,480,045
https://en.wikipedia.org/wiki/Epidemiology%20of%20domestic%20violence
Domestic violence occurs across the world, in various cultures, and affects people across society, at all levels of economic status; however, indicators of lower socioeconomic status (such as unemployment and low income) have been shown to be risk factors for higher levels of domestic violence in several studies. In the United States, according to the Bureau of Justice Statistics in 1995, women reported a six times greater rate of intimate partner violence than men. However, studies have found that men are much less likely to report victimization in these situations. While some sources state that gay and lesbian couples experience domestic violence at the same frequency as heterosexual couples, other sources report that domestic violence rates among gay, lesbian and bisexual people might be higher but more under-reported. By demographic Against women Domestic violence against women has occurred for centuries. Domestic violence encompasses any and all physical, sexual, and verbal assaults on an individual's body, sense of self, or sense of trust. It was not recognized as a worldwide issue, or even as an issue at all in most countries, until the 1980s. A study conducted by the World Health Organization (WHO) in 1997 determined that 5–20% of healthy years of life lost among women aged 15–44 were due to domestic violence. Domestic violence has since been recognized as an issue by most UN countries, and further studies have been conducted to compile domestic violence statistics for individual countries and to identify ways to decrease rates. According to various national surveys, the percentage of women who were ever physically assaulted by an intimate partner varies substantially by country: Barbados (30%), Canada (29%), Egypt (34%), New Zealand (35%), Switzerland (21%), United States (33%). Some surveys in specific places report figures as high as 50–70% of women who were ever physically assaulted by an intimate partner. Others, including surveys in the Philippines and Paraguay, report figures as low as 10%. Statistics published in 2004 show that the rate of domestic violence victimisation for Indigenous women in Australia may be 40 times the rate for non-Indigenous women. 80% of women surveyed in rural Egypt said that beatings were common and often justified, particularly if the woman refused to have sex with her husband. Up to two-thirds of women in certain communities in Nigeria's Lagos State say they are victims of domestic violence. In Turkey 42% of women over 15 have suffered physical or sexual violence. In India, around 70% of women are victims of domestic violence. Between 1993 and 2001, U.S. women reported intimate partner violence almost seven times more frequently than men (a ratio of 20:3). Statistics for the year 1994 showed that more than five times as many females reported being victimized by an intimate than did males. Pregnancy Domestic violence during pregnancy can be missed by medical professionals because it often presents in non-specific ways. A number of countries have been statistically analyzed to calculate the prevalence of this phenomenon: UK prevalence: 3.4% USA prevalence: 3.2–33.7% Ireland prevalence: 12.5% Rates are higher in teenagers Severity and frequency increase postpartum (10% antenatally vs.
19% postnatally); 21% at three months postpartum There are a number of presentations that can be related to domestic violence during pregnancy: delay in seeking care for injuries; late booking, non-attendance at appointments, or self-discharge; frequent attendance with vague problems; an aggressive or over-solicitous partner; burns, pain, tenderness, and injuries; vaginal tears, bleeding, and STDs; and miscarriage. Domestic violence against a pregnant woman can also affect the fetus and can have lingering effects on the child after birth. Physical abuse is associated with neonatal death (1.5% versus 0.2%), and verbal abuse is associated with low birth weight (7.6% versus 5.1%). Against men Due to social stigmas regarding male victimization, men who are victims of domestic violence face an increased likelihood of being overlooked by healthcare providers. While much attention has been focused on domestic violence against women, researchers have shown that domestic violence experienced by men from other men also needs attention. The issue of victimization of men by women has been contentious, due in part to studies which report drastically different statistics regarding domestic violence. Severe perpetration of physical violence tends to be committed by men, and victimization reports generally show women being more likely to experience domestic violence than men. A 2013 review of the literature that combined perpetration and victimization reports indicates that, worldwide, most studies only look at female victimization. The review examined studies from five continents and the correlation between a country's level of gender inequality and rates of domestic violence. The authors found that when partner abuse is defined broadly to include emotional abuse, any kind of hitting, and who hits first, partner abuse is relatively even. They also stated that if one examines who is physically harmed and how seriously, who expresses more fear, and who experiences subsequent psychological problems, domestic violence is significantly gendered, with women as victims. Sherry Hamby argues that victimization reports are more reliable than perpetration reports, and that studies showing women more likely than men to suffer domestic violence are therefore the accurate ones. A 2016 meta-analysis indicated that the only risk factors for the perpetration of intimate partner violence that differ by gender are witnessing intimate partner violence as a child, alcohol use, male demand, and female withdrawal communication patterns. Among LGBT people Some sources state that gay and lesbian couples experience domestic violence at the same frequency as heterosexual couples, while other sources state domestic violence among gay and lesbian couples might be higher than among heterosexual couples, that gay, lesbian, and bisexual individuals are less likely to report domestic violence that has occurred in their intimate relationships than heterosexual couples are, or that lesbian couples experience domestic violence less than heterosexual couples do. By contrast, some researchers commonly assume that lesbian couples experience domestic violence at the same rate as heterosexual couples, and have been more cautious when reporting domestic violence among gay male couples. In a survey by the Canadian Government, some 19% of lesbian women reported being victimized by their partners. Other research reports that lesbian relationships exhibit substantially higher rates of physical aggression.
Against children The U.S. Department of Health and Human Services reports that for each year between 2000 and 2005, "female parents acting alone" were the most common perpetrators of child abuse. When it comes to domestic violence towards children involving physical abuse, research in the UK by the NSPCC indicated that "most violence occurred at home" (78%). Forty to sixty percent of men and women who abuse other adults also abuse their children. Girls whose fathers batter their mothers are 6.5 times more likely to be sexually abused by their fathers than are girls from non-violent homes. In China in 1989, 39,000 baby girls died during their first year of life because they did not receive the same medical care that would be given to a male child. In Asia alone, about one million children working in the sex trade are held in slavery-like conditions. Between teenagers Teen dating violence is a pattern of controlling behavior by one teenager over another teenager in the context of a dating relationship. While there are many similarities to "traditional" domestic violence, there are also some differences. Teens are much more likely than adults to become isolated from their peers as a result of controlling behavior by their romantic partner. Also, for many teens the abusive relationship may be their first dating experience, and so they may lack a "normal" dating experience with which to compare it. While teenagers are trying to establish their sexual identities, they are also confronting violence in their relationships and exposure to technology. Studies document that teenagers are experiencing significant amounts of dating or domestic violence. Depending on the population studied and the way dating violence is defined, between 9 and 35% of teens have experienced domestic violence in a dating relationship. When a broader definition of abuse that encompasses physical, sexual, and emotional abuse is used, one in three teen girls is subjected to dating abuse. Additionally, a significant number of teens are victims of stalking by intimate partners. Although involvement with romantic relationships is a critical aspect of adolescence, these relationships also present serious risks for teenagers. Unfortunately, adolescents in dating relationships are at greater risk of intimate partner violence than any other age group. Approximately one-third of adolescent girls are victims of physical, emotional, or verbal abuse from a dating partner. Estimates of sexual victimization range from 14% to 43% for girls and 0.3% to 36% for boys. According to the Centers for Disease Control and Prevention, in 2009, nearly 10% of students nationwide had been intentionally hit, slapped, or physically hurt by their boyfriend or girlfriend. Twenty-six percent of girls in a relationship reported being threatened with violence or experiencing verbal abuse; 13% reported being physically hurt or hit. Measuring Measures of the incidence of violence in intimate relationships can differ markedly in their findings depending on the measures used. Care is needed when using domestic violence statistics to ensure that both gender bias and under-reporting issues do not affect the inferences that are drawn from the statistics. Some researchers, such as Michael P. Johnson, suggest that where and how domestic violence is measured also affects findings, and caution is needed to ensure statistics drawn from one class of situations are not applied to another class of situations in a way that might have fatal consequences.
Other researchers, such as David Murray Fergusson, counter that domestic violence prevention services, and the statistics that they produce, target the extreme end of domestic violence and the prevention of child abuse rather than domestic violence between couples. Europe A 1992 Council of Europe study on domestic violence against women found that one in four women experience domestic violence over their lifetimes and between 6 and 10% of women suffer domestic violence in a given year. In the European Union, DV is a serious problem in the Baltic States. These three countries – Estonia, Latvia, and Lithuania – have also lagged behind most post-communist countries in their response to DV. The problem in these countries is severe, and in 2013 a DV victim won a European Court of Human Rights case against Lithuania. United Kingdom The British Crime Survey for 2006–2007 reported that 0.5% of people (0.6% of women and 0.3% of men) reported being victims of domestic violence during that year and 44.3% of domestic violence was reported to the police. According to the survey, 312,000 women and 93,000 men were victims of domestic violence. The Northern Ireland Crime Survey for 2005 reported that 13% of people (16% of women and 10% of men) reported being victims of domestic violence at some point in their lives. The National Study of Domestic Abuse for 2005 reported that 213,000 women and 88,000 men reported being victims of domestic violence at some point in their lives. According to the study, one in seven women and one in sixteen men were victims of severe physical abuse, severe emotional abuse, or sexual abuse. France France experienced domestic violence against both sexes for decades without the government addressing the issue. The report Gender-Based Violence Against Women in Contemporary France determined the number of deaths relating to domestic violence from 2010 to 2014: 653 of the deaths in this time period were female victims killed by their male partners, and 125 were male victims killed by their female partners. These findings were determined after the French government conducted research into domestic violence toward women compared to men, following the Istanbul Convention. Since these findings were published, domestic violence in France has decreased, owing to the passage of the Law for Equality between Women and Men. Germany In Germany, domestic violence is a serious issue for women. According to Sexual Violence against Women in Germany, there is significant domestic violence toward women at any given time. The Criminological Research Institute of Lower Saxony conducted research involving women residing in Germany and found that, out of 4,450 women, 5.4% will experience domestic violence during their lifetime. The German government has attempted to decrease the number of domestic violence victims among women. Various programs have been implemented, including an information campaign to make women aware of risk factors, such as multiple partners or substance abuse, that lead to an increased chance of experiencing domestic violence. North America Canada In Canada, the Assembly of First Nations evaluation of the Canada Prenatal Nutrition Program conducted by CIET offers an inclusive and relatively unbiased national estimate. It documented domestic violence in a random sample of 85 First Nations across Canada: 22% (523 of 2,359) of mothers reported suffering abuse in the year prior to being interviewed; of these, 59% reported physical abuse.
Results of studies which estimate the prevalence of domestic violence vary significantly, depending on the specific wording of survey questions, how the survey is conducted, the definition of abuse or domestic violence used, the willingness or unwillingness of victims to admit that they have been abused, and other factors. For instance, Straus (2005) conducted a study which estimated that the rate of minor assaults by women in the United States was 78 per 1,000 couples, compared with a rate for men of 72 per 1,000, and that the severe assault rate was 46 per 1,000 couples for assaults by women and 50 per 1,000 for assaults by men. Neither difference is statistically significant. He claimed that since these rates were based exclusively on information provided by women respondents, the near-equality in assault rates could not be attributed to a gender bias in reporting. One analysis found that "women are as physically aggressive or more aggressive than men in their relationships with their spouses or male partners". However, studies have shown that women are more likely to be injured. Archer's meta-analysis found that women in the United States suffer 65% of domestic violence injuries. A Canadian study showed that 7% of women and 6% of men were abused by their current or former partners, but female victims of spousal violence were more than twice as likely to be injured as male victims, three times more likely to fear for their life, twice as likely to be stalked, and twice as likely to experience more than ten incidents of violence. However, Straus notes that Canadian studies on domestic violence have simply excluded questions that ask men about being victimized by their wives. According to a 2004 survey in Canada, the percentage of males being physically or sexually victimized by their partners was 6%, versus 7% for women. However, females reported higher levels of repeated violence and were more likely than men to experience serious injuries; 23% of females versus 15% of males were faced with the most serious forms of violence, including being beaten, choked, or threatened with or having a gun or knife used against them. Also, 21% of women versus 11% of men were likely to report experiencing more than 10 violent incidents. Among those who experienced physical or sexual violence from a current partner, 44% of women suffered an injury, compared with 18% of men. In cases involving extremely abusive partners, victims may come to fear for their lives: statistics show that 34% of women feared for their lives, compared with 10% of men. Some studies show that lesbian relationships have similar levels of violence as heterosexual relationships. United States Approximately 1.3 million women and 835,000 men report being physically assaulted by an intimate partner annually in the United States. In the United States, domestic violence is the leading cause of injury to women between the ages of 15 and 44. Victims of DV are offered legal remedies, which include the criminal law, as well as obtaining a protection order. The remedies offered can be both of a civil nature (civil orders of protection and other protective services) and of a criminal nature (charging the perpetrator with a criminal offense). People perpetrating DV are subject to criminal prosecution, most often under assault and battery laws.
Russia In Russia, according to a representative of the Russian Ministry of Internal Affairs, one in four families experiences domestic violence. Domestic violence is not a specific criminal offense: it can be charged under various crimes of the criminal code (e.g. assault), but in practice cases of domestic violence turn into criminal cases only when they involve severe injuries, or the victim has died. For more details see Domestic violence in Russia. Asia In Turkey 42% of women over 15 have suffered physical or sexual violence. Fighting the prevalence of domestic violence in Kashmir has brought Hindu and Muslim activists together. According to some Islamic clerics and women's advocates, women from Muslim-majority cultures often face extra pressure to submit to domestic violence, as their husbands may manipulate Islamic law to exert their control. One study found that half of Palestinian women have been the victims of domestic violence. A study on Bedouin women in Israel found that most have experienced DV, most accepted it as a decree from God, and most believed they themselves were to blame for the violence. The study also showed that the majority of women were not aware of existing laws and policies which protect them: 60% said they did not know what a restraining order was. In Iraq husbands have a legal right to "punish" their wives. The criminal code states at Paragraph 41 that there is no crime if an act is committed while exercising a legal right; examples of legal rights include: "The punishment of a wife by her husband, the disciplining by parents and teachers of children under their authority within certain limits prescribed by law or by custom". In Jordan, part of article 340 of the Penal Code states that "he who discovers his wife or one of his female relatives committing adultery and kills, wounds, or injures one of them, is exempted from any penalty." This has twice been put forward for cancellation by the government, but was retained by the Lower House of the Parliament, in 2003: a year in which at least seven honor killings took place. Article 98 of the Penal Code is often cited alongside Article 340 in cases of honor killings. "Article 98 stipulates that a reduced sentence is applied to a person who kills another person in a 'fit of fury'". Human Rights Watch found that up to 90% of women in Pakistan were subject to some form of maltreatment within their own homes. Honor killings are a very serious problem in Pakistan, especially in the north of the country. In Pakistan, honor killings are known locally as karo-kari. Karo-kari is a compound word literally meaning "black male" (karo) and "black female" (kari). Domestic violence in India is widespread, and is often related to the custom of dowry. Honor killings are more common in some regions of India, particularly in northern regions of the country. Honor killings have been reported in the states of Punjab, Rajasthan, Haryana, Uttar Pradesh, and Bihar, as a result of people marrying without their family's acceptance, and sometimes for marrying outside their caste or religion. Africa A UN report compiled from a number of different studies conducted in at least 71 countries found domestic violence against women to be most prevalent in Ethiopia. Up to two-thirds of women in certain communities in Nigeria's Lagos State say they are victims of domestic violence. 80% of women surveyed in rural Egypt said that beatings were common and often justified, particularly if the woman refused to have sex with her husband.
Oceania Australia Statistics published in 2004 show that the rate of domestic violence victimisation for Indigenous women in Australia may be 40 times the rate for non-Indigenous women. Findings from the 2006 Australian Bureau of Statistics Personal Safety Survey show that among the female victims of physical assault, 31% were assaulted by a current or previous partner. Among male victims, 4.4% were assaulted by a current or previous partner. Thirty percent of people who had experienced violence by a current partner since the age of 15 were male, and seventy percent were female. References External links World Report on Violence Against Children, Secretary-General of the United Nations Hidden in Plain Sight: A statistical analysis of violence against children, UNICEF Abuse Violence Domestic violence Epidemiology Crime statistics
Epidemiology of domestic violence
[ "Biology", "Environmental_science" ]
4,081
[ "Behavior", "Abuse", "Violence", "Aggression", "Epidemiology", "Environmental social science", "Human behavior" ]
4,705,100
https://en.wikipedia.org/wiki/Nernst%E2%80%93Planck%20equation
The Nernst–Planck equation is a conservation of mass equation used to describe the motion of a charged chemical species in a fluid medium. It extends Fick's law of diffusion for the case where the diffusing particles are also moved with respect to the fluid by electrostatic forces. It is named after Walther Nernst and Max Planck. Equation The Nernst–Planck equation is a continuity equation for the time-dependent concentration c of a chemical species: ∂c/∂t + ∇·J = 0, where J is the flux. It is assumed that the total flux is composed of three elements: diffusion, advection, and electromigration. This implies that the concentration is affected by an ionic concentration gradient ∇c, flow velocity v, and an electric field E: J = −D∇c + cv + (Dze/(k_B T)) cE, where D is the diffusivity of the chemical species, z is the valence of the ionic species, e is the elementary charge, k_B is the Boltzmann constant, and T is the absolute temperature. The electric field may be further decomposed as: E = −∇φ − ∂A/∂t, where φ is the electric potential and A is the magnetic vector potential. Therefore, the Nernst–Planck equation is given by: ∂c/∂t = ∇·[D∇c − cv + (Dze/(k_B T)) c(∇φ + ∂A/∂t)]. Simplifications Assuming that the concentration is at equilibrium (∂c/∂t = 0) and the flow velocity is zero, meaning that only the ion species moves, the Nernst–Planck equation takes the form: ∇·[D∇c + (Dze/(k_B T)) c(∇φ + ∂A/∂t)] = 0. Rather than a general electric field, if we assume that only the electrostatic component is significant, the equation is further simplified by removing the time derivative of the magnetic vector potential: ∇·[D∇c + (Dze/(k_B T)) c∇φ] = 0. Finally, in units of mol/(m2·s) and the gas constant R, one obtains the more familiar form: ∇·[D∇c + (DzF/(RT)) c∇φ] = 0, where F is the Faraday constant, equal to 96485 C/mol: the product of the Avogadro constant and the elementary charge. Applications The Nernst–Planck equation is applied in describing the ion-exchange kinetics in soils. It has also been applied to membrane electrochemistry. See also Goldman–Hodgkin–Katz equation Bioelectrochemistry References Walther Nernst Diffusion Physical chemistry Electrochemical equations Statistical mechanics Max Planck Transport phenomena Electrochemistry
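As an illustration of the simplified electrostatic form, the following Python sketch integrates the one-dimensional Nernst–Planck equation with an explicit finite-difference scheme for a monovalent ion in a fixed, uniform electric field; the grid, time step, boundary treatment, and all parameter values are illustrative assumptions rather than part of the equation itself:

    # Minimal sketch: explicit 1D finite differences for
    # dc/dt = d/dx [ D dc/dx + (D z F / (R T)) c dphi/dx ].
    # All numbers below (domain, D, field, initial profile) are illustrative.

    D = 1e-9          # diffusivity, m^2/s
    z = 1             # valence
    F = 96485.0       # Faraday constant, C/mol
    R = 8.314         # gas constant, J/(mol K)
    T = 298.0         # temperature, K
    dphi_dx = -100.0  # uniform potential gradient, V/m (i.e. E = +100 V/m)

    nx, L = 101, 1e-4                 # grid points, domain length (m)
    dx = L / (nx - 1)
    mu = D * z * F / (R * T)          # electromigration mobility factor
    dt = 0.2 * dx * dx / D            # stable step for explicit diffusion

    # Initial condition: a step of concentration in the left half.
    c = [1.0 if i < nx // 2 else 0.0 for i in range(nx)]

    for _ in range(2000):
        flux = [0.0] * (nx - 1)       # flux at cell interfaces
        for i in range(nx - 1):
            c_mid = 0.5 * (c[i] + c[i + 1])
            # J = -D dc/dx - mu * c * dphi/dx
            flux[i] = -D * (c[i + 1] - c[i]) / dx - mu * c_mid * dphi_dx
        for i in range(1, nx - 1):    # interior update; ends held fixed
            c[i] -= dt * (flux[i] - flux[i - 1]) / dx

    print("concentration at x = L/4, L/2, 3L/4:",
          [round(c[i], 3) for i in (nx // 4, nx // 2, 3 * nx // 4)])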
Nernst–Planck equation
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
410
[ "Transport phenomena", "Physical phenomena", "Applied and interdisciplinary physics", "Diffusion", "Chemical engineering", "Mathematical objects", "Equations", "Electrochemistry", "nan", "Statistical mechanics", "Physical chemistry", "Electrochemical equations" ]
4,709,191
https://en.wikipedia.org/wiki/PA512
PA512 (Serbian ПА512) was an industrial programmable logic controller - a portable computer developed by the Ivo Lola Ribar Institute of Serbia in 1980. It was followed six years later by the LPA512. Portable computers Industrial automation
PA512
[ "Technology", "Engineering" ]
53
[ "Computer hardware stubs", "Automation", "Industrial engineering", "Computing stubs", "Industrial automation" ]
4,709,391
https://en.wikipedia.org/wiki/Biosafety%20Clearing-House
The Biosafety Clearing-House is an international mechanism that exchanges information about the movement of genetically modified organisms, established under the Cartagena Protocol on Biosafety. It assists Parties (i.e. governments that have ratified the Protocol) to implement the protocol’s provisions and to facilitate sharing of information on, and experience with, living modified organisms (also known as genetically modified organisms, GMOs). It further assists Parties and other stakeholders to make informed decisions regarding the importation or release of GMOs. The Biosafety Clearing-House Central Portal is accessible through the Web. The BCH is a distributed system, and information in it is owned and updated by the users themselves through an authenticated system to ensure timeliness and accuracy. Mandate Article 20, paragraph 1 of the Cartagena Protocol on Biosafety established the BCH as part of the clearing-house mechanism of the Convention on Biological Diversity, in order to: (a) Facilitate the exchange of scientific, technical, environmental and legal information on, and experience with, living modified organisms; and (b) Assist Parties to implement the Protocol, taking into account the special needs of developing country Parties, in particular the least developed and small island developing States among them, and countries with economies in transition as well as countries that are centres of origin and centres of genetic diversity. First use in international law The BCH differs from other similar mechanisms established under other international legal agreements because it is in fact essential for the successful implementation of its parent body, the Protocol. It was the first Internet-based information-exchange mechanism created that must be used to fulfil certain international legal obligations - not only do Parties to the Protocol have a legal obligation to provide certain types of information to the BCH within defined time-frames, but certain provisions cannot be implemented without use of the BCH. For example, under Article 11.1 of the Protocol, a decision taken on domestic use of a GMO that might cross international borders (this includes placing on the market) must be advised to other potentially affected Parties through the BCH within 15 days of making the decision to allow them to assess potential impacts on their own territories. This is in contrast to the Advance Informed Agreement procedure which is a more traditional bilateral discussion between importers and exporters to obtain prior informed consent before releasing GMOs into the environment. Interoperability and the Central Portal of the BCH The Biosafety Clearing-House is designed to be interoperable with other databases, so governments may register their information with the central Biosafety Clearing-House database, or with another (interoperable) database of their choice. The location of the information makes no difference to the user, who is able to retrieve all information through the Central Portal of the Biosafety Clearing-House. 
To date, a number of relevant databases have been identified and are interoperable with the Central Portal, including national sites such as the United States Regulatory Agencies Unified Biotechnology Website and the Swiss Biosafety Clearing-House, and international databases such as the Organisation for Economic Co-operation and Development (OECD) unique identification database, the International Centre for Genetic Engineering and Biotechnology (ICGEB) biosafety publications database and the Food and Agriculture Organization (FAO) GM Foods Platform. Information in the Biosafety Clearing-House Categories of information in the BCH The BCH contains information that must be provided by Parties to the Protocol, such as decisions on release or importation of GMOs, risk assessments, competent national authorities, and national laws; as well as other relevant information and resources, including information on capacity-building, a roster of government-nominated experts in the field, and links to other websites and databases through the Biosafety Information Resource Centre. Governments that are not Parties to the Protocol are also encouraged to contribute information to the BCH, and in fact a large number of the decisions in the BCH have been registered by two non-Party governments (Canada and the United States). Organisation of information in the BCH The BCH uses common formats for reporting information from distributed sources, and standardized terminology or "controlled vocabulary" to categorize the information contained within the databases. This allows the many users of the BCH to use the same terms whether they are registering information or searching for it, including synonyms within a language, relationships between terms, and correspondences between languages. To enable access to global information, the BCH operates in all six UN languages for both reporting and retrieving data (English, French, Spanish, Russian, Arabic and Chinese). Capacity Building for participation in the BCH Recognising the importance of Parties using and participating in the BCH, the Global Environment Facility approved, in March 2004, a USD $13 million project entitled "Building Capacity for Effective Participation in the Biosafety Clearing House (BCH) of the Cartagena Protocol" to assist eligible Parties of the Protocol. 139 countries are eligible for funding under this project. External links Biosafety Clearing-House Central Portal Biosafety Protocol Homepage UNEP-GEF Project on Building Capacity for Effective Participation in the Biosafety Clearing House of the Cartagena Protocol References Secretariat of the Convention on Biological Diversity (2000) Cartagena Protocol on Biosafety to the Convention on Biological Diversity: text and annexes. Montreal, Canada. Galloway McLean, K (2005): 'Bridging the gap between researchers and policy-makers: International collaboration through the Biosafety Clearing-House' Environmental Biosafety Research 4 (2005) 123-126 Health risk Biodiversity Convention on Biological Diversity Genetically modified organisms
Biosafety Clearing-House
[ "Engineering", "Biology" ]
1,160
[ "Convention on Biological Diversity", "Biodiversity", "Genetic engineering", "Genetically modified organisms" ]
22,192,834
https://en.wikipedia.org/wiki/Random%20tree
In mathematics and computer science, a random tree is a tree or arborescence that is formed by a stochastic process. Types of random trees include: Uniform spanning tree, a spanning tree of a given graph in which each different tree is equally likely to be selected Random minimal spanning tree, spanning trees of a graph formed by choosing random edge weights and using the minimum spanning tree for those weights Random binary tree, binary trees with various random distributions, including trees formed by random insertion orders, and trees that are uniformly distributed with a given number of nodes Random recursive tree, increasingly labelled trees, which can be generated using a simple stochastic growth rule. Treap or randomized binary search tree, a data structure that uses random choices to simulate a random binary tree for non-random update sequences Rapidly exploring random tree, a fractal space-filling pattern used as a data structure for searching high-dimensional spaces Brownian tree, a fractal tree structure created by diffusion-limited aggregation processes Random forest, a machine-learning classifier based on choosing random subsets of variables for each tree and using the most frequent tree output as the overall classification Branching process, a model of a population in which each individual has a random number of children See also Lightning tree External links Trees (graph theory) Probabilistic data structures Random graphs
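Several of the models listed above are easy to generate directly. As one concrete example, the following Python sketch grows a random recursive tree by the simple stochastic growth rule mentioned in the list: each new node attaches to a uniformly random existing node. The parent-array representation is a choice made here for brevity, not part of the definition:

    # Minimal sketch: grow a random recursive tree with n nodes.
    # Node 0 is the root; each new node i attaches to a uniformly
    # random node among 0..i-1.
    import random

    def random_recursive_tree(n):
        """Return parent[i] for each node i (node 0 is the root, parent None)."""
        parent = [None]  # root has no parent
        for i in range(1, n):
            parent.append(random.randrange(i))
        return parent

    tree = random_recursive_tree(10)
    for node, par in enumerate(tree):
        print(f"node {node}: parent {par}")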
Random tree
[ "Mathematics" ]
271
[ "Mathematical relations", "Graph theory", "Random graphs" ]
12,763,523
https://en.wikipedia.org/wiki/Brifentanil
Brifentanil (A-3331) is an opioid analgesic that is an analogue of fentanyl and was developed in the early 1990s. Brifentanil is most similar to highly potent, short-acting fentanyl analogues such as alfentanil. The effects of brifentanil are very similar to those of alfentanil, with strong but short lasting analgesia and sedation, and particularly notable itching and respiratory depression. Side effects of fentanyl analogs are similar to those of fentanyl itself, which include itching, nausea and potentially serious respiratory depression, which can be life-threatening. Fentanyl analogs have killed hundreds of people throughout Europe and the former Soviet republics since the most recent resurgence in use began in Estonia in the early 2000s, and novel derivatives continue to appear. References Synthetic opioids Piperidines Tetrazoles 2-Fluorophenyl compounds Anilines Ethers Ureas Acetamides Mu-opioid receptor agonists
Brifentanil
[ "Chemistry" ]
220
[ "Organic compounds", "Functional groups", "Ethers", "Ureas" ]
12,765,046
https://en.wikipedia.org/wiki/Electrostatic%20spray-assisted%20vapour%20deposition
Electrostatic spray-assisted vapour deposition (ESAVD) is a technique (developed by a company called IMPT) to deposit both thin and thick layers of a coating onto various substrates. In simple terms, chemical precursors are sprayed across an electrostatic field towards a heated substrate; the chemicals undergo a controlled chemical reaction and are deposited on the substrate as the required coating. Electrostatic spraying techniques were developed in the 1950s for the spraying of ionised particles onto charged or heated substrates. ESAVD (branded by IMPT as Layatec) is used for many applications in many markets, including: Thermal barrier coatings for jet engine turbine blades Various thin layers in the manufacture of flat panel displays and photovoltaic panels, CIGS and CZTS-based thin-film solar cells. Electronic components Biomedical coatings Glass coatings (such as self-cleaning) Corrosion protection coatings The process has advantages over other techniques for layer deposition (plasma, electron-beam) in that it does not require a vacuum, an electron beam, or a plasma, and so reduces manufacturing costs. It also uses less power and fewer raw materials, making it more environmentally friendly. The use of the electrostatic field also means that the process can coat complex 3D parts easily. References Further reading "Kwang-Leong Choy – Laying It on Thick And Thin". Materials World. June 2003. Electrostatic spray-assisted vapour deposition (ESAVD), first reported in Materials World in March 1998, is a method for fabricating films and nanocrystalline powders. The inventor describes ongoing progress. Choy, K. L., "Process principles and applications of novel and cost-effective ESAVD based methods", in Innovative Processing of Films and Nanocrystalline Powders, K. L. Choy. ed. (World Scientific Publishing Company). 2002. pp. 15–69. . Choy, K. L., "Review of advances in processing methods: films and nanocrystalline powders", in Innovative Processing of Films and Nanocrystalline Powders, Choy, K. L. ed. (Imperial College Press), 2002, 1–14. Choy, K. L., Progress in Materials Science, 48, 57(2003). Choy, K. L., "Vapor Processing of nanostructured materials", in Handbook of nanostructured materials and nanotechnology, Nalwa, H. S. ed. (Academic Press) 2000, 533. . Choy, K. L., Feist, J. P., Heyes, A. L. and Su, B., J. Mater. Res. 14 (1999) 3111. Choy, K. L., "Innovative and cost-effective deposition of coatings using ESAVD method", Surface Engineering, 16 (2000) 465. R. Chandrasekhar and K. L. Choy, "Electrostatic spray assisted vapour deposition of fluorine doped tin oxide", Journal of Crystal Growth, 231 (1–2) (2001) 215. Thin film deposition Semiconductor device fabrication Metalworking Coatings
Electrostatic spray-assisted vapour deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
651
[ "Microtechnology", "Thin film deposition", "Coatings", "Thin films", "Semiconductor device fabrication", "Planes (geometry)", "Solid state engineering", "Chemical process stubs" ]
12,765,883
https://en.wikipedia.org/wiki/Metabolic%20imprinting
Metabolic imprinting refers to the long-term physiological and metabolic effects that an offspring's prenatal and postnatal environments have on it. Perinatal nutrition has been identified as a significant factor in determining an offspring's likelihood of being predisposed to developing cardiovascular disease, obesity, and type 2 diabetes, amongst other conditions. During pregnancy, maternal glucose can cross the blood-placental barrier, meaning maternal hyperglycaemia is associated with foetal hyperglycaemia. Maternal insulin, however, cannot cross the blood-placental barrier, so the foetus has to make its own. As a result, if a mother is hyperglycaemic the foetus is likely to be hyperinsulinaemic, which leads to increased levels of growth and adiposity. Maternal undernutrition Maternal undernutrition has been linked with low birth weight and also a number of diseases, including cardiovascular disease, stroke, hypertension and diabetes. When a foetus is in the womb and is not receiving sufficient nutrition, it can adapt by prioritizing organ growth and increasing metabolic efficiency to prepare itself for life in an energy-deficient environment. Postnatally, when given the correct nutrition, such babies exhibit 'catch-up growth', potentially leading to obesity and other related complications. Studies based around restricting animals' food intake throughout gestation have found that a reduction of just 30% of normal intake can cause low birth weight and increase sensitivity to high-fat-diet-induced obesity. In animal models, intrauterine undernutrition has been shown to be associated with hypertension later in life. This is because the formation of the kidneys is inhibited, which decreases filtration and flow rate through the nephrons, leading to increased blood pressure. More extreme prenatal conditions such as famine have been shown to have effects on the neurodevelopment of a foetus. After the Dutch Famine of the winter of 1944–1945, it was found that the risk of schizophrenia was significantly higher in those conceived at the height of the famine, as was the prevalence of schizoid personality. Maternal over-nutrition Maternal overnutrition can have detrimental effects on the health of the offspring later in life. This area is less well studied and understood, but some progress has been made in identifying specific genes that are affected. Studies have investigated hypermethylation of DNA and found it to be higher in obese mothers than in those of a healthy BMI. More specific studies have investigated leptin (LEP) as a possible gene which is altered via metabolic imprinting in response to overnutrition in utero, and have found hypermethylation of LEP in the placenta of those born to overly nourished mothers. This hypermethylation has been found to cause changes in the levels of circulating leptin, as well as in leptin sensitivity and in the development of neural circuits involved in the control of homeostasis, which underlies the higher risk of metabolic disease. Upon investigation it was found that a mother who was obese before conception was likely to have a higher level of placental LEP than a mother of a healthy weight.
One strategy for overcoming obesity is the use of gastric bypass and other such surgeries; while this does not entirely alleviate the risk of altered metabolic imprinting, it has been found that siblings born after maternal surgery are less likely to have body fat percentages as high as those of siblings born before the surgery. Paternal overnutrition can also have a detrimental effect: newborns have shown changes in DNA methylation generally, with substantial hypomethylation at the insulin-like growth factor 2 (IGF2) gene. However, this topic is much less studied than maternal nutrition. Maternal/gestational diabetes An increase in certain hormones such as oestrogen, progesterone, human placental lactogen, human placental growth hormone and cortisol during the second and third trimester of pregnancy causes an increase in insulin resistance. This increase in insulin resistance, and the following increase in insulin secretion, ensures that the foetus develops a normal glucose tolerance. Gestational diabetes mellitus (GDM) arises when beta cells do not secrete enough insulin to compensate for the insulin resistance triggered by pregnancy, which leads to mild hyperglycaemia. Although the mechanisms are still largely unknown, foetal exposure to GDM and maternal diabetes has been shown to lead to lifelong metabolic complications because of metabolic imprinting. The risk of type 2 diabetes developing in offspring is significantly higher where the mother was diagnosed with type 2 diabetes before pregnancy rather than after. In addition, the age at which offspring are diagnosed with type 2 diabetes is significantly younger in offspring exposed to maternal diabetes/GDM than in those who are not. It is suggested that this is a result of DNA methylation during foetal development. References Epigenetics Metabolism
Metabolic imprinting
[ "Chemistry", "Biology" ]
1,010
[ "Cellular processes", "Biochemistry", "Metabolism" ]
12,767,009
https://en.wikipedia.org/wiki/Plate%20count%20agar
Plate count agar (PCA), also called standard methods agar (SMA), is a microbiological growth medium commonly used to assess or to monitor "total" or viable bacterial growth of a sample. PCA is not a selective medium. The total number of living aerobic bacteria can be determined using plate count agar, which provides a substrate for bacteria to grow on. The medium contains casein, which provides nitrogen, carbon, amino acids, vitamins and minerals to aid the growth of the organism. Yeast extract is the source of vitamins, particularly of the B group. Glucose is the fermentable carbohydrate and agar is the solidifying agent. This is a non-selective medium, and bacteria are counted as colony-forming units per gram (CFU/g) in solid samples and per millilitre (CFU/mL) in liquid samples. Pour plate technique The pour plate technique is the typical technique used to prepare plate count agars. Here, the inoculum is added to the molten agar before pouring the plate. The molten agar is cooled to about 45 degrees Celsius and is poured using a sterile method into a petri dish containing a specific diluted sample. From here, the plates are rotated to ensure the samples mix uniformly with the agar. The plates are then incubated for about 3 days at 20 to 30 degrees Celsius. Composition A typical formulation contains, per litre of distilled water: enzymatic digest of casein (5.0 g), yeast extract (2.5 g), glucose (1.0 g) and agar (15.0 g), with a final pH of 7.0 ± 0.2. Benefits easy to perform; larger sample volume than the surface spread method, allowing for detection of lower microbiological concentrations; the agar surface does not have to be pre-dried; the number of microbes/mL in a specimen can be determined; previously prepared plates are not needed; possibility of determining bacterial contamination of foods. Obtaining isolated colonies from plate count agars Once a plate has been successfully prepared, cells on the plate count agar will grow into colonies which can be sufficiently isolated to determine the original cell type. The colony-forming unit (CFU) is an appropriate description of the colony's origin. In plate counts, colonies are counted, but the count is usually recorded in CFU. Because colonies growing on plates may begin as either a single cell or a cluster of cells, CFU allows for a correct description of the cell density. The streak plate method helps identify an unknown microbe by producing individual colonies on an agar plate, which allows the CFU method to be used: Beginning the streak pattern. Label the base of the plate. Then, visualize the plate in four quadrants: top left (I), top right (II), bottom right (III), bottom left (IV). Streak the mixed culture back and forth in the first quadrant (top left) of the agar plate. Do not cut the agar; simply scrape the top. Flame the loop to remove culture residue. Wait for it to cool before the next quadrant. Streaking again. Proceed to the second quadrant with streaking. Streaks on the medium will overlap. Flame the loop to remove culture residue. Wait for it to cool before the next quadrant. Streaking yet again. Rotate the plate 180 degrees to get a proper streaking angle in the third quadrant. Be sure to cool the loop before streaking in quadrant four. Streaking in the center. Streak one last time, beginning in quadrant four and moving into the center of the plate. Flame the loop. Incubate the plate for the assigned time and at the appropriate temperature. References 1. "Plate Count Agar (PCA) - Culture Media". Microbe Notes. 2019-05-13. Retrieved 2021-12-06. 2. Aryal, Sagar (2021-07-08). "Streak Plate Method- Principle, Methods, Significance, Limitations". Microbe Notes. Retrieved 2021-12-07. Microbiological media
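The viable count described above reduces to simple arithmetic: divide the colony count by the product of the dilution factor and the volume plated. The following is a minimal sketch of that calculation in Python; the function name and example numbers are illustrative assumptions, not values from the article.

```python
def cfu_per_ml(colonies: int, dilution: float, volume_ml: float) -> float:
    """Viable count: CFU/mL = colonies / (dilution * volume plated).

    `dilution` is the fraction of the original sample present in the
    plated suspension, e.g. 1e-4 for the fourth tube of a tenfold series.
    """
    return colonies / (dilution * volume_ml)

# 150 colonies grown from 1 mL of a 10^-4 dilution -> 1.5e6 CFU/mL
print(cfu_per_ml(150, 1e-4, 1.0))
```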
Plate count agar
[ "Biology" ]
784
[ "Microbiological media", "Microbiology equipment" ]
12,767,994
https://en.wikipedia.org/wiki/Triclocarban
Triclocarban (sometimes abbreviated as TCC) is an antibacterial chemical once common in, but now phased out of, personal care products like soaps and lotions. It was originally developed for the medical field. Although its mode of action is unknown, TCC can be effective in fighting infections by targeting the growth of bacteria such as Staphylococcus aureus. Additional research seeks to understand its potential for causing antibacterial resistance and its effects on organismal and environmental health. Usage Triclocarban has been used as an antimicrobial and antifungal compound since the 1960s. It was commonly found in personal care products as an antimicrobial in soaps, lotions, deodorants, toothpaste, and plastic. About 80% of all antimicrobial bar soap sold in the United States contained triclocarban. In 2011, United States consumers were spending nearly 1 billion dollars annually on products containing triclocarban and triclosan. In December 2013, the Food and Drug Administration (FDA) required all companies to prove, within the next year, that triclocarban was not harmful to consumers. Companies like Johnson & Johnson, Procter & Gamble, Colgate-Palmolive, and Avon began phasing out antibacterial ingredients due to health concerns. By 2016, usage of triclocarban in soaps had declined to 40%, and that September the FDA banned triclocarban, triclosan and 17 other common antibacterial chemicals, effective September 2017, for their failure to be proven safe or more effective than plain soap and water. Chemical structure and properties Triclocarban, 3-(4-chlorophenyl)-1-(3,4-dichlorophenyl)urea, is a white powder that is insoluble in water. While triclocarban has two chlorinated phenyl rings, it is structurally similar to carbanilide compounds often found in pesticides (such as diuron) and some drugs. Chlorination of ring structures is often associated with hydrophobicity, persistence in the environment, and bioaccumulation in the fatty tissues of living organisms. For this reason, chlorine is also a common component of persistent organic pollutants. Triclocarban is incompatible with strong oxidizing reagents and strong bases; reactions with these can generate heat and toxic gases or cause explosion. Synthesis of triclocarban There are two commercial routes used for the production of triclocarban, both using the reaction of isocyanates with nucleophiles such as amines to form ureas: 4-chlorophenylisocyanate is reacted with 3,4-dichloroaniline 3,4-dichlorophenylisocyanate is reacted with 4-chloroaniline The purity specification in the draft USP monograph for triclocarban is not less than 97.0% w/w; the purity of commercial production is greater, at 98% w/w. Mechanism of action Bacteria Triclocarban is predominantly active against gram-positive bacteria (bacteria with a thick peptidoglycan wall). The precise mechanism of action of triclocarban is unknown, but it has been shown to be bacteriostatic, preventing bacterial proliferation. Humans The specific mechanism of action for triclocarban's health effects on humans, as in bacteria, is unclear. Generally, in vitro, triclocarban enhances the gene expression of steroid hormones, including androgens, estrogens, and cortisol. It is hypothesized that the compound acts similarly to cofactors or coactivators that modulate the activity of estrogen receptors and androgen receptors. 
Experiments show that triclocarban activates the constitutive androstane receptor and estrogen receptor alpha both in vivo and in vitro and might have the potential to alter normal physiological homeostasis. Activation of these receptors amplifies gene expression and, in doing so, may be the mechanistic basis of triclocarban's health impact on humans. However, further investigation is needed to determine whether triclocarban increases the activity of sex steroid hormones by binding to the receptors or by binding to and sensitizing the receptor coactivators. Antibacterial properties Triclocarban acts to treat both initial bacterial skin and mucosal infections and those infections at risk of superinfection. In vitro, triclocarban has been found to be effective against various strains of staphylococcus, streptococcus, and enterococcus bacteria. It has been shown to be effective as an antibacterial even at very low levels: its minimum inhibitory concentration has been found to range from 0.5 to 8 mg/L for these various strains. Triclocarban is bacteriostatic only for gram-positive bacteria such as Staphylococcus aureus, which suggests that the mechanism of its antibacterial activity is destabilization of bacterial cell walls. Resistance Exposure of organisms like fish, algae, and humans to low levels of triclocarban and other antibacterial chemicals kills weak microbes and allows the stronger, resistant strains to proliferate. As microbes share genes, an increase in resistant strains increases the probability that weak microbes acquire these resistance genes. The consequence is a new colony of drug-resistant microbes. When resistant microbes are exposed to antimicrobials, they increase their expression of the genes that confer this resistance. The risk of bacterial antibiotic resistance has been studied by quantitatively monitoring the abundance of the tetQ gene in wastewater microcosms. As tetQ is the most common resistance gene in the environment and encodes ribosomal protection proteins, its level of expression correlates with the amount of resistance in a microbial population. The addition of triclocarban was shown to increase the expression of this tetQ gene. TetQ gene expression in bacteria was also found to be significantly increased when multiple antimicrobials such as tetracycline, triclosan, and triclocarban were added to an experimental system at the same time. Combining these compounds affects resistance by creating a situation where co-selection (natural selection by more than one reagent) for resistance genes occurs. The complex nature of microbial communities and the multitude of antibiotics present in aquatic environments often lead to this sort of dynamic selection event and the multiple resistance patterns seen in naturally occurring bacteria. Environmental fate When triclocarban is manufactured, 139 toxic, carcinogenic byproducts, such as 4-chloroaniline and 3,4-dichloroaniline, are released. More of these carcinogens can be released upon chemical, physical and biological attack of triclocarban. Triclocarban's period of use in a personal care product is relatively short: upon disposal, it is washed down the drain to municipal wastewater treatment plants, where about 97–98% of triclocarban is removed from the water. Discharge of effluent from these treatment plants and disposal of sludge on land is the primary route of environmental exposure to triclocarban. 
Research shows that triclocarban and triclosan have been detected in sewage effluents and sludge (biosolids) due to their incomplete removal during wastewater treatment. Due to their hydrophobic nature, significant amounts of them in wastewater streams partition into sludge, with concentrations at mg/kg levels. The volume of triclocarban reentering the environment in sewage sludge after initial successful capture from wastewater is 127,000 ± 194,000 kg/yr. This is equivalent to 4.8–48.2% of its total U.S. consumption volume. Crops shown to take up antimicrobials from soil include barley, meadow fescue, carrots and pinto beans. Studies show that substantial quantities of triclocarban (227,000–454,000 kg/yr) can break through wastewater treatment plants and damage algae in surface waters. Environmental concerns Waste water High concentrations of triclocarban may be found in wastewater. As of 2011, it was among the top ten most commonly detected organic wastewater compounds in terms of frequency and concentration. Triclocarban has been found in increasing concentrations over the past five years and is now more frequently detected than triclosan. Wildlife toxicity Triclocarban has a hazard quotient rating of greater than one, which indicates the potential for adverse effects on organisms due to toxicity. As triclocarban is found in high concentrations in aquatic environments, there are concerns regarding its toxicity to aquatic species. Specifically, triclocarban has been shown to be toxic to amphibians, fish, invertebrates, and aquatic plants, and traces of the compound have been found in Atlantic dolphins. Triclocarban may disrupt hormones critical to developmental and endocrine processes in exposed wildlife. The neurological and reproductive systems are particularly affected through contact with this compound. Triclocarban may also affect wildlife behavior. For example, triclosan and triclocarban are 100–1,000 times more effective in inhibiting and killing algae, crustaceans, and fish than they are in killing microbes. Triclocarban and triclosan have been observed in multiple organisms, including algae, aquatic blackworms, fish, and dolphins. Bioaccumulation Triclocarban bioaccumulation is possible in a number of organisms. Earthworms are known to store this chemical in their bodies and, because of their ecological role as a food source, they have the potential to move triclocarban up the food chain. Microbial species found in soils also bioaccumulate triclocarban. However, the health of these microbes has not been found to be affected by the presence of the chemical. Triclocarban is rapidly accumulated in both algae and adult caged snails. Moreover, triclocarban is more likely than triclosan to bioaccumulate in aquatic organisms. Bioaccumulation occurs in plants treated with water containing triclocarban. However, it is estimated that less than 0.5% of the acceptable daily intake of triclocarban for humans is represented by vegetable consumption. Thus, the concentration of triclocarban in edible portions of plants is a negligible exposure pathway for humans. The potential for triclocarban to bioaccumulate in plants has been exploited in the construction of wetlands meant to help remove triclocarban from wastewater. These constructed wetlands are considered a cost-effective treatment option for the removal of PPCPs, including triclocarban and triclosan, from domestic water effluent. Such compounds tend to concentrate in the roots of wetland plants. 
Potential ecological risks associated with this method are reduced root systems in wetland plants, reduced nutrient uptake, decreased competitive ability, and an increased potential for uprooting. Due to these risks, the long-term exposure of wetland ecosystems to wastewater containing triclocarban as a major solution to wastewater pollution is still under discussion. Health concerns Personal care One study has investigated how triclocarban remains in the human system after use of a bar of soap containing triclocarban. Analysis of urine samples from human test subjects shows that, after triclocarban has undergone glucuronidation, its oxidative metabolites are less readily excreted than triclocarban itself. This same study performed topical treatments of triclocarban on rats and, by analyzing urine and plasma levels, demonstrated that triclocarban does remain in the organism's system. Endocrine disorders Triclocarban induces weak responses mediated by the aryl hydrocarbon, estrogen, and androgen receptors in vitro. This has yet to be confirmed in vivo. In vitro, the dihydrotestosterone-dependent activation of androgen receptor-responsive gene expression is enhanced by triclocarban by up to 130%. Triclocarban is also a potent inhibitor of the enzyme soluble epoxide hydrolase (sEH) in vitro. Additionally, triclocarban amplifies the bioactivity of testosterone and other androgens. This increased activity may have adverse implications for reproductive health. In studies on rats, triclocarban exposure increased the size of the prostate gland. The amplification of sex hormones could promote the growth of breast and prostate cancer. The chemical toxicity of triclocarban with respect to lethality is low (>5000 mg/kg). Its rate of skin absorption is also low. Repeated low-dose exposure, however, can cause endocrine disruption over time. Safety Spillage may increase the risk of human, ecological, and environmental exposure to triclocarban. Immediate containment and removal of the spill, including triclocarban dust, is urged. Although triclocarban has few to no direct detrimental effects on health aside from allergic reactions, preventing exposure to triclocarban is recommended. Since triclocarban enters the body through pores, wearing gloves, properly washing hands, and overall proper hygiene reduce the risk of skin exposure and irritation. High concentrations of triclocarban dust may remain in the lungs and inhibit lung and respiratory function. For individuals with prior respiratory conditions, triclocarban exacerbates the severity of respiratory diseases, and proper protection is recommended as a precaution. In case of exposure to triclocarban, individuals are advised to wash the affected area with water or to clear the respiratory pathways. In addition to its adverse effects on humans and the environment, solid triclocarban is a fire hazard. It is particularly combustible as dust. Contamination with oxidizing agents may also result in combustion. Policy The Food and Drug Administration began to review the safety of triclocarban and triclosan in the 1970s, but due to the difficulty of finding antimicrobial alternatives, no final policy, or "drug monograph", was established. Legal action by the Natural Resources Defense Council in 2010 forced the FDA to review triclocarban and triclosan. The United States Environmental Protection Agency maintains regulatory control over triclocarban and triclosan. 
On September 2, 2016, the Food and Drug Administration announced that triclosan and triclocarban must be removed from all antibacterial soap products by late 2017. Triclocarban is similar in its use and adverse health impacts to triclosan and to hexachlorophene, which was already prohibited by the FDA. Current research Scientists are searching for more sustainable antimicrobials that maintain their effectiveness while being minimally toxic to the environment, humans, and wildlife. This entails low degrees of bioaccumulation and rapid, clean biodegradation in existing wastewater treatment facilities. A lowered potential, or no potential, for resistance is also preferable. These next-generation chemicals should aim to act on a broad spectrum of microbes and pathogens while being minimally toxic to, and minimally bioaccumulating in, non-target species. Synthesis of these compounds could be improved by finding renewable sources for their production that lack occupational hazards. Research into sustainable chemical production is helping to formulate green pharmaceuticals. These same principles may be applied to the development of improved antimicrobials. Developments in this area would benefit both people and the environment. See also Antibacterial soap Chlorine Dial (soap) Prostate cancer Bioaccumulation Breast cancer Triclosan Sludge Hand sanitizer Deodorant Sewage treatment References Antimicrobials Chloroarenes Endocrine disruptors Fungicides Ureas Xenoestrogens 4-Chlorophenyl compounds Drugs with unknown mechanisms of action
Triclocarban
[ "Chemistry", "Biology" ]
3,413
[ "Fungicides", "Antimicrobials", "Endocrine disruptors", "Organic compounds", "Biocides", "Ureas" ]
12,768,250
https://en.wikipedia.org/wiki/Etanautine
Etanautine, also known as diphenhydramine monoacefyllinate, is an anticholinergic used as an antiparkinsonian agent. It is a 1:1 salt of diphenhydramine with acefylline, similar to the diphenhydramine/8-chlorotheophylline combination product dimenhydrinate. As with dimenhydrinate, the stimulant effect of the acefylline component counteracts the sedative effect of the diphenhydramine, resulting in an improved therapeutic profile. The 1:2 salt diphenhydramine diacefylline (with two molecules of acefylline to each molecule of diphenhydramine) is also used in medicine, under the brand name Nautamine. References Adenosine receptor antagonists Antiparkinsonian agents Ethers Muscarinic antagonists Stimulants
Etanautine
[ "Chemistry" ]
196
[ "Organic compounds", "Functional groups", "Ethers" ]
12,770,444
https://en.wikipedia.org/wiki/Cooling%20flow
A cooling flow occurs when the intracluster medium (ICM) in the centres of galaxy clusters should be rapidly cooling at the rate of tens to thousands of solar masses per year. This should happen because the ICM (a plasma) quickly loses its energy by the emission of X-rays. The X-ray brightness of the ICM is proportional to the square of its density, which rises steeply towards the centres of many clusters. Also, the temperature falls to typically a third or a half of the temperature in the outskirts of the cluster. The typical predicted timescale for the ICM to cool is relatively short, less than a billion years. As material in the centre of the cluster cools out, the pressure of the overlying ICM should cause more material to flow inwards (the cooling flow). In a steady state, the rate of mass deposition, i.e. the rate at which the plasma cools, is given by $\dot{M} = \frac{2 \mu m L}{5 k T}$, where L is the bolometric (i.e. over the entire spectrum) luminosity of the cooling region, T is its temperature, k is the Boltzmann constant and μm is the mean molecular mass. Cooling flow problem It is currently thought that the actual amounts of cooling are much smaller than the very large expected values, as there is little evidence for cool X-ray emitting gas in many of these systems. This is the cooling flow problem. Theories for why there is little evidence of cooling include: heating by the central active galactic nucleus (AGN) in clusters, possibly via sound waves (seen in the Perseus and Virgo clusters); thermal conduction of heat from the outer parts of clusters; cosmic ray heating; hiding of cool gas by absorbing material; and mixing of cool gas with hotter material. Heating by AGN is the most popular explanation, as AGN emit a lot of energy over their lifetimes, and some of the alternatives listed have theoretical problems. References Further reading 5.7. Cooling flows and accretion by cDs (in X-ray Emission from Clusters of Galaxies. Sarazin 1988) Extragalactic astronomy Space plasmas
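As a worked example of the mass-deposition formula above, the sketch below evaluates Ṁ = 2μmL/(5kT) in SI units and converts the result to solar masses per year. The luminosity, temperature, and mean molecular mass used are illustrative assumptions, not values from the article.

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_P = 1.67262e-27    # proton mass, kg
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year

def mdot(L, T, mu_m):
    """Cooling-flow mass deposition rate, Mdot = 2*mu_m*L / (5*k*T), in kg/s."""
    return 2.0 * mu_m * L / (5.0 * K_B * T)

# Assumed cluster core: L = 1e37 W (~1e44 erg/s), T = 5e7 K, and a mean
# molecular mass of ~0.6 proton masses for a fully ionized plasma.
rate = mdot(1e37, 5e7, 0.6 * M_P)
print(rate * YEAR / M_SUN)   # roughly 90 solar masses per year
```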
Cooling flow
[ "Physics", "Astronomy" ]
430
[ "Space plasmas", "Galaxy clusters", "Astrophysics", "Extragalactic astronomy", "Astronomical objects", "Astronomical sub-disciplines" ]
19,523,646
https://en.wikipedia.org/wiki/Tektronix%20hex%20format
Tektronix hex format (TEK HEX) and Extended Tektronix hex format (EXT TEK HEX or XTEK) / Extended Tektronix Object Format are ASCII-based hexadecimal file formats, created by Tektronix, for conveying binary information for applications like programming microcontrollers, EPROMs, and other kinds of chips. Each line of a Tektronix hex file starts with a slash (/) character, whereas extended Tektronix hex files start with a percent (%) character. Tektronix hex format A line consists of four parts, excluding the initial '/' character: Address — 4 character (2 byte) field containing the address where the data is to be loaded into memory. This limits the address to a maximum value of 0xFFFF. Byte count — 2 character (1 byte) field containing the length of the data fields. Prefix checksum — 2 character (1 byte) field containing the checksum of the prefix. The prefix checksum is the 8-bit sum of the 4-bit hexadecimal values of the six digits that make up the address and byte count. Data — contains the data to be transferred, followed by a 2 character (1 byte) checksum. The data checksum is the 8-bit sum, modulo 256, of the 4-bit hexadecimal values of the digits that make up the data bytes. Extended Tektronix hex format A line consists of five parts, excluding the initial '%' character: Record length — 2 character (1 byte) field that specifies the number of characters (not bytes) in the record, excluding the percent sign. Type — 1 character field that specifies whether the record is data (6) or termination (8). A type 6 record contains data to be placed at the address specified; a type 8 termination record has no data field, and its address field may optionally contain the address of the instruction to which control is passed. Checksum — 2 hex digits (1 byte); represents the sum of all the nibbles on the line, excluding the checksum itself. Address — 2 to N character field. The first character states how many characters follow in this field. The remaining characters contain the address where the data is to be loaded into memory. For example, if the first character is 8, then the following 8 characters specify the address, for a total of 9 characters in this field. Data — contains the executable code, memory-loadable data or descriptive information to be transferred. See also Binary-to-text encoding, a survey and comparison of encoding algorithms Intel hex format MOS Technology file format Motorola S-record hex format Texas Instruments TI-TXT (TI Text) References Further reading (56 pages) External links SRecord is a collection of tools for manipulating hex format files, including both Tektronix formats Binary-to-text encoding formats Embedded systems Computer file formats
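The checksum rules above translate directly into code. The sketch below computes and verifies the Extended Tektronix checksum as described (the sum, modulo 256, of the hex values of every digit in the record except the two checksum characters); the record layout follows the field descriptions above, and the function names are illustrative assumptions.

```python
def xtek_checksum(record: str) -> int:
    """Checksum of an Extended Tektronix hex record.

    Layout after '%': 2 chars record length, 1 char type, 2 chars
    checksum, then the address and data fields. The checksum is the
    sum of the hex values of all other digits, modulo 256.
    """
    body = record.lstrip('%')
    digits = body[:3] + body[5:]      # skip the checksum field itself
    return sum(int(c, 16) for c in digits) % 256

def xtek_verify(record: str) -> bool:
    """True if the embedded checksum matches the computed one."""
    return int(record.lstrip('%')[3:5], 16) == xtek_checksum(record)
```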
Tektronix hex format
[ "Technology", "Engineering" ]
631
[ "Embedded systems", "Computer science", "Computer engineering", "Computer systems" ]
6,157,978
https://en.wikipedia.org/wiki/Micro%20heat%20exchanger
Micro heat exchangers, micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinements are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramic. Microchannel heat exchangers can be used for many applications including: high-performance aircraft gas turbine engines heat pumps microprocessor and microchip cooling air conditioning Background Investigation of microscale thermal devices is motivated by the single-phase internal flow correlation for convective heat transfer, $h = \frac{\mathrm{Nu} \cdot k}{D_h}$, where $h$ is the heat transfer coefficient, $\mathrm{Nu}$ is the Nusselt number, $k$ is the thermal conductivity of the fluid and $D_h$ is the hydraulic diameter of the channel or duct. In internal laminar flows, the Nusselt number becomes a constant. This is a result which can be arrived at analytically: for round tubes, $\mathrm{Nu} = 3.66$ for the case of a constant wall temperature, and $\mathrm{Nu} = 4.36$ for the case of constant heat flux. The last value is increased to 140/17 = 8.23 for flat parallel plates. As the Reynolds number is proportional to hydraulic diameter, fluid flow in channels of small hydraulic diameter will predominantly be laminar in character. This correlation therefore indicates that the heat transfer coefficient increases as channel diameter decreases. Should the hydraulic diameter in forced convection be on the order of tens or hundreds of micrometres, an extremely high heat transfer coefficient should result. This hypothesis was initially investigated by Tuckerman and Pease. Their positive results led to further research ranging from classical investigations of single-channel heat transfer to more applied investigations in parallel micro-channel and micro-scale plate fin heat exchangers. Recent work in the field has focused on the potential of two-phase flows at the micro-scale. Classification Just like "conventional" or "macro scale" heat exchangers, micro heat exchangers have one, two or even three fluidic flows. In the case of one fluidic flow, heat can be transferred to the fluid (each of the fluids can be a gas, a liquid, or a multiphase flow) from electrically powered heater cartridges, or removed from the fluid by electrically powered elements like Peltier chillers. In the case of two fluidic flows, micro heat exchangers are usually classified by the orientation of the fluidic flows to one another as "cross flow" or "counter flow" devices. If a chemical reaction is conducted inside a micro heat exchanger, the latter is also called a microreactor. See also Micro process engineering References Microtechnology Microfluidics Heat exchangers Heat transfer
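The scaling argument above is easy to make concrete: with the Nusselt number fixed in fully developed laminar flow, h = Nu·k/D_h grows as the hydraulic diameter shrinks. A minimal sketch, assuming water (k ≈ 0.6 W/m·K) and the constant-wall-temperature value Nu = 3.66:

```python
def h_laminar(nusselt: float, k_fluid: float, d_hydraulic: float) -> float:
    """Laminar convective coefficient h = Nu * k / D_h, in W/(m^2 K)."""
    return nusselt * k_fluid / d_hydraulic

# Water in round channels at constant wall temperature (Nu = 3.66):
for d in (1e-2, 1e-3, 1e-4):   # 10 mm, 1 mm and 100 um hydraulic diameters
    print(f"D_h = {d * 1e3:g} mm -> h = {h_laminar(3.66, 0.6, d):,.0f} W/m^2K")
```

Shrinking the channel from 10 mm to 100 µm raises h by two orders of magnitude, which is the motivation for microchannel designs.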
Micro heat exchanger
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
541
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Microfluidics", "Microtechnology", "Chemical equipment", "Materials science", "Heat exchangers", "Thermodynamics" ]
6,161,213
https://en.wikipedia.org/wiki/Pollination%20syndrome
Pollination syndromes are suites of flower traits that have evolved in response to natural selection imposed by different pollen vectors, which can be abiotic (wind and water) or biotic, such as birds, bees, flies, and so forth, through a process called pollinator-mediated selection. These traits include flower shape, size, colour, odour, reward type and amount, nectar composition, timing of flowering, etc. For example, tubular red flowers with copious nectar often attract birds; foul-smelling flowers attract carrion flies or beetles; and so on. The "classical" pollination syndromes were first studied in the 19th century by the Italian botanist Federico Delpino. Although they are useful in understanding plant-pollinator interactions, the pollinator of a plant species sometimes cannot be accurately predicted from the pollination syndrome alone, and caution must be exercised in making assumptions. The naturalist Charles Darwin surmised that the flower of the orchid Angraecum sesquipedale was pollinated by a then undiscovered moth with a proboscis whose length was unprecedented at the time. His prediction had gone unverified until 21 years after his death, when the moth was discovered and his conjecture vindicated. The story of its postulated pollinator has come to be seen as one of the celebrated predictions of the theory of evolution. Abiotic Abiotically pollinated flowers do not attract animal pollinators. Nevertheless, they often have suites of shared traits. Wind Wind-pollinated flowers may be small and inconspicuous, as well as green and not showy. They produce enormous numbers of relatively small pollen grains (hence wind-pollinated plants may be allergens, but seldom are animal-pollinated plants allergenic). Their stigmas may be large and feathery to catch the pollen grains. Insects may visit them to collect pollen; in some cases, these are ineffective pollinators and exert little natural selection on the flowers, but there are also examples of ambophilous flowers which are both wind- and insect-pollinated. Anemophilous, or wind-pollinated, flowers are usually small and inconspicuous, and do not possess a scent or produce nectar. The anthers may produce a large number of pollen grains, while the stamens are generally long and protrude out of the flower. Water Water-pollinated plants are aquatic and their pollen is released into the water. Water currents therefore act as a pollen vector in a similar way to wind currents. Their flowers tend to be small and inconspicuous with many pollen grains and large, feathery stigmas to catch the pollen. However, this is relatively uncommon (only 2% of pollination is hydrophily) and most aquatic plants are insect-pollinated, with flowers that emerge into the air. Vallisneria is an example. Biotic Insects Bees Bee-pollinated flowers can be very variable in their size, shape and colouration. They can be open and bowl-shaped ('actinomorphic', radially symmetrical) or more complex and non-radially symmetric ('zygomorphic'), as is the case with many peas and foxgloves. Some bee flowers tend to be yellow or blue, often with ultraviolet nectar guides and scent. Nectar, pollen, or both are offered as rewards in varying amounts. The sugar in the nectar tends to be sucrose-dominated. A few bees collect oil from special glands on the flower. Butterflies Butterfly-pollinated flowers tend to be large and showy, pink or lavender in colour, frequently have a landing area, and are usually scented. 
Since butterflies do not digest pollen (with one exception), more nectar is offered than pollen. The flowers have simple nectar guides, with the nectaries usually hidden in narrow tubes or spurs reached by the long tongue of the butterflies. Moths Among the more important moth pollinators are the hawk moths (Sphingidae). Their behaviour is similar to that of hummingbirds: they hover in front of flowers with rapid wingbeats. Most are nocturnal or crepuscular, so moth-pollinated flowers tend to be white, night-opening, large and showy with tubular corollas and a strong, sweet scent produced in the evening, night or early morning. Much nectar is produced to fuel the high metabolic rates needed to power their flight. Other moths (Noctuids, Geometrids, Pyralids, for example) fly slowly and settle on the flower. They do not require as much nectar as the fast-flying hawk moths, and the flowers tend to be small (though they may be aggregated in heads). Flies Myophilous plants, those pollinated by flies, tend not to emit a strong scent, are typically purple, violet, blue, and white, and have open dishes or tubes. Sapromyophilous plants attract flies which normally visit dead animals or dung; their flowers mimic the odor of such objects. The plant provides them with no reward and they leave quickly unless it has traps to slow them down. Such plants are far less common than myophilous ones. Beetles Beetle-pollinated flowers are usually large, greenish or off-white in color and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plant's ovaries are usually well protected from the biting mouthparts of their pollinators. A number of cantharophilous plants are thermogenic, with flowers that can increase their temperature. This heat is thought to help spread the scent further, but the infrared light produced by this heat may also be visible to insects during the dark night, acting as a shining beacon to attract them. Birds Flowers pollinated by specialist nectarivores tend to be large, red or orange tubes with a lot of dilute nectar, secreted during the day. Since birds do not have a strong response to scent, such flowers tend to be odorless. Flowers pollinated by generalist birds are often shorter and wider. Hummingbirds are often associated with pendulous flowers, whereas passerines (perching birds) need a landing platform, so flowers and surrounding structures are often more robust. Also, many plants have anthers placed in the flower so that pollen rubs against the bird's head or back as the bird reaches in for nectar. Bats There are major differences between bat pollination in the New World as opposed to the Old World. In the Old World, pollinating bats are large fruit bats of the family Pteropodidae which do not have the ability to hover and must perch in the plant to lap the nectar; these bats furthermore do not have the ability to echolocate. Bat-pollinated flowers in this part of the world tend to be large and showy, white or light coloured, open at night and have strong musty odours. They are often large balls of stamens. In the Americas, pollinating bats are tiny creatures called glossophagines, which can both hover and echolocate, and which have extremely long tongues. Plants in this part of the world are often pollinated by both bats and hummingbirds, and have long tubular flowers. 
Flowers in this part of the world are typically borne away from the trunk or other obstructions, and offer nectar for extended periods of time. In one essay, von Helversen et al. speculate that some bell-shaped flowers may have evolved to attract bats in the Americas, as the bell shape might reflect the sonar pulses emitted by the bats in a recognisable pattern. A number of species of Marcgravia from Caribbean islands have evolved a special leaf just above the inflorescence to attract bats. The leaf petiole is twisted so the leaf sticks upwards, and the leaf is shaped like a concave disc or dish reflector. The leaf reflects echolocation signals from many directions, guiding the pollinating bats towards the flowers. The epiphytic bean Mucuna holtonii employs a similar tactic, but in this species it is a specialised petal that acts as a sonar reflector. In the New World, bat-pollinated flowers often have sulphur-scented compounds. Bat-pollinated plants have bigger pollen than their relatives. Non-flying mammals The characteristics of the pollination syndrome associated with pollination by mammals other than bats are: a yeasty odour; cryptic, drab, axillary, geoflorous flowers or inflorescences often obscured from sight; large and sturdy flowers, or flowers grouped together as multi-flowered inflorescences; flowers or inflorescences that are either sessile or subtended by a short and stout peduncle or pedicel; bowl-shaped flowers or inflorescences; copious, sucrose-rich nectar usually produced during the night; tough and wiry styles; an adequate distance between the stigma and nectar to fit the rostrum of the pollinating animal; and potentially a winter–spring flowering period. Many non-flying mammals are nocturnal and have an acute sense of smell, so the plants tend not to have bright showy colours, but instead excrete a strong odour. These plants also tend to produce large amounts of pollen because mammals are larger than some other pollinators and lack the precision smaller pollinators can achieve. The Western Australian endemic honey possum (Tarsipes rostratus) is an unusual non-flying mammal pollinator in that it has adapted to feeding exclusively on pollen and nectar. It is known to forage on a wide variety of plants (particularly in the families Proteaceae and Myrtaceae), including many with typical bird-pollinated flowers such as Calothamnus quadrifidus and many species of Banksia. Biology Pollination syndromes reflect convergent evolution towards forms (phenotypes) that limit the number of species of pollinators visiting the plant. They increase the functional specialization of the plant with regard to pollination, though this may not affect its ecological specialization (i.e. the number of species of pollinators within that functional group). They are responses to common selection pressures exerted by shared pollinators or abiotic pollen vectors, which generate correlations among traits. That is, if two distantly related plant species are both pollinated by nocturnal moths, for example, their flowers will converge on a form which is recognised by the moths (e.g. pale colour, sweet scent, nectar released at the base of a long tube, night flowering). Advantages of specialization Efficiency of pollination: the rewards given to pollinators (commonly nectar or pollen or both, but sometimes oil, scents, resins, or wax) may be costly to produce. Nectar can be cheap, but pollen is generally expensive, as it is relatively high in nitrogen compounds. 
Plants have evolved to obtain the maximum pollen transfer for the minimum reward delivered. Different pollinators, because of their size, shape, or behaviour, have different efficiencies of transfer of pollen. And the floral traits affect the efficiency of transfer: columbine flowers were experimentally altered and presented to hawkmoths, and flower orientation, shape, and colour were found to affect visitation rates or pollen removal. Pollinator constancy: to transfer pollen efficiently, it is best for the plant if the pollinator focuses on one species of plant, ignoring other species. Otherwise, pollen may be dropped uselessly on the stigmas of other species. Animals, of course, do not aim to pollinate; they aim to collect food as fast as they can. However, many pollinator species exhibit constancy, passing up available flowers to focus on one plant species. Why should animals specialize on a plant species, rather than move to the next flower of any species? Although pollinator constancy was recognized by Aristotle, the benefits to animals are not yet fully understood. The most common hypothesis is that pollinators must learn to handle particular types of flowers, and they have limited capacity to learn different types. They can only efficiently gather rewards from one type of flower. Honeybees, for example, selectively visit flowers from only one species for a period of time, as can be seen from the colour of the pollen in their baskets. Advantages of generalization Pollinators fluctuate in abundance and activity independently of their plants, and any one species may fail to pollinate a plant in a particular year. Thus a plant may be at an advantage if it attracts several species or types of pollinators, ensuring pollen transfer every year. Many species of plants have the back-up option of self-pollination, if they are not self-incompatible. A continuum rather than discrete syndromes Whilst it is clear that pollination syndromes can be observed in nature, there has been much debate amongst scientists as to how frequent they are and to what extent we can use the classical syndromes to classify plant-pollinator interactions. Although some species of plants are visited only by one type of animal (i.e. they are functionally specialized), many plant species are visited by very different pollinators. For example, a flower may be pollinated by bees, butterflies, and birds. Strict specialization of plants relying on one species of pollinator is relatively rare, probably because it can result in variable reproductive success across years as pollinator populations vary significantly. In such cases, plants should generalize on a wide range of pollinators, and such ecological generalization is frequently found in nature. A study in Tasmania found the syndromes did not usefully predict the pollinators. A critical re-evaluation of the syndromes suggests that on average about one third of flowering plants can be classified into the classical syndromes. This reflects the fact that nature is much less predictable and straightforward than 19th-century biologists originally thought. Pollination syndromes can be thought of as "extremes of a continuum of greater or lesser specialization or generalization onto particular functional groups of pollinators that exert similar selective pressures", and the frequency with which flowers conform to the expectations of the classical pollination syndromes is relatively low. 
In addition, new types of plant-pollinator interaction, involving "unusual" pollinating animals, are regularly being discovered, such as specialized pollination by spider-hunting wasps (Pompilidae) and fruit chafers (Cetoniidae) in the eastern grasslands of South Africa. These plants do not fit into the classical syndromes, though they may show evidence of convergent evolution in their own right. An analysis of flower traits and visitation in 49 species of the plant genus Penstemon found that it was possible to separate bird- and bee-pollinated species quite well, but only by using floral traits which were not considered in the classical accounts of syndromes, such as the details of anther opening. Although a recent review concluded that there is "overwhelming evidence that functional groups exert different selection pressures on floral traits", the sheer complexity and subtlety of plant-pollinator interactions (and the growing recognition that non-pollinating organisms such as seed predators can affect the evolution of flower traits) means that this debate is likely to continue for some time. See also Pollinator-mediated selection Mutualism (biology) Floral biology Pollination trap Monocotyledon reproduction References Pollination Flowers Plant morphology Insect ecology Evolutionary biology
Pollination syndrome
[ "Biology" ]
3,180
[ "Evolutionary biology", "Plant morphology", "Plants" ]
6,161,274
https://en.wikipedia.org/wiki/Poisson%E2%80%93Boltzmann%20equation
The Poisson–Boltzmann equation describes the distribution of the electric potential in solution in the direction normal to a charged surface. This distribution is important for determining how electrostatic interactions will affect the molecules in solution. It is expressed as a differential equation for the electric potential $\psi$, which depends on the solvent permittivity $\varepsilon$, the solution temperature $T$, and the mean concentration of each ion species $c_i^0$: $\nabla^2 \psi = -\frac{e}{\varepsilon} \sum_i z_i c_i^0 \exp\left(-\frac{z_i e \psi}{k_B T}\right)$. The Poisson–Boltzmann equation is derived via mean-field assumptions. From the Poisson–Boltzmann equation many other equations have been derived with a number of different assumptions. Origins Background and derivation The Poisson–Boltzmann equation describes a model proposed independently by Louis Georges Gouy and David Leonard Chapman in 1910 and 1913, respectively. In the Gouy–Chapman model, a charged solid comes into contact with an ionic solution, creating a layer of surface charges and counter-ions, or double layer. Due to the thermal motion of ions, the layer of counter-ions is a diffuse layer and is more extended than the single molecular layer previously proposed by Hermann Helmholtz in the Helmholtz model. The Stern layer model goes a step further and takes into account the finite ion size. The Gouy–Chapman model explains the capacitance-like qualities of the electric double layer. In a simple planar case with a negatively charged surface, the concentration of counter-ions is, as expected, higher near the surface than in the bulk solution. The Poisson–Boltzmann equation describes the electrochemical potential of ions in the diffuse layer. The three-dimensional potential distribution can be described by the Poisson equation $\nabla^2 \psi = -\frac{\rho_e}{\varepsilon}$, where $\rho_e$ is the local electric charge density in C/m3, $\varepsilon$ is the permittivity of the solvent, and $\psi$ is the electric potential. The freedom of movement of ions in solution can be accounted for by Boltzmann statistics. The Boltzmann equation is used to calculate the local ion density, $c_i = c_i^0 \exp\left(-\frac{W_i}{k_B T}\right)$, where $c_i^0$ is the ion concentration in the bulk, $W_i$ is the work required to move an ion closer to the surface from an infinitely far distance, $k_B$ is the Boltzmann constant, and $T$ is the temperature in kelvins. The equation for local ion density can be substituted into the Poisson equation under the assumptions that the work being done is only electric work, and that the concentration of salt is much higher than the concentration of ions. The electric work to bring an ion of charge $z_i e$ to a surface with potential $\psi$ can be represented by $W_i = z_i e \psi$. These work equations can be substituted into the Boltzmann equation, producing an expression for the concentration of each ion species, $c_i = c_i^0 \exp\left(-\frac{z_i e \psi}{k_B T}\right)$. Substituting this Boltzmann relation into the local electric charge density expression, the following expression can be obtained: $\rho_e = \sum_i z_i e\, c_i^0 \exp\left(-\frac{z_i e \psi}{k_B T}\right)$. Finally, the charge density can be substituted into the Poisson equation to produce the Poisson–Boltzmann equation given above. When distance is measured in multiples of the Bjerrum length and potential in multiples of $k_B T/e$, the equation can be rearranged into dimensionless form. Related theories The Poisson–Boltzmann equation can take many forms throughout various scientific fields. In biophysics and certain surface chemistry applications, it is known simply as the Poisson–Boltzmann equation. It is also known in electrochemistry as Gouy–Chapman theory; in solution chemistry as Debye–Hückel theory; and in colloid chemistry as Derjaguin–Landau–Verwey–Overbeek (DLVO) theory. 
Only minor modifications are necessary to apply the Poisson–Boltzmann equation to various interfacial models, making it a highly useful tool in determining electrostatic potential at surfaces. Solving analytically Because the Poisson–Boltzmann equation is a second-order partial differential equation, it is commonly solved numerically; however, with certain geometries it can be solved analytically. Geometries The geometry that most easily facilitates this is a planar surface. In the case of an infinitely extended planar surface, there are two dimensions in which the potential cannot change because of symmetry. Assuming these dimensions are the y and z dimensions, only the x dimension is left, and the Poisson–Boltzmann equation reduces to a second-order derivative with respect to x: $\frac{d^2\psi}{dx^2} = -\frac{e}{\varepsilon} \sum_i z_i c_i^0 \exp\left(-\frac{z_i e \psi}{k_B T}\right)$. Analytical solutions have also been found for axial and spherical cases in a particular study. The equation is in the form of a logarithm of a power series. It uses a dimensionless potential, and the lengths are measured in units of the Debye electron radius in the region of zero potential (set by the number density of negative ions in the zero-potential region). For the spherical case L = 2, for the axial case L = 1, and for the planar case L = 0. Low-potential vs high-potential cases When using the Poisson–Boltzmann equation, it is important to determine whether the specific case is low or high potential. The high-potential case becomes more complex, so if applicable, use the low-potential equation. In the low-potential condition, the linearized version of the Poisson–Boltzmann equation, $\nabla^2\psi = \kappa^2\psi$ (where $\kappa^{-1}$ is the Debye length), is valid, and it is commonly used because it is simpler and spans a wide variety of cases. Low-potential case conditions Strictly, low potential means that $e\psi \ll k_B T$; however, the results that the equation yields are valid for a wider range of potentials, from 50–80 mV. Nevertheless, at room temperature $k_B T/e \approx 25$ mV, and that is generally the standard. Some boundary conditions that apply in low-potential cases are that at the surface the potential must be equal to the surface potential, and that at large distances from the surface the potential approaches zero. This decay length is given by the Debye length equation $\kappa^{-1} = \left(\frac{\varepsilon k_B T}{\sum_i c_i^0 e^2 z_i^2}\right)^{1/2}$. As salt concentration increases, the Debye length decreases, due to the ions in solution screening the surface charge. A special instance of this equation is the case of water with a monovalent salt, for which the Debye length is $\kappa^{-1} = \frac{0.304\ \text{nm}}{\sqrt{c}}$, where $c$ is the salt concentration in mol/L. These equations all require 1:1 salt concentration cases, but if ions of higher valence are present, the following case is used. High-potential case The high-potential case is referred to as the “full one-dimensional case”. In order to obtain the equation, the general solution to the Poisson–Boltzmann equation is used and the low-potential assumption is dropped. The equation is solved with a dimensionless potential $y = \frac{z e \psi}{k_B T}$, which is not to be confused with the spatial coordinate symbol y. Employing several trigonometric identities and the boundary conditions that, at large distances from the surface, the dimensionless potential and its derivative are zero, the high-potential equation is obtained. In order to obtain a more useful equation that facilitates graphing high-potential distributions, take the natural logarithm of both sides and solve for the dimensionless potential y. Knowing that $y = \frac{z e \psi}{k_B T}$, substitute this for y and solve for $\psi$. 
This yields the potential as a function of distance from the surface. Conditions In low-potential cases, the high-potential equation may be used and will still yield accurate results. As the potential rises, the low-potential, linear case overestimates the potential as a function of distance from the surface. This overestimation is visible at distances less than half the Debye length, where the decay is steeper than exponential decay. The linearized equation and the high-potential equation derived above can be compared in a potential-versus-distance graph for varying surface potentials of 50, 100, 150, and 200 mV, assuming an 80 mM NaCl solution. General applications The Poisson–Boltzmann equation can be applied in a variety of fields, mainly as a modeling tool to make approximations for applications such as charged biomolecular interactions, the dynamics of electrons in semiconductors or plasma, etc. Most applications of this equation are used as models to gain further insight on electrostatics. Physiological applications The Poisson–Boltzmann equation can be applied to biomolecular systems. One example is the binding of electrolytes to biomolecules in a solution. This process is dependent upon the electrostatic field generated by the molecule, the electrostatic potential on the surface of the molecule, as well as the electrostatic free energy. The linearized Poisson–Boltzmann equation can be used to calculate the electrostatic potential and free energy of highly charged molecules such as tRNA in an ionic solution with different numbers of bound ions at varying physiological ionic strengths. It is shown that the electrostatic potential depends on the charge of the molecule, while the electrostatic free energy takes into account the net charge of the system. Another example of utilizing the Poisson–Boltzmann equation is the determination of the electric potential profile at points perpendicular to the phospholipid bilayer of an erythrocyte. This takes into account both the glycocalyx and spectrin layers of the erythrocyte membrane. This information is useful for many reasons, including the study of the mechanical stability of the erythrocyte membrane. Electrostatic free energy The Poisson–Boltzmann equation can also be used to calculate the electrostatic free energy for hypothetically charging a sphere using the charging integral $\Delta G = \int_0^{q} \psi\, dq'$, where $q$ is the final charge on the sphere. The electrostatic free energy can also be expressed by considering the process of charging the system; such an expression utilizes the chemical potential of solute molecules and implements the Poisson–Boltzmann equation with an Euler–Lagrange functional. Note that the free energy is independent of the charging pathway [5c]. The expression can be rewritten as separate free-energy terms based on the different contributions to the total free energy: the electrostatic energy of the fixed charges, the electrostatic energy of the mobile charges, the entropic free energy of mixing of the mobile species, and the entropic free energy of mixing of the solvent. Combining the last three terms gives the term representing the outer-space contribution to the free-energy density integral. These equations can act as simple geometry models for biological systems such as proteins, nucleic acids, and membranes. This involves the equations being solved with simple boundary conditions such as constant surface potential. These approximations are useful in fields such as colloid chemistry. 
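The quantities discussed in the passages above (the Debye length, the linearized low-potential decay, and the steeper high-potential decay) can be made concrete in a few lines of code. A minimal sketch, assuming the 0.304 nm Debye-length prefactor quoted above for a 1:1 salt in water at 25 °C, and using the standard Gouy–Chapman planar solution ψ(x) = (2kT/e)·ln[(1 + γe^(−κx))/(1 − γe^(−κx))] with γ = tanh(eψ0/(4kT)) as a stand-in for the high-potential equation; treating that form as the article's own expression is an assumption, since the original equation is not reproduced above.

```python
import math

KT_OVER_E_MV = 25.7   # k_B * T / e at 298 K, in millivolts

def debye_length_nm(c_molar: float) -> float:
    """Debye length for a 1:1 electrolyte in water at 25 C:
    kappa^-1 = 0.304 nm / sqrt(c), with c in mol/L."""
    return 0.304 / math.sqrt(c_molar)

def psi_linear_mV(psi0_mV: float, x_nm: float, c_molar: float) -> float:
    """Low-potential (linearized) planar decay: psi = psi0 * exp(-kappa*x)."""
    return psi0_mV * math.exp(-x_nm / debye_length_nm(c_molar))

def psi_gouy_chapman_mV(psi0_mV: float, x_nm: float, c_molar: float) -> float:
    """Full planar (Gouy-Chapman) solution for a 1:1 salt."""
    g = (math.tanh(psi0_mV / (4 * KT_OVER_E_MV))
         * math.exp(-x_nm / debye_length_nm(c_molar)))
    return 2 * KT_OVER_E_MV * math.log((1 + g) / (1 - g))

# 80 mM NaCl, as in the comparison described above; evaluate half a
# Debye length from the surface, where the linear form overestimates.
x = 0.5 * debye_length_nm(0.08)
for psi0 in (50, 100, 150, 200):   # surface potentials in mV
    print(psi0, round(psi_gouy_chapman_mV(psi0, x, 0.08), 1),
          round(psi_linear_mV(psi0, x, 0.08), 1))
```

At 200 mV the linearized value exceeds the full solution by nearly a factor of two at this distance, consistent with the overestimation noted above.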
Materials science An analytical solution to the Poisson–Boltzmann equation can be used to describe the electron-electron interaction in a metal-insulator-semiconductor (MIS) structure. This can be used to describe both the time and position dependence of dissipative systems such as a mesoscopic system. This is done by solving the Poisson–Boltzmann equation analytically in the three-dimensional case. Solving this results in expressions for the distribution function of the Boltzmann equation and a self-consistent average potential for the Poisson equation. These expressions are useful for analyzing quantum transport in a mesoscopic system. In metal-insulator-semiconductor tunneling junctions, electrons can build up close to the interface between layers, and as a result the quantum transport of the system will be affected by the electron-electron interactions. Certain transport properties such as electric current and electronic density can be known by solving for the self-consistent Coulombic average potential from the electron-electron interactions, which is related to the electronic distribution. Therefore, it is essential to solve the Poisson–Boltzmann equation analytically in order to obtain the analytical quantities in the MIS tunneling junctions. Applying the analytical solution of the Poisson–Boltzmann equation (see the section on solving analytically) to MIS tunneling junctions, expressions can be formed for electronic transport quantities such as electronic density and electric current. Applying these to the MIS tunneling junction, electronic transport can be analyzed along the z-axis, oriented perpendicular to the plane of the layers. An n-type junction is chosen in this case, with a bias V applied along the z-axis. The self-consistent average potential of the system can be expressed in terms of the Debye length. The electronic density and electric current can then be found, by manipulation of these expressions, as functions of position z. These electronic transport quantities can be used to help understand various transport properties in the system. Limitations As with any approximate model, the Poisson–Boltzmann equation is an approximation rather than an exact representation. Several assumptions were made to approximate the potential of the diffuse layer. The finite size of the ions was considered negligible and ions were treated as individual point charges, where ions were assumed to interact with the average electrostatic field of all their neighbors rather than each neighbor individually. In addition, non-Coulombic interactions were not considered and certain interactions were unaccounted for, such as the overlap of ion hydration spheres in an aqueous system. The permittivity of the solvent was assumed to be constant, resulting in a rough approximation, as polar molecules are prevented from moving freely when they encounter the strong electric field at the solid surface. Though the model faces certain limitations, it describes electric double layers very well. The errors resulting from the previously mentioned assumptions cancel each other for the most part. Accounting for non-Coulombic interactions increases the ion concentration at the surface and leads to a reduced surface potential. On the other hand, including the finite size of the ions causes the opposite effect. 
The Poisson–Boltzmann equation is most appropriate for approximating the electrostatic potential at the surface for aqueous solutions of univalent salts at concentrations smaller than 0.2 M and potentials not exceeding 50–80 mV. In the limit of strong electrostatic interactions, a strong coupling theory is more applicable than the weak coupling assumed in deriving the Poisson–Boltzmann theory. See also Double layer References External links Adaptive Poisson–Boltzmann Solver – A free, open-source Poisson–Boltzmann electrostatics and biomolecular solvation software package Zap – A Poisson–Boltzmann electrostatics solver MIBPB Matched Interface & Boundary based Poisson–Boltzmann solver CHARMM-GUI: PBEQ Solver AFMPB Adaptive Fast Multipole Poisson–Boltzmann Solver, free and open-source Global classical solutions of the Boltzmann equation with long-range interactions, Philip T. Gressman and Robert M. Strain, 2009, University of Pennsylvania, Department of Mathematics, Philadelphia, PA, USA. Eponymous equations of physics Molecular dynamics Colloidal chemistry
Poisson–Boltzmann equation
[ "Physics", "Chemistry" ]
2,956
[ "Colloidal chemistry", "Molecular physics", "Equations of physics", "Eponymous equations of physics", "Colloids", "Computational physics", "Surface science", "Molecular dynamics", "Computational chemistry" ]
456,410
https://en.wikipedia.org/wiki/Crystal%20system
In crystallography, a crystal system is a set of point groups (a group of geometric symmetries with at least one fixed point). A lattice system is a set of Bravais lattices. Space groups are classified into crystal systems according to their point groups, and into lattice systems according to their Bravais lattices. Crystal systems that have space groups assigned to a common lattice system are combined into a crystal family. The seven crystal systems are triclinic, monoclinic, orthorhombic, tetragonal, trigonal, hexagonal, and cubic. Informally, two crystals are in the same crystal system if they have similar symmetries (though there are many exceptions). Classifications Crystals can be classified in three ways: lattice systems, crystal systems and crystal families. The various classifications are often confused: in particular the trigonal crystal system is often confused with the rhombohedral lattice system, and the term "crystal system" is sometimes used to mean "lattice system" or "crystal family". Lattice system A lattice system is a group of lattices with the same set of lattice point groups. The 14 Bravais lattices are grouped into seven lattice systems: triclinic, monoclinic, orthorhombic, tetragonal, rhombohedral, hexagonal, and cubic. Crystal system A crystal system is a set of point groups in which the point groups themselves and their corresponding space groups are assigned to a lattice system. Of the 32 crystallographic point groups that exist in three dimensions, most are assigned to only one lattice system, in which case both the crystal and lattice systems have the same name. However, five point groups are assigned to two lattice systems, rhombohedral and hexagonal, because both exhibit threefold rotational symmetry. These point groups are assigned to the trigonal crystal system. Crystal family A crystal family is determined by lattices and point groups. It is formed by combining crystal systems that have space groups assigned to a common lattice system. In three dimensions, the hexagonal and trigonal crystal systems are combined into one hexagonal crystal family. Comparison Five of the crystal systems are essentially the same as five of the lattice systems. The hexagonal and trigonal crystal systems differ from the hexagonal and rhombohedral lattice systems. These are combined into the hexagonal crystal family. The relation between three-dimensional crystal families, crystal systems and lattice systems is shown in the following table: Note: there is no "trigonal" lattice system. To avoid confusion of terminology, the term "trigonal lattice" is not used. Crystal classes The seven crystal systems consist of 32 crystal classes (corresponding to the 32 crystallographic point groups), as shown in the following table: The point symmetry of a structure can be further described as follows. Consider the points that make up the structure, and reflect them all through a single point, so that (x,y,z) becomes (−x,−y,−z). This is the 'inverted structure'. If the original structure and inverted structure are identical, then the structure is centrosymmetric. Otherwise it is non-centrosymmetric. Still, even in the non-centrosymmetric case, the inverted structure can in some cases be rotated to align with the original structure. This is a non-centrosymmetric achiral structure. If the inverted structure cannot be rotated to align with the original structure, then the structure is chiral or enantiomorphic and its symmetry group is enantiomorphic.
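The assignments described above can be written out explicitly. The following sketch (Python; the dictionary simply transcribes the standard classification stated in the text, and the function name is illustrative) maps each of the seven crystal systems to its crystal family and lattice system(s), with the trigonal system as the one case split across two lattice systems:

# Mapping of the seven 3D crystal systems to crystal families and
# lattice systems; the trigonal system is the only one whose space
# groups are divided between two lattice systems.
CRYSTAL_SYSTEMS = {
    "triclinic":    {"family": "triclinic",    "lattice_systems": ["triclinic"]},
    "monoclinic":   {"family": "monoclinic",   "lattice_systems": ["monoclinic"]},
    "orthorhombic": {"family": "orthorhombic", "lattice_systems": ["orthorhombic"]},
    "tetragonal":   {"family": "tetragonal",   "lattice_systems": ["tetragonal"]},
    "trigonal":     {"family": "hexagonal",    "lattice_systems": ["rhombohedral", "hexagonal"]},
    "hexagonal":    {"family": "hexagonal",    "lattice_systems": ["hexagonal"]},
    "cubic":        {"family": "cubic",        "lattice_systems": ["cubic"]},
}

def family_of(system: str) -> str:
    return CRYSTAL_SYSTEMS[system]["family"]

# The hexagonal family is the only one containing two crystal systems:
assert family_of("trigonal") == family_of("hexagonal") == "hexagonal"

The assertion at the end encodes the statement above that the trigonal and hexagonal crystal systems are combined into one hexagonal crystal family.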
A direction (meaning a line without an arrow) is called polar if its two-directional senses are geometrically or physically different. A symmetry direction of a crystal that is polar is called a polar axis. Groups containing a polar axis are called polar. A polar crystal possesses a unique polar axis (more precisely, all polar axes are parallel). Some geometrical or physical property is different at the two ends of this axis: for example, a dielectric polarization might develop, as in pyroelectric crystals. A polar axis can occur only in non-centrosymmetric structures. There cannot be a mirror plane or twofold axis perpendicular to the polar axis, because they would make the two directions of the axis equivalent. The crystal structures of chiral biological molecules (such as protein structures) can only occur in the 65 enantiomorphic space groups (biological molecules are usually chiral). Bravais lattices There are seven different kinds of lattice systems, and each kind of lattice system has four different kinds of centerings (primitive, base-centered, body-centered, face-centered). However, not all of the combinations are unique; some of the combinations are equivalent while other combinations are not possible due to symmetry reasons. This reduces the number of unique lattices to the 14 Bravais lattices. The distribution of the 14 Bravais lattices into 7 lattice systems is given in the following table. In geometry and crystallography, a Bravais lattice is a category of translative symmetry groups (also known as lattices) in three directions. Such symmetry groups consist of translations by vectors of the form R = n1a1 + n2a2 + n3a3, where n1, n2, and n3 are integers and a1, a2, and a3 are three non-coplanar vectors, called primitive vectors. These lattices are classified by the space group of the lattice itself, viewed as a collection of points; there are 14 Bravais lattices in three dimensions; each belongs to one lattice system only. They represent the maximum symmetry a structure with the given translational symmetry can have. All crystalline materials (not including quasicrystals) must, by definition, fit into one of these arrangements. For convenience a Bravais lattice is depicted by a unit cell which is a factor 1, 2, 3, or 4 larger than the primitive cell. Depending on the symmetry of a crystal or other pattern, the fundamental domain is again smaller, up to a factor 48. The Bravais lattices were studied by Moritz Ludwig Frankenheim in 1842, who found that there were 15 Bravais lattices. This was corrected to 14 by A. Bravais in 1848. In other dimensions Two-dimensional space In two-dimensional space, there are four crystal systems (oblique, rectangular, square, hexagonal), four crystal families (oblique, rectangular, square, hexagonal), and four lattice systems (oblique, rectangular, square, and hexagonal). Four-dimensional space The four-dimensional unit cell is defined by four edge lengths (a, b, c, d) and six interaxial angles (α, β, γ, δ, ε, ζ). The following conditions for the lattice parameters define 23 crystal families. The names here are given according to Whittaker. They are almost the same as in Brown et al., with the exception of the names of crystal families 9, 13, and 22. The names for these three families according to Brown et al. are given in parentheses. The relation between four-dimensional crystal families, crystal systems, and lattice systems is shown in the following table. Enantiomorphic systems are marked with an asterisk.
The number of enantiomorphic pairs is given in parentheses. Here the term "enantiomorphic" has a different meaning than in the table for three-dimensional crystal classes. The latter means, that enantiomorphic point groups describe chiral (enantiomorphic) structures. In the current table, "enantiomorphic" means that a group itself (considered as a geometric object) is enantiomorphic, like enantiomorphic pairs of three-dimensional space groups P31 and P32, P4122 and P4322. Starting from four-dimensional space, point groups also can be enantiomorphic in this sense. See also References Works cited External links Overview of the 32 groups Mineral galleries – Symmetry all cubic crystal classes, forms, and stereographic projections (interactive java applet) Crystal system at the Online Dictionary of Crystallography Crystal family at the Online Dictionary of Crystallography Lattice system at the Online Dictionary of Crystallography Conversion Primitive to Standard Conventional for VASP input files Learning Crystallography Symmetry Euclidean geometry Crystallography Geomorphology Mineralogy
Crystal system
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,749
[ "Materials science", "Crystal systems", "Crystallography", "Condensed matter physics", "Geometry", "Symmetry" ]
456,715
https://en.wikipedia.org/wiki/Kerr%20metric
The Kerr metric or Kerr geometry describes the geometry of empty spacetime around a rotating uncharged axially symmetric black hole with a quasispherical event horizon. The Kerr metric is an exact solution of the Einstein field equations of general relativity; these equations are highly non-linear, which makes exact solutions very difficult to find. Overview The Kerr metric is a generalization to a rotating body of the Schwarzschild metric, discovered by Karl Schwarzschild in 1915, which described the geometry of spacetime around an uncharged, spherically symmetric, and non-rotating body. The corresponding solution for a charged, spherical, non-rotating body, the Reissner–Nordström metric, was discovered soon afterwards (1916–1918). However, the exact solution for an uncharged, rotating black hole, the Kerr metric, remained unsolved until 1963, when it was discovered by Roy Kerr. The natural extension to a charged, rotating black hole, the Kerr–Newman metric, was discovered shortly thereafter in 1965. These four related solutions may be summarized by the following table, where Q represents the body's electric charge and J represents its spin angular momentum:
{| class="wikitable"
|-
!
! Non-rotating (J = 0)
! Rotating (J ≠ 0)
|-
! Uncharged (Q = 0)
| Schwarzschild
| Kerr
|-
! Charged (Q ≠ 0)
| Reissner–Nordström
| Kerr–Newman
|-
|}
According to the Kerr metric, a rotating body should exhibit frame-dragging (also known as Lense–Thirring precession), a distinctive prediction of general relativity. The first measurement of this frame dragging effect was done in 2011 by the Gravity Probe B experiment. Roughly speaking, this effect predicts that objects coming close to a rotating mass will be entrained to participate in its rotation, not because of any applied force or torque that can be felt, but rather because of the swirling curvature of spacetime itself associated with rotating bodies. In the case of a rotating black hole, at close enough distances, all objects – even light – must rotate with the black hole; the region where this holds is called the ergosphere. The light from distant sources can travel around the event horizon several times (if close enough), creating multiple images of the same object. To a distant viewer, the apparent perpendicular distance between successive images decreases by a factor of e^(2π) (about 500). However, for fast spinning black holes the spacing between successive images is smaller. Rotating black holes have surfaces where the metric seems to have apparent singularities; the size and shape of these surfaces depends on the black hole's mass and angular momentum. The outer surface encloses the ergosphere and has a shape similar to a flattened sphere. The inner surface marks the event horizon; objects passing into the interior of this horizon can never again communicate with the world outside that horizon. However, neither surface is a true singularity, since their apparent singularity can be eliminated in a different coordinate system. A similar situation obtains when considering the Schwarzschild metric, which also appears to result in a singularity at r = r_s, dividing the space above and below r_s into two disconnected patches; using a different coordinate transformation one can then relate the extended external patch to the inner patch (see Kruskal–Szekeres coordinates) – such a coordinate transformation eliminates the apparent singularity where the inner and outer surfaces meet.
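The two-parameter table above amounts to a simple decision rule. A minimal sketch (the function name is illustrative):

def metric_family(J: float, Q: float) -> str:
    """Classify a stationary black-hole solution by its spin angular
    momentum J and electric charge Q, following the table above."""
    if J == 0 and Q == 0:
        return "Schwarzschild"
    if J == 0:
        return "Reissner-Nordström"
    if Q == 0:
        return "Kerr"
    return "Kerr-Newman"

print(metric_family(J=0.7, Q=0.0))  # Kerr
print(metric_family(J=0.0, Q=0.3))  # Reissner-Nordström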
Objects between these two surfaces must co-rotate with the rotating black hole, as noted above; this feature can in principle be used to extract energy from a rotating black hole, up to its invariant mass energy, Mc². The LIGO experiment that first detected gravitational waves, announced in 2016, also provided the first direct observation of a pair of Kerr black holes. Metric The Kerr metric is commonly expressed in one of two forms, the Boyer–Lindquist form and the Kerr–Schild form. It can be readily derived from the Schwarzschild metric, using the Newman–Janis algorithm via the Newman–Penrose formalism (also known as the spin-coefficient formalism), the Ernst equation, or an ellipsoid coordinate transformation. Boyer–Lindquist coordinates The Kerr metric describes the geometry of spacetime in the vicinity of a mass M rotating with angular momentum J. The metric (or equivalently its line element for proper time) in Boyer–Lindquist coordinates is ds² = (1 − r_s r/Σ) c² dt² − (Σ/Δ) dr² − Σ dθ² − (r² + a² + r_s r a² sin²θ/Σ) sin²θ dφ² + (2 r_s r a sin²θ/Σ) c dt dφ, where the coordinates (r, θ, φ) are standard oblate spheroidal coordinates, which are equivalent to the cartesian coordinates x = √(r² + a²) sinθ cosφ, y = √(r² + a²) sinθ sinφ, z = r cosθ, where r_s = 2GM/c² is the Schwarzschild radius and where, for brevity, the length scales a, Σ, and Δ have been introduced as a = J/(Mc), Σ = r² + a² cos²θ, and Δ = r² − r_s r + a². A key feature to note in the above metric is the cross-term dt dφ. This implies that there is coupling between time and motion in the plane of rotation that disappears when the black hole's angular momentum goes to zero. In the non-relativistic limit where M (or, equivalently, r_s) goes to zero, the Kerr metric becomes the orthogonal metric for the oblate spheroidal coordinates. Kerr–Schild coordinates The Kerr metric can be expressed in "Kerr–Schild" form, using a particular set of Cartesian coordinates, as g_μν = η_μν + f k_μ k_ν, with f = 2GMr³/(r⁴ + a²z²) and the 3-vector k with components k_x = (rx + ay)/(r² + a²), k_y = (ry − ax)/(r² + a²), k_z = z/r. These solutions were proposed by Kerr and Schild in 1965. Notice that k is a unit 3-vector, making the 4-vector (1, k) a null vector, with respect to both g and η. Here M is the constant mass of the spinning object, η is the Minkowski tensor, and a is a constant rotational parameter of the spinning object. It is understood that the vector is directed along the positive z-axis. The quantity r is not the radius, but rather is implicitly defined by 1 = (x² + y²)/(r² + a²) + z²/r². Notice that the quantity r becomes the usual radius R when the rotational parameter a approaches zero. In this form of solution, units are selected so that the speed of light is unity (c = 1). At large distances from the source (R ≫ a), these equations reduce to the Eddington–Finkelstein form of the Schwarzschild metric. In the Kerr–Schild form of the Kerr metric, the determinant of the metric tensor is everywhere equal to negative one, even near the source. Soliton coordinates As the Kerr metric (along with the Kerr–NUT metric) is axially symmetric, it can be cast into a form to which the Belinski–Zakharov transform can be applied. This implies that the Kerr black hole has the form of a gravitational soliton. Mass of rotational energy If the complete rotational energy of a black hole is extracted, for example with the Penrose process, the remaining mass cannot shrink below the irreducible mass. Therefore, if a black hole rotates with the spin a = M, its total mass-equivalent M is higher by a factor of √2 in comparison with a corresponding Schwarzschild black hole where M is equal to M_irr. The reason for this is that in order to get a static body to spin, energy needs to be applied to the system. Because of the mass–energy equivalence this energy also has a mass-equivalent, which adds to the total mass–energy of the system, M.
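The implicit definition of r just given reduces to a quadratic in r², so it can be solved in closed form. The following sketch is an illustrative helper (the function name is hypothetical; geometric units are assumed) that recovers r from the Cartesian Kerr–Schild coordinates:

import math

def kerr_schild_r(x, y, z, a):
    """Solve 1 = (x^2 + y^2)/(r^2 + a^2) + z^2/r^2 for r, i.e. the
    positive root of r^4 - (R^2 - a^2)*r^2 - a^2*z^2 = 0, where
    R^2 = x^2 + y^2 + z^2."""
    b = x * x + y * y + z * z - a * a
    r_squared = 0.5 * (b + math.sqrt(b * b + 4.0 * a * a * z * z))
    return math.sqrt(r_squared)

# Far from the source (R >> a) the quantity r approaches the usual radius R:
print(kerr_schild_r(100.0, 0.0, 0.0, a=0.9))  # ~99.996
print(kerr_schild_r(3.0, 4.0, 0.0, a=0.0))    # exactly 5.0 when a = 0

The second call checks the statement in the text that r becomes the ordinary radius R when the rotational parameter vanishes.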
The total mass equivalent (the gravitating mass) of the body (including its rotational energy) and its irreducible mass are related by M² = M_irr² + J²/(4M_irr²) (in units with G = c = 1). Wave operator Since even a direct check on the Kerr metric involves cumbersome calculations, the contravariant components g^μν of the metric tensor in Boyer–Lindquist coordinates are most compactly encoded in the expression for the square of the four-gradient operator, g^μν ∂_μ ∂_ν. Frame dragging We may rewrite the Kerr metric in the form of a co-rotating reference frame that is rotating with angular speed Ω depending on both the radius r and the colatitude θ, Ω = r_s a r c / (Σ(r² + a²) + r_s a² r sin²θ); the limiting value of Ω on the outer event horizon defines the angular velocity of the Killing horizon. Thus, an inertial reference frame is entrained by the rotating central mass to participate in the latter's rotation; this is called frame-dragging, and has been tested experimentally. Qualitatively, frame-dragging can be viewed as the gravitational analog of electromagnetic induction. An "ice skater", in orbit over the equator and rotationally at rest with respect to the stars, extends her arms. The arm extended toward the black hole will be torqued spinward. The arm extended away from the black hole will be torqued anti-spinward. She will therefore be rotationally sped up, in a counter-rotating sense to the black hole. This is the opposite of what happens in everyday experience. If she is already rotating at a certain speed when she extends her arms, inertial effects and frame-dragging effects will balance and her spin will not change. Due to the equivalence principle, gravitational effects are locally indistinguishable from inertial effects, so this rotation rate, at which when she extends her arms nothing happens, is her local reference for non-rotation. This frame is rotating with respect to the fixed stars and counter-rotating with respect to the black hole. A useful metaphor is a planetary gear system with the black hole being the sun gear, the ice skater being a planetary gear and the outside universe being the ring gear. This can also be interpreted through Mach's principle. Important surfaces There are several important surfaces in the Kerr metric. The inner surface corresponds to an event horizon similar to that observed in the Schwarzschild metric; this occurs where the purely radial component g_rr of the metric goes to infinity. Solving the quadratic equation Δ = 0 yields the solution r_H± = (r_s ± √(r_s² − 4a²))/2, which in natural units (that give G = M = c = 1) simplifies to r_H± = 1 ± √(1 − a²). While in the Schwarzschild metric the event horizon is also the place where the purely temporal component g_tt of the metric changes sign from positive to negative, in the Kerr metric that happens at a different distance. Again solving a quadratic equation, g_tt = 0, yields the solution r_E± = (r_s ± √(r_s² − 4a² cos²θ))/2, or in natural units r_E± = 1 ± √(1 − a² cos²θ). Due to the cos²θ term in the square root, this outer surface resembles a flattened sphere that touches the inner surface at the poles of the rotation axis, where the colatitude θ equals 0 or π; the space between these two surfaces is called the ergosphere. Within this volume, the purely temporal component g_tt is negative, i.e., acts like a purely spatial metric component. Consequently, particles within this ergosphere must co-rotate with the inner mass, if they are to retain their time-like character. A moving particle experiences a positive proper time along its worldline, its path through spacetime. However, this is impossible within the ergosphere, where g_tt is negative, unless the particle is co-rotating around the interior mass M with an angular speed at least of Ω.
Thus, no particle can move in the direction opposite to the central mass's rotation within the ergosphere. As with the event horizon in the Schwarzschild metric, the apparent singularity at r_H is due to the choice of coordinates (i.e., it is a coordinate singularity). In fact, the spacetime can be smoothly continued through it by an appropriate choice of coordinates. In turn, the outer boundary of the ergosphere at r_E is not singular by itself even in Kerr coordinates, due to the non-zero dt dφ term. Ergosphere and the Penrose process A black hole in general is surrounded by a surface, called the event horizon and situated at the Schwarzschild radius for a nonrotating black hole, where the escape velocity is equal to the velocity of light. Within this surface, no observer/particle can maintain itself at a constant radius. It is forced to fall inwards, and so this is sometimes called the static limit. A rotating black hole has the same static limit at its event horizon, but there is an additional surface outside the event horizon named the "ergosurface", given by r = (r_s + √(r_s² − 4a² cos²θ))/2 in Boyer–Lindquist coordinates, which can be intuitively characterized as the sphere where "the rotational velocity of the surrounding space" is dragged along with the velocity of light. Within this sphere the dragging is greater than the speed of light, and any observer/particle is forced to co-rotate. The region outside the event horizon but inside the surface where the rotational velocity is the speed of light, is called the ergosphere (from Greek ergon meaning work). Particles falling within the ergosphere are forced to rotate faster and thereby gain energy. Because they are still outside the event horizon, they may escape the black hole. The net process is that the rotating black hole emits energetic particles at the cost of its own total energy. The possibility of extracting spin energy from a rotating black hole was first proposed by the mathematician Roger Penrose in 1969 and is thus called the Penrose process. Rotating black holes in astrophysics are a potential source of large amounts of energy and are used to explain energetic phenomena, such as gamma-ray bursts. Features The Kerr geometry exhibits many noteworthy features: the maximal analytic extension includes a sequence of asymptotically flat exterior regions, each associated with an ergosphere, stationary limit surfaces, event horizons, Cauchy horizons, closed timelike curves, and a ring-shaped curvature singularity. The geodesic equation can be solved exactly in closed form. In addition to two Killing vector fields (corresponding to time translation and axisymmetry), the Kerr geometry admits a remarkable Killing tensor. There is a pair of principal null congruences (one ingoing and one outgoing). The Weyl tensor is algebraically special, in fact it has Petrov type D. The global structure is known. Topologically, the homotopy type of the Kerr spacetime can be simply characterized as a line with circles attached at each integer point. Note that the inner Kerr geometry is unstable with regard to perturbations in the interior region. This instability means that although the Kerr metric is axis-symmetric, a black hole created through gravitational collapse may not be so. This instability also implies that many of the features of the Kerr geometry described above may not be present inside such a black hole. A surface on which light can orbit a black hole is called a photon sphere. The Kerr solution has infinitely many photon spheres, lying between an inner one and an outer one.
In the nonrotating, Schwarzschild solution, with a = 0, the inner and outer photon spheres degenerate, so that there is only one photon sphere at a single radius. The greater the spin of a black hole, the farther from each other the inner and outer photon spheres move. A beam of light traveling in a direction opposite to the spin of the black hole will circularly orbit the hole at the outer photon sphere. A beam of light traveling in the same direction as the black hole's spin will circularly orbit at the inner photon sphere. Orbiting geodesics with some angular momentum perpendicular to the axis of rotation of the black hole will orbit on photon spheres between these two extremes. Because the spacetime is rotating, such orbits exhibit a precession, since there is a shift in the φ variable after completing one period in the θ variable. Trajectory equations The equations of motion for test particles in the Kerr spacetime are governed by four constants of motion. The first is the invariant mass m of the test particle, defined by the relation m²c² = g^μν p_μ p_ν, where p_μ is the four-momentum of the particle. Furthermore, there are two constants of motion given by the time translation and rotation symmetries of Kerr spacetime: the energy E and the component L_z of the orbital angular momentum parallel to the spin of the black hole, obtained from the time-translation and axial Killing vectors. Using Hamilton–Jacobi theory, Brandon Carter showed that there exists a fourth constant of motion, now referred to as the Carter constant. It is related to the total angular momentum of the particle and is given by Q = p_θ² + cos²θ (a²(m² − E²) + L_z²/sin²θ). Since there are four (independent) constants of motion for four degrees of freedom, the equations of motion for a test particle in Kerr spacetime are integrable. Using these constants of motion, the trajectory equations for a test particle can be written (using natural units of G = c = 1) in the first-order form Σ dr/dλ = ±√R(r), Σ dθ/dλ = ±√Θ(θ), Σ dφ/dλ = −(aE − L_z/sin²θ) + a P(r)/Δ, Σ dt/dλ = −a(aE sin²θ − L_z) + (r² + a²) P(r)/Δ, with P(r) = E(r² + a²) − a L_z, R(r) = P(r)² − Δ[m²r² + (L_z − aE)² + Q], and Θ(θ) = Q − cos²θ (a²(m² − E²) + L_z²/sin²θ), where λ is an affine parameter such that p^α = dx^α/dλ. In particular, when m ≠ 0, the affine parameter λ is related to the proper time τ through λ = τ/m. Because of the frame-dragging effect, a zero-angular-momentum observer (ZAMO) is corotating with the angular velocity Ω given above, which is defined with respect to the bookkeeper's coordinate time t. The local velocity of the test-particle is measured relative to a probe corotating with Ω. The gravitational time-dilation between a ZAMO at fixed r and a stationary observer far away from the mass is dτ/dt = √(ΔΣ/((r² + a²)² − a²Δ sin²θ)). In Cartesian Kerr–Schild coordinates, analogous first-order equations can be written for a photon, involving a quantity analogous to Carter's constant together with an auxiliary quantity built from the conserved momenta. If we set a = 0, the Schwarzschild geodesics are restored. Symmetries The group of isometries of the Kerr metric is the subgroup of the ten-dimensional Poincaré group which takes the two-dimensional locus of the singularity to itself. It retains the time translations (one dimension) and rotations around its axis of rotation (one dimension). Thus it has two dimensions. Like the Poincaré group, it has four connected components: the component of the identity; the component which reverses time and longitude; the component which reflects through the equatorial plane; and the component that does both. In physics, symmetries are typically associated with conserved constants of motion, in accordance with Noether's theorem. As shown above, the geodesic equations have four conserved quantities: one of which comes from the definition of a geodesic, and two of which arise from the time translation and rotation symmetry of the Kerr geometry. The fourth conserved quantity does not arise from a symmetry in the standard sense and is commonly referred to as a hidden symmetry.
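The closed-form surfaces, photon-orbit radii, and frame-dragging quantities described above lend themselves to direct numerical evaluation. A minimal sketch follows (Python, geometric units G = c = 1 assumed; function names are illustrative, and the photon-orbit expression is the standard closed form for equatorial circular photon orbits, quoted here as an assumption since the article does not display it):

import math

def kerr_horizons(M, a):
    """Inner/outer horizon radii: the roots of Delta = r^2 - 2*M*r + a^2."""
    root = math.sqrt(M * M - a * a)  # requires |a| <= M (no naked singularity)
    return M - root, M + root

def ergosurface(M, a, theta):
    """Outer boundary of the ergosphere, where g_tt changes sign."""
    return M + math.sqrt(M * M - a * a * math.cos(theta) ** 2)

def frame_drag_omega(M, a, r, theta):
    """Frame-dragging angular velocity Omega of a zero-angular-momentum
    observer: Omega = r_s*a*r / (Sigma*(r^2 + a^2) + r_s*a^2*r*sin^2(theta))."""
    rs = 2.0 * M
    sigma = r * r + a * a * math.cos(theta) ** 2
    s2 = math.sin(theta) ** 2
    return rs * a * r / (sigma * (r * r + a * a) + rs * a * a * r * s2)

def photon_orbit(M, a, prograde=True):
    """Equatorial circular photon orbit, r = 2M*(1 + cos((2/3)*arccos(s*a/M)))
    with s = -1 for prograde and s = +1 for retrograde orbits."""
    s = -1.0 if prograde else 1.0
    return 2.0 * M * (1.0 + math.cos((2.0 / 3.0) * math.acos(s * a / M)))

M, a = 1.0, 0.9
r_minus, r_plus = kerr_horizons(M, a)
print(r_minus, r_plus)                    # Cauchy and event horizon radii
print(ergosurface(M, a, math.pi / 2))     # equatorial ergosurface at r = 2M
# On the horizon, Omega reduces to Omega_H = a / (r_+^2 + a^2) at every theta:
print(frame_drag_omega(M, a, r_plus, math.pi / 3), a / (r_plus**2 + a**2))
print(photon_orbit(M, a, True), photon_orbit(M, a, False))  # inner < 3M < outer

For a = 0 the two photon-orbit radii coincide at 3M, recovering the single Schwarzschild photon sphere described above.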
Overextreme Kerr solutions The location of the event horizon is determined by the larger root of Δ = 0. When a > GM/c² (that is, when Jc > GM²), there are no (real valued) solutions to this equation, and there is no event horizon. With no event horizons to hide it from the rest of the universe, the black hole ceases to be a black hole and will instead be a naked singularity. Kerr black holes as wormholes Although the Kerr solution appears to be singular at the roots of Δ = 0, these are actually coordinate singularities, and, with an appropriate choice of new coordinates, the Kerr solution can be smoothly extended through the values of r corresponding to these roots. The larger of these roots determines the location of the event horizon, and the smaller determines the location of a Cauchy horizon. A (future-directed, time-like) curve can start in the exterior and pass through the event horizon. Once having passed through the event horizon, the r coordinate now behaves like a time coordinate, so it must decrease until the curve passes through the Cauchy horizon. Anti-universe region The Kerr metric, which describes the spacetime geometry around a rotating black hole, can be extended beyond the inner event horizon. In the Boyer–Lindquist coordinate system (t, r, θ, φ), this inner horizon is located at r₋ = (r_s − √(r_s² − 4a²))/2. As one crosses this inner horizon, the radial coordinate continues to decrease, even becoming negative. The ring singularity and beyond At r = 0 in the equatorial plane, a peculiar feature arises: a ring singularity. Unlike the point singularity in the Schwarzschild metric (a non-rotating black hole), the Kerr singularity is not a single point but a ring lying in the equatorial plane (θ = π/2). This ring singularity acts as a portal to a new region of spacetime. If we avoid the equatorial plane (θ ≠ π/2), we can smoothly continue the r coordinate to negative values. This region with r < 0 is interpreted as an entirely new, asymptotically flat universe, often called the "anti-universe." This anti-universe has some surprising properties: Negative ADM Mass: The anti-universe possesses a negative Arnowitt-Deser-Misner (ADM) mass, which can be thought of as the total mass-energy of the spacetime as measured at infinity. A negative mass is a highly unusual concept in general relativity, and its physical interpretation is still debated. Closed timelike curves and the Cauchy horizon Within the anti-universe, an even stranger phenomenon occurs. The metric component g_φφ, which is related to the azimuthal direction around the ring singularity, can change sign. Specifically, g_φφ is given by: g_φφ = (r² + a² + r_s r a² sin²θ/Σ) sin²θ. When g_φφ becomes negative, the φ coordinate becomes timelike, and a linear combination of the t and φ coordinates becomes spacelike. This leads to the existence of closed timelike curves (CTCs). A CTC is a path through spacetime where an object could travel back to its own past, violating causality. The boundary where g_φφ changes sign and CTCs first appear is defined by the condition g_φφ = 0; this region lies beyond the Cauchy horizon, the smaller root of Δ = 0, located at r₋ = (r_s − √(r_s² − 4a²))/2. The Cauchy horizon acts as a boundary beyond which the familiar notions of cause and effect break down. The presence of CTCs raises fundamental questions about the predictability and consistency of the laws of physics in these extreme regions of spacetime. The anti-universe region of the extended Kerr metric is a fascinating and perplexing theoretical construct. It presents a scenario with a negative mass, reversed time orientation, and the possibility of time travel through closed timelike curves.
While the physical reality of the anti-universe remains uncertain, its study provides valuable insights into the nature of spacetime, gravity, and the limits of our current understanding of the universe. While it is expected that the exterior region of the Kerr solution is stable, and that all rotating black holes will eventually approach a Kerr metric, the interior region of the solution appears to be unstable, much like a pencil balanced on its point. This is related to the idea of cosmic censorship. Relation to other exact solutions The Kerr geometry is a particular example of a stationary axially symmetric vacuum solution to the Einstein field equation. The family of all stationary axially symmetric vacuum solutions to the Einstein field equation are the Ernst vacuums. The Kerr solution is also related to various non-vacuum solutions which model black holes. For example, the Kerr–Newman electrovacuum models a (rotating) black hole endowed with an electric charge, while the Kerr–Vaidya null dust models a (rotating) hole with infalling electromagnetic radiation. The special case a = 0 of the Kerr metric yields the Schwarzschild metric, which models a nonrotating black hole which is static and spherically symmetric, in the Schwarzschild coordinates. (In this case, every Geroch moment but the mass vanishes.) The interior of the Kerr geometry, or rather a portion of it, is locally isometric to the Chandrasekhar–Ferrari CPW vacuum, an example of a colliding plane wave model. This is particularly interesting, because the global structure of this CPW solution is quite different from that of the Kerr geometry, and in principle, an experimenter could hope to study the geometry of (the outer portion of) the Kerr interior by arranging the collision of two suitable gravitational plane waves. Multipole moments Each asymptotically flat Ernst vacuum can be characterized by giving the infinite sequence of relativistic multipole moments, the first two of which can be interpreted as the mass and angular momentum of the source of the field. There are alternative formulations of relativistic multipole moments due to Hansen, Thorne, and Geroch, which turn out to agree with each other. The relativistic multipole moments of the Kerr geometry were computed by Hansen; they turn out to be M_n = M(ia)ⁿ. Thus, the special case a = 0 (the Schwarzschild vacuum) gives the "monopole point source" of general relativity. Weyl multipole moments arise from treating a certain metric function (formally corresponding to the Newtonian gravitational potential) which appears in the Weyl–Papapetrou chart for the Ernst family of all stationary axisymmetric vacuum solutions, using the standard Euclidean scalar multipole moments. They are distinct from the moments computed by Hansen, above. In a sense, the Weyl moments only (indirectly) characterize the "mass distribution" of an isolated source, and they turn out to depend only on the even order relativistic moments. In the case of solutions symmetric across the equatorial plane the odd order Weyl moments vanish. For the Kerr vacuum solutions, the first few Weyl moments can be written in closed form in terms of M and a. In particular, we see that the Schwarzschild vacuum has a nonzero second order Weyl moment, corresponding to the fact that the "Weyl monopole" is the Chazy–Curzon vacuum solution, not the Schwarzschild vacuum solution, which arises from the Newtonian potential of a certain finite length uniform density thin rod.
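The closed-form Hansen moments M_n = M(ia)ⁿ reconstructed above are easy to tabulate. A small sketch (illustrative, geometric units G = c = 1 assumed; the interpretation of real and imaginary parts as mass and current moments follows the standard convention):

def hansen_moments(M, a, n_max=4):
    """First few relativistic (Hansen) multipole moments of the Kerr
    geometry, M_n = M*(i*a)**n. Real parts are the mass moments,
    imaginary parts the current (angular-momentum) moments."""
    return [M * (1j * a) ** n for n in range(n_max + 1)]

for n, m_n in enumerate(hansen_moments(M=1.0, a=0.9)):
    print(n, m_n)
# n = 0 recovers the mass M, n = 1 gives i*J with J = M*a, and
# n = 2 gives the mass quadrupole moment -M*a**2, the Kerr value.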
In weak field general relativity, it is convenient to treat isolated sources using another type of multipole, which generalize the Weyl moments to mass multipole moments and momentum multipole moments, characterizing respectively the distribution of mass and of momentum of the source. These are multi-indexed quantities whose suitably symmetrized and anti-symmetrized parts can be related to the real and imaginary parts of the relativistic moments for the full nonlinear theory in a rather complicated manner. Perez and Moreschi have given an alternative notion of "monopole solutions" by expanding the standard NP tetrad of the Ernst vacuums in powers of 1/r (where r is the radial coordinate in the Weyl–Papapetrou chart). According to this formulation: the isolated mass monopole source with zero angular momentum is the Schwarzschild vacuum family (one parameter), the isolated mass monopole source with radial angular momentum is the Taub–NUT vacuum family (two parameters; not quite asymptotically flat), the isolated mass monopole source with axial angular momentum is the Kerr vacuum family (two parameters). In this sense, the Kerr vacuums are the simplest stationary axisymmetric asymptotically flat vacuum solutions in general relativity. Open problems The Kerr geometry is often used as a model of a rotating black hole, but if the solution is held to be valid only outside some compact region (subject to certain restrictions), in principle, it should be able to be used as an exterior solution to model the gravitational field around a rotating massive object other than a black hole, such as a neutron star or the Earth. This works out very nicely for the non-rotating case, where the Schwarzschild vacuum exterior can be matched to a Schwarzschild fluid interior, and indeed to more general static spherically symmetric perfect fluid solutions. However, the problem of finding a rotating perfect-fluid interior which can be matched to a Kerr exterior, or indeed to any asymptotically flat vacuum exterior solution, has proven very difficult. In particular, the Wahlquist fluid, which was once thought to be a candidate for matching to a Kerr exterior, is now known not to admit any such matching. At present, it seems that only approximate solutions modeling slowly rotating fluid balls are known. (These are the relativistic analog of oblate spheroidal balls with nonzero mass and angular momentum but vanishing higher multipole moments.) However, the exterior of the Neugebauer–Meinel disk, an exact dust solution which models a rotating thin disk, approaches in a limiting case the Kerr geometry. Physical thin-disk solutions obtained by identifying parts of the Kerr spacetime are also known. See also Schwarzschild metric Kerr–Newman metric Kerr–Newman–de–Sitter metric Reissner–Nordström metric Hartle–Thorne metric Spin-flip Rotating black hole Footnotes References Further reading See chapter 19 for a readable introduction at the advanced undergraduate level. See chapters 6–10 for a very thorough study at the advanced graduate level. See chapter 13 for the Chandrasekhar/Ferrari CPW model. See chapter 7. Characterization of three standard families of vacuum solutions as noted above. Gives the relativistic multipole moments for the Ernst vacuums (plus the electromagnetic and gravitational relativistic multipole moments for the charged generalization). "... This note is meant to be a guide for those readers who wish to verify all the details [of the derivation of the Kerr solution] ..."
Exact solutions in general relativity Black holes Metric tensors
Kerr metric
[ "Physics", "Astronomy", "Mathematics", "Engineering" ]
5,771
[ "Exact solutions in general relativity", "Black holes", "Physical phenomena", "Tensors", "Physical quantities", "Unsolved problems in physics", "Mathematical objects", "Astrophysics", "Equations", "Metric tensors", "Density", "Stellar phenomena", "Astronomical objects" ]
457,036
https://en.wikipedia.org/wiki/Sulfur%20hexafluoride
Sulfur hexafluoride or sulphur hexafluoride (British spelling) is an inorganic compound with the formula SF6. It is a colorless, odorless, non-flammable, and non-toxic gas. SF6 has an octahedral geometry, consisting of six fluorine atoms attached to a central sulfur atom. It is a hypervalent molecule. Typical for a nonpolar gas, SF6 is poorly soluble in water but quite soluble in nonpolar organic solvents. It has a density of 6.12 g/L at sea level conditions, considerably higher than the density of air (1.225 g/L). It is generally stored and transported as a liquefied compressed gas. SF6 has a 23,500 times greater global warming potential (GWP) than CO2 as a greenhouse gas (over a 100-year time-frame) but exists in relatively minor concentrations in the atmosphere. Its concentration in Earth's troposphere reached 11.50 parts per trillion (ppt) in October 2023, rising at 0.37 ppt/year. The increase since 1980 is driven in large part by the expanding electric power sector, including fugitive emissions from banks of gas contained in its medium- and high-voltage switchgear. Uses in magnesium, aluminium, and electronics manufacturing also hastened atmospheric growth. The 1997 Kyoto Protocol, which came into force in 2005, is supposed to limit emissions of this gas. Its treatment under carbon emission trading schemes has been somewhat nebulous, and in some countries this has led to entire industries becoming defunct. Synthesis and reactions Sulfur hexafluoride on Earth exists primarily as a synthetic industrial gas, but has also been found to occur naturally. SF6 can be prepared from the elements through exposure of S8 to F2. This was the method used by the discoverers Henri Moissan and Paul Lebeau in 1901. Some other sulfur fluorides are cogenerated, but these are removed by heating the mixture to disproportionate any S2F10 (which is highly toxic) and then scrubbing the product with NaOH to destroy remaining SF4. Alternatively, using bromine, sulfur hexafluoride can be synthesized from SF4 and CoF3 at lower temperatures (e.g. 100 °C), as follows: SF4 + 2 CoF3 → SF6 + 2 CoF2. There is virtually no reaction chemistry for SF6. A main contribution to the inertness of SF6 is the steric hindrance of the sulfur atom, whereas its heavier group 16 counterparts, such as SeF6, are more reactive than SF6 as a result of less steric hindrance. It does not react with molten sodium below its boiling point, but reacts exothermically with lithium. As a result of its inertness, SF6 has an atmospheric lifetime of around 3200 years, and no significant environmental sinks other than the ocean. Applications By 2000, the electrical power industry was estimated to use about 80% of the sulfur hexafluoride produced, mostly as a gaseous dielectric medium. Other main uses as of 2015 included a silicon etchant for semiconductor manufacturing, and an inert gas for the casting of magnesium. Dielectric medium SF6 is used in the electrical industry as a gaseous dielectric medium for high-voltage sulfur hexafluoride circuit breakers, switchgear, and other electrical equipment, often replacing oil-filled circuit breakers (OCBs) that can contain harmful polychlorinated biphenyls (PCBs). SF6 gas under pressure is used as an insulator in gas insulated switchgear (GIS) because it has a much higher dielectric strength than air or dry nitrogen. The high dielectric strength is a result of the gas's high electronegativity and density. This property makes it possible to significantly reduce the size of electrical gear.
This makes GIS more suitable for certain purposes such as indoor placement, as opposed to air-insulated electrical gear, which takes up considerably more room. Gas-insulated electrical gear is also more resistant to the effects of pollution and climate, as well as being more reliable in long-term operation because of its controlled operating environment. Exposure to an arc chemically breaks down SF6, though most of the decomposition products tend to quickly re-form SF6, a process termed "self-healing". Arcing or corona can produce disulfur decafluoride (S2F10), a highly toxic gas, with toxicity similar to phosgene. S2F10 was considered a potential chemical warfare agent in World War II because it does not produce lacrimation or skin irritation, thus providing little warning of exposure. SF6 is also commonly encountered as a high voltage dielectric in the high voltage supplies of particle accelerators, such as Van de Graaff generators and Pelletrons, and high voltage transmission electron microscopes. Alternatives to SF6 as a dielectric gas include several fluoroketones. Compact GIS technology that combines vacuum switching with clean air insulation has been introduced for a subset of applications up to 420 kV. Medical use SF6 is used to provide a tamponade or plug of a retinal hole in retinal detachment repair operations in the form of a gas bubble. It is inert in the vitreous chamber. The bubble initially doubles its volume in 36 hours due to oxygen and nitrogen entering it, before being absorbed in the blood in 10–14 days. SF6 is used as a contrast agent for ultrasound imaging. Sulfur hexafluoride microbubbles are administered in solution through injection into a peripheral vein. These microbubbles enhance the visibility of blood vessels to ultrasound. This application has been used to examine the vascularity of tumours. It remains visible in the blood for 3 to 8 minutes, and is exhaled by the lungs. Tracer compound Sulfur hexafluoride was the tracer gas used in the first roadway air dispersion model calibration; this research program was sponsored by the U.S. Environmental Protection Agency and conducted in Sunnyvale, California on U.S. Highway 101. Gaseous SF6 is used as a tracer gas in short-term experiments of ventilation efficiency in buildings and indoor enclosures, and for determining infiltration rates. Two major factors recommend its use: its concentration can be measured with satisfactory accuracy at very low concentrations, and the Earth's atmosphere has a negligible concentration of SF6. Sulfur hexafluoride was used as a non-toxic test gas in an experiment at St John's Wood tube station in London, United Kingdom on 25 March 2007. The gas was released throughout the station, and monitored as it drifted around. The purpose of the experiment, which had been announced earlier in March by the Secretary of State for Transport Douglas Alexander, was to investigate how toxic gas might spread throughout London Underground stations and buildings during a terrorist attack. Sulfur hexafluoride is also routinely used as a tracer gas in laboratory fume hood containment testing. The gas is used in the final stage of ASHRAE 110 fume hood qualification. A plume of gas is generated inside of the fume hood and a battery of tests are performed while a gas analyzer arranged outside of the hood samples for SF6 to verify the containment properties of the fume hood. It has been used successfully as a tracer in oceanography to study diapycnal mixing and air-sea gas exchange.
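Several of the behaviors described above and below — the pooling of the gas in low-lying areas, the "invisible water" demonstration, and the voice-deepening effect — rest on SF6's unusually high gas density and low speed of sound. A quick ideal-gas estimate reproduces the figures quoted in this article; the sketch below is illustrative, with the reference temperatures and heat-capacity ratios taken as assumptions:

import math

R = 8.314462618  # molar gas constant, J/(mol K)

def gas_density(molar_mass_kg, T=288.15, p=101325.0):
    """Ideal-gas density rho = p*M/(R*T) in kg/m^3 (numerically g/L).
    Defaults assume 15 degrees C and 1 atm for 'sea level conditions'."""
    return p * molar_mass_kg / (R * T)

def speed_of_sound(gamma, molar_mass_kg, T=293.15):
    """Ideal-gas speed of sound v = sqrt(gamma*R*T/M) in m/s."""
    return math.sqrt(gamma * R * T / molar_mass_kg)

print(gas_density(0.146055))            # SF6: ~6.2 g/L (article quotes 6.12)
print(gas_density(0.0289647))           # air: ~1.225 g/L, matching the article
# The heat-capacity ratios gamma below are approximate assumed values:
print(speed_of_sound(1.09, 0.146055))   # SF6: ~135 m/s (article quotes 134)
print(speed_of_sound(1.40, 0.0289647))  # air: ~343 m/s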
Other uses The magnesium industry uses SF6 as an inert "cover gas" to prevent oxidation during casting, and other processes including smelting. Although once the largest user, the industry's consumption has declined greatly with capture and recycling. Insulated glazing windows have used it as a filler to improve their thermal and acoustic insulation performance. SF6 plasma is used in the semiconductor industry as an etchant in processes such as deep reactive-ion etching. A small fraction of the SF6 breaks down in the plasma into sulfur and fluorine, with the fluorine ions performing a chemical reaction with silicon. Tires filled with it take longer to deflate from diffusion through rubber due to the larger molecule size. Nike likewise used it to obtain a patent and to fill the cushion bags in all of their "Air"-branded shoes from 1992 to 2006. 277 tons were used at the peak in 1997. The United States Navy's Mark 50 torpedo closed Rankine-cycle propulsion system is powered by sulfur hexafluoride in an exothermic reaction with solid lithium. Waveguides in high-power microwave systems are pressurized with it. The gas electrically insulates the waveguide, preventing internal arcing. Electrostatic loudspeakers have used it because of its high dielectric strength and high molecular weight. Disulfur decafluoride, a potential chemical warfare agent, is produced with it as a feedstock. For entertainment purposes, when breathed, SF6 causes the voice to become significantly deeper, due to its density being so much higher than that of air. This phenomenon is related to the more well-known effect of breathing low-density helium, which causes someone's voice to become much higher. Both of these effects should only be attempted with caution, as these gases displace oxygen that the lungs are attempting to extract from the air. Sulfur hexafluoride is also mildly anesthetic. For science demonstrations and magic it serves as "invisible water", since a light foil boat can be floated on a tank of the gas, as can an air-filled balloon. It is used for benchmark and calibration measurements in Associative and Dissociative Electron Attachment (DEA) experiments. Greenhouse gas According to the Intergovernmental Panel on Climate Change, SF6 is the most potent greenhouse gas, with a global warming potential 23,900 times that of CO2 when compared over a 100-year period. Sulfur hexafluoride is inert in the troposphere and stratosphere and is extremely long-lived, with an estimated atmospheric lifetime of 800–3,200 years. Measurements of SF6 show that its global average mixing ratio has increased from a steady base of about 54 parts per quadrillion prior to industrialization, to over 11.5 parts per trillion (ppt) as of October 2023, and is increasing by about 0.4 ppt (3.5%) per year. Average global SF6 concentrations increased by about 7% per year during the 1980s and 1990s, mostly as the result of its use in magnesium production, and by electrical utilities and electronics manufacturers. Given the small amounts of SF6 released compared to carbon dioxide, its overall individual contribution to global warming is estimated to be less than 0.2%; however, the collective contribution of it and similar man-made halogenated gases has reached about 10% as of 2020. Alternatives are being tested. In Europe, SF6 falls under the F-Gas directive, which bans or controls its use for several applications. Since 1 January 2006, SF6 has been banned as a tracer gas and in all applications except high-voltage switchgear.
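Because regulation and emissions-trading schemes express SF6 releases in CO2-equivalent terms, the conversion is a one-line calculation. A sketch (the default GWP matches the 100-year figure quoted in this article's lead; the IPCC figure cited just above is 23,900, and other assessments differ somewhat):

def co2_equivalent_tonnes(sf6_kg, gwp_100yr=23500):
    """CO2-equivalent mass in tonnes for a release of sf6_kg kilograms
    of SF6, over a 100-year horizon."""
    return sf6_kg * gwp_100yr / 1000.0

# A 1 kg fugitive leak from switchgear is equivalent to ~23.5 t of CO2:
print(co2_equivalent_tonnes(1.0))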
It was reported in 2013 that a three-year effort by the United States Department of Energy to identify and fix leaks at its laboratories in the United States, such as the Princeton Plasma Physics Laboratory, where the gas is used as a high voltage insulator, had been productive in cutting annual leaks. This was done by comparing purchases with inventory, assuming the difference was leaked, then locating and fixing the leaks. Physiological effects and precautions Sulfur hexafluoride is a nontoxic gas, but by displacing oxygen in the lungs, it also carries the risk of asphyxia if too much is inhaled. Since it is more dense than air, a substantial quantity of gas, when released, will settle in low-lying areas and present a significant risk of asphyxiation if the area is entered. That is particularly relevant to its use as an insulator in electrical equipment, since workers may be in trenches or pits below equipment containing SF6. As with all gases, the density of SF6 affects the resonance frequencies of the vocal tract, thus changing drastically the vocal sound qualities, or timbre, of those who inhale it. It does not affect the vibrations of the vocal folds. The density of sulfur hexafluoride is relatively high at room temperature and pressure due to the gas's large molar mass. Unlike helium, which has a molar mass of about 4 g/mol and pitches the voice up, SF6 has a molar mass of about 146 g/mol, and the speed of sound through the gas is about 134 m/s at room temperature, pitching the voice down. For comparison, the molar mass of air, which is about 80% nitrogen and 20% oxygen, is approximately 29 g/mol, which leads to a speed of sound of 343 m/s. Sulfur hexafluoride has an anesthetic potency slightly lower than nitrous oxide; it is classified as a mild anesthetic. See also Selenium hexafluoride Tellurium hexafluoride Uranium hexafluoride Hypervalent molecule Halocarbon—another group of major greenhouse gases Trifluoromethylsulfur pentafluoride, a similar gas References Further reading SF6 Reduction Partnership for Electric Power Systems External links Fluoride and compounds fact sheet— National Pollutant Inventory High GWP Gases and Climate Change from the U.S. EPA website International Conference on SF6 and the Environment (related archive) CDC - NIOSH Pocket Guide to Chemical Hazards Sulfur fluorides Dielectric gases Greenhouse gases Octahedral compounds Hexafluorides Industrial gases Refrigerants Hypervalent molecules General anesthetics Ultrasound contrast agents
Sulfur hexafluoride
[ "Physics", "Chemistry", "Environmental_science" ]
2,777
[ "Molecules", "Environmental chemistry", "Hypervalent molecules", "Industrial gases", "Chemical process engineering", "Greenhouse gases", "Matter" ]
457,064
https://en.wikipedia.org/wiki/Galois%20extension
In mathematics, a Galois extension is an algebraic field extension E/F that is normal and separable; or equivalently, E/F is algebraic, and the field fixed by the automorphism group Aut(E/F) is precisely the base field F. The significance of being a Galois extension is that the extension has a Galois group and obeys the fundamental theorem of Galois theory. A result of Emil Artin allows one to construct Galois extensions as follows: If E is a given field, and G is a finite group of automorphisms of E with fixed field F, then E/F is a Galois extension. The property of an extension being Galois behaves well with respect to field composition and intersection. Characterization of Galois extensions An important theorem of Emil Artin states that for a finite extension E/F, each of the following statements is equivalent to the statement that E/F is Galois: E/F is a normal extension and a separable extension. E is a splitting field of a separable polynomial with coefficients in F. |Aut(E/F)| = [E:F], that is, the number of automorphisms equals the degree of the extension. Other equivalent statements are: Every irreducible polynomial in F[x] with at least one root in E splits over E and is separable. |Aut(E/F)| ≥ [E:F], that is, the number of automorphisms is at least the degree of the extension. F is the fixed field of a subgroup of Aut(E). F is the fixed field of Aut(E/F). There is a one-to-one correspondence between subfields of E containing F and subgroups of Aut(E/F). An infinite field extension E/F is Galois if and only if E is the union of finite Galois subextensions E_i/F indexed by an (infinite) index set I, i.e. E = ⋃ E_i, and the Galois group is an inverse limit Gal(E/F) = lim← Gal(E_i/F), where the inverse system is ordered by field inclusion E_i ⊂ E_j. Examples There are two basic ways to construct examples of Galois extensions. Take any field K, any finite subgroup G of Aut(K), and let F be the fixed field; then K/F is Galois. Take any field K, any separable polynomial p in K[x], and let E be its splitting field; then E/K is Galois. Adjoining to the rational number field the square root of 2 gives a Galois extension, while adjoining the cubic root of 2 gives a non-Galois extension. Both these extensions are separable, because they have characteristic zero. The first of them is the splitting field of x² − 2; the second has normal closure that includes the complex cubic roots of unity, and so is not a splitting field. In fact, it has no automorphism other than the identity, because it is contained in the real numbers and x³ − 2 has just one real root. For more detailed examples, see the page on the fundamental theorem of Galois theory. An algebraic closure K̄ of an arbitrary field K is Galois over K if and only if K is a perfect field. Notes Citations References Further reading (Galois' original paper, with extensive background and commentary.) (Chapter 4 gives an introduction to the field-theoretic approach to Galois theory.) (This book introduces the reader to the Galois theory of Grothendieck, and some generalisations, leading to Galois groupoids.) . English translation (of 2nd revised edition): (Later republished in English by Springer under the title "Algebra".) Galois theory Algebraic number theory Field extensions
Galois extension
[ "Mathematics" ]
644
[ "Algebraic number theory", "Number theory" ]
457,314
https://en.wikipedia.org/wiki/Merck%20Index
The Merck Index is an encyclopedia of chemicals, drugs and biologicals with over 10,000 monographs on single substances or groups of related compounds published online by the Royal Society of Chemistry. History The first edition of the Merck's Index was published in 1889 by the German chemical company Emanuel Merck and was primarily used as a sales catalog for Merck's growing list of chemicals it sold. The American subsidiary was established two years later and continued to publish it. During World War I the US government seized Merck's US operations and made it a separate American "Merck" company that continued to publish the Merck Index. In 2012 the Merck Index was licensed to the Royal Society of Chemistry. An online version of The Merck Index, including historic records and new updates not in the print edition, is commonly available through research libraries. It also includes an appendix with monographs on organic named reactions. The 15th edition was published in April 2013. Monographs in The Merck Index typically contain: a CAS registry number synonyms of the substance, such as trivial names and International Union of Pure and Applied Chemistry nomenclature a chemical formula molecular weight percent composition a structural formula a description of the substance's appearance melting point and boiling point solubility in solvents commonly used in the laboratory citations to other literature regarding the compound's chemical synthesis a therapeutic category, if applicable caution and hazard information Editions 1st (1889) – first edition released by E. Merck (Germany) 2nd (1896) – second edition released by Merck's American subsidiary and added medicines from the United States Pharmacopeia and National Formulary 3rd (1907) 4th (1930) 5th (1940) 6th (1952) 7th (1960) – first named editor is Merck chemist Paul G. Stecher 8th (1968) – editor Paul G. Stecher 9th (1976) – editor Martha Windholz, a Merck chemist 10th (1983), – editor Martha Windholz. In 1984 the Index became available online as well as printed. 11th (1989), 12th (1996), – editor Susan Budavari, a Merck chemist 13th (2001), – editor Maryadele O'Neil, senior editor at Merck 14th (2006), – editor Maryadele O'Neil 15th (2013), – editor Maryadele O'Neil; first edition under the Royal Society of Chemistry See also List of academic databases and search engines The Merck Manual of Diagnosis and Therapy The Merck Veterinary Manual Home Health and Pet Health References External links Merck Group 1889 non-fiction books 1896 non-fiction books 1907 non-fiction books 1930 non-fiction books 1940 non-fiction books 1952 non-fiction books 1960 non-fiction books 1968 non-fiction books 1976 non-fiction books 1983 non-fiction books 1989 non-fiction books 1996 non-fiction books 2001 non-fiction books 2006 non-fiction books Encyclopedias of science 1889 in science Royal Society of Chemistry Chemical databases Biological databases Eponymous indices
Merck Index
[ "Chemistry", "Biology" ]
608
[ "Bioinformatics", "Chemical databases", "Biological databases", "Royal Society of Chemistry" ]
457,579
https://en.wikipedia.org/wiki/Solid%20modeling
Solid modeling (or solid modelling) is a consistent set of principles for mathematical and computer modeling of three-dimensional shapes (solids). Solid modeling is distinguished within the broader related areas of geometric modeling and computer graphics, such as 3D modeling, by its emphasis on physical fidelity. Together, the principles of geometric and solid modeling form the foundation of 3D computer-aided design, and in general, support the creation, exchange, visualization, animation, interrogation, and annotation of digital models of physical objects. Overview The use of solid modeling techniques allows for the automation of several difficult engineering calculations that are carried out as part of the design process. Simulation, planning, and verification of processes such as machining and assembly were among the main catalysts for the development of solid modeling. More recently, the range of supported manufacturing applications has been greatly expanded to include sheet metal manufacturing, injection molding, welding, pipe routing, etc. Beyond traditional manufacturing, solid modeling techniques serve as the foundation for rapid prototyping, digital data archival and reverse engineering by reconstructing solids from sampled points on physical objects, mechanical analysis using finite elements, motion planning and NC path verification, kinematic and dynamic analysis of mechanisms, and so on. A central problem in all these applications is the ability to effectively represent and manipulate three-dimensional geometry in a fashion that is consistent with the physical behavior of real artifacts. Solid modeling research and development has effectively addressed many of these issues, and continues to be a central focus of computer-aided engineering. Mathematical foundations The notion of solid modeling as practised today relies on the specific need for informational completeness in mechanical geometric modeling systems, in the sense that any computer model should support all geometric queries that may be asked of its corresponding physical object. The requirement implicitly recognizes the possibility of several computer representations of the same physical object as long as any two such representations are consistent. It is impossible to computationally verify informational completeness of a representation unless the notion of a physical object is defined in terms of computable mathematical properties and independent of any particular representation. Such reasoning led to the development of the modeling paradigm that has shaped the field of solid modeling as we know it today. All manufactured components have finite size and well behaved boundaries, so initially the focus was on mathematically modeling rigid parts made of homogeneous isotropic material that could be added or removed. These postulated properties can be translated into properties of regions, subsets of three-dimensional Euclidean space. The two common approaches to define "solidity" rely on point-set topology and algebraic topology respectively. Both models specify how solids can be built from simple pieces or cells. According to the continuum point-set model of solidity, all the points of any X ⊂ ℝ³ can be classified according to their neighborhoods with respect to X as interior, exterior, or boundary points. Assuming ℝ³ is endowed with the typical Euclidean metric, a neighborhood of a point p ∈ X takes the form of an open ball. 
For X to be considered solid, every neighborhood of any p ∈ X must be consistently three dimensional; points with lower-dimensional neighborhoods indicate a lack of solidity. Dimensional homogeneity of neighborhoods is guaranteed for the class of closed regular sets, defined as sets equal to the closure of their interior. Any X ⊂ ℝ³ can be turned into a closed regular set or "regularized" by taking the closure of its interior, and thus the modeling space of solids is mathematically defined to be the space of closed regular subsets of ℝ³ (by the Heine–Borel theorem it is implied that all solids are compact sets). In addition, solids are required to be closed under the Boolean operations of set union, intersection, and difference (to guarantee solidity after material addition and removal). Applying the standard Boolean operations to closed regular sets may not produce a closed regular set, but this problem can be solved by regularizing the result of applying the standard Boolean operations. The regularized set operations are denoted ∪∗, ∩∗, and −∗. The combinatorial characterization of a set X ⊂ ℝ³ as a solid involves representing X as an orientable cell complex so that the cells provide finite spatial addresses for points in an otherwise innumerable continuum. The class of semi-analytic bounded subsets of Euclidean space is closed under Boolean operations (standard and regularized) and exhibits the additional property that every semi-analytic set can be stratified into a collection of disjoint cells of dimensions 0, 1, 2, 3. A triangulation of a semi-analytic set into a collection of points, line segments, triangular faces, and tetrahedral elements is an example of a stratification that is commonly used. The combinatorial model of solidity is then summarized by saying that in addition to being semi-analytic bounded subsets, solids are three-dimensional topological polyhedra, specifically three-dimensional orientable manifolds with boundary. In particular this implies the Euler characteristic of the combinatorial boundary of the polyhedron is 2. The combinatorial manifold model of solidity also guarantees the boundary of a solid separates space into exactly two components as a consequence of the Jordan–Brouwer separation theorem, thus eliminating sets with non-manifold neighborhoods that are deemed impossible to manufacture. The point-set and combinatorial models of solids are entirely consistent with each other, can be used interchangeably, relying on continuum or combinatorial properties as needed, and can be extended to n dimensions. The key property that facilitates this consistency is that the class of closed regular subsets of ℝⁿ coincides precisely with homogeneously n-dimensional topological polyhedra. Therefore, every n-dimensional solid may be unambiguously represented by its boundary, and the boundary has the combinatorial structure of an (n−1)-dimensional polyhedron having homogeneously (n−1)-dimensional neighborhoods. Solid representation schemes Based on assumed mathematical properties, any scheme of representing solids is a method for capturing information about the class of semi-analytic subsets of Euclidean space. This means all representations are different ways of organizing the same geometric and topological data in the form of a data structure. All representation schemes are organized in terms of a finite number of operations on a set of primitives. 
Therefore, the modeling space of any particular representation is finite, and any single representation scheme may not completely suffice to represent all types of solids. For example, solids defined via combinations of regularized Boolean operations cannot necessarily be represented as the sweep of a primitive moving according to a space trajectory, except in very simple cases. This forces modern geometric modeling systems to maintain several representation schemes of solids and also to facilitate efficient conversion between representation schemes. Below is a list of techniques used to create or represent solid models. Modern modeling software may use a combination of these schemes to represent a solid. Primitive instancing This scheme is based on the notion of families of objects, each member of a family distinguishable from the others by a few parameters. Each object family is called a generic primitive, and individual objects within a family are called primitive instances. For example, a family of bolts is a generic primitive, and a single bolt specified by a particular set of parameters is a primitive instance. The distinguishing characteristic of pure parameterized instancing schemes is the lack of means for combining instances to create new structures which represent new and more complex objects. The other main drawback of this scheme is the difficulty of writing algorithms for computing properties of represented solids. A considerable amount of family-specific information must be built into the algorithms and therefore each generic primitive must be treated as a special case, allowing no uniform overall treatment. Spatial occupancy enumeration This scheme is essentially a list of spatial cells occupied by the solid. The cells, also called voxels, are cubes of a fixed size and are arranged in a fixed spatial grid (other polyhedral arrangements are also possible but cubes are the simplest). Each cell may be represented by the coordinates of a single point, such as the cell's centroid. Usually a specific scanning order is imposed and the corresponding ordered set of coordinates is called a spatial array. Spatial arrays are unambiguous and unique solid representations but are too verbose for use as 'master' or definitional representations. They can, however, represent coarse approximations of parts and can be used to improve the performance of geometric algorithms, especially when used in conjunction with other representations such as constructive solid geometry. Cell decomposition This scheme follows from the combinatoric (algebraic topological) descriptions of solids detailed above. A solid can be represented by its decomposition into several cells. Spatial occupancy enumeration schemes are a particular case of cell decompositions where all the cells are cubical and lie in a regular grid. Cell decompositions provide convenient ways for computing certain topological properties of solids, such as connectedness (number of pieces) and genus (number of holes). Cell decompositions in the form of triangulations are the representations used in 3D finite elements for the numerical solution of partial differential equations. Other cell decompositions such as a Whitney regular stratification or Morse decompositions may be used for applications in robot motion planning. Surface mesh modeling Similar to boundary representation, the surface of the object is represented. However, rather than complex data structures and NURBS, a simple surface mesh of vertices and edges is used. 
Surface meshes can be structured (as in triangular meshes in STL files or quad meshes with horizontal and vertical rings of quadrilaterals), or unstructured, with randomly grouped triangles and higher level polygons. Constructive solid geometry Constructive solid geometry (CSG) is a family of schemes for representing rigid solids as Boolean constructions or combinations of primitives via the regularized set operations discussed above. CSG and boundary representations are currently the most important representation schemes for solids. CSG representations take the form of ordered binary trees where non-terminal nodes represent either rigid transformations (orientation preserving isometries) or regularized set operations. Terminal nodes are primitive leaves that represent closed regular sets. The semantics of CSG representations is clear. Each subtree represents a set resulting from applying the indicated transformations/regularized set operations on the set represented by the primitive leaves of the subtree. CSG representations are particularly useful for capturing design intent in the form of features corresponding to material addition or removal (bosses, holes, pockets, etc.). The attractive properties of CSG include conciseness, guaranteed validity of solids, computationally convenient Boolean algebraic properties, and natural control of a solid's shape in terms of high level parameters defining the solid's primitives and their positions and orientations. The relatively simple data structure and elegant recursive algorithms have further contributed to the popularity of CSG. Sweeping The basic notion embodied in sweeping schemes is simple. A set moving through space may trace or sweep out a volume (a solid) that may be represented by the moving set and its trajectory. Such a representation is important in the context of applications such as detecting the material removed from a cutter as it moves along a specified trajectory, computing dynamic interference of two solids undergoing relative motion, motion planning, and even in computer graphics applications such as tracing the motions of a brush moved on a canvas. Most commercial CAD systems provide (limited) functionality for constructing swept solids, mostly in the form of a two-dimensional cross section moving on a space trajectory transversal to the section. However, current research has shown several approximations of three-dimensional shapes moving across one parameter, and even multi-parameter motions. Implicit representation A very general method of defining a set of points X is to specify a predicate that can be evaluated at any point in space. In other words, X is defined implicitly to consist of all the points that satisfy the condition specified by the predicate. The simplest form of a predicate is a condition on the sign of a real valued function, resulting in the familiar representation of sets by equalities and inequalities. For example, if f is a linear real-valued function of the coordinates, the conditions f = 0, f > 0, and f < 0 represent, respectively, a plane and two open linear halfspaces. More complex functional primitives may be defined by Boolean combinations of simpler predicates. Furthermore, the theory of R-functions allows the conversion of such representations into a single function inequality for any closed semi-analytic set. Such a representation can be converted to a boundary representation using polygonization algorithms, for example, the marching cubes algorithm. 
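As a sketch of the implicit, predicate-based representation just described (and of Boolean combinations in the spirit of CSG), the fragment below tests point membership against composed predicates in Python; the primitive and function names are invented for the illustration, and plain point-membership booleans sidestep the regularization subtleties discussed under the mathematical foundations:

```python
import math

def sphere(cx, cy, cz, r):
    """Implicit primitive: points p with f(p) = |p - c| - r <= 0."""
    return lambda p: math.dist(p, (cx, cy, cz)) - r <= 0.0

def halfspace(a, b, c, d):
    """Points where the linear function a*x + b*y + c*z + d <= 0."""
    return lambda p: a * p[0] + b * p[1] + c * p[2] + d <= 0.0

# Boolean combinations of predicates, playing the role of the set operations.
def union(f, g):        return lambda p: f(p) or g(p)
def intersection(f, g): return lambda p: f(p) and g(p)
def difference(f, g):   return lambda p: f(p) and not g(p)

# A ball with a smaller ball subtracted, clipped to the halfspace z >= 0.
solid = intersection(
    difference(sphere(0, 0, 0, 1.0), sphere(0.5, 0, 0, 0.4)),
    halfspace(0, 0, -1, 0),        # -z <= 0, i.e. z >= 0
)

print(solid((0.0, 0.0, 0.5)))      # True: in the big ball, outside the hole
print(solid((0.5, 0.0, 0.0)))      # False: inside the subtracted ball
print(solid((0.0, 0.0, -0.5)))     # False: below the clipping plane
```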
Parametric and feature-based modeling Features are defined to be parametric shapes associated with attributes such as intrinsic geometric parameters (length, width, depth, etc.), position and orientation, geometric tolerances, material properties, and references to other features. Features also provide access to related production processes and resource models. Thus, features have a semantically higher level than primitive closed regular sets. Features are generally expected to form a basis for linking CAD with downstream manufacturing applications, and also for organizing databases for design data reuse. Parametric feature-based modeling is frequently combined with constructive solid geometry (CSG) to fully describe systems of complex objects in engineering. History of solid modelers The historical development of solid modelers has to be seen in the context of the whole history of CAD, the key milestones being the development of the research system BUILD followed by its commercial spin-off Romulus, which went on to influence the development of Parasolid, ACIS and Solid Modeling Solutions. One of the first CAD developers in the Commonwealth of Independent States (CIS), ASCON, began internal development of its own solid modeler in the 1990s. In November 2012, the mathematical division of ASCON became a separate company, named C3D Labs. It was assigned the task of developing the C3D geometric modeling kernel as a standalone product – the only commercial 3D modeling kernel from Russia. Other contributions came from Mäntylä, with his GWB, and from the GPM project, which contributed, among other things, hybrid modeling techniques at the beginning of the 1980s. This is also when the Programming Language of Solid Modeling PLaSM was conceived at the University of Rome. Computer-aided design The modeling of solids is only the minimum requirement of a CAD system's capabilities. Solid modelers have become commonplace in engineering departments in the last ten years due to faster computers and competitive software pricing. Solid modeling software creates a virtual 3D representation of components for machine design and analysis. A typical graphical user interface includes programmable macros, keyboard shortcuts and dynamic model manipulation. The ability to dynamically re-orient the model, in real-time shaded 3-D, is emphasized and helps the designer maintain a mental 3-D image. A solid part model generally consists of a group of features, added one at a time, until the model is complete. Engineering solid models are built mostly with sketcher-based features: 2-D sketches that are swept along a path to become 3-D. These may be cuts or extrusions, for example. Design work on components is usually done within the context of the whole product using assembly modeling methods. An assembly model incorporates references to individual part models that comprise the product. Another type of modeling technique is 'surfacing' (freeform surface modeling). Here, surfaces are defined, trimmed and merged, and filled to make a solid. The surfaces are usually defined with datum curves in space and a variety of complex commands. Surfacing is more difficult, but better applicable to some manufacturing techniques, like injection molding. Solid models for injection molded parts usually have both surfacing and sketcher-based features. Engineering drawings can be created semi-automatically and reference the solid models. Parametric modeling Parametric modeling uses parameters to define a model (dimensions, for example). 
Examples of parameters are: dimensions used to create model features, material density, formulas to describe swept features, and imported data (that describe a reference surface, for example). A parameter may be modified later, and the model will update to reflect the modification. Typically, there is a relationship between parts, assemblies, and drawings. A part consists of multiple features, and an assembly consists of multiple parts. Drawings can be made from either parts or assemblies. Example: A shaft is created by extruding a circle 100 mm. A hub is assembled to the end of the shaft. Later, the shaft is modified to be 200 mm long (click on the shaft, select the length dimension, modify to 200). When the model is updated, the shaft will be 200 mm long, the hub will relocate to the end of the shaft to which it was assembled, and the engineering drawings and mass properties will reflect all changes automatically. Related to parameters, but slightly different, are constraints. Constraints are relationships between entities that make up a particular shape. For a window, the sides might be defined as being parallel, and of the same length. Parametric modeling is obvious and intuitive. But for the first three decades of CAD this was not the case. Modification meant re-drawing, or adding a new cut or protrusion on top of old ones. Dimensions on engineering drawings were created, instead of shown. Parametric modeling is very powerful, but requires more skill in model creation. A complicated model for an injection molded part may have a thousand features, and modifying an early feature may cause later features to fail. Skillfully created parametric models are easier to maintain and modify. Parametric modeling also lends itself to data re-use. A whole family of capscrews can be contained in one model, for example. Medical solid modeling Modern computed axial tomography and magnetic resonance imaging scanners can be used to create solid models of internal body features, called voxel-based models, with images generated using volume rendering. Optical 3D scanners can be used to create point clouds or polygon mesh models of external body features. Uses of medical solid modeling include: visualization; visualization of specific body tissues (just blood vessels and a tumor, for example); designing prosthetics, orthotics, and other medical and dental devices (this is sometimes called mass customization); creating polygon mesh models for rapid prototyping (to aid surgeons preparing for difficult surgeries, for example); combining polygon mesh models with CAD solid modeling (design of hip replacement parts, for example); computational analysis of complex biological processes, e.g. air flow, blood flow; and computational simulation of new medical devices and implants in vivo. If the use goes beyond visualization of the scan data, processes like image segmentation and image-based meshing will be necessary to generate an accurate and realistic geometrical description of the scan data. Engineering Because CAD programs running on computers "understand" the true geometry comprising complex shapes, many attributes of a 3D solid, such as its center of gravity, volume, and mass, can be quickly calculated. For instance, for a cube with rounded edges measuring 8.4 mm from flat to flat, despite its many radii and the shallow pyramid on each of its six faces, these properties are readily calculated for the designer. 
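As a rough illustration of how such properties can be computed from nothing more than a point-membership test — a toy sketch, not any CAD system's actual method — a Monte Carlo estimate over a bounding box already yields volume and center of gravity:

```python
import random

def inside_unit_ball(p):
    x, y, z = p
    return x * x + y * y + z * z <= 1.0

def mass_properties(inside, lo=-1.0, hi=1.0, n=200_000, seed=0):
    """Estimate volume and centroid of a solid given only a membership test."""
    rng = random.Random(seed)
    box_volume = (hi - lo) ** 3
    hits, sx, sy, sz = 0, 0.0, 0.0, 0.0
    for _ in range(n):
        p = (rng.uniform(lo, hi), rng.uniform(lo, hi), rng.uniform(lo, hi))
        if inside(p):
            hits += 1
            sx += p[0]; sy += p[1]; sz += p[2]
    volume = box_volume * hits / n                 # hit fraction times box volume
    centroid = (sx / hits, sy / hits, sz / hits)   # assumes homogeneous material
    return volume, centroid

vol, cog = mass_properties(inside_unit_ball)
print(f"volume ~ {vol:.3f} (exact 4*pi/3 ~ 4.189)")
print("center of gravity ~", cog)                 # should be near the origin
```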
See also Wire frame modelling Free-surface modelling Computational geometry Computer graphics Engineering drawing Euler boundary representation List of CAx companies PLaSM – Programming Language of Solid Modeling. Technical drawing References External links sgCore C++/C# library The Solid Modeling Association 3D computer graphics Computer-aided design Euclidean solid geometry
Solid modeling
[ "Physics", "Engineering" ]
3,949
[ "Computer-aided design", "Design engineering", "Euclidean solid geometry", "Space", "Spacetime" ]
457,673
https://en.wikipedia.org/wiki/Implantable%20cardioverter-defibrillator
An implantable cardioverter-defibrillator (ICD) or automated implantable cardioverter defibrillator (AICD) is a device implantable inside the body, able to perform defibrillation and, depending on the type, cardioversion and pacing of the heart. The ICD is the first-line treatment and prophylactic therapy for patients at risk for sudden cardiac death due to ventricular fibrillation and ventricular tachycardia. "AICD" was trademarked by the Boston Scientific corporation, so the more generic "ICD" is preferred terminology. On average, ICD batteries last about six to ten years. Advances in technology, such as batteries with more capacity or rechargeable batteries, may allow batteries to last over ten years. The leads (electrical cable wires connecting the device to the heart) have much longer average longevity but can malfunction in various ways, specifically insulation failure or fracture of the conductor; thus, ICDs and leads generally require replacement every 5 to 10 years. The process of implantation of an ICD system is similar to implantation of an artificial pacemaker. In fact, ICDs are composed of an ICD generator and wires. The first component, or generator, contains a computer chip or circuitry with RAM (memory), programmable software, a capacitor and a battery; this is typically implanted under the skin in the left upper chest. The second part of the system is an electrode wire or wires that, similar to pacemakers, are connected to the generator and passed through a vein to the right chambers of the heart. The lead usually lodges in the apex or septum of the right ventricle. Just like pacemakers, ICDs can have a single wire or lead in the heart (in the right ventricle, single chamber ICD), two leads (in the right atrium and right ventricle, dual chamber ICD) or three leads (biventricular ICD, one in the right atrium, one in the right ventricle and one on the outer wall of the left ventricle). The difference between pacemakers and ICDs is that pacemakers are also available as temporary units and are generally designed to correct slow heart rates, i.e. bradycardia, while ICDs are often permanent safeguards against sudden life-threatening arrhythmias. Recent developments include the subcutaneous ICD (S-ICD), which is placed entirely under the skin, leaving the vessels and heart untouched. Implantation of an S-ICD is regarded as a procedure with even lower risk; it is currently suggested for patients with a previous history of infection or an increased risk of infection. It is also recommended for very active patients, younger patients who will likely outlive their transvenous ICD (TV-ICD) leads, and those with complicated anatomy or vascular access. S-ICDs cannot deliver pacing, so they are not suitable for patients who require pacing for ventricular tachycardia or bradycardia. Living with an ICD People who have an implanted cardioverter-defibrillator can live full lives. Patients overall have either a sustained or improved quality of life after ICD implantation when compared to before ICD implantation. It may provide a strong degree of reassurance. As with a pacemaker, however, living with an ICD does impose some restrictions on the person's lifestyle. Physical activities Almost all forms of physical activity can be performed, in moderation, by patients with an ICD. All forms of sport that do not pose a risk of damaging the ICD, and that are not ruled out by the underlying cardiomyopathy, can be undertaken by the patient. Special care should be taken not to put excessive strain on the shoulder, arm and torso area where the ICD is implanted. 
Doing so may damage the ICD or the leads going from the ICD generator to the patient's heart. Particularly to be avoided are exercises that cause the clavicle to be pulled down towards the ribs, such as lifting weights with the arm on the side of the ICD site while standing. Driving ICD patients in the United States are prohibited from professional or commercial driving per the Cardiovascular Advisory Panel Guidelines for the Medical Examination of Commercial Motor Vehicle Drivers. A period of driving abstinence is recommended for private drivers following ICD implantation, but the timeframe varies depending on the country (between 3 and 6 months for secondary prevention and 1–4 weeks for primary prevention). Following an appropriate ICD therapy, a driving ban is recommended for 3–6 months depending on the country. After inappropriate ICD therapy delivered for non-ventricular arrhythmias or due to device malfunction, driving restrictions usually apply until the cause of the inappropriate therapy has been eliminated. Electro-magnetic equipment Equipment using magnets or generating magnetic fields, or any similar environment, must be avoided by patients with an ICD. As with other metallic objects, an ICD is normally a contraindication to the use of magnetic resonance imaging (MRI). However, several ICD manufacturers have recently introduced MR-conditional ICDs, which allow the use of MRI under specified safe operating conditions. Quality of life Implantable cardioverter defibrillators have demonstrated clear life-saving benefits, while concerns about patient acceptance and psychological adjustment to the ICD have been the focus of much research. Researchers, including those from the field of cardiac psychology, have concluded that the quality of life (QoL) of ICD patients is at least equal to, or better than, that of those taking anti-arrhythmic medications. The largest such study examined 2,521 patients with stable heart failure in the SCD-HeFT trial. Results indicated that there were no differences between ICD-treated and medication-treated groups at 30 months in patient-reported QoL. Psychological adjustment following ICD implantation has also been well studied. In rare cases the ICD can become infected; such infections are usually bacterial in origin, but other organisms, such as certain fungi, have occasionally been implicated. This is more likely to occur in people with diabetes, heart failure, kidney failure, or a suppressed immune system. Anxiety is a common psychological side effect, with approximately 13–38% of ICD patients reporting clinically significant anxiety. The primary etiological factors contributing to anxiety in ICD patients have not been determined, however. Depressive symptoms are also common, but their incidence has been shown to be similar to that observed in other cardiac patient groups, with approximately 24–41% of patients with ICDs experiencing depressive symptoms. Problems in psychosocial adjustment to ICDs, including the experience of anxiety, among spouses or other romantic partners are also prevalent. This phenomenon may be related, at least in part, to shared shock anxiety and avoidance of physical and sexual contact. Follow Up Patients are generally required to follow up with their cardiac electrophysiologist at regular intervals, usually every 3 to 6 months. At this time, many device manufacturers offer some form of home monitoring to allow device data to be sent electronically to the physician. 
Recent advances include app integration for home interrogation, and remote care has been correlated with some mortality benefit. Indications Implantation of an ICD is meant to prevent sudden cardiac death and is indicated under various conditions. Two broad but distinct categories are primary and secondary prevention. Primary prevention refers to patients who have not suffered a life-threatening arrhythmia episode. Secondary prevention has the strongest evidence for benefit; it refers to survivors of cardiac arrest secondary to ventricular fibrillation (VF) or hemodynamically unstable sustained ventricular tachycardia (VT) after reversible causes are excluded. Similarly, ICD use in primary prevention is to prevent cardiac death in patients who are at risk for sustained ventricular tachycardia or ventricular fibrillation. This population accounts for the bulk of all ICD implants. There are a multitude of guideline indications for ICD use in primary prevention, with varying degrees of supporting evidence. Periodically, both the American College of Cardiology (ACC)/American Heart Association (AHA) and the European Society of Cardiology provide updates to these guidelines. Some of the Class I indications are as follows: left ventricular ejection fraction (LVEF) ≤ 35% due to prior myocardial infarction (MI), at least 40 days post-MI, in NYHA functional class II or III; left ventricular (LV) dysfunction due to prior MI, at least 40 days post-MI, LVEF ≤ 30%, in NYHA functional class I; nonischemic dilated cardiomyopathy (DCM) with LVEF ≤ 35%, in NYHA functional class II or III; nonsustained VT due to prior MI, LVEF < 40%, and inducible VF or sustained VT at electrophysiological study; structural heart disease and spontaneous sustained VT, whether hemodynamically stable or unstable; syncope of undetermined origin with clinically relevant, hemodynamically significant sustained VT or VF induced at electrophysiological study. Clinical trials A number of clinical trials have demonstrated the superiority of the ICD over AADs (antiarrhythmic drugs) in the prevention of death from malignant arrhythmias. The SCD-HeFT trial (published in 2005) showed a significant all-cause mortality benefit for patients with an ICD. Congestive heart failure patients that were implanted with an ICD had an all-cause death risk 23% lower than placebo and an absolute decrease in mortality of 7.2 percentage points after five years in the overall population. Reporting in 1999, the Antiarrhythmics Versus Implantable Defibrillators (AVID) trial consisted of 1,016 patients, and deaths in those treated with AADs were more frequent (n = 122) compared with deaths in the ICD groups (n = 80, p < 0.001). In 2002 the MADIT II trial showed benefit of ICD treatment in patients after myocardial infarction with reduced left ventricular function (EF < 30%). Initially, ICDs were implanted via thoracotomy, with defibrillator patches applied to the epicardium or pericardium. The patches were attached via subcutaneous and transvenous leads to the device contained in a subcutaneous abdominal wall pocket. The device itself acts as an electrode. Most ICDs nowadays are implanted transvenously, with the devices placed in the left pectoral region, similar to pacemakers. Intravascular spring or coil electrodes are used to defibrillate. The devices have become smaller and less invasive as the technology advances. Current ICDs weigh only 70 grams and are about 12.9 mm thick. 
A recent study by Birnie and colleagues at the University of Ottawa Heart Institute has demonstrated that ICDs are underused in both the United States and Canada. An accompanying editorial by Simpson of Queen's University explores some of the economic, geographic, social and political reasons for this. History The development of the ICD was pioneered at Sinai Hospital in Baltimore by a team including Michel Mirowski, Morton Mower, Alois Langer, William Staewen, and Joseph "Jack" Lattuca. Mirowski teamed up with Mower and Staewen, and together they commenced their research in 1969, but it was 11 years before they treated their first patient. The work commenced amid much skepticism, even from leading experts in the field of arrhythmias and sudden death. There was doubt that their ideas would ever become a clinical reality. In 1972 Bernard Lown, the inventor of the external defibrillator, and Paul Axelrod stated in the journal Circulation: "The very rare patient who has frequent bouts of ventricular fibrillation is best treated in a coronary care unit and is better served by an effective anti-arrhythmic program or surgical correction of inadequate coronary blood flow or ventricular malfunction. In fact, the implanted defibrillator system represents an imperfect solution in search of a plausible and practical application." The problems to be overcome included the design of a system that would allow detection of ventricular fibrillation or ventricular tachycardia. Despite the lack of financial backing and grants, they persisted, and the first device was implanted in February 1980 at Johns Hopkins Hospital by Dr. Levi Watkins Jr. The first devices required the chest to be cut open and a mesh electrode sewn onto the heart; the pulse generator was placed in the abdomen. Working mechanism ICDs constantly monitor the rate and rhythm of the heart and can deliver therapies, by way of an electrical shock, when the heart rate exceeds a preset number. More modern devices have software designed to attempt a discrimination between ventricular fibrillation and ventricular tachycardia (VT), and may try to pace the heart faster than its intrinsic rate in the case of VT, to try to break the tachycardia before it progresses to ventricular fibrillation. This is known as overdrive pacing, or anti-tachycardia pacing (ATP). ATP is only effective if the underlying rhythm is ventricular tachycardia, and is never effective if the rhythm is ventricular fibrillation. Many modern ICDs use a combination of various methods to determine if a fast rhythm is normal, supraventricular tachycardia, ventricular tachycardia, or ventricular fibrillation. Rate discrimination evaluates the rate of the lower chambers of the heart (the ventricles) and compares it to the rate in the upper chambers of the heart (the atria). If the rate in the atria is faster than or equal to the rate in the ventricles, then the rhythm is most likely not ventricular in origin, and is usually more benign. If this is the case, the ICD does not provide any therapy, or withholds it for a programmable length of time. Rhythm discrimination considers how regular a ventricular tachycardia is. Generally, ventricular tachycardia is regular. If the rhythm is irregular, it is usually due to conduction of an irregular rhythm that originates in the atria, such as atrial fibrillation. Torsades de pointes, for example, is a form of irregular ventricular tachycardia. 
In this case, the ICD will rely on rate, not regularity, to make the correct diagnosis. Morphology discrimination checks the morphology of every ventricular beat and compares it to what the ICD knows is the morphology of a normally conducted ventricular impulse for the patient. This normal ventricular impulse is often an average of multiple normal beats of the patient acquired in the recent past, and is known as a template. The integration of these various parameters is very complex; clinically, the occurrence of inappropriate therapy is still occasionally seen and remains a challenge for future software advancements. See also Artificial cardiac pacemaker Brugada syndrome Cardiopulmonary resuscitation (CPR) Defibrillation Wearable cardioverter defibrillator Notes References Bardy GH, Lee KL, Mark DB, et al. for the Sudden Cardiac Death in Heart Failure Trial (SCD-HeFT) Investigators. Amiodarone or an implantable cardioverter-defibrillator for congestive heart failure. N Engl J Med 2005; 352:225–237 Full text Kumar and Clarke. Internal Medicine. 2009. Sears S, Matchett M, Conti J. "Effective management of ICD patient psychosocial issues and patient critical events." J Cardiovasc Electrophysiol 2009; 20(11):1297–304. External links A Defibrillator in Action Information on ICDs/S-ICDs from the charity Arrhythmia Alliance East Carolina Heart Institute at ECU, Cardiac Psychology Lab, Focus on ICD Samuel F. Sears, Jr., Ph.D., East Carolina University, Cardiac Psychology, ICD QoL Specialist Video, Coping with an ICD Cardiac electrophysiology Implants (medicine) Medical devices Cardiac procedures
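A purely illustrative toy version of the discrimination logic described in the working-mechanism section can be written in a few lines of Python; the thresholds and rules below are invented for the sketch and bear no relation to any manufacturer's algorithm or to clinical practice:

```python
from statistics import pstdev

def classify(ventricular_rate, atrial_rate, rr_intervals_ms,
             vt_zone=180, vf_zone=240, irregularity_ms=60):
    # Rate discrimination: an atrial rate at or above the ventricular rate
    # suggests the rhythm is not ventricular in origin.
    if ventricular_rate <= atrial_rate:
        return "likely supraventricular - withhold therapy"
    if ventricular_rate >= vf_zone:
        return "possible VF - charge and shock"
    if ventricular_rate >= vt_zone:
        # Rhythm discrimination: VT is usually regular, so high beat-to-beat
        # variability points instead to a conducted atrial arrhythmia.
        if pstdev(rr_intervals_ms) > irregularity_ms:
            return "irregular fast rhythm - likely conducted AF, withhold"
        return "regular fast rhythm - possible VT, try anti-tachycardia pacing"
    return "rate below detection zone - no therapy"

print(classify(200, 80, [300, 305, 298, 302]))   # regular rhythm in the VT zone
print(classify(200, 80, [250, 400, 300, 190]))   # irregular, suggests AF
```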
Implantable cardioverter-defibrillator
[ "Biology" ]
3,405
[ "Medical devices", "Medical technology" ]
458,202
https://en.wikipedia.org/wiki/Semiheavy%20water
Semiheavy water is the result of replacing one of the protium atoms (normal hydrogen, ¹H) in normal water with deuterium (²H; or, less correctly, D). It exists whenever there is water with both ¹H and ²H in the mix. This is because hydrogen atoms are rapidly exchanged between water molecules. Water with 50% ¹H and 50% ²H is about 50% HDO (¹H²HO) and 25% each of H₂O and D₂O, in dynamic equilibrium. In normal water, about 1 molecule in 3,200 is HDO (¹H²HO), as one hydrogen in 6,400 is ²H. By comparison, heavy water D₂O (²H₂O) occurs at a proportion of about 1 molecule in 41 million (i.e., 1 in 6,400²). This makes semiheavy water far more common than "normal" heavy water. The freezing point of semiheavy water (about 3.8 °C) is close to the freezing point of heavy water (3.82 °C). Production On Earth, semiheavy water occurs naturally in normal water at a proportion of about 1 molecule in 3,200, because 1 in 6,400 hydrogen atoms in water is deuterium, which is 1 part in 3,200 by weight. HDO may be separated from normal water by distillation or electrolysis, or by various chemical exchange processes, all of which exploit a kinetic isotope effect. Partial enrichment also occurs in natural bodies of water under certain evaporation conditions. (For more information about the distribution of deuterium in water, see Vienna Standard Mean Ocean Water and Hydrogen isotope biogeochemistry.) See also Deuterium-depleted water References Further reading Forms of water Deuterated compounds
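Assuming purely statistical (binomial) mixing of the isotopes over the two hydrogen sites of a water molecule, the proportions quoted above can be checked with a few lines of Python (a sketch, taking the deuterium abundance as exactly 1 in 6,400):

```python
d = 1 / 6400        # fraction of hydrogen atoms that are deuterium
h = 1 - d

hdo = 2 * h * d     # exactly one of the two hydrogen sites is D (either one)
d2o = d * d         # both sites are D

print(f"HDO: about 1 molecule in {1 / hdo:,.0f}")   # ~1 in 3,200
print(f"D2O: about 1 molecule in {1 / d2o:,.0f}")   # ~1 in 41 million
```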
Semiheavy water
[ "Physics", "Chemistry" ]
356
[ "Forms of water", "Phases of matter", "Matter" ]
458,253
https://en.wikipedia.org/wiki/Secret%20sharing
Secret sharing (also called secret splitting) refers to methods for distributing a secret among a group, in such a way that no individual holds any intelligible information about the secret, but when a sufficient number of individuals combine their 'shares', the secret may be reconstructed. Whereas insecure secret sharing allows an attacker to gain more information with each share, secure secret sharing is 'all or nothing' (where 'all' means the necessary number of shares). In one type of secret sharing scheme there is one dealer and n players. The dealer gives a share of the secret to the players, but only when specific conditions are fulfilled will the players be able to reconstruct the secret from their shares. The dealer accomplishes this by giving each player a share in such a way that any group of t (for threshold) or more players can together reconstruct the secret but no group of fewer than t players can. Such a system is called a (t, n)-threshold scheme (sometimes it is written as an (n, t)-threshold scheme). Secret sharing was invented independently by Adi Shamir and George Blakley in 1979. Importance Secret sharing schemes are ideal for storing information that is highly sensitive and highly important. Examples include: encryption keys, missile launch codes, and numbered bank accounts. Each of these pieces of information must be kept highly confidential, as their exposure could be disastrous; however, it is also critical that they should not be lost. Traditional methods for encryption are ill-suited for simultaneously achieving high levels of confidentiality and reliability. This is because when storing the encryption key, one must choose between keeping a single copy of the key in one location for maximum secrecy, or keeping multiple copies of the key in different locations for greater reliability. Increasing reliability of the key by storing multiple copies lowers confidentiality by creating additional attack vectors; there are more opportunities for a copy to fall into the wrong hands. Secret sharing schemes address this problem, and allow arbitrarily high levels of confidentiality and reliability to be achieved. Secret sharing also allows the distributor of the secret to trust a group 'in aggregate'. Traditionally, giving a secret to a group for safekeeping would require that the distributor completely trust all members of the group. Secret sharing schemes allow the distributor to securely store the secret with the group even if not all members can be trusted all the time. So long as the number of traitors is never more than the critical number needed to reconstruct the secret, the secret is safe. Secret sharing schemes are important in cloud computing environments. Thus a key can be distributed over many servers by a threshold secret sharing mechanism. The key is then reconstructed when needed. Secret sharing has also been suggested for sensor networks where the links are liable to be tapped, by sending the data in shares, which makes the eavesdropper's task harder. The security in such environments can be made greater by continuously changing the way the shares are constructed. "Secure" versus "insecure" secret sharing A secure secret sharing scheme distributes shares so that anyone with fewer than t shares has no more information about the secret than someone with 0 shares. Consider for example the secret sharing scheme in which the secret phrase "password" is divided into the shares "pa––––––", "––ss––––", "––––wo––", and "––––––rd". 
A person with 0 shares knows only that the password consists of eight letters, and thus would have to guess the password from 26⁸ = 208 billion possible combinations. A person with one share, however, would have to guess only the six letters, from 26⁶ = 308 million combinations, and so on as more persons collude. Consequently, this system is not a "secure" secret sharing scheme, because a player with fewer than t secret shares is able to reduce the problem of obtaining the inner secret without first needing to obtain all of the necessary shares. In contrast, consider the secret sharing scheme where X is the secret to be shared, Pᵢ are public asymmetric encryption keys and Qᵢ their corresponding private keys. Each player J is provided with the nested ciphertext P₁(P₂(…(P_N(X))…)) and his own private key Q_J. In this scheme, any player with private key 1 can remove the outer layer of encryption, a player with keys 1 and 2 can remove the first and second layer, and so on. A player with fewer than N keys can never fully reach the secret X without first needing to decrypt a public-key-encrypted blob for which he does not have the corresponding private key – a problem that is currently believed to be computationally infeasible. Additionally we can see that any user with all N private keys is able to decrypt all of the outer layers to obtain X, the secret, and consequently this system is a secure secret distribution system. Limitations Several secret-sharing schemes are said to be information-theoretically secure and can be proven to be so, while others give up this unconditional security for improved efficiency while maintaining enough security to be considered as secure as other common cryptographic primitives. For example, they might allow secrets to be protected by shares with entropy of 128 bits each, since each share would be considered enough to stymie any conceivable present-day adversary, requiring a brute force attack of average size 2¹²⁷. Common to all unconditionally secure secret sharing schemes, there are limitations: Each share of the secret must be at least as large as the secret itself. This result is based in information theory, but can be understood intuitively. Given t − 1 shares, no information whatsoever can be determined about the secret. Thus, the final share must contain as much information as the secret itself. There is sometimes a workaround for this limitation by first compressing the secret before sharing it, but this is often not possible because many secrets (keys for example) look like high-quality random data and thus are hard to compress. All secret-sharing schemes use random bits for constructing the shares. To distribute a one-bit secret with a threshold of t shares, t − 1 random bits are needed. To distribute a secret of b bits, entropy of (t − 1)·b bits is necessary. Trivial secret sharing Note: n is the total number of 'players', among whom the shares are distributed, and t is the minimum number of players required to reveal the secret. t = 1 t = 1 secret sharing is trivial. The secret can simply be distributed to all n participants. t = n There are several secret-sharing schemes for t = n, when all shares are necessary to recover the secret: Encode the secret as a binary number s of any length. Give each player i (for 1 ≤ i ≤ n − 1) a random binary number pᵢ of the same length as s. To the remaining player, give the share calculated as s ⊕ p₁ ⊕ p₂ ⊕ … ⊕ pₙ₋₁, where ⊕ denotes bitwise exclusive or. The secret is then the bitwise exclusive-or of all n players' shares. 
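The t = n XOR construction just described is easy to make concrete. The following is a minimal Python sketch (the function names are illustrative, not from any particular library): the first n − 1 shares are uniformly random, the last is the secret XOR-ed with all of them, and XOR-ing all n shares recovers the secret.

```python
import secrets

def split_xor(secret: bytes, n: int) -> list[bytes]:
    """n-of-n XOR sharing: all n shares are required to recover the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = bytes(secret)
    for share in shares:                 # last = secret XOR p1 XOR ... XOR p(n-1)
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def combine_xor(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))          # all-zero bytes of the right length
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

shares = split_xor(b"password", 4)
assert combine_xor(shares) == b"password"
# Any n - 1 of the shares are jointly uniform random bytes and reveal nothing.
```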
Instead, the exclusive-or in the scheme above can be performed using the binary operation in any group. For example, take the cyclic group of integers with addition modulo 2³², which corresponds to 32-bit integers with addition defined with the binary overflow being discarded. The secret s can be partitioned into a vector of M 32-bit integers, which we call v_secret. Then n − 1 of the players are each given a vector of M 32-bit integers that is drawn independently from a uniform probability distribution, with player i receiving vᵢ. The remaining player is given vₙ = v_secret − v₁ − v₂ − … − vₙ₋₁. The secret vector can then be recovered by summing across all the players' vectors. 1 < t < n The difficulty lies in creating schemes that are still secure, but do not require all n shares. When space efficiency is not a concern, trivial schemes can be used to reveal a secret to any desired subsets of the players simply by applying the scheme for each subset. For example, to reveal a secret s to any two of the three players Alice, Bob and Carol, create three different (2, 2) secret sharings of s, giving the three sets of two shares to Alice and Bob, Alice and Carol, and Bob and Carol. t belonging to any desired subset of {1, 2, ..., n} For example, imagine that the board of directors of a company would like to protect their secret formula. The president of the company should be able to access the formula when needed, but in an emergency any 3 of the 12 board members would be able to unlock the secret formula together. One of the ways this can be accomplished is by a secret-sharing scheme with t = 3 and n = 15, where 3 shares are given to the president, and one share is given to each board member. Efficient secret sharing The trivial approach quickly becomes impractical as the number of subsets increases, for example when revealing a secret to any 50 of 100 players, which would require C(100, 50) ≈ 10²⁹ schemes to be created and each player to maintain distinct sets of shares for each scheme. In the worst case, the increase is exponential. This has led to the search for schemes that allow secrets to be shared efficiently with a threshold of t players. Shamir's scheme In this scheme, any t out of n shares may be used to recover the secret. The system relies on the idea that one can construct a unique polynomial of degree t − 1 such that each of the t points lies on the polynomial. It takes two points to define a straight line, three points to fully define a quadratic, four points to define a cubic curve, and so on. That is, it takes t points to define a polynomial of degree t − 1. The method is to create a polynomial of degree t − 1 with the secret as the first coefficient and the remaining coefficients picked at random. Next find n points on the curve and give one to each of the players. When at least t out of the n players reveal their points, there is sufficient information to fit a (t − 1)th degree polynomial to them, the first coefficient being the secret. Blakley's scheme Two nonparallel lines in the same plane intersect at exactly one point. Three nonparallel planes in space intersect at exactly one point. More generally, any n nonparallel (n − 1)-dimensional hyperplanes intersect at a specific point. The secret may be encoded as any single coordinate of the point of intersection. If the secret is encoded using all the coordinates, even if they are random, then an insider (someone in possession of one or more of the (n − 1)-dimensional hyperplanes) gains information about the secret since he knows it must lie on his plane. 
If an insider can gain any more knowledge about the secret than an outsider can, then the system no longer has information theoretic security. If only one of the n coordinates is used, then the insider knows no more than an outsider (i.e., that the secret must lie on the x-axis for a 2-dimensional system). Each player is given enough information to define a hyperplane; the secret is recovered by calculating the planes' point of intersection and then taking a specified coordinate of that intersection. Blakley's scheme is less space-efficient than Shamir's; while Shamir's shares are each only as large as the original secret, Blakley's shares are t times larger, where t is the threshold number of players. Blakley's scheme can be tightened by adding restrictions on which planes are usable as shares. The resulting scheme is equivalent to Shamir's polynomial system. Using the Chinese remainder theorem The Chinese remainder theorem can also be used in secret sharing, for it provides us with a method to uniquely determine a number S modulo k pairwise coprime integers m₁, m₂, ..., mₖ, given that S < m₁·m₂·…·mₖ. There are two secret sharing schemes that make use of the Chinese remainder theorem, Mignotte's and Asmuth–Bloom's schemes. They are threshold secret sharing schemes, in which the shares are generated by reduction modulo the integers mᵢ, and the secret is recovered by essentially solving the system of congruences using the Chinese remainder theorem. Proactive secret sharing If the players store their shares on insecure computer servers, an attacker could break in and steal the shares. If it is not practical to change the secret, the uncompromised (Shamir-style) shares can be renewed. The dealer generates a new random polynomial with constant term zero and calculates for each remaining player a new ordered pair, where the x-coordinates of the old and new pairs are the same. Each player then adds the old and new y-coordinates to each other and keeps the result as the new y-coordinate of the secret. All of the non-updated shares the attacker accumulated become useless. An attacker can only recover the secret if he can find enough other non-updated shares to reach the threshold. This situation should not happen because the players deleted their old shares. Additionally, an attacker cannot recover any information about the original secret from the update files because they contain only random information. The dealer can change the threshold number while distributing updates, but must always remain vigilant of players keeping expired shares. Verifiable secret sharing A player might lie about his own share to gain access to other shares. A verifiable secret sharing (VSS) scheme allows players to be certain that no other players are lying about the contents of their shares, up to a reasonable probability of error. Such schemes cannot be computed conventionally; the players must collectively add and multiply numbers without any individual's knowing what exactly is being added and multiplied. Tal Rabin and Michael Ben-Or devised a multiparty computing (MPC) system that allows players to detect dishonesty on the part of the dealer or on part of up to one third of the threshold number of players, even if those players are coordinated by an "adaptive" attacker who can change strategies in realtime depending on what information has been revealed. 
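Shamir's polynomial scheme described above can be sketched compactly in Python over a prime field. The following fragment is an illustration of the idea only (the prime and the parameters are arbitrary choices for the sketch), not a production implementation:

```python
import random

P = 2**127 - 1  # a Mersenne prime, comfortably larger than the example secret

def split(secret: int, t: int, n: int, rng=random.SystemRandom()):
    """Create n points on a random degree t-1 polynomial with f(0) = secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):            # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))                # any 3 of the 5 shares suffice
```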
Computationally secure secret sharing The disadvantage of unconditionally secure secret sharing schemes is that the storage and transmission of the shares requires an amount of storage and bandwidth resources equivalent to the size of the secret times the number of shares. If the size of the secret were significant, say 1 GB, and the number of shares were 10, then 10 GB of data must be stored by the shareholders. Alternate techniques have been proposed for greatly increasing the efficiency of secret sharing schemes, by giving up the requirement of unconditional security. One of these techniques, known as secret sharing made short, combines Rabin's information dispersal algorithm (IDA) with Shamir's secret sharing. Data is first encrypted with a randomly generated key, using a symmetric encryption algorithm. Next this data is split into N pieces using Rabin's IDA. This IDA is configured with a threshold, in a manner similar to secret sharing schemes, but unlike secret sharing schemes the size of the resulting data grows by a factor of (number of fragments / threshold). For example, if the threshold were 10, and the number of IDA-produced fragments were 15, the total size of all the fragments would be (15/10) or 1.5 times the size of the original input. In this case, this scheme is 10 times more efficient than if Shamir's scheme had been applied directly on the data. The final step in secret sharing made short is to use Shamir secret sharing to produce shares of the randomly generated symmetric key (which is typically on the order of 16–32 bytes) and then give one share and one fragment to each shareholder. A related approach, known as AONT-RS, applies an All-or-nothing transform to the data as a pre-processing step to an IDA. The All-or-nothing transform guarantees that any number of shares less than the threshold is insufficient to decrypt the data. Multi-secret and space efficient (batched) secret sharing An information-theoretically secure k-of-n secret-sharing scheme generates n shares, each of size at least that of the secret itself, leading to the total required storage being at least n-fold larger than the secret. In multi-secret sharing designed by Matthew K. Franklin and Moti Yung, multiple points of the polynomial host secrets; the method was found useful in numerous applications from coding to multi-party computations. In space efficient secret sharing, devised by Abhishek Parakh and Subhash Kak, each share is roughly the size of the secret divided by k − 1. This scheme makes use of repeated polynomial interpolation and has potential applications in secure information dispersal on the Web and in sensor networks. This method is based on data partitioning involving the roots of a polynomial in a finite field. Some vulnerabilities of related space efficient secret sharing schemes were pointed out later. They show that a scheme based on the interpolation method cannot be used to implement a scheme when the k secrets to be distributed are inherently generated from a polynomial of degree less than k, and the scheme does not work if all of the secrets to be shared are the same, etc. Other uses and applications A secret-sharing scheme can secure a secret over multiple servers and remain recoverable despite multiple server failures. The dealer may act as several distinct participants, distributing the shares among the participants. 
Each share may be stored on a different server, but the dealer can recover the secret even if several servers break down as long as they can recover at least t shares; however, crackers that break into one server would still not know the secret as long as fewer than t shares are stored on each server. This is one of the major concepts behind the Vanish computer project at the University of Washington, where a random key is used to encrypt data, and the key is distributed as a secret across several nodes in a P2P network. In order to decrypt the message, at least t nodes on the network must be accessible; the principle for this particular project being that the number of secret-sharing nodes on the network will decrease naturally over time, therefore causing the secret to eventually vanish. However, the network is vulnerable to a Sybil attack, thus making Vanish insecure. Any shareholder who ever has enough information to decrypt the content at any point is able to take and store a copy of X. Consequently, although tools and techniques such as Vanish can make data irrecoverable within their own system after a time, it is not possible to force the deletion of data once a malicious user has seen it. This is one of the leading conundrums of digital rights management. A dealer could send t shares, all of which are necessary to recover the original secret, to a single recipient. An attacker would have to intercept all t shares to recover the secret, a task which is more difficult than intercepting a single file, especially if the shares are sent using different media (e.g. some over the Internet, some mailed on CDs). For large secrets, it may be more efficient to encrypt the secret and then distribute the key using secret sharing. Secret sharing is an important primitive in several protocols for secure multiparty computation. Secret sharing can also be used for user authentication in a system. See also References External links Ubuntu Manpage: gfshare – explanation of Shamir Secret Sharing in GF(2⁸) Description of Shamir's and Blakley's schemes Patent for use of secret sharing for recovering PGP (and other?) pass phrases A bibliography on secret-sharing schemes Cryptography
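To make the "encrypt the data, then share only the key" pattern behind secret sharing made short concrete, the sketch below uses the Fernet cipher from the widely used Python cryptography package for the bulk encryption and, purely for brevity, a trivial n-of-n XOR split in place of the Shamir-plus-IDA combination; the function and variable names are illustrative.

```python
import secrets
from cryptography.fernet import Fernet   # third-party: pip install cryptography

def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n XOR split; stands in for Shamir sharing of the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def join_key(shares: list[bytes]) -> bytes:
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

data = b"a large document ..." * 1000          # the bulk payload
key = Fernet.generate_key()                    # short symmetric key
ciphertext = Fernet(key).encrypt(data)         # stored or sent at full size
key_shares = split_key(key, 5)                 # only the short key is shared

assert Fernet(join_key(key_shares)).decrypt(ciphertext) == data
```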
Secret sharing
[ "Mathematics", "Engineering" ]
3,990
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
458,540
https://en.wikipedia.org/wiki/Dry%20stone
Dry stone, sometimes called drystack or, in Scotland, drystane, is a building method by which structures are constructed from stones without any mortar to bind them together. A certain amount of binding is obtained through the use of carefully selected interlocking stones. Dry stone construction is best known in the context of stone walls, traditionally used for the boundaries of fields and churchyards, or as retaining walls for terracing, but dry stone shelters, houses and other structures also exist. The term tends not to be used for the many historic styles which used precisely shaped stone but did not use mortar, for example the Greek temple and Inca architecture. The art of dry stone walling was inscribed in 2018 on the UNESCO representative list of the intangible cultural heritage of humanity, for dry stone walls in countries such as France, Greece, Italy, Slovenia, Croatia, Switzerland and Spain. In 2024, Ireland was added to the list. History Some dry stone wall constructions in north-west Europe have been dated back to the Neolithic Age. In County Mayo, Ireland, an entire field system made from dry stone walls, since covered in peat, has been carbon-dated to 3800 BC. These are nearly contemporary with the dry-stone-built Neolithic village of Skara Brae and the chambered cairns of Scotland. The cyclopean walls of the acropolis of Mycenae, Greece, have been dated to 1350 BC and those of Tiryns slightly earlier. A similar example is Daorson, in Bosnia, built around a prehistoric central fortified settlement or acropolis (which existed from roughly the 17th–16th century BCE to the end of the Bronze Age, around the 9th–8th century BCE) and surrounded by cyclopean walls (similar to Mycenae) dated to the 4th century BCE. In Belize, the Mayan ruins at Lubaantun illustrate the use of dry stone construction in architecture of the 8th and 9th centuries AD. Great Zimbabwe in Zimbabwe, Africa, is a large, acropolis-like city complex constructed in dry stone from the 11th to the 15th centuries AD. It is the largest of many structures of similar construction throughout the area. Location and terminology Terminology varies regionally. When used as field boundaries, dry stone structures are more commonly known as dykes in Scotland, where professional dry stone wall builders are referred to as 'dykers'. Dry stone walls are characteristic of upland areas of Britain and Ireland where rock outcrops naturally or large stones exist in quantity in the soil. They are especially abundant in the West of Ireland, particularly Connemara. They may also be found throughout the Mediterranean, including retaining walls used for terracing. Such constructions are common where large stones are plentiful (for example, in The Burren) or conditions are too harsh for hedges capable of retaining livestock to be grown as reliable field boundaries. Many thousands of kilometres of such walls exist, most of them centuries old. In the United States they are common in areas with rocky soils, such as New England, New York, New Jersey, and Pennsylvania. They are a notable characteristic of the bluegrass region of central Kentucky, the Ozarks of Arkansas and Missouri, as well as Virginia, where they are usually referred to as rock fences or stone fences, and the Napa Valley in north central California. The technique of construction was brought to America primarily by English and Scots-Irish immigrants.
The technique was also taken to Australia (principally western Victoria, some parts of Tasmania, and some parts of New South Wales, particularly around Kiama) and New Zealand (especially Otago). Similar walls are also found in the Swiss–Italian border region, where they are often used to enclose the open space under large natural boulders or outcrops. The higher-lying, rock-rich fields and pastures in Bohemia's south-western border range of Šumava (e.g. around the mountain river Vydra) are often lined by dry stone walls built of field stones removed from the arable land; they serve both as livestock fences and as property boundaries. Dry stone terracing is sometimes also apparent, often combined with stone masonry (house foundations and shed walls) held together by a clay and pine-needle "composite" mortar. The dry stone walling tradition of Croatia was added to the UNESCO Representative List of the Intangible Cultural Heritage of Humanity in November 2018, alongside those of Cyprus, France, Greece, Italy, Slovenia, Spain and Switzerland. In Croatia, dry stone walls (suhozid) were built for a variety of reasons: to clear the earth of stone for crops; to delineate land ownership; or for shelter against the bora wind. Some walls date back to the Liburnian era. Notable examples include the island of Baljenac, which is covered by a dense grid of dry stone walls despite its very small area, and the vineyards of Primošten. In Peru in the 15th century AD, the Inca made use of otherwise unusable slopes by building dry stone walls to create terraces. They also employed this mode of construction for freestanding walls. Their ashlar-type construction in Machu Picchu uses the classic Inca architectural style of polished dry stone walls of regular shape. The Incas were masters of this technique, in which blocks of stone are cut to fit together tightly without mortar. Many junctions are so perfect that not even a knife fits between the stones. The structures have persisted in the high-earthquake region because of the flexibility of the walls and because, in their double wall architecture, the two portions of the walls incline into each other. Construction The style and method of construction of a wall will vary, depending on the type of stone available, its intended use and local tradition. Many older walls were constructed from stones and boulders cleared from the fields during preparation for agriculture (field stones), although some used stone quarried nearby. For modern walls, quarried stone is almost always used. One type of wall is called a "double" wall and is constructed by placing two rows of stones along the boundary to be walled. The foundation stones are ideally set into the ground so as to rest firmly on the subsoil. The rows are composed of large flattish stones, diminishing in size as the wall rises. Smaller stones may be used as chocks in areas where the natural stone shape is more rounded. The walls are built up to the desired height layer-by-layer (course by course) and, at intervals, large tie-stones or through stones are placed which span both faces of the wall and sometimes protrude. These have the effect of bonding what would otherwise be two thin walls leaning against each other, greatly increasing the strength of the wall. Diminishing the width of the wall as it gets higher, as traditionally done in Britain, also strengthens the wall considerably. The voids between the facing stones are carefully packed with smaller stones (filling, hearting).
The final layer on the top of the wall also consists of large stones, called capstones, coping stones or copes. As with the tie stones, the capstones span the entire width of the wall and prevent it breaking apart. In some areas, such as South Wales, there is a tradition of placing the coping stones on a final layer of flat stones slightly wider than the top of the wall proper (coverbands). In addition to gates, a wall may contain smaller purposely built gaps for the passage or control of wildlife and livestock such as sheep. The smaller holes are called "Bolt Holes" or "Smoots"; larger ones are called "Cripple Holes". Boulder walls are a type of single wall in which the wall consists primarily of large boulders, around which smaller stones are placed. Single walls work best with large, flatter stones. Ideally, the largest stones are placed at the bottom and the whole wall tapers toward the top. Sometimes a row of capstones completes the top of a wall, with the long rectangular side of each capstone perpendicular to the wall alignment. Galloway dykes consist of a base of double-wall construction or larger boulders with single-wall construction above. They appear to be rickety, with many holes, which deters livestock (and people) from attempting to cross them. These dykes are principally found in locations with exceptionally high winds, where a solid wall might be at risk of being unsettled by the buffeting. The porous nature of the wall significantly reduces wind force but takes greater skill to construct. They are also found in grazing areas where they are used to maximize the utility of the available stones (where ploughing was not turning up ever more stones). Another variation is the Cornish hedge or Welsh clawdd, which is a stone-clad earth bank topped by turf, scrub, or trees and characterised by a strict inward-curved batter (the slope of the "hedge"). As with many other varieties of wall, the height is the same as the width of the base, and the top is half the base width. Different regions have made minor modifications to the general method of construction—sometimes because of limitations of building material available, but also to create a look that is distinctive for that area. Whichever method is used to build a dry stone wall, considerable skill is required. Correcting any mistakes invariably means disassembling down to the level of the error. Selection of the correct stone for every position in the wall makes an enormous difference to the lifetime of the finished product, and a skilled waller will take time making the selection. As with many older crafts, skilled wallers are few in number today. With the advent of modern wire fencing, fields can be fenced with much less time and expense using wire than using stone walls; however, the initial expense of building dykes is offset by their sturdiness and consequent long, low-maintenance lifetimes. As a result of the increasing appreciation of the landscape and heritage value of dry stone walls, wallers remain in demand, as do the walls themselves. A nationally recognised certification scheme is operated in the UK by the Dry Stone Walling Association, with four grades from Initial to Master Craftsman. Notable examples include the Mourne Wall, a wall in the Mourne Mountains in County Down, Northern Ireland, and the wall around the Ottenby nature reserve, built by Charles X Gustav in the mid-17th century in Öland, Sweden.
Other uses While the dry stone technique is most commonly used for the construction of double-wall stone walls and single-wall retaining terracing, dry stone sculptures, buildings, fortifications, bridges, and other structures also exist. Traditional turf-roofed Highland blackhouses were constructed using the double-wall dry stone method. When buildings are constructed using this method, the middle of the wall is generally filled with earth or sand in order to eliminate draughts. During the Iron Age, and perhaps earlier, the technique was also used to build fortifications such as the walls of Eketorp Castle (Öland, Sweden), Maiden Castle, North Yorkshire, Reeth, Dunlough Castle in southwest Ireland and the rampart of the Long Scar Dyke. Many of the dry-stone walls that exist today in Scotland can be dated to the 14th century or earlier, when they were built to divide fields and retain livestock. Some extremely well built examples are found on the lands of Muchalls Castle. Dry stone walls can be built against embankments or even vertical terraces. If they are subjected to lateral earth pressure, they are retaining walls of the gravity wall type. The weight of the stones resists the pressure from the retained soil, including any surcharges, and the friction between the stones causes the assembly to act largely as if it were a monolithic gravity wall of the same weight. Dry stone retaining walls were once built in great numbers for agricultural terracing and also to carry paths, roads and railways. Although dry stone is seldom used for these purposes today, a great many are still in use and maintained. New ones are often built in gardens and nature conservation areas. Dry stone retaining structures continue to be a subject of research. In northeastern Somalia, on the coastal plain east of Aluula, are found the ruins of an ancient monument in a platform style. The structure is formed by a rectangular dry stone wall that is low in height; the space in between is filled with rubble and manually covered with small stones. Relatively large standing stones are also positioned on the edifice's corners. Near the platform are graves, which are outlined in stones. The structure is the largest of a string of ancient platform and enclosed-platform monuments exclusive to far northeastern Somalia. In Great Britain, Ireland, France and Switzerland, small dry stone structures are built as signs, marking mountain paths or the boundaries of owned land. In many countries cairns, as such structures are called in Scotland, are used as road and mountaintop markers. Gallery See also Anathyrosis (Greece) Broch (Scotland) Building material Cabanes du Breuil (France) Dry stone hut Great Zimbabwe (Zimbabwe) Machu Picchu (Peru) Mending Wall (US) Nuraghe (Sardegna) Stone industry Stora Alvaret (Sweden) Trullo (Italy) Village des Bories (France) References Further reading United Kingdom and Ireland Colonel F. Rainsford-Hannay, Dry Stone Walling, Faber & Faber. 1952 (new impressions in 1977 and 1999). Alan Brooks, Dry Stone Walling. A Practical Conservation Book, 1977. Alen MacWeeney (photog.) & Richard Conniff, The Stone Walls of Ireland. London: Thames & Hudson, 1986; New York: Stewart, Tabori & Chang, 1986. Carolyn Murray-Wooley & Karl Raitz, Rock Fences of the Bluegrass, University Press of Kentucky. 1992. The Dry Stone Walling Association, Dry Stone Walling, Techniques and Traditions. 2004. Patrick McAfee, Irish Stone Walls: History, Building, Conservation, The O'Brien Press. 2011.
Alan Brooks and Sean Adcock, Dry Stone Walling, a practical handbook, TCV. 2013. United States Curtis P. Fields, The Forgotten Art of Building a Stone Wall, 1971 (Vermont). John Vivian, Building Stone Walls, 1976 (Vermont). France Charles Ewald, À construire vous-même : le “cabanon” romain, La Revue des bricoleurs. Bricole et brocante, September 1973. Christian Lassure (text), Dominique Repérant (photos), Cabanes en pierre sèche de France, Edisud, 2004. Christian Lassure, La Pierre sèche, mode d'emploi, éditions Eyrolles, 2008. Louis Cagin & Laetitia Nicolas, Construire en pierre sèche, éditions Eyrolles. 2008. External links How to build a dry stone wall Dry Stone Walling Association of Canada Dry Stone Walls Association of Australia The Dry Stone Wall Association of Ireland Dry Stone Walling Association of Great Britain The Drystone Conservancy, US Project Alpter, Terraced Landscapes of the Alpine Arc, a network of associations in Western Europe Stonemasonry Building stone Types of wall Fences Stone (material) Natural materials Garden features Architectural elements Building materials Intangible Cultural Heritage of Ukraine
Dry stone
[ "Physics", "Technology", "Engineering" ]
3,158
[ "Structural engineering", "Natural materials", "Building engineering", "Architecture", "Construction", "Stonemasonry", "Materials", "Architectural elements", "Types of wall", "Components", "Matter", "Building materials" ]
458,565
https://en.wikipedia.org/wiki/Dimensionless%20physical%20constant
In physics, a dimensionless physical constant is a physical constant that is dimensionless, i.e. a pure number having no units attached and having a numerical value that is independent of whatever system of units may be used. The concept should not be confused with dimensionless numbers, which are not universally constant but remain constant only for a particular phenomenon. In aerodynamics, for example, if one considers one particular airfoil, the Reynolds number value of the laminar–turbulent transition is one relevant dimensionless number of the problem. However, it is strictly related to the particular problem: for example, it is related to the airfoil being considered and also to the type of fluid in which it moves. The term fundamental physical constant is sometimes used to refer to some dimensionless constants. Perhaps the best-known example is the fine-structure constant, α, which has an approximate value of 1/137.036. Terminology It has been argued the term fundamental physical constant should be restricted to the dimensionless universal physical constants that currently cannot be derived from any other source; this stricter definition is followed here. However, the term fundamental physical constant has also been used occasionally to refer to certain universal dimensioned physical constants, such as the speed of light c, vacuum permittivity ε0, Planck constant h, and the Newtonian constant of gravitation G, that appear in the most basic theories of physics. NIST and CODATA sometimes used the term in this less strict manner. Characteristics There is no exhaustive list of such constants, but it does make sense to ask about the minimal number of fundamental constants necessary to determine a given physical theory. Thus, the Standard Model requires 25 physical constants. About half of them are the masses of fundamental particles, which become "dimensionless" when expressed relative to the Planck mass or, alternatively, as coupling strengths with the Higgs field, along with the gravitational constant. Fundamental physical constants cannot be derived and have to be measured. Developments in physics may lead to either a reduction or an extension of their number: discovery of new particles, or new relationships between physical phenomena, would introduce new constants, while the development of a more fundamental theory might allow the derivation of several constants from a more fundamental constant. A long-sought goal of theoretical physics is to find first principles (a theory of everything) from which all of the fundamental dimensionless constants can be calculated and compared to the measured values. The large number of fundamental constants required in the Standard Model has been regarded as unsatisfactory since the theory's formulation in the 1970s. The desire for a theory that would allow the calculation of particle masses is a core motivation for the search for "Physics beyond the Standard Model". History In the 1920s and 1930s, Arthur Eddington embarked upon extensive mathematical investigation into the relations between the fundamental quantities in basic physical theories, later used as part of his effort to construct an overarching theory unifying quantum mechanics and cosmological physics. For example, he speculated on the potential consequences of the ratio of the electron radius to its mass.
Most notably, in a 1929 paper he set out an argument based on the Pauli exclusion principle and the Dirac equation that fixed the value of the reciprocal of the fine-structure constant as α⁻¹ = 16 + 16 × (16 − 1)/2 = 136. When its value was discovered to be closer to 137, he changed his argument to match that value. His ideas were not widely accepted, and subsequent experiments have shown that they were wrong (for example, none of the measurements of the fine-structure constant suggest an integer value; the modern CODATA value is α⁻¹ ≈ 137.035999). Though his derivations and equations were unfounded, Eddington was the first physicist to recognize the significance of universal dimensionless constants, now considered among the most critical components of major physical theories such as the Standard Model and ΛCDM cosmology. He was also the first to argue for the importance of the cosmological constant Λ itself, considering it vital for explaining the expansion of the universe, at a time when most physicists (including its discoverer, Albert Einstein) considered it an outright mistake or mathematical artifact and assumed a value of zero: this at least proved prescient, and a significant positive Λ features prominently in ΛCDM. Eddington may have been the first to attempt in vain to derive the basic dimensionless constants from fundamental theories and equations, but he was certainly not the last. Many others would subsequently undertake similar endeavors, and efforts occasionally continue even today. None have yet produced convincing results or gained wide acceptance among theoretical physicists. An empirical relation between the masses of the electron, muon and tau has been discovered by physicist Yoshio Koide, but this formula remains unexplained. Examples Dimensionless fundamental physical constants include: α, the fine-structure constant (≈ 1/137.036). This is also the square of the electron charge, expressed in Planck units, which defines the scale of charge of elementary particles with charge. The electron charge is the coupling constant for the electromagnetic interaction. μ or β, the proton-to-electron mass ratio (≈ 1836.15), the rest mass of the proton divided by that of the electron. More generally, the ratio of the rest masses of any pair of elementary particles. αs, the coupling constant for the strong force (≈ 1) Fine-structure constant One of the dimensionless fundamental constants is the fine-structure constant, α = e²/(4πε0ħc), where e is the elementary charge, ħ is the reduced Planck constant, c is the speed of light in vacuum, and ε0 is the permittivity of free space. The fine-structure constant is fixed to the strength of the electromagnetic force. At low energies, α ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures α ≈ 1/127. There is no accepted theory explaining the value of α; Richard Feynman famously called it "a magic number that comes to us with no understanding by man". Standard Model The original Standard Model of particle physics from the 1970s contained 19 fundamental dimensionless constants describing the masses of the particles and the strengths of the electroweak and strong forces. In the 1990s, neutrinos were discovered to have nonzero mass, and a quantity called the vacuum angle was found to be indistinguishable from zero. The complete Standard Model requires 25 fundamental dimensionless constants (Baez, 2011). At present, their numerical values are not understood in terms of any widely accepted theory and are determined only from measurement.
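Because α is dimensionless, its measured value can be reproduced from the (also measured) dimensional constants it is built from, as a consistency check rather than a derivation. A short sketch using 2018 CODATA values, hard-coded here rather than pulled from a library:

import math

e    = 1.602176634e-19   # elementary charge, C (exact in the 2019 SI)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s (exact)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # ~0.0072973525..., ~137.036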
These 25 constants are: the fine structure constant; the strong coupling constant; fifteen masses of the fundamental particles (relative to the Planck mass mP ≈ 2.18 × 10⁻⁸ kg), namely: six quarks six leptons the Higgs boson the W boson the Z boson four parameters of the Cabibbo–Kobayashi–Maskawa matrix, describing how quarks oscillate between different forms; four parameters of the Pontecorvo–Maki–Nakagawa–Sakata matrix, which does the same thing for neutrinos. Cosmological constants The cosmological constant, which can be thought of as the density of dark energy in the universe, is a fundamental constant in physical cosmology that has a dimensionless value of approximately 10⁻¹²² in Planck units. Other dimensionless constants are the measure of homogeneity in the universe, denoted by Q, which is explained below by Martin Rees, the baryon mass per photon, the cold dark matter mass per photon and the neutrino mass per photon. Barrow and Tipler Barrow and Tipler (1986) anchor their broad-ranging discussion of astrophysics, cosmology, quantum physics, teleology, and the anthropic principle in the fine-structure constant, the proton-to-electron mass ratio (which they, along with Barrow (2002), call β), and the coupling constants for the strong force and gravitation. Martin Rees's 'six numbers' Martin Rees, in his book Just Six Numbers, mulls over the following six dimensionless constants, whose values he deems fundamental to present-day physical theory and the known structure of the universe: N ≈ 10³⁶: the ratio of the electrostatic and the gravitational forces between two protons. This ratio is denoted α/αG in Barrow and Tipler (1986). N governs the relative importance of gravity and electrostatic attraction/repulsion in explaining the properties of baryonic matter; ε ≈ 0.007: The fraction of the mass of four protons that is released as energy when fused into a helium nucleus. ε governs the energy output of stars, and is determined by the coupling constant for the strong force; Ω ≈ 0.3: the ratio of the actual density of the universe to the critical (minimum) density required for the universe to eventually collapse under its gravity. Ω determines the ultimate fate of the universe. If Ω > 1, the universe may experience a Big Crunch. If Ω < 1, the universe may expand forever; λ ≈ 0.7: The ratio of the energy density of the universe, due to the cosmological constant, to the critical density of the universe. Others denote this ratio by ΩΛ; Q ≈ 10⁻⁵: The energy required to break up and disperse an instance of the largest known structures in the universe, namely a galactic cluster or supercluster, expressed as a fraction of the energy equivalent to the rest mass m of that structure, namely mc²; D = 3: the number of macroscopic spatial dimensions. N and ε govern the fundamental interactions of physics. The other constants (D excepted) govern the size, age, and expansion of the universe. These five constants must be estimated empirically. D, on the other hand, is necessarily a nonzero natural number and does not have an uncertainty. Hence most physicists would not deem it a dimensionless physical constant of the sort discussed in this entry. Any plausible fundamental physical theory must be consistent with these six constants, and must either derive their values from the mathematics of the theory, or accept their values as empirical.
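The first of these numbers can likewise be checked directly: N is the ratio of the Coulomb to the gravitational force between two protons, and since both forces scale as 1/r², the separation cancels. A minimal sketch, with CODATA values hard-coded for illustration:

import math

e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.67262192369e-27 # proton mass, kg

# Both forces go as 1/r^2, so r drops out of the ratio.
N = (e**2 / (4 * math.pi * eps0)) / (G * m_p**2)
print(f"{N:.3e}")  # ~1.24e36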
See also Cabibbo–Kobayashi–Maskawa matrix (Cabibbo angle) Dimensionless numbers in fluid mechanics Dirac large numbers hypothesis Neutrino oscillation Physical cosmology Standard Model Weinberg angle Fine-tuned universe Koide formula References Bibliography Martin Rees, 1999. Just Six Numbers: The Deep Forces that Shape the Universe. London: Weidenfeld & Nicolson. Josef Kuneš, 2012. Dimensionless Physical Quantities in Science and Engineering. Amsterdam: Elsevier. External articles General Fundamental Physical Constants from NIST Values of fundamental constants. CODATA, 2002. John Baez, 2002, "How Many Fundamental Constants Are There?" Simon Plouffe, 2004, "A search for a mathematical expression for mass ratios using a large database." Articles on variance of the fundamental constants John D. Barrow and John K. Webb, "Inconstant Constants – Do the inner workings of nature change with time?" Scientific American (June 2005). Michael Duff, 2002, "Comment on time-variation of fundamental constants." Dimensionless constants
Dimensionless physical constant
[ "Physics" ]
2,277
[ "Dimensionless constants", "Physical constants", "Physical quantities", "Fundamental constants" ]
458,675
https://en.wikipedia.org/wiki/Brinell%20scale
The Brinell scale measures the indentation hardness of materials. It determines hardness through the scale of penetration of an indenter, loaded on a material test-piece. It is one of several definitions of hardness in materials science. The hardness scale is expressed as the Brinell Hardness Number (BHN or HB) and was named for Johan August Brinell, who developed the method in the early 20th century. History Proposed by Swedish engineer Johan August Brinell in 1900, it was the first widely used and standardised hardness test in engineering and metallurgy. The large size of the indentation and possible damage to the test-piece limit its usefulness. However, it also had the useful feature that the hardness value divided by two gave the approximate UTS in ksi for steels. This feature contributed to its early adoption over competing hardness tests. Test details The typical test uses a 10 mm diameter steel ball as an indenter with a 3,000 kgf (29.42 kN) force. For softer materials, a smaller force is used; for harder materials, a tungsten carbide ball is substituted for the steel ball. The indentation is measured and hardness calculated as BHN = 2P / (πD(D − √(D² − d²))), where: BHN = Brinell Hardness Number (kgf/mm²) P = applied load in kilogram-force (kgf) D = diameter of indenter (mm) d = diameter of indentation (mm) Brinell hardness is sometimes quoted in megapascals; the Brinell hardness number is multiplied by the acceleration due to gravity, 9.80665 m/s², to convert it to megapascals. The Brinell hardness number can be correlated with the ultimate tensile strength (UTS), although the relationship is dependent on the material, and therefore determined empirically. The relationship is based on Meyer's index (n) from Meyer's law. If Meyer's index is less than 2.2 then the ratio of UTS to BHN is 0.36. If Meyer's index is greater than 2.2, then the ratio increases. The Brinell hardness is designated by the most commonly used test standards (ASTM E10-14 and ISO 6506–1:2005) as HBW (H from hardness, B from Brinell and W from the material of the indenter, tungsten (wolfram) carbide). In former standards HB or HBS were used to refer to measurements made with steel indenters. HBW is calculated in both standards using SI units as HBW = 0.102 × 2F / (πD(D − √(D² − d²))), where: F = applied load (newtons) D = diameter of indenter (mm) d = diameter of indentation (mm) Common values When quoting a Brinell hardness number (BHN or more commonly HB), the conditions of the test used to obtain the number must be specified. The standard format for specifying tests can be seen in the example "HBW 10/3000". "HBW" means that a tungsten carbide ball indenter was used (the W comes from wolfram, the Spanish/Swedish/German name for tungsten and the source of its chemical symbol), as opposed to "HBS", which means a hardened steel ball. The "10" is the ball diameter in millimeters. The "3000" is the force in kilograms force. The hardness may also be shown as XXX HB YYD², where XXX is the hardness value and YY is the force factor: the load in kgf equals YY times the square of the ball diameter in mm, with YY depending on the material type (5 for aluminum alloys, 10 for copper alloys, 30 for steels). Thus a typical steel hardness could be written: 250 HB 30D². It could be a maximum or a minimum.
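As a worked check of the formula above, here is a short sketch computing the hardness number from a hypothetical measurement (the indentation diameter is a value chosen purely for illustration):

import math

def brinell(P, D, d):
    # Brinell hardness number; P in kgf, D and d in mm.
    return (2 * P) / (math.pi * D * (D - math.sqrt(D**2 - d**2)))

# e.g. the standard HBW 10/3000 test leaving a 4 mm indentation:
bhn = brinell(P=3000, D=10, d=4)
print(round(bhn))            # ~229 (kgf/mm^2)
print(round(bhn * 9.80665))  # ~2244 MPa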
Standards International (ISO) and European (CEN) Standard US standard (ASTM International) See also Brinelling Hardness comparison Knoop hardness test Leeb rebound hardness test Rockwell scale Vickers hardness test References External links Brinell Hardness Test – Methods, advantages, disadvantages, applications Rockwell to Brinell conversion chart (Brinell, Rockwell A,B,C) Struers hardness conversion table (Vickers, Brinell, Rockwell B,C,D) Brinell Hardness HB conversion chart (MPa, Brinell, Vickers, Rockwell C) Hardness tests Dimensionless numbers Scales
Brinell scale
[ "Materials_science", "Mathematics" ]
904
[ "Dimensionless numbers", "Mathematical objects", "Materials testing", "Hardness tests", "Numbers" ]
458,866
https://en.wikipedia.org/wiki/Solid%20mechanics
Solid mechanics (also known as mechanics of solids) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents. Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical engineering, for geology, and for many branches of physics and chemistry such as materials science. It has specific applications in many other areas, such as understanding the anatomy of living beings and the design of dental prostheses and surgical implants. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. Solid mechanics is a vast subject because of the wide range of solid materials available, such as steel, wood, concrete, biological materials, textiles, geological materials, and plastics. Fundamental aspects A solid is a material that can support a substantial amount of shearing force over a given time scale during a natural or industrial process or action. This is what distinguishes solids from fluids: fluids, too, support normal forces (forces directed perpendicular to the material plane across which they act, the normal stress being the normal force per unit area of that plane), but only solids can additionally sustain shear. Shearing forces, in contrast with normal forces, act parallel rather than perpendicular to the material plane, and the shearing force per unit area is called shear stress. Solid mechanics therefore examines the shear stress, deformation and failure of solid materials and structures. The most common topics covered in solid mechanics include: stability of structures - examining whether structures can return to a given equilibrium after disturbance or partial/complete failure, see Structure mechanics dynamical systems and chaos - dealing with mechanical systems highly sensitive to their given initial position thermomechanics - analyzing materials with models derived from principles of thermodynamics biomechanics - solid mechanics applied to biological materials e.g. bones, heart tissue geomechanics - solid mechanics applied to geological materials e.g. ice, soil, rock vibrations of solids and structures - examining vibration and wave propagation from vibrating particles and structures, vital in mechanical, civil, mining, aeronautical, maritime/marine and aerospace engineering fracture and damage mechanics - dealing with crack-growth mechanics in solid materials composite materials - solid mechanics applied to materials made up of more than one compound e.g. reinforced plastics, reinforced concrete, fiber glass variational formulations and computational mechanics - numerical solutions to mathematical equations arising from various branches of solid mechanics e.g. finite element method (FEM) experimental mechanics - design and analysis of experimental methods to examine the behavior of solid materials and structures Relationship to continuum mechanics Solid mechanics inhabits a central place within continuum mechanics; the field of rheology presents an overlap between solid and fluid mechanics. Response models A material has a rest shape, and its shape departs from the rest shape due to stress.
The amount of departure from the rest shape is called deformation; the proportion of deformation to original size is called strain. If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of proportionality is called the modulus of elasticity. This region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. These are basic models that describe how a solid responds to an applied stress: Elasticity – When an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by the linear elasticity equations such as Hooke's law. Viscoelasticity – These are materials that behave elastically, but also have damping: when the stress is applied and removed, work has to be done against the damping effects and is converted into heat within the material, resulting in a hysteresis loop in the stress–strain curve. This implies that the material response has time-dependence. Plasticity – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically and does not return to its previous state. That is, deformation that occurs after yield is permanent. Viscoplasticity – Combines theories of viscoelasticity and plasticity and applies to materials like gels and mud. Thermoelasticity – There is coupling of mechanical with thermal responses. In general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models. Timeline 1452–1519: Leonardo da Vinci made many contributions 1638: Galileo Galilei published the book "Two New Sciences" in which he examined the failure of simple structures 1660: Hooke's law by Robert Hooke 1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica" which contains Newton's laws of motion 1750: Euler–Bernoulli beam equation 1700–1782: Daniel Bernoulli introduced the principle of virtual work 1707–1783: Leonhard Euler developed the theory of buckling of columns 1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures 1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy. This theorem includes the method of least work as a special case 1874: Otto Mohr formalized the idea of a statically indeterminate structure. 1922: Timoshenko corrects the Euler–Bernoulli beam equation 1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames. 1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework 1942: R. Courant divided a domain into finite subregions 1956: M. J. Turner, R. W. Clough, H. C. Martin, and L. J.
Topp's paper on the "Stiffness and Deflection Analysis of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today See also Strength of materials - Specific definitions and the relationships between stress and strain. Applied mechanics Materials science Continuum mechanics Fracture mechanics Impact (mechanics) Solid-state physics References Notes Bibliography L.D. Landau, E.M. Lifshitz, Course of Theoretical Physics: Theory of Elasticity, Butterworth-Heinemann. J.E. Marsden, T.J. Hughes, Mathematical Foundations of Elasticity, Dover. P.C. Chou, N. J. Pagano, Elasticity: Tensor, Dyadic, and Engineering Approaches, Dover. R.W. Ogden, Non-linear Elastic Deformation, Dover. S. Timoshenko and J.N. Goodier, "Theory of Elasticity", 3rd ed., New York, McGraw-Hill, 1970. G.A. Holzapfel, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, 2000. A.I. Lurie, Theory of Elasticity, Springer, 1999. L.B. Freund, Dynamic Fracture Mechanics, Cambridge University Press, 1990. R. Hill, The Mathematical Theory of Plasticity, Oxford University, 1950. J. Lubliner, Plasticity Theory, Macmillan Publishing Company, 1990. J. Ignaczak, M. Ostoja-Starzewski, Thermoelasticity with Finite Wave Speeds, Oxford University Press, 2010. D. Bigoni, Nonlinear Solid Mechanics: Bifurcation Theory and Material Instability, Cambridge University Press, 2012. Y. C. Fung, Pin Tong and Xiaohong Chen, Classical and Computational Solid Mechanics, 2nd Edition, World Scientific Publishing, 2017. Mechanics Continuum mechanics Rigid bodies mechanics
Solid mechanics
[ "Physics", "Engineering" ]
1,731
[ "Solid mechanics", "Continuum mechanics", "Classical mechanics", "Mechanics", "Mechanical engineering" ]
2,547,532
https://en.wikipedia.org/wiki/Stress%20intensity%20factor
In fracture mechanics, the stress intensity factor (K) is used to predict the stress state ("stress intensity") near the tip of a crack or notch caused by a remote load or residual stresses. It is a theoretical construct usually applied to a homogeneous, linear elastic material and is useful for providing a failure criterion for brittle materials, and is a critical technique in the discipline of damage tolerance. The concept can also be applied to materials that exhibit small-scale yielding at a crack tip. The magnitude of K depends on specimen geometry, the size and location of the crack or notch, and the magnitude and the distribution of loads on the material. It can be written as K = σ√(πa) · f(a/W), where f(a/W) is a specimen-geometry-dependent function of the crack length, a, and the specimen width, W, and σ is the applied stress. Linear elastic theory predicts that the stress distribution (σij) near the crack tip, in polar coordinates (r, θ) with origin at the crack tip, has the form σij = (K/√(2πr)) fij(θ) plus higher-order terms, where K is the stress intensity factor (with units of stress × length^1/2) and fij is a dimensionless quantity that varies with the load and geometry. Theoretically, as r goes to 0, the stress goes to infinity, resulting in a stress singularity. Practically however, this relation breaks down very close to the tip (small r) because plasticity typically occurs at stresses exceeding the material's yield strength and the linear elastic solution is no longer applicable. Nonetheless, if the crack-tip plastic zone is small in comparison to the crack length, the asymptotic stress distribution near the crack tip is still applicable. Stress intensity factors for various modes In 1957, G. Irwin found that the stresses around a crack could be expressed in terms of a scaling factor called the stress intensity factor. He found that a crack subjected to any arbitrary loading could be resolved into three types of linearly independent cracking modes. These load types are categorized as Mode I, II, or III as shown in the figure. Mode I is an opening (tensile) mode where the crack surfaces move directly apart. Mode II is a sliding (in-plane shear) mode where the crack surfaces slide over one another in a direction perpendicular to the leading edge of the crack. Mode III is a tearing (antiplane shear) mode where the crack surfaces move relative to one another and parallel to the leading edge of the crack. Mode I is the most common load type encountered in engineering design. Different subscripts are used to designate the stress intensity factor for the three different modes. The stress intensity factor for mode I is designated KI and applied to the crack opening mode. The mode II stress intensity factor, KII, applies to the crack sliding mode and the mode III stress intensity factor, KIII, applies to the tearing mode. These factors are formally defined as the limits KI = lim(r→0) √(2πr) σyy(r, 0), KII = lim(r→0) √(2πr) σyx(r, 0) and KIII = lim(r→0) √(2πr) σyz(r, 0). Relationship to energy release rate and J-integral In plane stress conditions, the strain energy release rate (G) for a crack under pure mode I or pure mode II loading is related to the stress intensity factor by GI = KI²/E and GII = KII²/E, where E is the Young's modulus and ν is the Poisson's ratio of the material. The material is assumed to be isotropic, homogeneous, and linear elastic. The crack has been assumed to extend along the direction of the initial crack. For plane strain conditions, the equivalent relation is a little more complicated: GI = (1 − ν²)KI²/E and GII = (1 − ν²)KII²/E. For pure mode III loading, GIII = KIII²/(2μ), where μ is the shear modulus. For general loading in plane strain, the linear combination holds: G = (1 − ν²)(KI² + KII²)/E + KIII²/(2μ). A similar relation is obtained for plane stress by adding the contributions for the three modes.
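To make the bookkeeping concrete, here is a short sketch combining the textbook mode I result for a through crack of length 2a in an infinite plate, KI = σ√(πa), with the plane-strain energy release rate given above. The stress, crack size and material values are hypothetical placeholders chosen for illustration:

import math

sigma = 100e6  # remote tensile stress, Pa (hypothetical)
a     = 0.005  # half crack length, m (hypothetical)
E     = 200e9  # Young's modulus, Pa (typical of steel)
nu    = 0.3    # Poisson's ratio

K_I = sigma * math.sqrt(math.pi * a)   # Pa*sqrt(m)
G   = (1 - nu**2) * K_I**2 / E         # J/m^2, plane strain
print(K_I / 1e6, G)  # ~12.5 MPa*sqrt(m), ~715 J/m^2

Comparing KI against the material's measured KIc then gives a simple go/no-go check on crack propagation.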
The above relations can also be used to connect the J-integral to the stress intensity factor, because for a linear elastic material the J-integral equals the energy release rate, J = G; in plane strain, for example, J = (1 − ν²)KI²/E for pure mode I. Critical stress intensity factor The stress intensity factor, K, is a parameter that amplifies the magnitude of the applied stress that includes the geometrical parameter (load type). Stress intensity in any mode situation is directly proportional to the applied load on the material. If a very sharp crack, or a V-notch, can be made in a material, the minimum value of K can be empirically determined, which is the critical value of stress intensity required to propagate the crack. This critical value determined for mode I loading in plane strain is referred to as the critical fracture toughness (KIc) of the material. KIc has units of stress times the root of a distance (e.g. MN/m^3/2). The units of KIc imply that the fracture stress of the material must be reached over some critical distance in order for KIc to be reached and crack propagation to occur. The Mode I critical stress intensity factor, KIc, is the most often used engineering design parameter in fracture mechanics and hence must be understood if we are to design fracture-tolerant materials used in bridges, buildings, aircraft, or even bells. Polishing cannot detect a crack. Typically, if a crack can be seen it is very close to the critical stress state predicted by the stress intensity factor. G–criterion The G–criterion is a fracture criterion that relates the critical stress intensity factor (or fracture toughness) to the stress intensity factors for the three modes. This failure criterion is written as KIc² = KI² + KII² + (E′/(2μ)) KIII², where KIc is the fracture toughness, E′ = E/(1 − ν²) for plane strain and E′ = E for plane stress. The critical stress intensity factor for plane stress is often written as Kc. Examples Infinite plate: Uniform uniaxial stress Penny-shaped crack in an infinite domain Finite plate: Uniform uniaxial stress Edge crack in a plate under uniaxial stress Infinite plate: Slanted crack in a biaxial stress field Crack in a plate under point in-plane force Loaded crack in a plate Stack of Parallel Cracks in an Infinite Plate If the crack spacing is much greater than the crack length (h >> a), the interaction effect between neighboring cracks can be ignored, and the stress intensity factor is equal to that of a single crack of length 2a; the stress intensity factor at the crack tip is then KI = σ√(πa). If the crack length is much greater than the spacing (a >> h), the cracks can be considered as a stack of semi-infinite cracks, and the stress intensity factor at the crack tip is then governed by the spacing h rather than the crack length. Compact tension specimen Single-edge notch-bending specimen See also Fracture mechanics Fracture toughness Strain energy release rate J-integral Material failure theory Paris' law References External links Kathiresan, K.; Hsu, T. M.; Brussat, T. R., 1984, Advanced Life Analysis Methods. Volume 2. Crack Growth Analysis Methods for Attachment Lugs Stress Intensity Factor on www.fracturemechanics.org, by Bob McGinty Fracture mechanics
Stress intensity factor
[ "Materials_science", "Engineering" ]
1,303
[ "Structural engineering", "Materials degradation", "Materials science", "Fracture mechanics" ]
2,549,795
https://en.wikipedia.org/wiki/Bioswale
Bioswales are channels designed to concentrate and convey stormwater runoff while removing debris and pollution. Bioswales can also be beneficial in recharging groundwater. Bioswales are typically vegetated, mulched, or xeriscaped. They consist of a swaled drainage course with gently sloped sides (less than 6%). Bioswale design is intended to safely maximize the time water spends in the swale, which aids the collection and removal of pollutants, silt and debris. Depending on the site topography, the bioswale channel may be straight or meander. Check dams are also commonly added along the bioswale to increase stormwater infiltration. A bioswale's make-up can be influenced by many different variables, including climate, rainfall patterns, site size, budget, and vegetation suitability. It is important to maintain bioswales to ensure the best possible efficiency and effectiveness in removal of pollutants from stormwater runoff. Planning for maintenance is an important step, which can include the introduction of filters or large rocks to prevent clogging. Annual maintenance through soil testing, visual inspection, and mechanical testing is also crucial to the health of a bioswale. Bioswales are commonly applied along streets and around parking lots, where substantial automotive pollution settles on the pavement and is flushed by the first instance of rain, known as the first flush. Bioswales, or other types of biofilters, can be created around the edges of parking lots to capture and treat stormwater runoff before releasing it to the watershed or storm sewer. Contaminants addressed Bioswales work to remove pollutants through vegetation and the soil. As the stormwater runoff flows through the bioswale, the pollutants are captured and settled by the leaves and stems of the plants. The pollutants then enter the soil, where they decompose or can be broken down by bacteria in healthy soil. There are several classes of water pollutants that may be collected or arrested with bioswales. These fall into the categories of silt, inorganic contaminants, organic chemicals and pathogens. Silt. The way bioswales and their plantings are constructed slows the conveyance of silt and reduces the turbidity of receiving waters. Filters can be established to capture debris and silt during the process. Organics. Many organic contaminants, including polycyclic aromatic hydrocarbons, will volatilize or degrade over time, and bioswales slow their conveyance into waterways before they can affect aquatic life. Although not all organic material will be captured, the concentration of organic material is greatly reduced by bioswales. Pathogens are deprived of a host or of a nutrient supply long enough for them to become the target of a heterotroph. Common inorganic compounds are macronutrients such as phosphates and nitrates. The principal source of these nutrients is agricultural runoff attributable to excess fertilization. Excess phosphates and nitrates can cause eutrophication in disposal zones and receiving waters. Specific bioswale plants absorb these excess nutrients. Metallic compounds such as mercury, lead, chromium, cadmium and other heavy metals are concentrated in the structures. These metals slowly poison the surrounding soil, so regular soil removal is required to prevent the metals from dissolving and releasing back into the environment. Some bioswales are designed to include hyperaccumulator plant species. These plants absorb but do not transform the metals.
Cuttings from these plants often decompose back into the swale or are pruned by gardening services that do not know the compost they are collecting is poisonous. Best locations Bioswales can be implemented in areas that require stormwater management to regulate the runoff velocity and decontaminate the runoff. Bioswales are created to handle the first flush of pollutants during a rain event; therefore, locations that have large areas of impervious surface, such as roads, parking lots, or rooftops, can benefit from additions of bioswales. They can also be integrated into road medians, curb cutouts, sidewalks, or any public space. Benefits Bioswales are useful low-impact development features that decrease the velocity of stormwater runoff while removing pollutants from the discharge. They are extremely beneficial in protecting surface water and local waterways from excessive pollution from stormwater runoff. The longer the runoff stays within the bioswale, the better the pollutant removal outcome. They are also beneficial in removing standing ponds that could potentially attract mosquitoes. Bioswales can also be designed to be aesthetically pleasing, to attract animals and to create habitats. Bioswales can also be beneficial for groundwater recharge. Maintenance Improper maintenance can lead to high restoration costs to address inefficient bioswales. An accumulation of large sediments, trash, and improper growth of vegetation can all affect the quality and performance of bioswales. It is beneficial at the planning stages to set apart easements to allow for easier maintenance of bioswales, whether through adequate space to locate machinery or safety for those working. Different types of filters can be used to catch sediments. Grass filter strips or rock inlets can be used to filter sediments and particulates; however, without proper maintenance, runoff could flow away from the bioswales due to blockage. Structural inlets have become more common due to their ease of maintenance and use, and their effectiveness. Avoiding the use of floating mulch and selecting the best-fit, low-maintenance plants ensure better efficiency in the bioswales. Depending on a community's needs for a bioswale, a four-step assessment program can be developed. Visual inspection, capacity testing, synthetic runoff testing, and monitoring are the four steps that can be used to evaluate the performance and maintenance of bioswales. Routine inspection is required to ensure that the performance and aesthetics of bioswales are not compromised. The timing and frequency of inspections vary among local governments, but inspections should occur at least once a year. Various aspects of inspection can take place, either visually or mechanically. Visual observation of the vegetation, water, and inlets is crucial to ensure performance. Some organizations utilize checklists to streamline the visual inspection process. There are different methods to determine if a bioswale needs maintenance. Bioswales are benchmarked to meet a specific level of infiltration to determine if maintenance is required. A staff gauge is used to measure the infiltration rate. Soil chemistry testing is also required to determine if the soil has an elevated level of any pollutant. Phosphorus and high levels of salinity in the soil are two common pollutants that should be attended to. Analysis of inflow and outflow pollutant concentrations is also another way to determine the performance level of bioswales. Maintenance can span three different levels of care.
Aesthetic maintenance is required to remove weeds that affect the performance of the other plants and the bioswale itself, to clean up and remove trash, and to maintain the appearance of the vegetation. Partial restoration is needed when the inlet is blocked by sediments or when vegetation needs to be replaced. Full restoration is required when the bioswales no longer filter pollutants adequately and overall performance is severely lacking. Design Bioswales experience short, potentially intense periods of rain, flooding and pollutant loading followed by dry seasons. It is important to take into account how the vegetation selected for the bioswales will grow and to understand what types of plants are considered the best fit. There are four types of bioswales that can be constructed based on the needs of the location. Low grass bioswales utilize low-growing grass that can be landscaped, similar to lawns. These types of bioswales tend to be less effective than vegetated bioswales in treating stormwater runoff and sustaining an adequate collection time. Vegetated bioswales are created with taller-growing plants, ornamental vegetation, shrubs, and even trees. These types can also be lined with rocks to slow down the velocity of stormwater runoff flowing through the bioswales, increasing the collection time for decontamination. Vegetated bioswales can also include vegetation that removes certain chemicals from runoff very efficiently. Low-water-use bioswales are helpful in areas that tend to be drier, with a hotter climate. These xeriscape bioswales carry runoff generally only after rain and storms, and stay dry otherwise. Wet bioswales are similar to wetlands in that they retain water for a much longer period of time, which allows for infiltration of stormwater instead of simply emptying the water at the end of the bioswale into storm drain inlets. Bioswales require a certain soil composition that does not contain more than 5% clay. The soil itself should not be contaminated before implementation. Bioswales should be constructed with a longitudinal slope to allow sediments to settle. The maximum slope of bioswales is 3:1. A minimum clearance is required to ensure that other infrastructure would not be damaged. The overflow drain should be located at least 6 inches above the ground plane to allow for maximum concentration time of stormwater runoff in the bioswales. Rocks can also be used to slow down the runoff velocity. The use of filters is important to prevent inlets from becoming blocked by sediments or trash. Examples Two early examples of scientifically designed bioswales for large-scale applications are found in the western US. In 1996, for Willamette River Park in Portland, Oregon, a total of 2,330 lineal feet of bioswale was designed and installed to capture and prevent pollutant runoff from entering the Willamette River. Intermittent check dams were installed to further aid silt capture, which reduced suspended solids entering the river system by 50%. A second example of a large-scale designed bioswale is at the Carneros Business Park, Sonoma County, California. Starting in 1997, the project design team worked with the California Department of Fish and Game and County of Sonoma to produce a detailed design to channel surface runoff at the perimeter of a large parking area. Surface runoff consists of building roof runoff, parking lot runoff and overland flow from properties to the north of the project site. A total of two lineal miles of bioswale was designed into the project.
The purpose of the bioswale was to minimize runoff contaminants entering Sonoma Creek. The bioswale channel is grass-lined and nearly linear in form; the downslope gradient is approximately 4% and the cross-slope gradient approximately 6%. A more recent project is the "Street Edge Alternatives" (SEA) project in Seattle, Washington, completed in 2001. Rather than using traditional piping, the SEA project aimed to create a natural landscape representing what the area was like before development. The project street was 11% more pervious than a standard street and was characterized by evergreen trees and bioswales. The bioswales were planted on graded slopes with wetland and upland plants, and other landscaping focused on native and salmon-friendly plants. SEA provided a strong stormwater runoff mitigation benefit that helps protect Seattle's creek ecology, and the project street created a more inviting and aesthetically pleasing streetscape than hard landscaping. The New York City Department of Environmental Protection (NYC DEP) has built more than 11,000 curbside bioswales, which are referred to as 'rain gardens'. Rain gardens are constructed throughout the city to manage stormwater and to improve the water quality of city waterways. The care and tending of rain gardens is a partnership between the NYC DEP and a group of citizen volunteers called "harbor protectors". Rain gardens are inspected and cleaned at least once a week. Permaculture In permaculture, swales are used for water harvesting. See also Bioretention Constructed wetland Green infrastructure Green urbanism Infiltration Rain gardens Riparian zone Soil contamination Storm water Sustainable drainage system Urban runoff Water-sensitive urban design References External links Combating Climate Change with Landscape Architecture Resource Guide Sustainable Residential Design: Improving Water Efficiency Environmental engineering Environmental soil science Hydrology and urban planning Gardening aids Landscape Waste treatment technology Stormwater management Water conservation
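As a rough illustration of the design constraints described in the Design section above (clay content no more than 5%, side slope no steeper than 3:1, overflow drain at least 6 inches above the ground plane), the following minimal Python sketch checks a candidate design against them. The function and parameter names are hypothetical, chosen here only for illustration; real bioswale design is governed by local stormwater codes.

```python
# Minimal sketch: check a candidate bioswale design against the
# rule-of-thumb constraints quoted in the Design section above.
# All names here are hypothetical illustrations, not a real standard.

def check_bioswale_design(clay_fraction, side_slope_h_per_v, drain_height_in):
    """Return a list of constraint violations (empty list = passes)."""
    problems = []
    if clay_fraction > 0.05:          # soil must contain no more than 5% clay
        problems.append("soil clay content exceeds 5%")
    if side_slope_h_per_v < 3.0:      # 3:1 (horizontal:vertical) assumed as max steepness
        problems.append("side slope steeper than 3:1")
    if drain_height_in < 6.0:         # overflow drain >= 6 inches above grade
        problems.append("overflow drain less than 6 inches above ground plane")
    return problems

# Example: a design with 4% clay, 4:1 slopes, and an 8-inch drain height passes.
print(check_bioswale_design(0.04, 4.0, 8.0))   # -> []
print(check_bioswale_design(0.08, 2.0, 4.0))   # -> three violations
```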
Bioswale
[ "Chemistry", "Engineering", "Environmental_science" ]
2,503
[ "Hydrology", "Water treatment", "Stormwater management", "Chemical engineering", "Environmental soil science", "Water pollution", "Civil engineering", "Hydrology and urban planning", "Environmental engineering", "Waste treatment technology" ]
2,552,441
https://en.wikipedia.org/wiki/Event%20%28particle%20physics%29
In particle physics, an event refers to the results just after a fundamental interaction takes place between subatomic particles, occurring in a very short time span, at a well-localized region of space. Because of the uncertainty principle, an event in particle physics does not have quite the same meaning as it does in the theory of relativity, in which an "event" is a point in spacetime which can be known exactly, i.e., a spacetime coordinate. Overview In a typical particle physics event, the incoming particles are scattered or destroyed, and up to hundreds of particles can be produced, although few, if any, are likely to be previously undiscovered particles. In the old bubble chambers and cloud chambers, events could be seen by observing the tracks of charged particles emerging from the region of the event and curving under the magnetic field applied across the chamber. At modern particle accelerators, events are the result of the interactions which occur from a beam crossing inside a particle detector. Physical quantities used to analyze events include the differential cross section, the flux of the beams (which in turn depends on the number density of the particles in the beam and their average velocity), and the rate and luminosity of the experiment. Individual particle physics events are modeled by scattering theory based on an underlying quantum field theory of the particles and their interactions. The S-matrix is used to characterize the probabilities of the various possible outgoing particle states given the incoming particle states. For suitable quantum field theories, the S-matrix may be calculated by a perturbative expansion in terms of Feynman diagrams. Events occur naturally in astrophysics and geophysics, such as subatomic particle showers produced from cosmic ray scattering events. References Notes Further reading Experimental particle physics
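As a worked complement to the quantities named above, the expected event rate is the product of the interaction cross section and the instantaneous luminosity, and the expected number of events follows from the integrated luminosity; this is the standard relation, stated here in LaTeX for concreteness:

```latex
% Standard relation between event rate, cross section and luminosity.
R = \sigma \, \mathcal{L},
\qquad
N_{\text{events}} = \sigma \int \mathcal{L}\, \mathrm{d}t
```

For example, a process with a cross section of 1 nb (10^-33 cm^2) at an instantaneous luminosity of 10^34 cm^-2 s^-1 yields an expected rate of about ten events per second.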
Event (particle physics)
[ "Physics" ]
356
[ "Experimental physics", "Particle physics", "Experimental particle physics" ]
2,552,636
https://en.wikipedia.org/wiki/Lund%20string%20model
In particle physics, the Lund string model is a phenomenological model of hadronization. It treats all but the highest-energy gluons as field lines, which are attracted to each other due to the gluon self-interaction and so form a narrow tube (or string) of strong color field. This contrasts with electric or magnetic field lines, which spread out because the photon, the carrier of the electromagnetic force, does not interact with itself. The model is named after the particle theory group of Lund University, where it was developed. It derived from the 1977 PhD thesis of Carsten Peterson, supervised by Bo Andersson and Gösta Gustafson. The model was refined by contributions from researchers of the group such as Torbjörn Sjöstrand, Bo Söderberg, Gunnar Ingelman, Hans-Uno Bengtsson and Ulf Pettersson. In 1979, the model was able to describe gluon jet fragmentation by considering the force field to be similar to a massless relativistic string. The model successfully predicted a specific asymmetry in the particles produced in electron–positron collisions, observed in 1980. String fragmentation is one of the parton fragmentation models used in the PYTHIA/Jetset and UCLA event generators, and it explains many features of hadronization quite well. In particular, the model predicts that in addition to the particle jets formed along the original paths of two separating quarks, there will be a spray of hadrons produced between the jets by the string itself—which is precisely what is observed. See also QCD string References Quantum chromodynamics Experimental particle physics
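The quantitative heart of the model, not quoted in the text above but standard in the literature, is the Lund symmetric fragmentation function, which gives the probability for a hadron to take a fraction z of the remaining light-cone momentum at each string break; a and b are tunable parameters and m_⊥ is the hadron's transverse mass:

```latex
% Lund symmetric fragmentation function (up to a normalization constant).
f(z) \;\propto\; \frac{(1-z)^{a}}{z}\,
\exp\!\left(-\,\frac{b\, m_{\perp}^{2}}{z}\right),
\qquad
m_{\perp}^{2} = m^{2} + p_{\perp}^{2}
```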
Lund string model
[ "Physics" ]
343
[ "Experimental physics", "Particle physics", "Experimental particle physics" ]
15,603,775
https://en.wikipedia.org/wiki/Nitrogen%E2%80%93phosphorus%20detector
The nitrogen–phosphorus detector (NPD), also known as the thermionic specific detector (TSD), is a detector commonly used with gas chromatography, in which thermal energy is used to ionize an analyte. It is a type of flame thermionic detector (FTD), the other being the alkali flame-ionization detector (AFID, also known as AFD). With this method, nitrogen and phosphorus can be selectively detected with a sensitivity 10^4 times greater than that for carbon. NP-Mode A concentration of hydrogen gas is used that is just below the minimum required for ignition. A rubidium or cesium bead, mounted over the nozzle, ignites the hydrogen (by acting catalytically) and forms a cold plasma. Excitation of the alkali metal results in ejection of electrons, which in turn are detected as a current flow between an anode and cathode in the chamber. As nitrogen- or phosphorus-containing analytes exit the column, they reduce the work function of the metal bead, producing an increase in current. Since the alkali metal bead is consumed over time, it must be replaced regularly. See also Gas chromatography External links Gas chromatography
Nitrogen–phosphorus detector
[ "Chemistry" ]
268
[ "Chromatography", "Gas chromatography", "Analytical chemistry stubs" ]
23,577,988
https://en.wikipedia.org/wiki/Chi%20site
A Chi site or Chi sequence is a short stretch of DNA in the genome of a bacterium near which homologous recombination is more likely to occur than on average across the genome. Chi sites serve as stimulators of DNA double-strand break repair in bacteria; such breaks can arise from radiation or chemical treatments, or result from replication fork breakage during DNA replication. The sequence of the Chi site is unique to each group of closely related organisms; in E. coli and other enteric bacteria, such as Salmonella, the core sequence is 5'-GCTGGTGG-3' plus important nucleotides about 4 to 7 nucleotides to the 3' side of the core sequence. The existence of Chi sites was originally discovered in the genome of bacteriophage lambda, a virus that infects E. coli, but they are now known to occur about 1000 times in the E. coli genome. The Chi sequence serves as a signal to the RecBCD helicase-nuclease that triggers a major change in the activities of this enzyme. Upon encountering the Chi sequence as it unwinds DNA, RecBCD cuts the DNA a few nucleotides to the 3' side of Chi, within the important sequences noted above; depending on the reaction conditions, this cut is either a simple nick on the 3'-ended strand or a switch of nuclease activity from cutting the 3'-ended strand to cutting the 5'-ended strand. In either case the resulting 3' single-stranded DNA (ssDNA) is bound by multiple molecules of RecA protein that facilitate "strand invasion," in which one strand of a homologous double-stranded DNA is displaced by the RecA-associated ssDNA. Strand invasion forms a joint DNA molecule called a D-loop. Resolution of the D-loop is thought to occur by replication primed by the 3' end generated at Chi (in the D-loop). Alternatively, the D-loop may be converted into a Holliday junction by cutting of the D-loop and a second exchange of DNA strands; the Holliday junction can be converted into linear duplex DNA by cutting of the Holliday junction and ligation of the resultant nicks. Either type of resolution can generate recombinant DNA molecules if the two interacting DNAs are genetically different, as well as repair the initially broken DNA. Chi sites are sometimes referred to as "recombination hot spots". The name "Chi" is an abbreviation of "crossover hotspot instigator". In reference to E. coli phage lambda, the term is sometimes written as "χ site", using the Greek letter chi; for E. coli and other bacteria the term "Chi" is proper. References Amundsen SK, Sharp JW, Smith GR (2016) RecBCD Enzyme "Chi Recognition" Mutants Recognize Chi Recombination Hotspots in the Right DNA Context. Genetics 204(1):139-52. Taylor AF, Amundsen SK, Smith GR (2016) Unexpected DNA context-dependence identifies a new determinant of Chi recombination hotspots. Nucleic Acids Res. 44(17):8216-28. Smith GR. (2012). How RecBCD Enzyme and Chi Promote DNA Break Repair and Recombination: a Molecular Biologist's View. Microbiol Mol Biol Rev. 76(2): 217-28. Dillingham MS, Kowalczykowski SC. (2008). RecBCD enzyme and the repair of double-stranded DNA breaks. Microbiol Mol Biol Rev. 72(4): 642-671. Amundsen SK, Taylor AF, Reddy M, Smith GR. (2007). Intersubunit signaling in RecBCD enzyme, a complex protein machine regulated by Chi hot spots. Genes Dev 21(24): 3296-3307. Stahl FW. (2005). Chi: A little sequence controls a big enzyme. Genetics 170(2): 487–493. External links Homologous Recombination Interactive Animation, online artwork from Trun N and Trempy J, Fundamental Bacterial Genetics. Biochemistry Genetics
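Because the core Chi sequence given above is a fixed eight-base motif, locating candidate sites in a genome reduces to a substring scan. The sketch below is a minimal illustration in Python; it ignores the context-dependent nucleotides 4 to 7 bases to the 3' side discussed above, and the example sequence is made up.

```python
# Minimal sketch: find occurrences of the E. coli Chi core motif
# 5'-GCTGGTGG-3' on the given strand of a DNA string.

CHI_CORE = "GCTGGTGG"

def find_chi_sites(dna: str) -> list[int]:
    """Return 0-based start positions of the Chi core motif."""
    dna = dna.upper()
    positions, start = [], dna.find(CHI_CORE)
    while start != -1:
        positions.append(start)
        start = dna.find(CHI_CORE, start + 1)
    return positions

# Made-up example sequence containing one Chi core motif:
example = "ATTGCGCTGGTGGAACT"
print(find_chi_sites(example))  # -> [5]
```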
Chi site
[ "Chemistry", "Biology" ]
877
[ "Biochemistry", "Genetics", "nan" ]
23,579,022
https://en.wikipedia.org/wiki/Artur%20Avila
Artur Avila Cordeiro de Melo (born 29 June 1979) is a Brazilian mathematician working primarily in the fields of dynamical systems and spectral theory. He is one of the winners of the 2014 Fields Medal, and the first Latin American and the first lusophone to win the award. He has been a researcher at both IMPA and the CNRS (working half the year at each). He has been a professor at the University of Zurich since September 2018. Biography At the age of 16, Avila won a gold medal at the 1995 International Mathematical Olympiad and received a scholarship from the Instituto Nacional de Matemática Pura e Aplicada (IMPA) to start an M.S. degree while still attending high school at Colégio de São Bento and Colégio Santo Agostinho in Rio de Janeiro. He completed his M.S. degree in 1997. Later he enrolled in the Federal University of Rio de Janeiro (UFRJ), earning his B.S. in mathematics. At the age of 19, Avila began writing his doctoral thesis on the theory of dynamical systems. He finished it in 2001 and received his PhD from IMPA. That same year he moved to France to do postdoctoral research. He works on one-dimensional dynamics and holomorphic functions. Since 2003 he has worked as a researcher for the Centre National de la Recherche Scientifique (CNRS) in France, becoming a research director in 2008. His post-doctoral supervisor was Jean-Christophe Yoccoz. Mathematical work Much of Artur Avila's work has been in the field of dynamical systems. In March 2005, at age 26, Avila and Svetlana Jitomirskaya proved the "conjecture of the ten martinis," a problem proposed by the American mathematical physicist Barry Simon. Mark Kac promised a reward of ten martinis to whoever solved the problem: whether or not the spectrum of a particular type of operator is a Cantor set, given certain conditions on its parameters. The problem had been unsolved for 25 years when Avila and Jitomirskaya answered it affirmatively. Later that year, Avila and Marcelo Viana proved the Zorich–Kontsevich conjecture that the non-trivial Lyapunov exponents of the Teichmüller flow on the moduli space of Abelian differentials on compact Riemann surfaces are all distinct. Honours and recognition As a research mathematician, he received a CNRS Bronze Medal in 2006 as well as the Salem Prize, and was a Clay Research Fellow. He became the youngest professorial fellow (directeur de recherches) at the CNRS in 2008. The same year, he was awarded one of the ten prestigious European Mathematical Society prizes, and in 2009 he won the Grand Prix Jacques Herbrand from the French Academy of Sciences. In 2017 he gave the Łojasiewicz Lecture (on "One-frequency Schrödinger operators and the almost reducibility conjecture") at the Jagiellonian University in Kraków. He was a plenary speaker at the International Congress of Mathematicians in 2010. In 2011, he was awarded the Michael Brin Prize in Dynamical Systems. He received the Early Career Award from the International Association of Mathematical Physics in 2012, the TWAS Prize in 2013 and the Fields Medal in 2014. He was elected a foreign associate of the US National Academy of Sciences in April 2019. Avila is a member of World Minds.
Diplomas, titles and awards 1993: Gold medal at the Olimpíada Brasileira de Matemática, Brazil 1994: Gold medal at the Olimpíada Brasileira de Matemática, Brazil 1995: Gold medal at the Olimpíada Brasileira de Matemática, Brazil 1995: Gold medal at the International Mathematical Olympiad, Canada 2001: PhD Thesis (advisor Welington de Melo) 2005: Cours Peccot at the Collège de France 2006: Invited address at the ICMP 2006: Bronze medal of the CNRS 2006: Salem Prize 2008: Wolff Memorial Lectures, Caltech 2008: Invited address at the European Congress of Mathematics 2008: European Mathematical Society Prize 2009: Grand Prix Jacques Herbrand of the French Academy of Sciences 2010: Porter Lectures, Rice University 2010: Plenary address at the International Congress of Mathematicians 2011: Blyth Lecture Series at the University of Toronto 2011: Michael Brin Prize in Dynamical Systems 2012: International Association of Mathematical Physics Early Career Award 2013: Prize of the Brazilian Mathematical Society 2013: TWAS Prize 2014: Bellow Lectures at Northwestern University 2014: Fields Medal 2015: TWAS-Lenovo Science Prize 2017: Łojasiewicz Lecture at the Jagiellonian University: One-frequency Schrödinger operators and the almost reducibility conjecture Extra-academic distinctions 2013: Member of the Brazilian Academy of Sciences 2015: Knight of the Legion of Honor 2019: Foreign associate of the National Academy of Sciences References Further reading Moreira Salles, João. "Artur has a problem" (translated from the Portuguese by F. Thomson-Deveaux). Piauí Magazine. Interview with Artur Avila Chalkdust Magazine External links Artur Avila's page at University of Zurich Outdated links Artur Avila's Lattes Platform Claymath fellow page 1979 births Living people 21st-century French mathematicians Fields Medalists International Mathematical Olympiad participants Mathematical analysts People from Rio de Janeiro (city) Members of the Brazilian Academy of Sciences Recipients of the Legion of Honour Dynamical systems theorists French systems scientists Instituto Nacional de Matemática Pura e Aplicada alumni Instituto Nacional de Matemática Pura e Aplicada researchers Brazilian expatriate academics French people of Brazilian descent 21st-century Brazilian mathematicians TWAS laureates Foreign associates of the National Academy of Sciences Naturalized citizens of France Research directors of the French National Centre for Scientific Research Brazilian emigrants to France Academic staff of the University of Zurich Federal University of Rio de Janeiro alumni
Artur Avila
[ "Mathematics" ]
1,209
[ "Mathematical analysis", "Dynamical systems theorists", "Mathematical analysts", "Dynamical systems" ]
23,579,401
https://en.wikipedia.org/wiki/C16H10
The molecular formula C16H10 (molar mass: 202.25 g/mol, exact mass: 202.0783 u) may refer to: Dibenzopentalene Fluoranthene Pyrene Molecular formulas
C16H10
[ "Physics", "Chemistry" ]
63
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,580,875
https://en.wikipedia.org/wiki/C17H21NO3
The molecular formula C17H21NO3 (molar mass: 287.35 g/mol) may refer to: Dihydromorphine Etodolac Galantamine Mesembrenone Ritodrine Thesinine Molecular formulas
C17H21NO3
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,580,901
https://en.wikipedia.org/wiki/C4H9NO2
The molecular formula C4H9NO2 (molar mass: 103.12 g/mol) may refer to: α-Aminobutyric acid β-Aminobutyric acid γ-Aminobutyric acid (GABA) 2-Aminoisobutyric acid 3-Aminoisobutyric acid Nitroisobutane n-Nitrobutane Butyl nitrite Dimethylglycine Isobutyl nitrite Molecular formulas
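The molar masses quoted in these formula indexes are simple weighted sums of standard atomic weights; the short Python sketch below reproduces the 103.12 g/mol figure for C4H9NO2 (the atomic weights used are standard IUPAC values, rounded).

```python
# Minimal sketch: molar mass of C4H9NO2 from standard atomic weights.

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition: dict[str, int]) -> float:
    """Sum of atomic weight times atom count over the formula."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

print(round(molar_mass({"C": 4, "H": 9, "N": 1, "O": 2}), 2))  # -> 103.12
```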
C4H9NO2
[ "Physics", "Chemistry" ]
95
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,581,112
https://en.wikipedia.org/wiki/Selective%20receptor%20modulator
In the field of pharmacology, a selective receptor modulator or SRM is a type of drug that has different effects in different tissues. An SRM may behave as an agonist in some tissues while behaving as an antagonist in others. Hence selective receptor modulators are sometimes referred to as tissue-selective drugs or mixed agonists/antagonists. This tissue-selective behavior is in contrast to that of many other drugs, which behave either as agonists or as antagonists regardless of the tissue in question. Classes Classes of selective receptor modulators include: Selective androgen receptor modulator (SARM) Selective estrogen receptor modulator (SERM) Selective glucocorticoid receptor modulator (SEGRM) Selective progesterone receptor modulator (SPRM) Selective PPAR modulator (SPPARM), including SPPARMγ (affecting PPARγ) and SPPARMα (PPARα) See also Agonist–antagonist Selective glucocorticoid receptor agonist (SEGRA) References Pharmacodynamics
Selective receptor modulator
[ "Chemistry" ]
215
[ "Pharmacology", "Pharmacology stubs", "Pharmacodynamics", "Medicinal chemistry stubs" ]
1,836,606
https://en.wikipedia.org/wiki/Allylic%20rearrangement
An allylic rearrangement or allylic shift is an organic chemical reaction in which reaction at a center vicinal to a double bond causes the double bond to shift to an adjacent pair of atoms: It is encountered in both nucleophilic and electrophilic substitution, although it is usually suppressed relative to non-allylic substitution. For example, reaction of 1-chloro-2-butene with sodium hydroxide gives a mixture of 2-buten-1-ol and 3-buten-2-ol: In the similar substitution of 1-chloro-3-methyl-2-butene, the tertiary alcohol 2-methyl-3-buten-2-ol is produced in a yield of 85%, while the yield of the primary 3-methyl-2-buten-1-ol is 15%. Allylic shifts occur because the reaction passes through a delocalized allylic transition state or intermediate. In other respects they are similar to classical nucleophilic substitution, and admit both bimolecular and monomolecular mechanisms (respectively the SN2' and SN1'/SNi' substitutions). Scope Allylic shifts become the dominant reaction pathway when there is substantial resistance to normal (non-allylic) substitution. For nucleophilic substitution, such resistance arises when there is substantial steric hindrance at or around the leaving group, or when a geminal substituent destabilizes an accumulation of positive charge. The effects of substitution at the vinyl group are less clear. Although rarer still than SN', allylic shifts can occur vinylogously, as a "butadienylic shift": SN2' reduction In SN2' reduction, a hydride allylically displaces a good leaving group in a formal organic reduction, similar to the Whiting diene synthesis. One example occurred in a taxol total synthesis (ring C): The hydride is lithium aluminium hydride and the leaving group a phosphonium salt; the allylic shift produces the exocyclic double bond in the product. Only when the cyclohexane ring is properly substituted will the proton add trans to the adjacent methyl group. Electrophilic allyl shifts Allyl shifts can also take place with electrophiles. In the example below, the carbonyl group in benzaldehyde is activated by diboronic acid prior to reaction with the allyl alcohol (see: Prins reaction): The active catalyst system in this reaction is a combination of a palladium pincer compound and p-toluenesulfonic acid; the reaction product is obtained as a single regioisomer and stereoisomer. Examples Repeated allylic shifts can "flip-flop" a double bond between two possible locations: An SN2' reaction explains the outcome of the reaction of an aziridine carrying a methylene bromide group with methyllithium: In this reaction one equivalent of acetylene is lost. Named reactions Ferrier rearrangement Meyer–Schuster rearrangement References Rearrangement reactions Reaction mechanisms
Allylic rearrangement
[ "Chemistry" ]
648
[ "Reaction mechanisms", "Organic reactions", "Physical organic chemistry", "Chemical kinetics", "Rearrangement reactions" ]
1,837,480
https://en.wikipedia.org/wiki/Knudsen%20gas
A Knudsen gas is a gas in a state of such low density that the average distance travelled by the gas molecules between collisions (mean free path) is greater than the diameter of the receptacle that contains it. If the mean free path is much greater than the diameter, the flow regime is dominated by collisions between the gas molecules and the walls of the receptacle, rather than by intermolecular collisions. It is named after Martin Knudsen. Knudsen number For a Knudsen gas, the Knudsen number must be greater than 1. The Knudsen number can be defined as Kn = λ/d, where λ is the mean free path [m] and d is the diameter of the receptacle [m]. When 0.1 < Kn < 10, the flow regime of the gas is transitional flow; in this regime the intermolecular collisions between gas particles are not yet negligible compared to collisions with the wall. However, when Kn > 10, the flow regime is free molecular flow, and intermolecular collisions between the particles are negligible compared to collisions with the wall. Example For example, consider a receptacle of air at room temperature and pressure with a mean free path of 68 nm. If the diameter of the receptacle is less than 68 nm, the Knudsen number would be greater than 1, and this sample of air would be considered a Knudsen gas. It would not be a Knudsen gas if the diameter of the receptacle were greater than 68 nm. See also Free streaming Kinetic theory References Gases Phases of matter
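As a numerical restatement of the example above, the sketch below computes the Knudsen number Kn = λ/d for air at room conditions (mean free path λ ≈ 68 nm) and classifies the flow regime; the regime thresholds are the standard textbook values, an assumption of this sketch rather than a quotation from the text.

```python
# Minimal sketch: Knudsen number Kn = lambda / d for air at room
# temperature and pressure (mean free path ~ 68 nm), with standard
# textbook regime thresholds (an assumption of this sketch).

MEAN_FREE_PATH_M = 68e-9  # ~68 nm for air at room conditions

def knudsen_number(receptacle_diameter_m: float) -> float:
    return MEAN_FREE_PATH_M / receptacle_diameter_m

def regime(kn: float) -> str:
    if kn > 10:
        return "free molecular flow"
    if kn > 0.1:
        return "transitional flow"
    return "continuum flow"

for d in (1e-3, 1e-6, 50e-9):      # 1 mm, 1 um, 50 nm receptacles
    kn = knudsen_number(d)
    print(f"d = {d:.0e} m  ->  Kn = {kn:.3g}  ({regime(kn)})")
```

For the 50 nm receptacle, Kn ≈ 1.4 > 1, so that sample qualifies as a Knudsen gas.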
Knudsen gas
[ "Physics", "Chemistry" ]
322
[ "Statistical mechanics stubs", "Matter", "Phases of matter", "Statistical mechanics", "Physical chemistry stubs", "Gases" ]
1,837,735
https://en.wikipedia.org/wiki/Wittig%20reaction
The Wittig reaction or Wittig olefination is a chemical reaction of an aldehyde or ketone with a triphenyl phosphonium ylide called a Wittig reagent. Wittig reactions are most commonly used to convert aldehydes and ketones to alkenes. Most often, the Wittig reaction is used to introduce a methylene group using methylenetriphenylphosphorane (Ph3P=CH2). Using this reagent, even a sterically hindered ketone such as camphor can be converted to its methylene derivative. Reaction mechanism Mechanistic studies have focused on unstabilized ylides, because the intermediates can be followed by NMR spectroscopy. The existence and interconversion of the betaines (3a and 3b) is a subject of ongoing research. For lithium-free Wittig reactions, studies support a concerted formation of the oxaphosphetane without intervention of a betaine. In particular, phosphonium ylides 1 react with carbonyl compounds 2 via a [2+2] cycloaddition that is sometimes described as having [π2s+π2a] topology to directly form the oxaphosphetanes 4a and 4b. Under lithium-free conditions, the stereochemistry of the product 5 is due to the kinetically controlled addition of the ylide 1 to the carbonyl 2. When lithium is present, there may be equilibration of the intermediates, possibly via betaine species 3a and 3b. Bruce E. Maryanoff and A. B. Reitz identified the issue of equilibration of Wittig intermediates and termed the process "stereochemical drift". For many years, the stereochemistry of the Wittig reaction, in terms of carbon-carbon bond formation, had been assumed to correspond directly to the Z/E stereochemistry of the alkene products. However, certain reactants do not follow this simple pattern. Lithium salts can also exert a profound effect on the stereochemical outcome. Mechanisms differ for aliphatic and aromatic aldehydes and for aromatic and aliphatic phosphonium ylides. Evidence suggests that the Wittig reaction of unbranched aldehydes under lithium-salt-free conditions does not equilibrate and is therefore under kinetic reaction control. E. Vedejs has put forth a theory to explain the stereoselectivity of stabilized and unstabilized Wittig reactions. Strong evidence indicates that under Li-free conditions, Wittig reactions involving unstabilized (R1 = alkyl, H), semistabilized (R1 = aryl), and stabilized (R1 = EWG) Wittig reagents all proceed via a [2+2]/retro-[2+2] mechanism under kinetic control, with the oxaphosphetane as the one and only intermediate. Scope and limitations Functional group tolerance The Wittig reagents generally tolerate carbonyl compounds containing several kinds of functional groups such as OH, OR, nitroarenes, epoxides, and sometimes esters and amides. Even ketone, aldehyde, and nitrile groups can be present if conjugated with the ylide — these are the stabilized ylides mentioned above. Bis-ylides (containing two P=C bonds) have also been made and used successfully. There can be a problem with sterically hindered ketones, where the reaction may be slow and give poor yields, particularly with stabilized ylides, and in such cases the Horner–Wadsworth–Emmons (HWE) reaction (using phosphonate esters) is preferred. Another reported limitation is the often labile nature of aldehydes, which can oxidize, polymerize or decompose. In a so-called tandem oxidation–Wittig process the aldehyde is formed in situ by oxidation of the corresponding alcohol. Stereochemistry For the reaction with aldehydes, the double bond geometry is readily predicted based on the nature of the ylide.
With unstabilized ylides (R3 = alkyl) this results in the (Z)-alkene product with moderate to high selectivity. If the reaction is performed in dimethylformamide in the presence of lithium iodide or sodium iodide, the product is almost exclusively the Z-isomer. With stabilized ylides (R3 = ester or ketone), the (E)-alkene is formed with high selectivity. The (E)/(Z) selectivity is often poor with semistabilized ylides (R3 = aryl). To obtain the (E)-alkene from unstabilized ylides, the Schlosser modification of the Wittig reaction can be used. Alternatively, the Julia olefination and its variants also provide the (E)-alkene selectively. Ordinarily, the Horner–Wadsworth–Emmons reaction provides the (E)-enoate (α,β-unsaturated ester), just as the Wittig reaction does. To obtain the (Z)-enoate, the Still–Gennari modification of the Horner–Wadsworth–Emmons reaction can be used. Schlosser modification The main limitation of the traditional Wittig reaction is that the reaction proceeds mainly via the erythro betaine intermediate, which leads to the Z-alkene. The erythro betaine can be converted to the threo betaine using phenyllithium at low temperature. This modification affords the E-alkene. Allylic alcohols can be prepared by reaction of the betaine ylide with a second aldehyde. For example: Example An example of its use is in the synthesis of leukotriene A methyl ester. The first step uses a stabilized ylide, where the carbonyl group is conjugated with the ylide, preventing self-condensation, although unexpectedly this gives mainly the cis product. The second Wittig reaction uses a non-stabilized Wittig reagent, and as expected this gives mainly the cis product. History The Wittig reaction was reported in 1954 by Georg Wittig and his coworker Ulrich Schöllkopf. In part for this contribution, Wittig was awarded the Nobel Prize in Chemistry in 1979. See also Corey–Chaykovsky reagent Horner–Wadsworth–Emmons reaction Julia olefination Peterson olefination Tebbe's reagent Organophosphorus chemistry Homologation reaction Kauffmann olefination Titanium–zinc methylenation References External links Wittig reaction in Organic Syntheses, Coll. Vol. 10, p. 703 (2004); Vol. 75, p. 153 (1998). (Article) Wittig reaction in Organic Syntheses, Coll. Vol. 5, p. 361 (1973); Vol. 45, p. 33 (1965). (Article) Olefination reactions Carbon-carbon bond forming reactions Name reactions German inventions Homologation reactions 1954 in science 1954 in West Germany
Wittig reaction
[ "Chemistry" ]
1,547
[ "Carbon-carbon bond forming reactions", "Olefination reactions", "Coupling reactions", "Organic reactions", "Name reactions" ]
1,838,548
https://en.wikipedia.org/wiki/Active%20Body%20Control
Active Body Control, or ABC, is the Mercedes-Benz brand name used to describe electronically controlled hydropneumatic suspension. This suspension improves ride quality and allows for control of the vehicle body motions, reducing body roll in many driving situations including cornering, accelerating, and braking. Mercedes-Benz has been experimenting with these capabilities for automobile suspension since the air suspension of the 1963 600 and the hydropneumatic (fluid and air) suspension of the 1974 6.9. ABC was only offered on rear-wheel drive models, as all-wheel drive 4MATIC models were available only with Airmatic semi-active air suspension; the 2019 Mercedes-Benz GLE 450 4MATIC was the first AWD model to have ABC available. The production version was introduced at the 1999 Geneva Motor Show on the new Mercedes-Benz CL-Class C215. Description In the ABC system, a computer detects body movement from sensors located throughout the vehicle and controls the action of the active suspension with the use of hydraulic servomechanisms. The hydraulic pressure to the servos is supplied by a high-pressure radial piston hydraulic pump operating at 3,000 psi. Accumulators regulate the hydraulic pressure by means of an enclosed nitrogen bubble separated from the hydraulic fluid by a membrane. A total of 13 sensors continually monitor body movement and vehicle level and supply the ABC controller with new data every ten milliseconds. Four level sensors, one at each wheel, measure the ride level of the vehicle; three accelerometers measure the vertical body acceleration; one acceleration sensor measures the longitudinal and one the transverse body acceleration. As the ABC controller receives and processes data, it operates four hydraulic servos, each mounted on an air and pressurized hydraulic fluid strut beside each wheel. Almost instantaneously, the servo-regulated suspension generates counter-forces to body lean, dive and squat during various driving maneuvers. A suspension strut, consisting of a steel coil spring and a shock absorber connected in parallel, as well as a hydraulically controlled adjusting cylinder, is located between the vehicle body and each wheel. These components adjust the cylinder in the direction of the suspension strut and so change the suspension length. This creates a force which acts on the suspension and damping of the vehicle in the frequency range up to five hertz. The system also incorporates height-adjustable suspension, which in this case lowers the vehicle at higher speeds for better aerodynamics, fuel consumption, and handling. The ABC system also allows self-levelling suspension, which raises or lowers the vehicle in response to changing load (i.e. the loading or unloading of passengers or cargo). Each vehicle equipped with ABC has an "ABC Sport" button that allows the driver to adjust the suspension range for different driving style preferences. This feature allows the driver to adjust the suspension to maintain a more level ride in more demanding driving conditions. Reliable function of the ABC system requires regular hydraulic oil changes and filter replacement. The 1991 Mercedes-Benz C112, 1995 Mercedes-Benz Vario Research Car and the 1996 Mercedes-Benz F200 already featured prototype versions of ABC. The first complete and ready-for-production version of ABC was introduced in 1999 on the top-of-the-line Mercedes-Benz CL-Class (C215).
In 2006, the Mercedes-Benz CL-Class (C216) introduced the second-generation Active Body Control suspension, referred to as ABC Plus or ABC II in technical documentation. This updated suspension reduced body roll by 45% compared to the first-generation ABC suspension. ABC Plus had an updated hydraulic system design with shorter hydraulic lines, and the pulsation damper was relocated to be mounted directly on the tandem pump. In 2010 a crosswind stabilization function was introduced. In strong crosswind gusts, depending on the direction and intensity of the wind acting on the vehicle, this system varies the wheel load distribution in such a way that the effects of the wind are largely compensated or reduced to a minimum. For this purpose the ABC control unit uses the yaw rate, lateral acceleration, steering angle and road speed sensors of the Electronic Stability Program ESP®. Magic Body Control In 2007, the Mercedes-Benz F700 concept introduced the PRE-SCAN suspension, an early prototype road-scanning suspension using lidar sensors, based on Active Body Control. In 2013 the Mercedes-Benz S-Class (W222) introduced the series production version of PRE-SCAN, but with a stereo camera instead of laser projectors. The system, dubbed Magic Body Control, is fitted with a road-sensing system (Road Surface Scan) that pre-loads the shocks for the road surface detected. Using a stereo camera, the system scans the road surface up to 15 meters ahead of the moving vehicle and adjusts the shock damping at each wheel to account for imperfections in the road. Initially only available on 8-cylinder models and above, Magic Body Control attempts to isolate the car's body by predicting, rather than reacting to, broken pavement and speed humps. The ABC underwent major modifications for the new S-Class: the wheel damping is now continuously adjustable, the spring strut response has been improved and the pump efficiency has been further enhanced. A digital interface connects the control unit and the sensors, while the fast FlexRay bus connects the control unit and the vehicle electronics. Processing power is more than double that of the previous system. In 2014 the new C217 S-Class Coupe introduced an update to Magic Body Control, called Active Curve Tilting. This new system allows the vehicle to lean up to 2.5 degrees into a turn, similar to a tilting train.
The leaning is intended to counter the effect of centrifugal force on the occupants and is available only on rear-wheel drive models. Vehicles Vehicles, in chronological order: Mercedes-Benz C112 Mercedes-Benz Vario Research Car Mercedes-Benz F200 Mercedes-Benz CL-Class C215 Mercedes-Benz S-Class (W220), standard on S600 and S65 AMG, optional on other trims Mercedes-Benz SL-Class R230 Mercedes-Benz S-Class (W221) Mercedes-Benz CL-Class (C216) Mercedes-Benz SL-Class R231 Mercedes-Benz S-Class (W222): Magic Body Control Mercedes-Benz S-Class (C217): Magic Body Control Mercedes-Benz GLE (C167): E-Active Body Control Mercedes-Benz GLS (X167): E-Active Body Control Mercedes-Benz S-Class (W223): E-Active Body Control Timeline of active suspension development 1955 Citroën DS had hydropneumatic suspension designed by Paul Magès – the first car with height-adjustable suspension and self-levelling suspension, leveraging the fact that gas/air absorbs force while fluid transfers force smoothly. 1962 Mercedes-Benz W112 platform featured an air suspension on the 300SE model, as did the 1963 Mercedes-Benz 600. 1965 Rolls-Royce Silver Shadow licensed technology from the Citroën DS: hydropneumatic suspension offering self-levelling. 1974 Maserati Quattroporte II used the height-adjustable and self-levelling suspension from the Citroën SM. 1975 Mercedes-Benz 450SEL 6.9 had fully hydropneumatic suspension similar in technology, but not geometry, to the Citroën design. 1979 Mercedes-Benz W126, the then-new S-Class, had even more sophisticated height-adjustable and self-levelling suspension. 1984 Mercedes-Benz W124: selected models of the E-Class had this technology (rear-only hydraulic suspension) with height-adjustable and self-levelling suspension. 1985 Bose Corporation founder and CEO Dr. Amar Bose designed a suspension that mixed passenger comfort and vehicle control; this system used linear electromagnetic motors, power amplifiers, control algorithms and computation speed. Early 1980s through early 1990s: Lotus Engineering, the consultancy branch of Lotus Cars, experimented with active suspension layouts, combining electrohydraulic servo valve technology from aerospace, a variety of sensors and both analog and digital controllers. About 100 prototype cars and trucks (and several racing cars) were built for a wide variety of customers, with variants of the high-bandwidth Lotus Active system. 1986 Lotus Engineering and Moog Inc. formed the joint venture Moog-Lotus Systems Inc. to commercialize the Lotus technology with electro-hydraulic servo valves designed by Moog; the joint venture was later purchased by the TRW Steering and Suspension Division. 1989 Citroën XM had a similar electronic control of hydraulic suspension, branded Hydractive. 1989 Toyota Celica with Toyota Active Control Suspension. 1991 Infiniti Q45 was optionally equipped with "Full Active Suspension", a world first in production automobiles. 1991 Toyota Soarer had a fully active hydraulic suspension system on the UZZ32 model: Toyota Active Control Suspension. 1994 Citroën Xantia ACTIVA variant introduced active anti-roll bars as an extension of the Hydractive II suspension. 1999 Mercedes-Benz CL-Class C215 introduced Active Body Control. 2023 BYD Auto introduced "DiSus" hydropneumatic suspension on the Yangwang U8 SUV and U9 sports car; the suspension allows the car to drive with only three wheels fitted and to jump in the air while parked, remaining level.
References External links Mercedes-Benz USA Mercedes-Benz International Mercedes-Benz Automotive suspension technologies Automotive technology tradenames Automotive safety technologies Auto parts Mechanical power control
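Purely as an illustration of the closed-loop architecture described above — sensors sampled every ten milliseconds feeding a controller that commands four hydraulic servos — the following Python sketch shows one simplistic proportional control step per wheel. It is not Mercedes-Benz's algorithm; every gain, name, and signal here is a made-up placeholder.

```python
# Illustrative only: a toy 10 ms control step in the spirit of the
# sensor -> controller -> servo loop described above. All gains and
# signal names are invented placeholders, not the real ABC algorithm.

TARGET_RIDE_LEVEL = 0.0   # desired body height offset per wheel (m)
LEVEL_GAIN = 400.0        # made-up proportional gain on ride level
ROLL_GAIN = 250.0         # made-up gain on lateral acceleration

def control_step(ride_levels, lateral_accel):
    """Return one hydraulic servo command per wheel (arbitrary units)."""
    commands = []
    for i, level in enumerate(ride_levels):          # FL, FR, RL, RR
        cmd = LEVEL_GAIN * (TARGET_RIDE_LEVEL - level)
        side = -1.0 if i % 2 == 0 else 1.0           # left vs right wheel
        cmd += side * ROLL_GAIN * lateral_accel      # counteract body roll
        commands.append(cmd)
    return commands

# One 10 ms tick: slight body lean to the left in a right-hand turn.
print(control_step([0.004, -0.004, 0.004, -0.004], lateral_accel=3.0))
```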
Active Body Control
[ "Physics" ]
1,943
[ "Mechanics", "Mechanical power control" ]
1,840,129
https://en.wikipedia.org/wiki/Solomon%20Marcus
Solomon Marcus (1 March 1925 – 17 March 2016) was a Romanian mathematician, a member of the Mathematical Section of the Romanian Academy (full member from 2001) and emeritus professor at the University of Bucharest's Faculty of Mathematics. His main research was in the fields of mathematical analysis, mathematical and computational linguistics, and computer science. He also published numerous papers on various cultural topics: poetics, linguistics, semiotics, philosophy, and the history of science and education. Early life and education He was born in Bacău, Romania, to Sima and Alter Marcus, a Jewish family of tailors. From an early age he lived through dictatorships, war, infringements on free speech and free thinking, as well as anti-Semitism. At the age of 16 or 17 he started tutoring younger pupils in order to help his family financially. He graduated from Ferdinand I High School in 1944 and completed his studies at the University of Bucharest's Faculty of Science, Department of Mathematics, in 1949. He continued tutoring throughout college and later recounted in an interview that he endured hunger during those years and that until the age of 20 he wore only hand-me-downs from his older brothers. Academic career Marcus obtained his PhD in mathematics in 1956, with a thesis on monotonic functions of two variables, written under the direction of Miron Nicolescu. He was appointed Lecturer in 1955, Associate Professor in 1964, and became a Professor in 1966 (Emeritus in 1991). Marcus contributed to the following areas: Mathematical Analysis, Set Theory, Measure and Integration Theory, and Topology; Theoretical Computer Science; Linguistics; Poetics and Theory of Literature; Semiotics; Cultural Anthropology; History and Philosophy of Science; Education. Publications by and on Marcus Marcus published about 50 books, which have been translated into English, French, German, Italian, Spanish, Russian, Greek, Hungarian, Czech, and Serbo-Croatian, and about 400 research articles in specialized journals in almost all European countries, in the United States, Canada, South America, Japan, India, and New Zealand, among others; he is cited by more than a thousand authors, including mathematicians, computer scientists, linguists, literary researchers, semioticians, anthropologists and philosophers. He is recognised as one of the initiators of mathematical linguistics and of mathematical poetics, and was a member of the editorial boards of tens of international scientific journals covering all his domains of interest. Marcus is featured in the 1999 book People and Ideas in Theoretical Computer Science and in the 2015 book The Human Face of Computing. A collection of his papers in English, followed by some interviews and a brief autobiography, was published in 2007 as Words and Languages Everywhere. The book Meetings with Solomon Marcus (Spandugino Publishing House, Bucharest, Romania, 2010, 1500 pages), edited by Lavinia Spandonide and Gheorghe Păun for Marcus' 85th birthday, includes recollections by several hundred people from a large variety of scientific and cultural fields and from 25 countries. It also contains a longer autobiography. Death Marcus died of cardiac infections at a hospital in Bucharest after a short stay. Honours National Order of Faithful Service in the rank of Grand Officer, 2011. Order of the Star of Romania (Romania's highest civil order) in the rank of Commander, 2015. Romanian Royal Family: Knight of the Royal Decoration of Nihil Sine Deo.
Notes References Alexandra Bellow, Cristian S. Calude. Solomon Marcus (1925–2016), Notices of the American Mathematical Society 64, 10 (2017), 1216. G. Păun, I. Petre, G. Rozenberg and A. Salomaa (eds.). At the Intersection of Computer Science with Biology, Chemistry and Physics – In Memory of Solomon Marcus, Theoretical Computer Science 701 (2017), 1–234. Global Perspectives on Science and Spirituality (GPSS). Publication list on his web page, at the "Simion Stoilow" Institute of Mathematics of the Romanian Academy. International Journal of Computers, Communications & Control, Vol. I (2006), No. 1, pp. 73–79, "Grigore C. Moisil: A Life Becoming a Myth", by Solomon Marcus; editor's note about the author (p. 79). Marcus' articles on semiotics at Potlatch External links Solomon Marcus at the University of Bucharest 1925 births 2016 deaths 20th-century Romanian mathematicians Romanian semioticians Titular members of the Romanian Academy Academic staff of the University of Bucharest University of Bucharest alumni Romanian Jews People from Bacău Mathematical analysts Theoretical computer scientists Computational linguistics researchers Commanders of the Order of the Star of Romania Recipients of the National Order of Faithful Service
Solomon Marcus
[ "Mathematics" ]
957
[ "Mathematical analysis", "Mathematical analysts" ]
1,840,608
https://en.wikipedia.org/wiki/S/2004%20S%2012
S/2004 S 12 is a natural satellite of Saturn. Its discovery was announced by Scott S. Sheppard, David C. Jewitt, Jan Kleyna, and Brian G. Marsden on 4 May 2005 from observations taken between 12 December 2004 and 9 March 2005. S/2004 S 12 is about 5 kilometres in diameter, and orbits Saturn at an average distance of 19,855,000 kilometres in about 1,044 days, at an inclination of 163.9° to the ecliptic, in a retrograde direction and with an eccentricity of 0.371. This moon was considered lost until its recovery was announced on 12 October 2022. (In 2021, it had also been found in Canada-France-Hawaii Telescope observations from 2019.) References Institute for Astronomy Saturn Satellite Data Jewitt's New Satellites of Saturn page MPEC 2005-J13: Twelve New Satellites of Saturn, 3 May 2005 (discovery and ephemeris) Norse group Moons of Saturn Irregular satellites Discoveries by Scott S. Sheppard Astronomical objects discovered in 2005 Moons with a retrograde orbit Recovered astronomical objects
S/2004 S 12
[ "Astronomy" ]
225
[ "Recovered astronomical objects", "Astronomical objects" ]
1,840,863
https://en.wikipedia.org/wiki/Right%20circular%20cylinder
A right circular cylinder is a cylinder whose generatrices are perpendicular to the bases. Thus, in a right circular cylinder, the generatrix and the height have the same measure. It is also, less often, called a cylinder of revolution, because it can be obtained by rotating a rectangle of sides g and r around one of its sides. Fixing g as the side on which the revolution takes place, we obtain that the side r, perpendicular to g, will be the measure of the radius of the cylinder. In addition to the right circular cylinder, within the study of spatial geometry there is also the oblique circular cylinder, characterized by not having its generatrices perpendicular to the bases. Elements of the right circular cylinder Bases: the two parallel and congruent circles of the bases; Axis: the line determined by the two center points of the cylinder's bases; Height: the distance between the two planes of the cylinder's bases; Generatrices: the line segments parallel to the axis that have their ends at the points of the base circles. Lateral and total areas The lateral surface of a right cylinder is the union of the generatrices. Its area can be obtained as the product of the length of the circumference of the base and the height of the cylinder. Therefore, the lateral surface area is given by A_L = 2πrh, where: A_L represents the lateral surface area of the cylinder; π is approximately 3.14; r is the distance between the lateral surface of the cylinder and the axis, i.e. the value of the radius of the base; h is the height of the cylinder; and 2πr is the length of the circumference of the base, since the circumference of a circle of radius r is C = 2πr. Note that in the case of the right circular cylinder, the height and the generatrix have the same measure, so the lateral area can also be given by A_L = 2πrg. The area of the base of a cylinder is the area of a circle (in this case a circle of radius r): A_B = πr². To calculate the total area of a right circular cylinder, you simply add the lateral area to the areas of the two bases: A_T = A_L + 2A_B. Replacing A_L = 2πrh and A_B = πr², we have A_T = 2πrh + 2πr², or even A_T = 2πr(h + r). Volume Through Cavalieri's principle — which states that if two solids of the same height, with congruent base areas, are positioned on the same plane, such that any plane parallel to this plane sections both solids, determining from this section two cross-sections with the same area, then the volumes of the two solids are the same — we can determine the volume of the cylinder. This is because the volume of a cylinder can be obtained in the same way as the volume of a prism with the same height and the same base area: simply multiply the area of the base by the height, V = A_B·h. Since the area of the circle of radius r at the base is A_B = πr², it follows that V = πr²h. Equilateral cylinder The equilateral cylinder is characterized by being a right circular cylinder in which the diameter of the base is equal to the value of the height (generatrix). Then, assuming that the radius of the base of an equilateral cylinder is r, the diameter of the base of this cylinder is 2r and its height is h = 2r. Its lateral area can be obtained by replacing the height value by 2r: A_L = 2πr·2r = 4πr². The result for the total area can be obtained in a similar way: A_T = 4πr² + 2πr² = 6πr². For the equilateral cylinder it is possible to obtain a simpler formula for the volume: simply substitute the radius and height measurements defined earlier into the volume formula for a right circular cylinder, giving V = πr²·2r = 2πr³. Meridian section It is the intersection between a plane containing the axis of the cylinder and the cylinder.
In the case of the right circular cylinder, the meridian section is a rectangle, because the generatrix is perpendicular to the base. The equilateral cylinder, on the other hand, has a square meridian section because its height is congruent to the diameter of the base. Examples of objects with a right circular cylinder shape See also Cylinder Geometry Solid geometry References Bibliography Balestri, Rodrigo (2016). Matemática: interação e tecnologia (in Portuguese) (2 ed.). São Paulo: Leya. Conexões com a matemática (in Portuguese) (1 ed.). São Paulo: Moderna. 2010. Dolce, Osvaldo; Pompeo, José Nicolau (2013). Fundamentos da matemática elementar 9: geometria plana (in Portuguese) (9 ed.). São Paulo: Atual. Dolce, Osvaldo; Pompeo, José Nicolau (2005). Fundamentos da matemática elementar, 10: geometria espacial, posição e métrica (in Portuguese). São Paulo: Atual. Giovanni, José Ruy; Giovanni Jr., José Ruy; Bonjorno, José Roberto (2011). Matemática fundamental: uma nova abordagem (in Portuguese). São Paulo: FTD. Paiva, Manoel (2004). Matemática (in Portuguese) (1 ed.). São Paulo: Moderna. Euclidean solid geometry Geometry Solids Multi-dimensional geometry
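The formulas above translate directly into code; the following short Python sketch computes the lateral area, total area, and volume of a right circular cylinder, and checks the equilateral special case (h = 2r, giving A_T = 6πr² and V = 2πr³):

```python
# Lateral area, total area and volume of a right circular cylinder,
# following the formulas derived above.
import math

def lateral_area(r: float, h: float) -> float:
    return 2 * math.pi * r * h            # A_L = 2*pi*r*h

def total_area(r: float, h: float) -> float:
    return 2 * math.pi * r * (h + r)      # A_T = 2*pi*r*(h + r)

def volume(r: float, h: float) -> float:
    return math.pi * r**2 * h             # V = pi*r^2*h

# Equilateral cylinder: the height equals the diameter (h = 2r).
r = 3.0
h = 2 * r
assert math.isclose(total_area(r, h), 6 * math.pi * r**2)
assert math.isclose(volume(r, h), 2 * math.pi * r**3)
print(lateral_area(r, h), total_area(r, h), volume(r, h))
```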
Right circular cylinder
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,083
[ "Euclidean solid geometry", "Phases of matter", "Space", "Condensed matter physics", "Geometry", "Solids", "Spacetime", "Matter" ]
1,841,012
https://en.wikipedia.org/wiki/Gamma%20counter
A gamma counter is an instrument to measure gamma radiation emitted by a radionuclide. Unlike survey meters, gamma counters are designed to measure small samples of radioactive material, typically with automated measurement and movement of multiple samples. Operation Gamma counters are usually scintillation counters. In a typical system, a number of samples are placed in sealed vials or test tubes and moved along a track. One at a time, they move down inside a shielded detector, set to measure specific energy windows characteristic of the particular isotope. Within this shielded detector there is a scintillation crystal that surrounds the radioactive sample. Gamma rays emitted from the radioactive sample interact with the crystal, are absorbed, and light is emitted. A detector, such as a photomultiplier tube, converts the visible light to an electrical signal. Depending on the half-life and concentration of the sample, measurement times may vary from 0.02 minutes to several hours. If the photon's energy is too low, it may be absorbed before it ever reaches the scintillation crystal and so go undetected. If the photon's energy is too high, it may pass right through the crystal without any interaction. Thus the thickness of the crystal is very important when measuring radioactive materials with a gamma counter. Applications Gamma counters are standard tools used in the research and development of new radioactive compounds used for diagnosing and treating disease, as in PET scanning. Gamma counters are used in radiobinding assays, radioimmunoassays (RIA) and nuclear medicine measurements such as GFR and hematocrit. Some gamma counters can be used for gamma spectroscopy to identify radioactive materials based on their output energy spectrum, e.g. as a wipe test counter. References Particle detectors
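One practical consequence of count-based measurement, standard in nuclear counting though not spelled out in the text above, is that the statistical uncertainty of N recorded counts is about √N, so the longer counting times mentioned above give proportionally smaller relative error. A minimal sketch, with a hypothetical 50 counts-per-second sample:

```python
# Minimal sketch: Poisson counting statistics for a gamma counter.
# For N recorded counts, the standard uncertainty is ~sqrt(N), so the
# relative error falls as the counting time grows.
import math

def count_with_uncertainty(rate_cps: float, time_s: float):
    """Expected counts and relative statistical error for a given
    count rate (counts per second) and counting time."""
    n = rate_cps * time_s
    return n, math.sqrt(n) / n            # relative error = 1/sqrt(N)

for t in (1.2, 60.0, 3600.0):             # 0.02 min, 1 min, 1 h
    n, rel = count_with_uncertainty(50.0, t)   # hypothetical 50 cps sample
    print(f"{t:7.1f} s: {n:9.0f} counts, +/- {100 * rel:.2f}%")
```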
Gamma counter
[ "Physics", "Technology", "Engineering" ]
355
[ "Nuclear and atomic physics stubs", "Particle detectors", "Measuring instruments", "Nuclear physics" ]
1,841,740
https://en.wikipedia.org/wiki/Curare
Curare is a common name for various alkaloid arrow poisons originating from plant extracts. Used as a paralyzing agent by indigenous peoples in Central and South America for hunting and for therapeutic purposes, curare only becomes active when it contaminates a wound or is introduced directly into the bloodstream; it is not active when ingested orally. Curare is prepared by boiling the bark of one of the dozens of plant sources, leaving a dark, heavy paste that can be applied to arrow or dart heads. These poisons cause weakness of the skeletal muscles and, when administered in a sufficient dose, eventual death by asphyxiation due to paralysis of the diaphragm. In medicine, curare has been used as a treatment for tetanus and strychnine poisoning and as a paralyzing agent for surgical procedures. History The word 'curare' is derived from wurari, from the Carib language of the Macusi of Guyana. It has its origins in the Carib phrase "mawa cure", meaning "of the Mawa vine", scientifically known as Strychnos toxifera. Curare is also known among indigenous peoples as Ampi, Woorari, Woorara, Woorali, Wourali, Wouralia, Ourare, Ourari, Urare, Urari, and Uirary. The noun 'curare' is not to be confused with the Latin verb 'curare' ('to heal, cure, take care of'). Classification In 1895, pharmacologist Rudolf Boehm sought to classify the various alkaloid poisons based on the containers used for their preparation. He believed curare could be categorized into the three main types seen below. However useful it appeared, this classification rapidly became outmoded. Richard Gill, a plant collector, found that the indigenous peoples had begun to use a variety of containers for their curare preparations, thereby invalidating Boehm's basis of classification. Tube or bamboo curare: Mainly composed of the toxin D-tubocurarine, this poison is found packed into hollow bamboo tubes and derives from Chondrodendron and other genera in the Menispermaceae. According to its LD50 values, tube curare is thought to be the most toxic. Pot curare: Mainly composed of the alkaloid components protocurarine (the active ingredient), protocurine (weakly toxic), and protocuridine (non-toxic), from both the Menispermaceae and the Loganiaceae/Strychnaceae. This subtype was originally found packed in terra cotta pots. Calabash or gourd curare: Mainly composed of C-toxiferine I, this poison was originally packed into hollow gourds and derives from the Loganiaceae/Strychnaceae alone. Manske also observed in his 1955 The Alkaloids: The results of the early [pre-1900] work were very inaccurate because of the complexity and variation of the composition of the mixtures of alkaloids involved [...] these were impure, non-crystalline alkaloids [...] Almost all curare preparations were and are complex mixtures, and many of the physiological actions attributed to the early curarizing preparations were undoubtedly due to impurities, particularly to other alkaloids present. The curare preparations are now considered to be of two main types, those from Chondrodendron or other members of the Menispermaceae family and those from Strychnos, a genus of the Loganiaceae [now Strychnaceae] family. Some preparations may contain alkaloids from both [...] and the majority have other secondary ingredients. Hunting uses Curare was used as a paralyzing poison by many South American indigenous peoples. Since it was too expensive to be used in warfare, curare was mainly used for hunting.
The prey was shot with arrows or blowgun darts dipped in curare, leading to asphyxiation owing to the inability of the victim's respiratory muscles to contract. In particular, the poison was used by the Kalinago, indigenous people of the Lesser Antilles in the Caribbean, on the tips of their arrows. In addition, the Yagua people, indigenous to Colombia and northeastern Peru, commonly delivered these toxins via blowpipes to target prey 30 to 40 paces distant. Owing to its popularity among the indigenous people as a means of paralyzing prey, certain tribes created monopolies on curare production, and curare became a symbol of wealth among the indigenous populations. In 1596, Sir Walter Raleigh mentioned the arrow poison in his book Discovery of the Large, Rich, and Beautiful Empire of Guiana (which relates to his travels in Trinidad and Guayana), though the poison he described was possibly not curare. In 1780, Abbe Felix Fontana discovered that it acted on the voluntary muscles rather than on the nerves and the heart. In 1832, Alexander von Humboldt gave the first western account of how the toxin was prepared from plants by Orinoco River natives. During 1811–1812, Sir Benjamin Collins Brodie experimented with curare (woorara). He was the first to show that curare does not kill the animal and that recovery is complete if the animal's respiration is maintained artificially. In 1825, Charles Waterton described a classical experiment in which he kept a curarized female donkey alive by artificial respiration with a bellows through a tracheostomy. Waterton is also credited with bringing curare to Europe. Robert Hermann Schomburgk, who was a trained botanist, identified the vine as one of the genus Strychnos and gave it the now accepted name Strychnos toxifera. Medical use George Harley (1829–1896) showed in 1850 that curare (wourali) was effective for the treatment of tetanus and strychnine poisoning. In 1857, Claude Bernard (1813–1878) published the results of his experiments in which he demonstrated that the mechanism of action of curare was a result of interference in the conduction of nerve impulses from the motor nerve to the skeletal muscle, and that this interference occurred at the neuromuscular junction. From 1887, the Burroughs Wellcome catalogue listed, under its 'Tabloids' brand name, 5.4 mg tablets of curare (price: 8 shillings) for use in preparing a solution for hypodermic injection. In 1914, Henry Hallett Dale (1875–1968) described the physiological actions of acetylcholine. After 25 years, he showed that acetylcholine is responsible for neuromuscular transmission, which can be blocked by curare. The best known and historically most important toxin (because of its medical applications) is d-tubocurarine. It was isolated from the crude drug – from a museum sample of curare – in 1935 by Harold King of London, working in Sir Henry Dale's laboratory. King also established its chemical structure. Pascual Scannone, a Venezuelan anesthesiologist who trained and specialized in New York City, did extensive research on curare as a possible paralyzing agent for patients during surgical procedures. In 1942, he became the first person in Latin America to use curare during a medical procedure when he successfully performed a tracheal intubation in a patient to whom he administered curare for muscle paralysis at the El Algodonal Hospital in Caracas, Venezuela. After its introduction in 1942, curare and curare derivatives became widely used paralyzing agents during medical and surgical procedures.
In medicine, curare has been superseded by a number of curare-like agents, such as pancuronium, which have a similar pharmacodynamic profile but fewer side effects.
Chemical structure
The various components of curare are organic compounds classified as either isoquinoline or indole alkaloids. Tubocurarine is one of the major active components in the South American dart poison. As an alkaloid, tubocurarine is a naturally occurring compound that consists of nitrogenous bases, although the chemical structure of alkaloids is highly variable. Tubocurarine and C-toxiferine consist of a cyclic system with quaternary ammonium ions. On the other hand, while acetylcholine does not contain a cyclic system, it does contain a quaternary ammonium ion. Because of this shared moiety, curare alkaloids can bind readily to the active site of receptors for acetylcholine (ACh) at the neuromuscular junction, blocking nerve impulses from being sent to the skeletal muscles, effectively paralyzing the muscles of the body.
Pharmacological properties
Curare is an example of a non-depolarizing muscle relaxant that blocks the nicotinic acetylcholine receptor (nAChR), one of the two types of acetylcholine (ACh) receptors, at the neuromuscular junction. The main toxin of curare, d-tubocurarine, occupies the same position on the receptor as ACh with an equal or greater affinity, yet elicits no response, making it a competitive antagonist. The antidote for curare poisoning is an acetylcholinesterase (AChE) inhibitor (anti-cholinesterase), such as physostigmine or neostigmine. By blocking ACh degradation, AChE inhibitors raise the amount of ACh in the neuromuscular junction; the accumulated ACh then corrects for the effect of the curare by activating the receptors not blocked by the toxin at a higher rate.
The time of onset varies from within one minute (for tubocurarine in intravenous administration, penetrating a larger vein) to between 15 and 25 minutes (for intramuscular administration, where the substance is applied in muscle tissue). Curare is harmless if taken orally because its compounds are too large and highly charged to pass through the lining of the digestive tract to be absorbed into the blood. For this reason, people can safely eat curare-poisoned prey, and it has no effect on its flavor.
Anesthesia
Isolated attempts to use curare during anesthesia date back to 1912, by Arthur Läwen of Leipzig, but curare came to anesthesia via psychiatry (electroplexy). In 1939, Abram Elting Bennett used it to modify metrazol-induced convulsive therapy. Muscle relaxants are used in modern anesthesia for many reasons, such as providing optimal operating conditions and facilitating intubation of the trachea. Before muscle relaxants, anesthesiologists needed to use larger doses of the anesthetic agent, such as ether, chloroform or cyclopropane, to achieve these aims. Such deep anesthesia risked killing patients who were elderly or had heart conditions.
The source of curare in the Amazon was first researched by Richard Evans Schultes in 1941. Since the 1930s, it had been used in hospitals as a muscle relaxant. He discovered that different types of curare called for as many as 15 ingredients, and in time helped to identify more than 70 species that produced the drug.
In the 1940s, it was used on a few occasions during surgery, as it was mistakenly thought to be an analgesic or anesthetic.
The patients reported feeling the full intensity of the pain, though they were not able to do anything about it since they were essentially paralyzed. On January 23, 1942, Harold Griffith and Enid Johnson gave a synthetic preparation of curare (Intercostrin/Intocostrin) to a patient undergoing an appendectomy (to supplement conventional anesthesia). Safer curare derivatives, such as rocuronium and pancuronium, have superseded d-tubocurarine for anesthesia during surgery. When used with halothane, d-tubocurarine can cause a profound fall in blood pressure in some patients, as both drugs are ganglion blockers. However, it is safer to use d-tubocurarine with ether.
In 1954, an article was published by Beecher and Todd suggesting that the use of muscle relaxants (drugs similar to curare) increased death due to anesthesia nearly sixfold. This was refuted in 1956.
Modern anesthetists have at their disposal a variety of muscle relaxants for use in anesthesia. The ability to produce muscle relaxation irrespective of sedation has permitted anesthetists to adjust the two effects independently and on the fly, to ensure that their patients are safely unconscious and sufficiently relaxed to permit surgery. The use of neuromuscular blocking drugs carries with it the risk of anesthesia awareness.
Plant sources
There are dozens of plants from which isoquinoline and indole alkaloids with curarizing effects can be isolated, and which were utilized by indigenous tribes of Central and South America for the production of arrow poisons. Among them are:
In family Menispermaceae:
Genus Chondrodendron, notably C. tomentosum
Genus Curarea, species C. toxicofera and C. tecunarum
Genus Sciadotenia, species S. toxifera
Genus Telitoxicum
Genus Abuta
Genus Caryomene
Genus Anomospermum
Genus Orthomene
Genus Cissampelos, section L. (Cocculeae) of the genus
Other families:
several species of the genus Strychnos of family Loganiaceae, including S. toxifera, S. guianensis, S. castelnaei, and S. usambarensis
a plant in the subfamily Aroideae of family Araceae called taja
at least three members of the genus Artanthe of family Piperaceae
Paullinia cururu in the family Sapindaceae
Some plants in the family Aristolochiaceae have also been reported as sources. Alkaloids with curare-like activity are present in plants of the fabaceous genus Erythrina.
Toxicity
Administration must be parenteral, as gastro-intestinal absorption is ineffective. The toxicity of curare alkaloids in humans has not been systematically established, but curare is considered highly toxic and slow-acting, with a lowest reported lethal dose of 375 μg/kg (unknown route of administration). For animals, the median lethal dose of tubocurarine is:
1200 μg/kg (dog, intravenous)
140 μg/kg (mouse, intravenous)
1300 μg/kg (rabbit, intravenous)
3200 μg/kg (mouse, intraperitoneal)
500 μg/kg (mouse, subcutaneous)
2700 μg/kg (rabbit, subcutaneous)
270 mg/kg (rabbit, oral)
Death can be prevented by artificial ventilation until the curare subsides and muscle function is regained, in which case no permanent effects of poisoning occur.
Preparation
In 1807, Alexander von Humboldt provided the first eye-witness account of curare preparation. A mixture of young bark scrapings of the Strychnos plant, other cleaned plant parts, and occasionally snake venom is boiled in water for two days. This liquid is then strained and evaporated to create a dark, heavy, viscid paste that would later be tested for its potency. The curare paste was described as very bitter in taste.
In 1938, Richard Gill and his expedition collected samples of processed curare and described its method of traditional preparation; one of the plant species used at that time was Chondrodendron tomentosum.
Adjuvants
Various irritating herbs, stinging insects, poisonous worms, and various parts of amphibians and reptiles are added to the preparation. Some of these accelerate the onset of action or increase the toxicity; others prevent the wound from healing or the blood from coagulating.
Diagnosis and management of curare poisoning
Curare poisoning can be indicated by the typical signs of neuromuscular-blocking drugs, such as paralysis (including of respiration) with no direct effect on the heart. Curare poisoning can be managed by artificial respiration, such as mouth-to-mouth resuscitation. In a study of 29 army volunteers who were paralyzed with curare, artificial respiration managed to keep oxygen saturation always above 85%, a level at which there is no evidence of an altered state of consciousness. Yet curare poisoning mimics total locked-in syndrome in that there is paralysis of every voluntarily controlled muscle in the body (including the eyes), making it practically impossible for the victim to confirm consciousness while paralyzed.
Spontaneous breathing resumes after the end of the duration of action of curare, which is generally between 30 minutes and 8 hours, depending on the variant of the toxin and the dosage. Cardiac muscle is not directly affected by curare, but if more than four to six minutes have passed since respiratory cessation, the cardiac muscle may stop functioning due to oxygen deprivation, making cardiopulmonary resuscitation including chest compressions necessary.
Chemical antidote
Since tubocurarine and the other components of curare bind reversibly to the ACh receptors, treatment for curare poisoning involves adding an acetylcholinesterase (AChE) inhibitor, which will stop the destruction of acetylcholine so that it can compete with curare. This can be done by administration of AChE inhibitors such as pyridostigmine, neostigmine, physostigmine, and edrophonium. Acetylcholinesterase is an enzyme that breaks down the acetylcholine (ACh) neurotransmitter left over in motor neuron synapses. The aforementioned inhibitors, termed "anticurare" drugs, reversibly bind to the enzyme's active site, prohibiting its ability to bind to its original target, ACh. By blocking ACh degradation, AChE inhibitors can effectively raise the amount of ACh present in the neuromuscular junction. The accumulated ACh will then correct for the effect of the curare by activating the receptors not blocked by the toxin at a higher rate, restoring activity to the motor neurons and bodily movement.
Gallery
See also
Arrow poison, what curare was originally used for
Poison dart frog, another source of arrow poison
Strychnine, a related alkaloid poison that occurs in some of the same plants as curare
References
Further reading
– contains papers and records pertaining to Griffith's introduction of curare into anesthesiology
Muscle relaxants Neuromuscular blockers Neurotoxins Nicotinic antagonists Plant toxins
Curare
[ "Chemistry" ]
3,933
[ "Neurochemistry", "Neurotoxins", "Chemical ecology", "Plant toxins" ]
1,842,075
https://en.wikipedia.org/wiki/Pro-p%20group
In mathematics, a pro-p group (for some prime number p) is a profinite group such that for any open normal subgroup the quotient group is a p-group. Note that, as profinite groups are compact, the open subgroups are exactly the closed subgroups of finite index, so that the discrete quotient group is always finite.
Alternatively, one can define a pro-p group to be the inverse limit of an inverse system of discrete finite p-groups.
The best-understood (and historically most important) class of pro-p groups is the p-adic analytic groups: groups with the structure of an analytic manifold over the field Q_p of p-adic numbers such that group multiplication and inversion are both analytic functions. The work of Lubotzky and Mann, combined with Michel Lazard's solution to Hilbert's fifth problem over the p-adic numbers, shows that a pro-p group is p-adic analytic if and only if it has finite rank, i.e. there exists a positive integer r such that any closed subgroup has a topological generating set with no more than r elements. More generally it was shown that a finitely generated profinite group is a compact p-adic Lie group if and only if it has an open subgroup that is a uniformly powerful pro-p group.
The Coclass Theorems were proved in 1994 by A. Shalev and independently by C. R. Leedham-Green. Theorem D is one of these theorems and asserts that, for any prime number p and any positive integer r, there exist only finitely many pro-p groups of coclass r. This finiteness result is fundamental for the classification of finite p-groups by means of directed coclass graphs.
Examples
The canonical example is the group Z_p of p-adic integers, the inverse limit of the finite cyclic groups Z/p^n Z.
The group of invertible n by n matrices over Z_p has an open subgroup U consisting of all matrices congruent to the identity matrix modulo p. This U is a pro-p group. In fact the p-adic analytic groups mentioned above can all be found as closed subgroups of GL_n(Z_p) for some integer n.
Any finite p-group is also a pro-p group (with respect to the constant inverse system).
Fact: A finite homomorphic image of a pro-p group is a p-group. (due to J.P. Serre)
See also
Residual property (mathematics)
Profinite group (See Property or Fact 5)
References
Infinite group theory Topological groups P-groups Properties of groups
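To make the inverse-limit description above concrete, here is a minimal Python sketch (the helper names and the truncation depth N are invented for illustration, not part of the article): an element of Z_p is a sequence of residues modulo p, p^2, ..., each reducing to the previous one.
    def projections(x, p, N):
        # residues of an ordinary integer x in Z/p, Z/p^2, ..., Z/p^N
        return [x % p**n for n in range(1, N + 1)]

    def is_compatible(seq, p):
        # inverse-limit condition: each residue reduces to the previous one
        return all(seq[n + 1] % p**(n + 1) == seq[n] for n in range(len(seq) - 1))

    p, N = 3, 5
    minus_one = projections(-1, p, N)    # [2, 8, 26, 80, 242]
    print(is_compatible(minus_one, p))   # True: -1 is an element of Z_3
Any compatible sequence, not just one coming from an ordinary integer, names an element of Z_p; that extra generality is exactly what the inverse limit adds.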
Pro-p group
[ "Mathematics" ]
515
[ "Mathematical structures", "Space (mathematics)", "Properties of groups", "Topological spaces", "Topology stubs", "Topology", "Algebraic structures", "Topological groups" ]
18,505,470
https://en.wikipedia.org/wiki/Zinterol
Zinterol is a beta-adrenergic agonist. Its structure is based on soterenol (antiarrhythmic) and phentermine. References Sulfonamides Phenols Phenylethanolamines
Zinterol
[ "Chemistry" ]
54
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
18,505,995
https://en.wikipedia.org/wiki/Resource%20productivity
Resource productivity is the quantity of good or service (outcome) that is obtained through the expenditure of unit resource. This can be expressed in monetary terms as the monetary yield per unit resource. For example, when applied to crop irrigation it is the yield of crop obtained through use of a given volume of irrigation water, the “crop per drop”, which could also be expressed as the monetary return from product per unit of irrigation water used.
Resource productivity and resource intensity are key concepts used in sustainability measurement, as they attempt to decouple the direct connection between resource use and environmental degradation. Their strength is that they can be used as a metric for both economic and environmental cost. Although these concepts are two sides of the same coin, in practice they involve very different approaches and can be viewed as reflecting, on the one hand, the efficiency of resource production as outcome per unit of resource use (resource productivity) and, on the other hand, the efficiency of resource consumption as resource use per unit outcome (resource intensity). The sustainability objective is to maximize resource productivity while minimizing resource intensity.
Scientific and political debates on resource productivity are regularly held at, among others, the World Resources Forum conferences.
See also
Bioeconomics
Econophysics
Energy and Environment
Environmental economics
Energy Accounting
Ecodynamics
Ecological Economics
Industrial ecology
Population dynamics
Thermoeconomics
Sustainability accounting
Resource intensity
Resource efficiency
Sustainable development
Systems ecology
The Natural Edge Project
References
Sustainability metrics and indices Natural resource management Resource economics Thermodynamics Energy economics
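As a purely illustrative sketch of the two definitions above (Python; the numbers are hypothetical, not from any dataset):
    yield_kg = 12_000   # crop obtained (kg)
    water_m3 = 8_000    # irrigation water expended (m^3)

    resource_productivity = yield_kg / water_m3   # outcome per unit resource: 1.50 kg/m^3
    resource_intensity = water_m3 / yield_kg      # resource per unit outcome: ~0.67 m^3/kg

    print(resource_productivity, "kg/m^3: maximize this")
    print(resource_intensity, "m^3/kg: minimize this")
The stated sustainability objective amounts to pushing the first number up and the second down; multiplying the yield by a market price turns the productivity figure into a monetary yield per unit resource.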
Resource productivity
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
302
[ "Energy economics", "Environmental social science stubs", "Thermodynamics", "Environmental social science", "Dynamical systems" ]
18,507,794
https://en.wikipedia.org/wiki/Monocopter
A monocopter or gyropter is a rotorcraft that uses a single rotating blade. The concept is similar to the whirling helicopter seeds that fall from some trees. The name gyropter is sometimes applied to monocopters in which the entire aircraft rotates about its center of mass as it flies. The name "monocopter" has also been applied to the personal jet pack constructed by Andreas Petzoldt.
History
Papin-Rouilly
The Gyroptère was designed in 1913–1914 by Alphonse Papin and Didier Rouilly in France, inspired by a maple seed. Papin and Rouilly obtained French patents 440,593 and 440,594 for their invention, and later obtained US patent 1,133,660 in 1915. The Gyroptère was characterized in the contemporary French journal La Nature in 1914 as "un boomerang géant" (a giant boomerang).
Following demonstrations of small rocket-powered models, the Army ordered a manned prototype in 1913. Papin and Rouilly's Gyroptère weighed including the float on which it was mounted. It had a single hollow blade with an area of , counterweighted by a fan driven by an 80 hp Le Rhône rotary engine spinning at 1,200 rpm, which produced an output of just over of air per second. The fan also propelled air through the hollow blade, from which it escaped through an L-shaped tube at a speed of . Directional control was to be achieved by means of a small auxiliary tube through which some of the air was driven and which could be directed in whatever direction the pilot wished. The pilot's position was located at the centre of gravity, between the blade and the fan.
Testing was delayed due to the outbreak of World War I and did not take place until 31 March 1915, on Lake Cercey on the Côte-d'Or. Due to the difficulty of balancing the craft, a rotor speed of only 47 rpm was achieved, instead of the 60 rpm which had been calculated as necessary for takeoff. In addition, the rotary engine used was not powerful enough; it had originally been planned to use a 100 hp car engine, which proved unobtainable. Unfortunately, the aircraft became unstable and the pilot had to abandon it, after which it sank.
Sikorsky XV-2
The Sikorsky XV-2, also known by the Sikorsky Aircraft model number S-57, was a planned experimental stoppable-rotor aircraft developed for a joint research program between the United States Air Force and the United States Army. The design utilized a single-rotor layout: a counterweight provided stability to the rotor system, while a tip-jet arrangement powered the rotor, which was to be retracted into the upper fuselage when stopped, with the XV-2 then flying like a conventional aircraft on delta wings. A single jet engine was to be provided for forward flight, equipped with thrust vectoring for steering in hover and for anti-torque control in lieu of a tail rotor. The program was cancelled before construction of the prototype began.
Bölkow Bo 103
The Bölkow Bo 103 was an ultralight helicopter designed for reconnaissance and command-control purposes and constructed by Bölkow Entwicklungen KG in 1961 as part of a research order by the German Federal Ministry of Defense.
It had a diameter monoblade rotor constructed of GRP in a single piece that incorporated its counterweight. A single prototype was built, but work was stopped in 1962 due to lack of interest on the part of the West German armed forces.
VJ-1X
The VJ-1X was an ultralight single-blade helicopter powered by a rotor-mounted pulsejet. Windspire, Inc. includes the plans for sale in their book How to Build a Jet Helicopter.
UAVs
Monocopters, in which the entire aircraft rotates about its center of mass as it flies, present advantages and challenges as unmanned aerial vehicles (UAVs) to the designer. As highly centripetal machines, they cannot be manned. The first of these monocopters were constructed by Dr. Charles W. McCutchen and powered by reciprocating model airplane engines in 1952. He flew them at Lake Placid and named them "Charybdis machines". Other early experimenters were William Foshag and Joe Carter. These types of monocopters caught on in the model airplane world, particularly in Eastern Europe, where free-flight record-setting models were constructed by George Horvath of Hungary, Sergei Vorabyev and V. Naidovsky of Russia, and Steffan Purice of Romania. An exception to the lack of US enthusiasm was Francis Boreham's "Buzzcopter" of 1964 and Ken Willard's "Rotoriser" of 1984. In 2002, Ron Jesme made the first successful electric propeller monocopter. Daedalus Research of Logan, Utah, also manufactured a monocopter kit, "Maple Seed", using a 0.049 model-airplane engine.
Gordon Mandell of the M.I.T. Model Rocket Society designed a model-rocket-engine-powered monocopter, which he named "turbocopter", and published the design concept in his column "Wayward Wind" in Model Rocketry Magazine in 1969. A later version of this was researched at MIT in 1980. This design prompted Korey Kline, an early member of the Tripoli Rocketry Association, to design his own rocket-powered monocopters, which fly on long-burn model rocket engines. They were demonstrated at various rocket launch events in the 1980s to crowds that raved at their performance. A few were manufactured as kits by Ace Rocketry at that time. Korey Kline published very little about monocopters, rocket or otherwise, and so by the 1990s the monocopter had faded from view.
Edward Miller of Pennsylvania began experimenting with them again in the late 1990s, as did Francis Graham, a Kent State University, Ohio, physics professor. By 1999 both were flying rocket monocopters. Francis Graham wrote a book, Monocopters, with some theory of their flight characteristics, in 1999, sold by Apogee Components of Colorado Springs. Ed Miller went on to build the largest high-power rocket monocopters ever flown, with 8-foot fiberglass-covered wooden wings, and also sells them. Chuck Rudy flew a large monocopter with a hybrid rocket engine, using solid and liquid fuel. Francis Graham continued to promote monocopters and organized a small conference held in Washington, Pennsylvania, in 2001. He also presented a paper on the subject at the 2003 Century-of-Flight conference sponsored by the AIAA in Dayton. Joseph Peklicz of Martin's Ferry scaled the monocopter down into kit form using small model rocket engines and sold many to individuals and schools. His kits are still available and widely sold. In 2008, Art Applewhite of Kerrville, Texas, began selling a popular line of rocket-powered monocopter kits as well.
Monocopters that rotate entirely had no practical purpose prior to 2003, but, due in part to Graham's book, that would change.
Patent 7,104,862 was awarded in 2006 to Michael A. Dammar of Vera-Tech Aero RPV Corp. of Edina, Minnesota, for a monocopter military reconnaissance device that was remotely controlled and took short exposures. Another remote-controlled monocopter, which could fly indoors on an electric motor, and which uses the Earth's magnetic field as a reference, was developed by Woody Hoburg and James Houghton at MIT in 2007–2008. See also Tip jet Notes Citations Bibliography Helicopters Aircraft configurations
Monocopter
[ "Engineering" ]
1,780
[ "Aircraft configurations", "Aerospace engineering" ]
1,226,822
https://en.wikipedia.org/wiki/Transforming%20growth%20factor
Transforming growth factor (TGF) is a term used to describe two classes of polypeptide growth factors, TGFα and TGFβ. The name "transforming growth factor" is somewhat arbitrary, since the two classes of TGFs are not structurally or genetically related to one another, and they act through different receptor mechanisms. Furthermore, they do not always induce cellular transformation, and are not the only growth factors that induce cellular transformation.
Types
TGFα is upregulated in some human cancers. It is produced in macrophages, brain cells, and keratinocytes, and induces epithelial development. It belongs to the EGF family.
TGFβ exists in three known subtypes in humans: TGFβ1, TGFβ2, and TGFβ3. These are upregulated in Marfan's syndrome and some human cancers, and play crucial roles in tissue regeneration, cell differentiation, embryonic development, and regulation of the immune system. Isoforms of transforming growth factor beta (TGF-β1) are also thought to be involved in the pathogenesis of pre-eclampsia. They belong to the transforming growth factor beta family. TGFβ receptors are single-pass serine/threonine kinase receptors.
Function
These proteins were originally characterized by their capacity to induce oncogenic transformation in a specific cell culture system, rat kidney fibroblasts. Application of the transforming growth factors to normal rat kidney fibroblasts induces the cultured cells to proliferate and overgrow, no longer subject to the normal inhibition caused by contact between cells.
See also
Bone morphogenetic protein
TGF beta signaling pathway
Tubuloglomerular feedback
References
External links
Tumor growth factor (TGF) citations
Growth factors Signal transduction
Transforming growth factor
[ "Chemistry", "Biology" ]
374
[ "Biochemistry", "Neurochemistry", "Growth factors", "Signal transduction" ]
1,227,042
https://en.wikipedia.org/wiki/Failure%20cause
Failure causes are defects in design, process, quality, or part application which are the underlying cause of a failure or which initiate a process that leads to failure. Where failure depends on the user of the product or process, human error must be considered.
Component failure / failure modes
A part failure mode is the way in which a component failed "functionally" at the component level. Often a part has only a few failure modes. For example, a relay may fail to open or close contacts on demand. The failure mechanism that caused this can be of many different kinds, and often multiple factors play a role at the same time. They include corrosion, welding of contacts due to an abnormal electric current, return-spring fatigue failure, unintended command failure, dust accumulation and blockage of the mechanism, etc. Seldom can only one cause (hazard) be identified that creates system failures. In theory, the real root causes can in most cases be traced back to some kind of human error, e.g. design failures, operational errors, management failures, maintenance-induced failures, specification failures, etc.
Failure scenario
A failure scenario is the complete identified possible sequence and combination of events, failures (failure modes), conditions, and system states that leads to an end (failure) system state. It starts from causes (if known) and leads to one particular end effect (the system failure condition). A failure scenario is for a system what the failure mechanism is for a component: both result in a failure mode (state) of the system or component.
Rather than the simple description of symptoms that many product users or process participants might use, the term failure scenario / mechanism refers to a rather complete description, including the preconditions under which failure occurs, how the thing was being used, proximate and ultimate/final causes (if known), and any subsidiary or resulting failures. The term is part of the engineering lexicon, especially of engineers working to test and debug products or processes. Carefully observing and describing failure conditions, identifying whether failures are reproducible or transient, and hypothesizing what combination of conditions and sequence of events led to failure is part of the process of fixing design flaws or improving future iterations. The term may be applied to mechanical systems failure.
Types of failure causes
Mechanical failure
Some types of mechanical failure mechanisms are: excessive deflection, buckling, ductile fracture, brittle fracture, impact, creep, relaxation, thermal shock, wear, corrosion, stress corrosion cracking, and various types of fatigue. Each produces a different type of fracture surface, and other indicators near the fracture surface(s). The way the product is loaded and the loading history are also important factors which determine the outcome. Of critical importance is design geometry, because stress concentrations can magnify the applied load locally to very high levels, and cracks usually grow from them.
Over time, as more is understood about a failure, the failure cause evolves from a description of symptoms and outcomes (that is, effects) to a systematic and relatively abstract model of how, when, and why the failure comes about (that is, causes). The more complex the product or situation, the more necessary a good understanding of its failure cause is to ensuring its proper operation (or repair). Cascading failures, for example, are particularly complex failure causes.
Edge cases and corner cases are situations in which complex, unexpected, and difficult-to-debug problems often occur.
Failure by corrosion
Materials can be degraded by their environment through corrosion processes, such as rusting in the case of iron and steel. Such processes can also be affected by load, as in the mechanisms of stress corrosion cracking and environmental stress cracking.
See also
Failure analysis
Failure mode and effects analysis (FMEA)
Failure modes, effects, and diagnostic analysis (FMEDA)
Failure rate
Forensic electrical engineering
Forensic engineering
Hazard analysis
Ultimate failure
Notes
Failure Reliability engineering Maintenance
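The relay example in the article above lends itself to a small, purely illustrative sketch of how component failure modes and candidate mechanisms might be tabulated in an FMEA-style record (Python; the data structure and its entries are invented for illustration, not a standard):
    # One component-level failure mode mapped to candidate failure mechanisms,
    # echoing the relay example in the text.
    relay_fmea = {
        "fails to close contacts on demand": [
            "corrosion of contact surfaces",
            "return-spring fatigue failure",
            "dust accumulation and blockage of mechanism",
        ],
        "fails to open contacts on demand": [
            "contacts welded by abnormal electric current",
            "unintended command failure",
        ],
    }

    for mode, mechanisms in relay_fmea.items():
        print(mode)
        for m in mechanisms:
            print("  possible mechanism:", m)
The point of such a record is the one made in the article: a handful of failure modes can each hide many distinct mechanisms, often acting together.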
Failure cause
[ "Engineering" ]
795
[ "Systems engineering", "Maintenance", "Mechanical engineering", "Reliability engineering" ]
1,227,154
https://en.wikipedia.org/wiki/Geodetic%20airframe
A geodetic airframe is a type of construction for the airframes of aircraft developed by British aeronautical engineer Barnes Wallis in the 1930s (who sometimes spelt it "geodesic"). Earlier, it was used by Prof. Schütte for the Schütte-Lanz airship SL 1 in 1909. It makes use of a space frame formed from a spirally crossing basket-weave of load-bearing members. The principle is that two geodesic arcs can be drawn to intersect on a curving surface (the fuselage) in a manner such that the torsional load on each cancels out that on the other.
Early examples
The "diagonal rider" structural element was used by Joshua Humphreys in the first US Navy sail frigates in 1794. Diagonal riders are viewable in the interior hull structure of the preserved USS Constitution on display in Boston Harbor. The structure was a pioneering example of placing "non-orthogonal" structural components within an otherwise conventional structure for its time. The "diagonal riders" were included in these American naval vessels' construction as one of five elements to reduce the problem of hogging in the ship's hull; since they did not make up the bulk of the vessel's structure, they do not constitute a completely "geodetic" space frame.
Calling any diagonal wood brace (as used on gates, buildings, ships or other structures with cantilevered or diagonal loads) an example of geodesic design is a misnomer. In a geodetic structure, the strength and structural integrity, and indeed the shape, come from the diagonal "braces" – the structure does not need the "bits in between" for part of its strength (implicit in the name space frame), as does a more conventional wooden structure.
Aeroplanes
The earliest-known use of a geodetic airframe design for any aircraft was for the pre-World War I Schütte-Lanz SL1 rigid airship's envelope structure of 1911, with the airship capable of up to a 38.3 km/h (23.8 mph) top airspeed.
The Latécoère 6 was a French four-engined biplane bomber of the early 1920s. It was of advanced all-metal construction and probably the first aeroplane to use geodetic construction. Only one was built.
Barnes Wallis, inspired by his earlier experience with light alloy structures and the use of geodesically arranged wiring to distribute the lifting loads of the gasbags in the design of the R100 airship, evolved the geodetic construction method (although it is commonly stated otherwise, there was no geodetic structure in R100). Wallis used the term "geodetic" to apply to the airframe; it is referred to as "Vickers-Wallis construction" in some early company documents. "Geodesic" is used in the United States for aircraft structures.
The system was later used by Wallis's employer, Vickers-Armstrongs, in a series of bomber aircraft: the Wellesley, Wellington, Warwick and Windsor. In these aircraft, the fuselage and wing were built up from duralumin-alloy channel beams that were formed into a large framework. Wooden battens were screwed onto the metal, to which the doped linen skin of the aircraft was fixed. The Windsor had a woven metal skin. The metal latticework gave a light and very strong structure. The benefit of the geodetic construction was a larger internal volume for a given streamlined shape. Flight magazine described a geodetic frame as a sheet-metal covering in which diamond-shaped holes have been cut, leaving behind the geodetic strips. The benefit was offset by having to construct the fuselage as a complete assembly, unlike aircraft using stressed-skin construction, which could be built in sections.
In addition, fabric covering on the geodetic frame was not suitable for higher-flying aircraft that had to be pressurised. The difficulty of providing a pressurised compartment in a geodetic frame was a challenge during the design of the high-altitude Wellington Mk. V. The pressure cabin, which expanded and contracted independently of the rest of the airframe, had to be attached at the nodal points of the structure.
Geodetic wing and fin structures, taken from the Wellington, were used on the post-war Vickers VC.1 Viking, but with a metal stressed-skin fuselage. Later production Vikings were of completely stressed-skin construction, marking the end of geodetic construction at Vickers.
See also
Design principle
Figure of the Earth
Geodesic dome
Geodesic (disambiguation)
Geodetic system
References
Inline citations
Sources
Airship technology Structural system Aerospace engineering Vickers Barnes Wallis
Geodetic airframe
[ "Technology", "Engineering" ]
940
[ "Structural system", "Structural engineering", "Building engineering", "Aerospace engineering" ]
1,227,509
https://en.wikipedia.org/wiki/Zaytsev%27s%20rule
In organic chemistry, Zaytsev's rule (or Zaitsev's rule, Saytzeff's rule, Saytzev's rule) is an empirical rule for predicting the favored alkene product(s) in elimination reactions. While at the University of Kazan, Russian chemist Alexander Zaytsev studied a variety of different elimination reactions and observed a general trend in the resulting alkenes. Based on this trend, Zaytsev proposed that the alkene formed in greatest amount is that which corresponds to removal of the hydrogen from the beta-carbon having the fewest hydrogen substituents. For example, when 2-iodobutane is treated with alcoholic potassium hydroxide (KOH), but-2-ene is the major product and but-1-ene is the minor product. More generally, Zaytsev's rule predicts that in an elimination reaction the most substituted product will be the most stable, and therefore the most favored. The rule makes no generalizations about the stereochemistry of the newly formed alkene, but only about the regiochemistry of the elimination reaction.
While effective at predicting the favored product of many elimination reactions, Zaytsev's rule is subject to many exceptions, in which the less substituted alkene (the Hofmann product, the counterpart of the Zaytsev product) is preferred instead. These include eliminations of compounds bearing a quaternary nitrogen or leaving groups such as NR3+ and SO3H; in such eliminations the Hofmann product is favored. When the leaving group is a halide other than fluoride, the Zaytsev product is generally obtained.
History
Alexander Zaytsev first published his observations regarding the products of elimination reactions in Justus Liebigs Annalen der Chemie in 1875. Although the paper contained some original research done by Zaytsev's students, it was largely a literature review and drew heavily upon previously published work. In it, Zaytsev proposed a purely empirical rule for predicting the favored regiochemistry in the dehydrohalogenation of alkyl iodides, though it turns out that the rule is applicable to a variety of other elimination reactions as well. While Zaytsev's paper was well referenced throughout the 20th century, it was not until the 1960s that textbooks began using the term "Zaytsev's rule".
Zaytsev was not the first chemist to publish the rule that now bears his name. Aleksandr Nikolaevich Popov published an empirical rule similar to Zaytsev's in 1872, and presented his findings at the University of Kazan in 1873. Zaytsev had cited Popov's 1872 paper in previous work and worked at the University of Kazan, and was thus probably aware of Popov's proposed rule. In spite of this, Zaytsev's 1875 Liebigs Annalen paper makes no mention of Popov's work.
Any discussion of Zaytsev's rule would be incomplete without mentioning Vladimir Vasilyevich Markovnikov. Zaytsev and Markovnikov both studied under Alexander Butlerov, taught at the University of Kazan during the same period, and were bitter rivals. Markovnikov, who published in 1870 what is now known as Markovnikov's rule, and Zaytsev held conflicting views regarding elimination reactions: the former believed that the least substituted alkene would be favored, whereas the latter felt the most substituted alkene would be the major product. Perhaps one of the main reasons Zaytsev began investigating elimination reactions was to disprove his rival. Zaytsev published his rule for elimination reactions just after Markovnikov published the first article in a three-part series in Comptes Rendus detailing his rule for addition reactions.
Thermodynamic considerations
The hydrogenation of alkenes to alkanes is exothermic. The amount of energy released during a hydrogenation reaction, known as the heat of hydrogenation, is inversely related to the stability of the starting alkene: the more stable the alkene, the lower its heat of hydrogenation. Examining the heats of hydrogenation for various alkenes reveals that stability increases with the amount of substitution.
The increase in stability associated with additional substitutions is the result of several factors. Alkyl groups are electron-donating by the inductive effect, and increase the electron density on the sigma bond of the alkene. Also, alkyl groups are sterically large, and are most stable when they are far away from each other. In an alkane, the maximum separation is that of the tetrahedral bond angle, 109.5°. In an alkene, the bond angle increases to near 120°. As a result, the separation between alkyl groups is greatest in the most substituted alkene. Hyperconjugation, which describes the stabilizing interaction between the HOMO of the alkyl group and the LUMO of the double bond, also helps explain the influence of alkyl substitutions on the stability of alkenes. With regard to orbital hybridization, a bond between an sp2 carbon and an sp3 carbon is stronger than a bond between two sp3-hybridized carbons. Computations reveal a dominant stabilizing hyperconjugation effect of 6 kcal/mol per alkyl group.
Steric effects
In E2 elimination reactions, a base abstracts a proton that is beta to a leaving group, such as a halide. The removal of the proton and the loss of the leaving group occur in a single, concerted step to form a new double bond. When a small, unhindered base – such as sodium hydroxide, sodium methoxide, or sodium ethoxide – is used for an E2 elimination, the Zaytsev product is typically favored over the least substituted alkene, known as the Hofmann product. For example, treating 2-bromo-2-methylbutane with sodium ethoxide in ethanol produces the Zaytsev product with moderate selectivity.
Due to steric interactions, a bulky base – such as potassium tert-butoxide, triethylamine, or 2,6-lutidine – cannot readily abstract the proton that would lead to the Zaytsev product. In these situations, a less sterically hindered proton is preferentially abstracted instead. As a result, the Hofmann product is typically favored when using bulky bases. When 2-bromo-2-methylbutane is treated with potassium tert-butoxide instead of sodium ethoxide, the Hofmann product is favored.
Steric interactions within the substrate also prevent the formation of the Zaytsev product. These intramolecular interactions are relevant to the distribution of products in the Hofmann elimination reaction, which converts amines to alkenes. In the Hofmann elimination, treatment of a quaternary ammonium iodide salt with silver oxide produces hydroxide ions, which act as a base and eliminate the tertiary amine to give an alkene. In the Hofmann elimination, the least substituted alkene is typically favored due to intramolecular steric interactions. The quaternary ammonium group is large, and interactions with alkyl groups on the rest of the molecule are undesirable. As a result, the conformation necessary for the formation of the Zaytsev product is less energetically favorable than the conformation required for the formation of the Hofmann product, and the Hofmann product is formed preferentially.
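Where product stabilities like those discussed under Thermodynamic considerations control the outcome, a Boltzmann factor gives a back-of-the-envelope product ratio. The Python sketch below is illustrative only – real E2 eliminations are kinetically controlled, and the 1.0 kcal/mol energy gap is a hypothetical value, not data from the article:
    import math

    R = 1.987e-3   # gas constant, kcal/(mol*K)
    T = 298.15     # temperature, K
    dE = 1.0       # hypothetical stability gap favoring the more substituted alkene, kcal/mol

    ratio = math.exp(dE / (R * T))
    print(f"approx. {ratio:.1f} : 1 toward the more substituted alkene")  # ~5.4 : 1
Even a modest stability difference of a kilocalorie per mole thus translates into a several-fold preference at room temperature, which is the qualitative content of Zaytsev's rule under equilibrium conditions.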
The Cope elimination is very similar to the Hofmann elimination in principle, but occurs under milder conditions. It also favors the formation of the Hofmann product, and for the same reasons.
Stereochemistry
In some cases, the stereochemistry of the starting material can prevent the formation of the Zaytsev product. For example, when menthyl chloride is treated with sodium ethoxide, the Hofmann product is formed exclusively, but in very low yield. This result is due to the stereochemistry of the starting material. E2 eliminations require anti-periplanar geometry, in which the proton and leaving group lie on opposite sides of the C–C bond but in the same plane. When menthyl chloride is drawn in the chair conformation, it is easy to explain the unusual product distribution. Formation of the Zaytsev product requires elimination at the 2-position, but the isopropyl group – not the proton – is anti-periplanar to the chloride leaving group; this makes elimination at the 2-position impossible. In order for the Hofmann product to form, elimination must occur at the 6-position. Because the proton at this position has the correct orientation relative to the leaving group, elimination can and does occur. As a result, this particular reaction produces only the Hofmann product.
See also
Markovnikov's rule
Hofmann elimination
Cope elimination
References
Bibliography
External links
Online course of chemistry
English translation of the 1875 German article "The order of addition and of elimination of hydrogen and iodine in organic compounds" by Alexander Zaytsev.
Eponymous chemical rules Physical organic chemistry 1875 in science
Zaytsev's rule
[ "Chemistry" ]
1,936
[ "Physical organic chemistry" ]
1,227,519
https://en.wikipedia.org/wiki/Horn-satisfiability
In formal logic, Horn-satisfiability, or HORNSAT, is the problem of deciding whether a given set of propositional Horn clauses is satisfiable or not. Horn-satisfiability and Horn clauses are named after Alfred Horn.
A Horn clause is a clause with at most one positive literal, called the head of the clause, and any number of negative literals, forming the body of the clause. A Horn formula is a propositional formula formed by conjunction of Horn clauses.
Horn satisfiability is one of the "hardest" or "most expressive" problems known to be computable in polynomial time, in the sense that it is a P-complete problem.
The Horn satisfiability problem can also be asked for propositional many-valued logics. The algorithms are not usually linear, but some are polynomial; see Hähnle (2001 or 2003) for a survey.
Algorithm
The problem of Horn satisfiability is solvable in linear time. The problem of deciding the truth of quantified Horn formulae can also be solved in polynomial time. A polynomial-time algorithm for Horn satisfiability is recursive:
A first termination condition is a formula in which every currently existing clause contains a negative literal. In this case, all the variables currently in the clauses can be set to false.
A second termination condition is an empty clause. In this case, the formula has no solutions.
In the other cases, the formula contains a positive unit clause, i.e. a clause consisting of a single positive literal L, so we do a unit propagation: the literal L is set to true, all the clauses containing L are removed, and the negation of L is removed from every clause containing it. The result is a new Horn formula, so we reiterate.
This algorithm also allows determining a truth assignment of satisfiable Horn formulae: all variables contained in a unit clause are set to the value satisfying that unit clause; all other literals are set to false. The resulting assignment is the minimal model of the Horn formula, that is, the assignment having a minimal set of variables assigned to true, where comparison is made using set containment.
Using a linear algorithm for unit propagation, the algorithm is linear in the size of the formula.
Examples
Trivial case
In a Horn formula in which each clause has a negated literal, setting every variable to false satisfies all clauses, hence it is a solution.
Solvable case
Consider a Horn formula in which one clause forces f to be true. Setting f to true and simplifying leaves a formula in which b must be true as well. After setting b to true and simplifying again, only clauses containing negative literals remain; this is now a trivial case, so the remaining variables can all be set to false. Thus, a satisfying assignment sets f and b to true and all other variables to false.
Unsolvable case
Consider a Horn formula in which one clause forces f to be true; after simplification b has to be true, and simplifying once more produces an empty clause. Hence the formula is unsatisfiable.
Generalization
A generalization of the class of Horn formulae is that of renamable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negations. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P, as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula. Horn satisfiability and renamable Horn satisfiability provide one of two important subclasses of satisfiability that are solvable in polynomial time; the other such subclass is 2-satisfiability.
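A compact Python sketch of the unit-propagation algorithm described in the Algorithm section above (for clarity this version re-scans the clause list, so it runs in quadratic rather than linear time; encoding literals as signed integers is a common convention, not something fixed by the article):
    def horn_sat(clauses):
        # A clause is a set of integer literals: +v for variable v, -v for its
        # negation; a Horn clause has at most one positive literal.
        # Returns the minimal model as a set of true variables, or None if UNSAT.
        clauses = [set(c) for c in clauses]
        true_vars = set()
        while True:
            if any(len(c) == 0 for c in clauses):
                return None                       # empty clause: no solutions
            unit = next((c for c in clauses if len(c) == 1 and min(c) > 0), None)
            if unit is None:
                return true_vars                  # every clause has a negative literal
            v = next(iter(unit))                  # positive unit clause forces v
            true_vars.add(v)
            # unit propagation: drop satisfied clauses, delete the literal -v
            clauses = [c - {-v} for c in clauses if v not in c]

    # (not a or b) and (not b or not f or c) and f, encoded with a=1, b=2, c=3, f=4
    print(horn_sat([{-1, 2}, {-2, -4, 3}, {4}]))  # {4}: only f is forced true
On an unsolvable instance like the pattern sketched above, propagation eventually strips some clause empty and the function returns None.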
Dual-Horn SAT
A dual variant of Horn SAT is Dual-Horn SAT, in which each clause has at most one negative literal. Negating all variables transforms an instance of Dual-Horn SAT into Horn SAT. It was proven in 1951 by Horn that Dual-Horn SAT is in P.
See also
Unit propagation
Boolean satisfiability problem
2-satisfiability
References
Further reading
Logic in computer science P-complete problems Satisfiability problems
Horn-satisfiability
[ "Mathematics" ]
845
[ "Logic in computer science", "Automated theorem proving", "Mathematical logic", "Computational problems", "P-complete problems", "Mathematical problems", "Satisfiability problems" ]
1,228,638
https://en.wikipedia.org/wiki/Chemical%20shift
In nuclear magnetic resonance (NMR) spectroscopy, the chemical shift is the resonant frequency of an atomic nucleus relative to a standard in a magnetic field. Often the position and number of chemical shifts are diagnostic of the structure of a molecule. Chemical shifts are also used to describe signals in other forms of spectroscopy, such as photoemission spectroscopy.
Some atomic nuclei possess a magnetic moment (nuclear spin), which gives rise to different energy levels and resonance frequencies in a magnetic field. The total magnetic field experienced by a nucleus includes local magnetic fields induced by currents of electrons in the molecular orbitals (electrons have a magnetic moment themselves). The electron distribution of the same type of nucleus (e.g. 1H) usually varies according to the local geometry (binding partners, bond lengths, angles between bonds, and so on), and with it the local magnetic field at each nucleus. This is reflected in the spin energy levels (and resonance frequencies). The variation of nuclear magnetic resonance frequencies of the same kind of nucleus, due to variations in the electron distribution, is called the chemical shift. The size of the chemical shift is given with respect to a reference frequency or reference sample (see also chemical shift referencing), usually a molecule with a barely distorted electron distribution.
Operating frequency
The operating (or Larmor) frequency ν0 of a magnet (usually quoted as an absolute value in MHz) is calculated from the Larmor equation
ν0 = γB0 / (2π),
where B0 is the induction of the magnet (SI units of tesla), and γ is the magnetogyric ratio of the nucleus, an empirically measured fundamental constant determined by the details of the structure of each nucleus. For example, the proton operating frequency for a 1-tesla magnet is calculated as ν0 = (42.58 MHz/T)(1 T) ≈ 42.58 MHz.
MRI scanners are often referred to by their field strengths (e.g. "a 7 T scanner"), whereas NMR spectrometers are commonly referred to by the corresponding proton Larmor frequency (e.g. "a 300 MHz spectrometer", which has a B0 of 7 T). While chemical shift is referenced in order that the units are equivalent across different field strengths, the actual frequency separation in hertz scales with field strength (B0). As a result, the difference of chemical shift between two signals (in ppm) represents a larger number of hertz on machines that have larger B0, and therefore the signals are less likely to overlap in the resulting spectrum. This increased resolution is a significant advantage for analysis. (Larger-field machines are also favoured on account of having intrinsically higher signal arising from the Boltzmann distribution of magnetic spin states.)
Chemical shift referencing
Chemical shift δ is usually expressed in parts per million (ppm) by frequency, because it is calculated from
δ = (νsample − νref) / νref,
where νsample is the absolute resonance frequency of the sample, and νref is the absolute resonance frequency of a standard reference compound, measured in the same applied magnetic field B0. Since the numerator is usually expressed in hertz, and the denominator in megahertz, δ is expressed in ppm.
The detected frequencies (in Hz) for 1H, 13C, and 29Si nuclei are usually referenced against TMS (tetramethylsilane), TSP (trimethylsilylpropanoic acid), or DSS, which by the definition above have a chemical shift of zero if chosen as the reference. Other standard materials are used for setting the chemical shift for other nuclei.
Thus an NMR signal observed at a frequency 300 Hz higher than the signal from TMS, where the TMS resonance frequency is 300 MHz, has a chemical shift of 300 Hz / 300 MHz = 1 ppm.
Although the absolute resonance frequency depends on the applied magnetic field, the chemical shift is independent of external magnetic field strength. On the other hand, the resolution of NMR will increase with applied magnetic field.
Referencing methods
Practically speaking, diverse methods may be used to reference chemical shifts in an NMR experiment, which can be subdivided into indirect and direct referencing methods. Indirect referencing uses a channel other than the one of interest to adjust the chemical shift scale correctly, i.e. the solvent signal in the deuterium (lock) channel can be used to reference a 1H NMR spectrum. Both indirect and direct referencing can be done as three different procedures:
Internal referencing, where the reference compound is added directly to the system under study. In this common practice, users adjust residual solvent signals of 1H or 13C NMR spectra with calibrated spectral tables. If substances other than the solvent itself are used for internal referencing, the sample has to be combined with the reference compound, which may affect the chemical shifts.
External referencing, involving sample and reference contained separately in coaxial cylindrical tubes. With this procedure, the reference signal is still visible in the spectrum of interest, although the reference and the sample are physically separated by a glass wall. Magnetic susceptibility differences between the sample and the reference phase need to be corrected theoretically, which lowers the practicality of this procedure.
Substitution method: the use of separate cylindrical tubes for the sample and the reference compound, with (in principle) spectra recorded individually for each. Similar to external referencing, this method allows referencing without sample contamination. If field/frequency locking via the 2H signal of the deuterated solvent is used and the solvents of reference and analyte are the same, the use of this method is straightforward. Problems may arise if different solvents are used for the reference compound and the sample, as (just like for external referencing) magnetic susceptibility differences need to be corrected theoretically. If this method is used without field/frequency locking, shimming procedures between the sample and the reference need to be avoided, as they change the applied magnetic field (and thereby influence the chemical shift).
Modern NMR spectrometers commonly make use of the absolute scale, which defines the 1H signal of TMS as 0 ppm in proton NMR and the center frequencies of all other nuclei as a percentage of the TMS resonance frequency, Ξ = 100 (νX / νTMS), where νX is the observed center frequency of nucleus X and νTMS the absolute 1H frequency of TMS in the same field.
The use of the deuterium (lock) channel, i.e. the 2H signal of the deuterated solvent, and the Ξ value of the absolute scale is a form of internal referencing and is particularly useful in heteronuclear NMR spectroscopy, as local reference compounds may not always be available or easily used (e.g. liquid NH3 for 15N NMR spectroscopy). This system, however, relies on accurately determined 2H NMR chemical shifts enlisted in the spectrometer software and correctly determined Ξ values by IUPAC. A recent study for 19F NMR spectroscopy revealed that the use of the absolute scale and lock-based internal referencing led to errors in chemical shifts. These may be negated by inclusion of calibrated reference compounds.
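The two formulas above lend themselves to a short numerical sketch (Python; the function names are made up for illustration, and the proton constant is the usual literature value):
    GAMMA_H_OVER_2PI = 42.577e6   # proton magnetogyric ratio / 2*pi, in Hz per tesla

    def larmor_mhz(b0_tesla):
        # operating frequency nu0 = gamma * B0 / (2*pi), reported in MHz
        return GAMMA_H_OVER_2PI * b0_tesla / 1e6

    def shift_ppm(offset_hz, spectrometer_mhz):
        # delta = (nu_sample - nu_ref) / nu_ref; hertz divided by megahertz is ppm
        return offset_hz / spectrometer_mhz

    print(larmor_mhz(1.0))         # ~42.6 MHz at 1 T, as in the text
    print(larmor_mhz(7.05))        # ~300 MHz: a "300 MHz spectrometer"
    print(shift_ppm(300, 300.0))   # 300 Hz above TMS on a 300 MHz magnet -> 1.0 ppm
The last line reproduces the worked example above; because δ is a ratio, the same peak would still sit at 1.0 ppm on a higher-field instrument even though its offset in hertz would grow.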
The induced magnetic field
The electrons around a nucleus will circulate in a magnetic field and create a secondary induced magnetic field. This field opposes the applied field, as stipulated by Lenz's law, and atoms with higher induced fields (i.e., higher electron density) are therefore called shielded, relative to those with lower electron density. Electron-donating alkyl groups, for example, lead to increased shielding, whereas electron-withdrawing substituents such as nitro groups lead to deshielding of the nucleus. Not only substituents cause local induced fields: bonding electrons can also lead to shielding and deshielding effects. A striking example of this is the pi bonds in benzene. Circular current through the conjugated ring system causes a shielding effect at the molecule's center and a deshielding effect at its edges. Trends in chemical shift are explained based on the degree of shielding or deshielding.
Nuclei are found to resonate in a wide range to the left (or, more rarely, to the right) of the internal standard. When a signal is found with a higher chemical shift:
the applied effective magnetic field is lower, if the resonance frequency is fixed (as in old traditional CW spectrometers)
the frequency is higher, when the applied magnetic field is static (the normal case in FT spectrometers)
the nucleus is more deshielded
the signal or shift is downfield, or at low field, or paramagnetic
Conversely a lower chemical shift is called a diamagnetic shift, and is upfield and more shielded.
Diamagnetic shielding
In real molecules protons are surrounded by a cloud of charge due to adjacent bonds and atoms. In an applied magnetic field (B0) electrons circulate and produce an induced field (Bi) which opposes the applied field. The effective field at the nucleus will be Beff = B0 − Bi. The nucleus is said to be experiencing diamagnetic shielding.
Factors causing chemical shifts
Important factors influencing chemical shift are electron density, electronegativity of neighboring groups, and anisotropic induced magnetic field effects.
Electron density shields a nucleus from the external field. For example, in proton NMR the electron-poor tropylium ion has its protons downfield at 9.17 ppm, those of the electron-rich cyclooctatetraenyl anion move upfield to 6.75 ppm, and its dianion even more upfield, to 5.56 ppm.
A nucleus in the vicinity of an electronegative atom experiences reduced electron density and is therefore deshielded. In proton NMR of methyl halides (CH3X), the chemical shift of the methyl protons increases in the order I < Br < Cl < F, from 2.16 ppm to 4.26 ppm, reflecting this trend. In carbon NMR the chemical shift of the carbon nuclei increases in the same order, from around −10 ppm to 70 ppm. Also, when the electronegative atom is removed further away the effect diminishes until it can be observed no longer.
Anisotropic induced magnetic field effects are the result of a local induced magnetic field experienced by a nucleus resulting from circulating electrons, which can be either paramagnetic, when it is parallel to the applied field, or diamagnetic, when it is opposed to it. This is observed in alkenes, where the double bond is oriented perpendicular to the external field with the pi electrons likewise circulating at right angles. The induced magnetic field lines are parallel to the external field at the location of the alkene protons, which therefore shift downfield to a 4.5 ppm to 7.5 ppm range.
The three-dimensional space where a diamagnetic shift occurs is called the shielding zone, which has a cone-like shape aligned with the external field. The protons in aromatic compounds are shifted downfield even further, with a signal for benzene at 7.73 ppm, as a consequence of a diamagnetic ring current. Alkyne protons, by contrast, resonate at high field in a 2–3 ppm range. For alkynes the most effective orientation is with the external field parallel to the circulation of electrons around the triple bond. In this way the acetylenic protons are located in the cone-shaped shielding zone, hence the upfield shift.

Magnetic properties of most common nuclei
1H and 13C are not the only nuclei susceptible to NMR experiments. A number of different nuclei can also be detected, although the use of such techniques is generally rare due to the small relative sensitivities in NMR experiments (compared to 1H) of the nuclei in question, the other factor for rare use being their low natural abundance in organic compounds. 1H, 13C, 15N, 19F and 31P are the five nuclei that have the greatest importance in NMR experiments:

1H because of high sensitivity and vast occurrence in organic compounds
13C because of being the key component of all organic compounds, despite occurring at a low abundance (1.1%) compared to the major isotope of carbon, 12C, which has a spin of 0 and is therefore NMR-inactive
15N because of being a key component of important biomolecules such as proteins and DNA
19F because of high relative sensitivity
31P because of frequent occurrence in organic compounds and moderate relative sensitivity

Chemical shift manipulation
In general, the increased signal-to-noise ratio and resolution associated with higher field strengths has driven a move towards them. In limited cases, however, lower fields are preferred; examples are systems in chemical exchange, where the speed of the exchange relative to the NMR experiment can cause additional and confounding linewidth broadening. Similarly, while avoidance of second-order coupling is generally preferred, this information can be useful for elucidation of chemical structures. Using refocusing pulses placed between the recording of successive points of the free induction decay, in an analogous fashion to the spin echo technique in MRI, the chemical shift evolution can be scaled to provide apparent low-field spectra on a high-field spectrometer. In a similar fashion, it is possible to upscale the effect of J-coupling relative to the chemical shift using pulse sequences that include additional J-coupling evolution periods interspersed with conventional spin evolutions.

Other chemical shifts
The Knight shift (first reported in 1949) and Shoolery's rule are observed with pure metals and methylene groups, respectively. The NMR chemical shift in its present-day meaning first appeared in journals in 1950. Chemical shifts with a different meaning appear in X-ray photoelectron spectroscopy as the shift in atomic core-level energy due to a specific chemical environment. The term is also used in Mössbauer spectroscopy, where, similarly to NMR, it refers to a shift in peak position due to the local chemical bonding environment. As is the case for NMR, the chemical shift reflects the electron density at the atomic nucleus.
See also
EuFOD, a shift agent
MRI
Nuclear magnetic resonance
Nuclear magnetic resonance spectroscopy of carbohydrates
Nuclear magnetic resonance spectroscopy of nucleic acids
Nuclear magnetic resonance spectroscopy of proteins
Random coil index
Relaxation (NMR)
Solid-state NMR
TRISPHAT, a chiral shift reagent for cations
Zeeman effect

References

External links
chem.wisc.edu
BioMagResBank
NMR Table
Proton chemical shifts
Carbon chemical shifts
Online tutorials (these generally involve combined use of IR, 1H NMR, 13C NMR and mass spectrometry)
Problem set 1 (see also this link for more background information on spin-spin coupling)
Problem set 2
Problem set 4
Problem set 5
Combined solutions to problem set 5 (Problems 1–32) and (Problems 33–64)

Nuclear chemistry
Nuclear physics
Nuclear magnetic resonance spectroscopy
Chemical shift
[ "Physics", "Chemistry" ]
2,961
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy", "Nuclear chemistry", "nan", "Nuclear physics", "Spectroscopy" ]
1,228,679
https://en.wikipedia.org/wiki/Gyromagnetic%20ratio
In physics, the gyromagnetic ratio (also sometimes known as the magnetogyric ratio in other disciplines) of a particle or system is the ratio of its magnetic moment to its angular momentum, and it is often denoted by the symbol γ, gamma. Its SI unit is the radian per second per tesla (rad⋅s−1⋅T−1) or, equivalently, the coulomb per kilogram (C⋅kg−1).

The term "gyromagnetic ratio" is often used as a synonym for a different but closely related quantity, the g-factor. The g-factor only differs from the gyromagnetic ratio in being dimensionless.

For a classical rotating body
Consider a nonconductive charged body rotating about an axis of symmetry. According to the laws of classical physics, it has both a magnetic dipole moment due to the movement of charge and an angular momentum due to the movement of mass arising from its rotation. It can be shown that as long as its charge and mass density and flow are distributed identically and rotationally symmetrically, its gyromagnetic ratio is

γ = q / (2m),

where q is its charge and m is its mass.

The derivation of this relation is as follows. It suffices to demonstrate this for an infinitesimally narrow circular ring within the body, as the general result then follows from an integration. Suppose the ring has radius r, area A = πr², mass m, charge q, and angular momentum L = mvr. Then the magnitude of the magnetic dipole moment is

μ = I·A = (qv / 2πr) · πr² = qvr / 2 = (q / 2m) · mvr = (q / 2m) · L.

For an isolated electron
An isolated electron has an angular momentum and a magnetic moment resulting from its spin. While an electron's spin is sometimes visualized as a literal rotation about an axis, it cannot be attributed to mass distributed identically to the charge. The above classical relation does not hold, giving the wrong result by the absolute value of the electron's g-factor, which is denoted g_e:

γ_e = |g_e| · μ_B / ħ = |g_e| · e / (2m_e),

where μ_B is the Bohr magneton. The gyromagnetic ratio due to electron spin is twice that due to the orbiting of an electron.

In the framework of relativistic quantum mechanics,

|g_e| = 2 (1 + α/2π + …),

where α is the fine-structure constant. Here the small corrections to the relativistic result |g_e| = 2 come from the quantum field theory calculations of the anomalous magnetic dipole moment. The electron g-factor is known to twelve decimal places by measuring the electron magnetic moment in a one-electron cyclotron:

|g_e| = 2.002 319 304 362 56(35).

The electron gyromagnetic ratio is

γ_e ≈ 1.760 859 630 × 10^11 rad⋅s−1⋅T−1, so that γ_e/2π ≈ 28 024.951 MHz⋅T−1.

The electron g-factor and γ_e are in excellent agreement with theory; see Precision tests of QED for details.

Gyromagnetic factor not as a consequence of relativity
Since a gyromagnetic factor equal to 2 follows from Dirac's equation, it is a frequent misconception to think that a g-factor 2 is a consequence of relativity; it is not. The factor 2 can be obtained from the linearization of both the Schrödinger equation and the relativistic Klein–Gordon equation (which leads to Dirac's). In both cases a 4-spinor is obtained and for both linearizations the g-factor is found to be equal to 2. Therefore, the factor 2 is a consequence of the minimal coupling and of the fact of having the same order of derivatives for space and time.

Physical spin-1/2 particles which cannot be described by the linear gauged Dirac equation satisfy the gauged Klein–Gordon equation extended by the (g·e/4)·σ^{μν}F_{μν} term, according to

[(i∂^μ − eA^μ)(i∂_μ − eA_μ) − (g·e/4)·σ^{μν}F_{μν} − m²] ψ = 0.

Here, σ^{μν} and F_{μν} stand for the Lorentz group generators in the Dirac space and the electromagnetic tensor, respectively, while A_μ is the electromagnetic four-potential. An example for such a particle is the spin-1/2 companion to spin-3/2 in the D^{(1/2,1)} ⊕ D^{(1,1/2)} representation space of the Lorentz group. This particle has been shown to be characterized by g = −2/3 and consequently to behave as a truly quadratic fermion.
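To make the electron numbers above concrete, the sketch below recomputes the electron gyromagnetic ratio from the g-factor and fundamental constants; the constant values are standard CODATA-style figures quoted here for illustration and the function layout is a choice made for this example, not part of the original text.

import math

# Electron gyromagnetic ratio from the g-factor: gamma_e = |g_e| * mu_B / hbar
g_e = 2.002319304        # electron g-factor (absolute value, approximate)
mu_B = 9.2740100783e-24  # Bohr magneton, J/T
hbar = 1.054571817e-34   # reduced Planck constant, J*s

gamma_e = g_e * mu_B / hbar  # rad / (s*T)
print(f"gamma_e      ~ {gamma_e:.6e} rad s^-1 T^-1")        # ~1.760860e11
print(f"gamma_e/2pi  ~ {gamma_e / (2 * math.pi) / 1e6:.3f} MHz/T")  # ~28024.951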
For a nucleus
Protons, neutrons, and many nuclei carry nuclear spin, which gives rise to a gyromagnetic ratio as above. The ratio is conventionally written in terms of the proton mass and charge, even for neutrons and for other nuclei, for the sake of simplicity and consistency. The formula is:

γ = g · μ_N / ħ = g · e / (2m_p),

where μ_N is the nuclear magneton, and g is the g-factor of the nucleon or nucleus in question. The ratio μ_N/h is equal to 7.622593285(47) MHz/T.

The gyromagnetic ratio of a nucleus plays a role in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). These procedures rely on the fact that bulk magnetization due to nuclear spins precesses in a magnetic field at a rate called the Larmor frequency, which is simply the product of the gyromagnetic ratio with the magnetic field strength. With this phenomenon, the sign of γ determines the sense (clockwise vs counterclockwise) of precession. Most common nuclei such as 1H and 13C have positive gyromagnetic ratios. Approximate values for some common nuclei are given in the sketch following this section.

Larmor precession
Any free system with a constant gyromagnetic ratio, such as a rigid system of charges, a nucleus, or an electron, when placed in an external magnetic field B (measured in teslas) that is not aligned with its magnetic moment, will precess at a frequency f (measured in hertz) proportional to the external field:

f = (γ / 2π) · B.

For this reason, values of γ/(2π), in units of hertz per tesla (Hz/T), are often quoted instead of γ.

Heuristic derivation
The derivation of this ratio is as follows. First we must prove that the torque resulting from subjecting a magnetic moment m to a magnetic field B is

τ = m × B.

The identity of the functional form of the stationary electric and magnetic fields has led to defining the magnitude of the magnetic dipole moment equally well as m = q_m·d, in the following way imitating the moment of an electric dipole: the magnetic dipole can be represented by a needle of a compass with fictitious magnetic charges ±q_m on the two poles and vector distance d = d·d̂ between the poles, under the influence of the magnetic field of earth B, where d̂ is the unit distance vector. By classical mechanics the torque on this needle is

τ = q_m · d × B.

But as previously stated m = q_m·d, so the desired formula τ = m × B comes up.

The spinning electron model here is analogous to a gyroscope. For any rotating body the rate of change of the angular momentum J equals the applied torque T:

dJ/dt = T.

Note as an example the precession of a gyroscope. The earth's gravitational attraction applies a force or torque to the gyroscope in the vertical direction, and the angular momentum vector along the axis of the gyroscope rotates slowly about a vertical line through the pivot. In place of a gyroscope, imagine a sphere spinning around the axis with its center on the pivot of the gyroscope, and along the axis of the gyroscope two oppositely directed vectors, both originated in the center of the sphere, upwards J and downwards −J. Replace the gravity with a magnetic flux density B.

dJ/dt represents the linear velocity of the tip of the arrow J along a circle whose radius is J·sin φ, where φ is the angle between J and the vertical. Hence the angular velocity of the rotation of the spin is

ω = 2πf = |dJ/dt| / (J·sin φ) = τ / (J·sin φ) = (m·B·sin φ) / (J·sin φ) = (m/J) · B = γ·B.

Consequently, f = (γ/2π)·B.

This relationship also explains an apparent contradiction between the two equivalent terms, gyromagnetic ratio versus magnetogyric ratio: whereas it is a ratio of a magnetic property (i.e. dipole moment) to a gyric (rotational, from Greek γύρος, "turn") property (i.e. angular momentum), it is also, at the same time, a ratio between the angular precession frequency ω = 2πf (another gyric property) and the magnetic field.
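As referenced above, a short sketch tabulating approximate γ/2π values for common nuclei and evaluating the Larmor relation f = (γ/2π)·B; the numerical values are approximate literature figures supplied here for illustration, standing in for the table that accompanied the original article.

# Approximate gyromagnetic ratios gamma/2pi in MHz/T for common NMR nuclei.
GAMMA_OVER_2PI_MHZ_PER_T = {
    "1H": 42.577, "2H": 6.536, "13C": 10.708,
    "14N": 3.077, "15N": -4.316, "17O": -5.772,
    "19F": 40.078, "23Na": 11.262, "31P": 17.235,
}

def larmor_frequency_mhz(nucleus: str, b_field_tesla: float) -> float:
    """Larmor frequency f = (gamma / 2pi) * B; a negative sign flips the
    sense of precession but not the magnitude of the frequency."""
    return GAMMA_OVER_2PI_MHZ_PER_T[nucleus] * b_field_tesla

# On an 11.74 T magnet (a "500 MHz" instrument), protons precess near 500 MHz:
print(larmor_frequency_mhz("1H", 11.74))   # ~499.9 MHz
print(larmor_frequency_mhz("13C", 11.74))  # ~125.7 MHz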
The angular precession frequency has an important physical meaning: it is the angular cyclotron frequency, the resonance frequency of an ionized plasma under the influence of a static finite magnetic field, when a high-frequency electromagnetic field is superimposed.

See also
Charge-to-mass ratio
Chemical shift
Landé g-factor
Larmor equation
Proton gyromagnetic ratio

References

Atomic physics
Nuclear magnetic resonance
Ratios
Gyromagnetic ratio
[ "Physics", "Chemistry", "Mathematics" ]
1,626
[ "Nuclear magnetic resonance", "Quantum mechanics", "Arithmetic", "Atomic physics", " molecular", "Nuclear physics", "Atomic", "Ratios", " and optical physics" ]
1,229,421
https://en.wikipedia.org/wiki/Robocrane
The Robocrane is a kind of manipulator resembling a Stewart platform but using an octahedral assembly of cables instead of struts. Like the Stewart platform, the Robocrane has six degrees of freedom (x, y, z, pitch, roll, & yaw). It was developed by Dr. James S. Albus of the US National Institute of Standards and Technology (NIST), using the Real-Time Control System, which is a hierarchical control system. Given its unusual ability to "fly" tools around a work site, it has many possible applications, including stone carving, ship building, bridge construction, inspection, pipe or beam fitting and welding.

Albus invented and developed a new generation of robot cranes based on six cables and six winches configured as a Stewart platform. The NIST RoboCrane has the capacity to lift and precisely manipulate heavy loads over large volumes with fine control in all six degrees of freedom. Laboratory RoboCranes have demonstrated the ability to manipulate tools such as saws, grinders, and welding torches, and to lift and precisely position heavy objects such as steel beams and cast iron pipe.

In 1992, the RoboCrane was selected by Construction Equipment magazine as one of the 100 most significant new products of the year for construction and related industries. It was also selected by Popular Science magazine for the "Best of What's New" award as one of the 100 top products, technologies, and scientific achievements of 1992.

A version of the RoboCrane has been commercially developed for the United States Air Force to enable rapid paint stripping, inspection, and repainting of very large military aircraft such as the C-5 Galaxy. RoboCrane is expected to save the United States Air Force $8 million annually at each of its maintenance facilities. This project was recognized in 2008 by a National Laboratories Award for technology transfer.

Potential future applications of the RoboCrane include ship building; construction of high rise buildings, highways, bridges, tunnels, and port facilities; cargo handling; ship-to-ship cargo transfer on the high seas; radioactive and toxic waste clean-up; and underwater applications such as salvage, drilling, cable maintenance, and undersea waste site management.

References

External links
Manipulation and Mobility Systems Group
Citations: The NIST ROBOCRANE - Albus, Bostelman, Dagalakis
RoboCrane - a page from Carnegie Mellon University
MEL Gallery Movies
RoboCrane
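Since the article describes the RoboCrane as a six-cable, six-winch platform, a minimal sketch of the core computation, the inverse kinematics turning a desired platform pose into six cable lengths, may help make the idea concrete; the geometry, names, and dimensions below are hypothetical illustrations chosen for this example, not NIST's actual design parameters.

import numpy as np

def cable_lengths(anchors, attachments, position, rpy):
    """Inverse kinematics for a six-cable suspended platform: each cable's
    length is the distance from its fixed overhead anchor to the matching
    attachment point on the platform, after the platform is rotated by
    (roll, pitch, yaw) and translated to 'position'."""
    roll, pitch, yaw = rpy
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Z-Y-X (yaw-pitch-roll) rotation matrix.
    R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                  [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                  [-sp,   cp*sr,            cp*cr]])
    world_points = attachments @ R.T + position  # platform points in world frame
    return np.linalg.norm(anchors - world_points, axis=1)

# Hypothetical geometry: pairs of cables from three overhead anchor points
# (the octahedral arrangement), platform attachments on a small triangle.
anchors = np.array([[0.0, 0.0, 10.0], [0.0, 0.0, 10.0],
                    [8.0, 0.0, 10.0], [8.0, 0.0, 10.0],
                    [4.0, 7.0, 10.0], [4.0, 7.0, 10.0]])
attachments = np.array([[-1.0, -0.6, 0.0], [ 1.0, -0.6, 0.0],
                        [ 1.0, -0.6, 0.0], [ 0.0,  1.2, 0.0],
                        [ 0.0,  1.2, 0.0], [-1.0, -0.6, 0.0]])
pose_lengths = cable_lengths(anchors, attachments,
                             position=np.array([4.0, 2.3, 4.0]),
                             rpy=(0.0, 0.0, 0.1))
print(np.round(pose_lengths, 3))  # six winch set-points for this pose

Commanding the six winches to track these lengths as the commanded pose changes is what would let such a platform "fly" a tool through the workspace with full six-degree-of-freedom control.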
Robocrane
[ "Engineering" ]
498
[ "Industrial robots" ]
764,468
https://en.wikipedia.org/wiki/Post%27s%20theorem
In computability theory Post's theorem, named after Emil Post, describes the connection between the arithmetical hierarchy and the Turing degrees.

Background
The statement of Post's theorem uses several concepts relating to definability and recursion theory. This section gives a brief overview of these concepts, which are covered in depth in their respective articles.

The arithmetical hierarchy classifies certain sets of natural numbers that are definable in the language of Peano arithmetic. A formula is said to be Σ^0_m if it is an existential statement in prenex normal form (all quantifiers at the front) with m alternations between existential and universal quantifiers, applied to a formula with bounded quantifiers only. Formally a formula φ(s) in the language of Peano arithmetic is a Σ^0_m formula if it is of the form

∃n_1 ∀n_2 ∃n_3 ⋯ Q n_m ρ(n_1, …, n_m, s),

where ρ contains only bounded quantifiers and Q is ∀ if m is even and ∃ if m is odd.

A set A of natural numbers is said to be Σ^0_m if it is definable by a Σ^0_m formula, that is, if there is a Σ^0_m formula φ such that each number n is in A if and only if φ(n) holds. It is known that if a set is Σ^0_m then it is Σ^0_n for any n > m, but for each m there is a Σ^0_{m+1} set that is not Σ^0_m. Thus the number of quantifier alternations required to define a set gives a measure of the complexity of the set.

Post's theorem uses the relativized arithmetical hierarchy as well as the unrelativized hierarchy just defined. A set A of natural numbers is said to be Σ^0_m relative to a set B, written Σ^{0,B}_m, if A is definable by a Σ^0_m formula in an extended language that includes a predicate for membership in B.

While the arithmetical hierarchy measures definability of sets of natural numbers, Turing degrees measure the level of uncomputability of sets of natural numbers. A set A is said to be Turing reducible to a set B, written A ≤_T B, if there is an oracle Turing machine that, given an oracle for B, computes the characteristic function of A. The Turing jump of a set A is a form of the Halting problem relative to A. Given a set A, the Turing jump A′ is the set of indices of oracle Turing machines that halt on input 0 when run with oracle A. It is known that every set is Turing reducible to its Turing jump, but the Turing jump of a set is never Turing reducible to the original set.

Post's theorem uses finitely iterated Turing jumps. For any set A of natural numbers, the notation A^(n) indicates the n-fold iterated Turing jump of A. Thus A^(0) is just A, and A^(n+1) is the Turing jump of A^(n).

Post's theorem and corollaries
Post's theorem establishes a close connection between the arithmetical hierarchy and the Turing degrees of the form ∅^(n), that is, finitely iterated Turing jumps of the empty set. (The empty set could be replaced with any other computable set without changing the truth of the theorem.)

Post's theorem states:

A set B is Σ^0_{n+1} if and only if B is recursively enumerable by an oracle Turing machine with an oracle for ∅^(n), that is, if and only if B is Σ^{0,∅^(n)}_1.
The set ∅^(n) is Σ^0_n-complete for every n > 0. This means that every Σ^0_n set is many-one reducible to ∅^(n).

Post's theorem has many corollaries that expose additional relationships between the arithmetical hierarchy and the Turing degrees. These include:

Fix a set C. A set B is Σ^{0,C}_{n+1} if and only if B is Σ^{0,C^(n)}_1. This is the relativization of the first part of Post's theorem to the oracle C.
A set B is Δ^0_{n+1} if and only if B ≤_T ∅^(n). More generally, B is Δ^{0,C}_{n+1} if and only if B ≤_T C^(n).
A set is defined to be arithmetical if it is Σ^0_n for some n. Post's theorem shows that, equivalently, a set is arithmetical if and only if it is Turing reducible to ∅^(m) for some m.
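As an informal illustration of the lowest level of the hierarchy, the sketch below semi-decides a Σ^0_1 set by searching for an existential witness: it halts exactly on members, mirroring the equivalence between Σ^0_1 definability and recursive enumerability used in the proof below. The function name and the sample predicate are illustrative choices, not part of the theorem.

from itertools import count

def semi_decide_sigma1(x, bounded_check):
    """Semi-decision procedure for {x : exists n. rho(n, x)} where rho is
    decidable (e.g. contains only bounded quantifiers): search n = 0, 1, 2,
    ... and halt when a witness appears. Non-members cause an infinite loop."""
    for n in count():
        if bounded_check(n, x):
            return n  # halting certifies membership; n is the witness

# Example Sigma^0_1 set: x is composite iff there exists n with
# 2 <= n < x and x % n == 0. (This set happens to be decidable;
# it is used only to show the shape of the procedure.)
witness = semi_decide_sigma1(91, lambda n, x: 2 <= n < x and x % n == 0)
print(witness)  # 7, the first nontrivial divisor found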
Proof of Post's theorem
Formalization of Turing machines in first-order arithmetic
The operation of a Turing machine on input n can be formalized logically in first-order arithmetic. For example, we may use symbols T_k, S_k, and L_k for the tape configuration, machine state and location along the tape after k steps, respectively. The machine's transition system determines the relation between (T_k, S_k, L_k) and (T_{k+1}, S_{k+1}, L_{k+1}); their initial values (for k = 0) are the input, the initial state and zero, respectively. The machine halts if and only if there is a number k such that S_k is the halting state.

The exact relation depends on the specific implementation of the notion of Turing machine (e.g. their alphabet, allowed mode of motion along the tape, etc.)

In case the machine halts at time T, the relation between (T_k, S_k, L_k) and (T_{k+1}, S_{k+1}, L_{k+1}) must be satisfied only for k bounded from above by T. Thus there is a formula φ(n, T) in first-order arithmetic with no unbounded quantifiers, such that the machine halts on input n at time at most T if and only if φ(n, T) is satisfied.

Implementation example
For example, for a prefix-free Turing machine with binary alphabet and no blank symbol, we may use the following notations:

T_k is the 1-ary symbol for the configuration of the whole tape after k steps (which we may write as a number with LSB first, the value of the m-th location on the tape being its m-th least significant bit). In particular T_0 is the initial configuration of the tape, which corresponds to the input to the machine.
S_k is the 1-ary symbol for the Turing machine state after k steps. In particular, S_0 is the initial state of the Turing machine.
L_k is the 1-ary symbol for the Turing machine location on the tape after k steps. In particular L_0 = 0.
M is the transition function of the Turing machine, written as a function from a doublet (machine state, bit read by the machine) to a triplet (new machine state, bit written by the machine, +1 or −1 machine movement along the tape).
bit(j, n) is the j-th bit of a number n. This can be written as a first-order arithmetic formula with no unbounded quantifiers.

For a prefix-free Turing machine we may use, for input n, the initial tape configuration T_0 = cat(1^|n|, 0, n), where cat stands for concatenation; thus T_0 is a length-|n| string of 1s, followed by a 0 and then by n, with |n| the length of the binary representation of n.

The operation of the Turing machine at the first T steps can thus be written as the conjunction of the initial conditions and the following formulas, quantified over k for all k < T:

M(S_k, bit(L_k, T_k)) = (S_{k+1}, bit(L_k, T_{k+1}), L_{k+1} − L_k). Since M has a finite domain, this can be replaced by a first-order quantifier-free arithmetic formula. The exact formula obviously depends on M.
bit(j, T_{k+1}) = bit(j, T_k) for every j ≠ L_k. Note that at the first T steps, the machine never arrives at a location along the tape greater than T. Thus the universal quantifier over j can be bounded by T + 1, as bits beyond this location have no relevance for the machine's operation.

The machine halts on input n at time at most T if and only if φ(n, T) is satisfied, where φ(n, T) existentially quantifies the values T_k, S_k and L_k for all k ≤ T and asserts the initial conditions, the step formulas above for all k < T, and that S_T is the halting state. This is a first-order arithmetic formula with no unbounded quantifiers, i.e. it is in Σ^0_0.

Recursively enumerable sets
Let A be a set that can be recursively enumerated by a Turing machine. Then there is a Turing machine that, for every n in A, halts when given n as an input. This can be formalized by the first-order arithmetical formula presented above. The members of A are the numbers n satisfying the following formula:

∃T φ(n, T)

This formula is in Σ^0_1. Therefore, A is in Σ^0_1. Thus every recursively enumerable set is in Σ^0_1.

The converse is true as well: for every formula ψ(n) in Σ^0_1 with k existential quantifiers, we may enumerate the k-tuples of natural numbers and run a Turing machine that goes through all of them until it finds the formula is satisfied.
This Turing machine halts on precisely the set of natural numbers satisfying ψ(n), and thus enumerates its corresponding set.

Oracle machines
Similarly, the operation of an oracle machine with an oracle O that halts after at most T steps on input n can be described by a first-order formula φ(n, T), except that the formula now includes:

A new predicate, oracle(m), giving the oracle answer O(m). This predicate must satisfy some formula to be discussed below.
An additional tape - the oracle tape - on which the machine has to write the number m for every call O(m) to the oracle; writing on this tape can be logically formalized in a similar manner to writing on the machine's tape. Note that an oracle machine that halts after at most T steps has time to write at most T digits on the oracle tape. So the oracle can only be called with numbers m satisfying m < 2^T.

If the oracle is for a decision problem, O(m) is always "Yes" or "No", which we may formalize as 0 or 1. Suppose the decision problem itself can be formalized by a first-order arithmetic formula θ(m). Then the machine halts on n after at most T steps if and only if the following formula is satisfied:

φ(n, T) with every occurrence of oracle(m) replaced by θ(m),

where φ(n, T) is a first-order formula with no unbounded quantifiers.

Turing jump
If O is an oracle to the halting problem of a machine B, then oracle(m) is the same as "there exists T_1 such that B, starting with input m, is at the halting state after T_1 steps". Thus

oracle(m) ↔ ∃T_1 φ_B(m, T_1),

where φ_B is a first-order formula that formalizes the operation of B. If B is a Turing machine (with no oracle), φ_B is in Σ^0_0 (i.e. it has no unbounded quantifiers).

Since there is a finite number of numbers m satisfying m < 2^T, we may choose the same number of steps for all of them: there is a number T_1 such that B halts within T_1 steps precisely on those inputs m < 2^T on which it halts at all. Moving to prenex normal form, we get that the oracle machine halts on input n if and only if the following formula is satisfied:

∃T ∃T_1 ∀m < 2^T: ((oracle(m) = 1 → φ_B(m, T_1)) ∧ (oracle(m) = 0 → ¬φ_B(m, T_1))) ∧ φ(n, T)

(informally, there is a "maximal number of steps" T_1 such that every oracle query that does not halt within the first T_1 steps does not halt at all, while every query that halts at all does so within T_1 steps).

Note that we may replace both T and T_1 by a single number, their maximum, without changing the truth value of the formula. Thus we may write:

∃T ∀m < 2^T: ((oracle(m) = 1 → φ_B(m, T)) ∧ (oracle(m) = 0 → ¬φ_B(m, T))) ∧ φ(n, T)

For the oracle to the halting problem over Turing machines, oracle(m) is in Σ^0_1 and ¬oracle(m) is in Π^0_1. Thus every set that is recursively enumerable by an oracle machine with an oracle for ∅′ is in Σ^0_2.

The converse is true as well: suppose ψ is a formula in Σ^0_2 with k_1 existential quantifiers followed by k_2 universal quantifiers. Equivalently, ψ has k_1 existential quantifiers followed by a negation of a formula in Σ^0_1; the latter formula can be enumerated by a Turing machine and can thus be checked immediately by an oracle for ∅′. We may thus enumerate the k_1-tuples of natural numbers and run an oracle machine with an oracle for ∅′ that goes through all of them until it finds a satisfaction for the formula. This oracle machine halts on precisely the set of natural numbers satisfying ψ, and thus enumerates its corresponding set.

Higher Turing jumps
More generally, suppose every set that is recursively enumerable by an oracle machine with an oracle for ∅^(p) is in Σ^0_{p+1}. Then for an oracle machine with an oracle for ∅^(p+1), oracle(m) is in Σ^0_{p+1}. Since oracle(m) is the same as the halting formula for the previous Turing jump, it can be constructed (as we have just done with φ_B above) so that ¬oracle(m) is in Π^0_{p+1}. After moving to prenex normal form the new halting formula is in Σ^0_{p+2}.

By induction, every set that is recursively enumerable by an oracle machine with an oracle for ∅^(p), is in Σ^0_{p+1}.

The other direction can be proven by induction as well: suppose every formula in Σ^0_{p+1} can be enumerated by an oracle machine with an oracle for ∅^(p).
Now suppose ψ is a formula in Σ^0_{p+2} with k_1 existential quantifiers followed by universal quantifiers etc. Equivalently, ψ has k_1 existential quantifiers followed by a negation of a formula in Σ^0_{p+1}; the latter formula can be enumerated by an oracle machine with an oracle for ∅^(p) and can thus be checked immediately by an oracle for ∅^(p+1).

We may thus enumerate the k_1-tuples of natural numbers and run an oracle machine with an oracle for ∅^(p+1) that goes through all of them until it finds a satisfaction for the formula. This oracle machine halts on precisely the set of natural numbers satisfying ψ, and thus enumerates its corresponding set.

References

Theorems in the foundations of mathematics
Computability theory
Mathematical logic hierarchies
Post's theorem
[ "Mathematics" ]
2,536
[ "Mathematical theorems", "Foundations of mathematics", "Mathematical logic", "Theorems in the foundations of mathematics", "Computability theory", "Mathematical problems", "Mathematical logic hierarchies" ]
765,175
https://en.wikipedia.org/wiki/Air%20lock
An air lock is a restriction of, or complete stoppage of, liquid flow caused by vapour trapped in a high point of a liquid-filled pipe system. The gas, being less dense than the liquid, rises to any high points. This phenomenon is known as vapor lock, or air lock.

Flushing the system with high flow or pressures can help move the gas away from the highest point. Also, a tap (or automatic vent valve) can be installed to let the gas out.

Air lock problems often occur when one is trying to recommission a system after it has been deliberately (for servicing) or accidentally emptied. Take, for example, a central heating system using a circulating pump to pump water through radiators. When filling such a system, air is trapped in the radiators. This air has to be vented using screw valves built into the radiators. Depending on the pipe layout – if there are any upside down 'U's in the circuit – it will be necessary to vent the highest point(s). Otherwise, air lock may cause waterfall flow where the loss of hydraulic head is equal to the height of the air lock. If the hydraulic grade line drops below the output of the pipe, the flow through that part of the circuit stops completely. Note that circulating pumps usually do not generate enough pressure to overcome air locks.

Fig 1 shows a reservoir which feeds a gravity distribution system – for drinking water or irrigation. If the ground in which the pipe is laid has high points – such as Hi1, 2 etc. and low points between them such as Lo1, 2 etc. – then if the pipe is filled from the top and was empty, the pipe fills normally as far as Hi1. If the water flow velocity is below the rising velocity of air bubbles, then water trickles down to the low point Lo2 and traps the remaining air between Hi1 and Lo2. As more water flows down, the upward leg Lo2 to Hi2 fills up. This exerts a pressure on the trapped air of either H2 m of water (WG = water gauge) or H1, whichever is less. If H2 is greater than H1, then there is a full air lock: the water level in the up leg Lo2 to Hi2 stops at H1 and no further water can flow. If H1 is greater than H2, then some water can flow, but the full pipe hydraulic head H3 will not be reached and so flow is much less than expected. If there are further undulations, then the back pressure effects add together. Long pipelines built across fairly level but undulating land are bound to have many such high and low points.

To avoid air or gas lock, automatic vents are fitted which let air or gas out when above a certain pressure. They may also be designed to let air in under vacuum. There are many other design considerations for the design of water pipeline systems.

The air lock phenomenon can be used in a number of useful ways. The adjacent diagram shows an 'S' trap. This has the properties a) that liquid can flow from top (1) to bottom (4) unhindered and b) that gas cannot flow through the trap unless it has enough extra pressure to overcome the liquid head of the trap. This is usually about 75 to 100 mm of water and prevents foul-smelling air coming back from water drainage systems via connections to toilets, sinks and so on. 'S' traps work well unless the drainage water has sand in it – which then collects in the 'U' part of the 'S'.

See also
Flush toilet – tank style with siphon-flush valve
Siphon
Vapor lock

References

Plumbing
Water physics
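A minimal sketch of the head comparison described in the gravity-pipeline example above, expressed as a function; the variable names (h1 for the available driving head, h2 for the back-pressure leg) follow the figure's labels, but the function itself is only an illustration, not part of the original text.

def airlock_state(h1: float, h2: float) -> str:
    """Compare the driving head available to compress the trapped air (h1)
    with the back-pressure head of the rising leg (h2), both in metres of
    water. If h2 >= h1 the trapped air cannot be pushed through the high
    point: a full air lock with no flow."""
    if h2 >= h1:
        return "full air lock: no flow"
    return "partial air lock: reduced flow (head loss of about h2 m of water)"

print(airlock_state(h1=3.0, h2=5.0))  # full air lock: no flow
print(airlock_state(h1=6.0, h2=2.5))  # partial air lock: reduced flow ...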
Air lock
[ "Physics", "Materials_science", "Engineering" ]
763
[ "Construction", "Plumbing", "Water physics", "Condensed matter physics" ]
765,970
https://en.wikipedia.org/wiki/Dehn%20twist
In geometric topology, a branch of mathematics, a Dehn twist is a certain type of self-homeomorphism of a surface (two-dimensional manifold).

Definition
Suppose that c is a simple closed curve in a closed, orientable surface S. Let A be a tubular neighborhood of c. Then A is an annulus, homeomorphic to the Cartesian product of a circle and a unit interval I:

A ≅ S¹ × I.

Give A coordinates (s, t) where s is a complex number of the form e^{iθ} with θ ∈ [0, 2π), and t ∈ [0, 1]. Let f be the map from S to itself which is the identity outside of A and inside A we have

f(s, t) = (s·e^{2πit}, t).

Then f is a Dehn twist about the curve c.

Dehn twists can also be defined on a non-orientable surface S, provided one starts with a 2-sided simple closed curve c on S.

Example
Consider the torus represented by a fundamental polygon with edges a and b. Let a closed curve be the line along the edge a, called γ_a.

Given the choice of gluing homeomorphism in the figure, a tubular neighborhood of the curve γ_a will look like a band linked around a doughnut. This neighborhood is homeomorphic to an annulus, say in the complex plane. Extending the twisting map (s, t) ↦ (s·e^{2πit}, t) of the annulus to the torus, through the homeomorphisms from the annulus to an open cylinder to the neighborhood of γ_a, yields a Dehn twist of the torus by a, denoted T_a.

This self-homeomorphism acts on the closed curve along b. In the tubular neighborhood it takes the curve of b once along the curve of a.

A homeomorphism between topological spaces induces a natural isomorphism between their fundamental groups. Therefore one has an automorphism

(T_a)_* : π₁(T²) → π₁(T²), [x] ↦ [T_a(x)],

where [x] are the homotopy classes of the closed curve x in the torus. Notice (T_a)_*([a]) = [a] and (T_a)_*([b]) = [b∗a], where b∗a is the path travelled around b then a.

Mapping class group
It is a theorem of Max Dehn that maps of this form generate the mapping class group of isotopy classes of orientation-preserving homeomorphisms of any closed, oriented genus-g surface. W. B. R. Lickorish later rediscovered this result with a simpler proof and in addition showed that Dehn twists along 3g − 1 explicit curves generate the mapping class group (this is called by the punning name "Lickorish twist theorem"); this number was later improved by Stephen P. Humphries to 2g + 1, for g > 1, which he showed was the minimal number.

Lickorish also obtained an analogous result for non-orientable surfaces, which require not only Dehn twists, but also "Y-homeomorphisms."

See also
Fenchel–Nielsen coordinates
Lantern relation

References
Andrew J. Casson, Steven A. Bleiler, Automorphisms of Surfaces After Nielsen and Thurston, Cambridge University Press, 1988.
Stephen P. Humphries, "Generators for the mapping class group," in: Topology of Low-Dimensional Manifolds (Proc. Second Sussex Conf., Chelwood Gate, 1977), pp. 44–47, Lecture Notes in Math., 722, Springer, Berlin, 1979.
W. B. R. Lickorish, "A representation of orientable combinatorial 3-manifolds," Ann. of Math. (2) 76 (1962), 531–540.
W. B. R. Lickorish, "A finite set of generators for the homotopy group of a 2-manifold," Proc. Cambridge Philos. Soc. 60 (1964), 769–778.

Geometric topology
Homeomorphisms
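On the torus example above, the induced action on first homology H₁(T²) ≅ ℤ² can be written as an integer matrix (a transvection); the sketch below encodes [a] and [b] as basis vectors and applies the twist, with all names chosen for this illustration only.

import numpy as np

# Basis for H_1(T^2): [a] = (1, 0), [b] = (0, 1).
# The Dehn twist about a fixes [a] and sends [b] to [b] + [a],
# matching (T_a)_*([a]) = [a] and (T_a)_*([b]) = [b*a] from the text.
TWIST_A = np.array([[1, 1],
                    [0, 1]])

a = np.array([1, 0])
b = np.array([0, 1])
print(TWIST_A @ a)  # [1 0]  -> the class of a is fixed
print(TWIST_A @ b)  # [1 1]  -> the class of b picks up one copy of a

# Iterating the twist n times gives [b] -> [b] + n[a]:
print(np.linalg.matrix_power(TWIST_A, 3) @ b)  # [3 1]

Such matrices, together with the corresponding matrix for the twist about b, generate SL(2, ℤ), which reflects the theorem quoted above that Dehn twists generate the mapping class group in the genus-one case.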
Dehn twist
[ "Mathematics" ]
737
[ "Topology", "Homeomorphisms", "Geometric topology" ]