| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
419,615 | https://en.wikipedia.org/wiki/Power%20cord | A power cord, line cord, or mains cable is an electrical cable that temporarily connects an appliance to the mains electricity supply via a wall socket or extension cord. The terms are generally used for cables using a power plug to connect to a single-phase alternating current power source at the local line voltage (generally 100 to 240 volts, depending on the location). The terms power cable, mains lead, flex or kettle lead are also used. A lamp cord (also known as a zip cord) is a light-weight, ungrounded, single-insulated two-wire cord used for small loads such as a table or floor lamp.
A cord set includes connectors molded to the cord at each end (see Appliance coupler). Cord sets are detachable from both the power supply and the electrical equipment, and consist of a flexible cord with an electrical connector at each end, one male and one female. One end of the cord set terminates in a molded electrical plug; the other typically terminates in a molded electrical receptacle, which prevents an exposed live prong or pin that could cause electric shock. The female connector attaches to the piece of equipment or appliance, while the male plug connects to the electrical receptacle or outlet.
Features
Power cables may be either fixed or detachable from the appliance. In the case of detachable leads, the appliance end of the power cord has a female connector to link it to the appliance, to avoid the dangers from having a live protruding pin. Cords may also have twist-locking features, or other attachments to prevent accidental disconnection at one or both ends. A cord set may include accessories such as fuses for overcurrent protection, a pilot lamp to indicate voltage is present, or a leakage current detector. Power cords for sensitive instruments, or audio/video equipment may also include a shield over the power conductors to minimize electromagnetic interference.
A power cord or appliance coupler may have a retaining clamp, a mechanical device that prevents it from inadvertently being pulled or shaken loose. Typical application areas with stricter safety requirements include medical technology, stage and lighting technology, and computing equipment. For specialty equipment such as construction machinery, sound and lighting equipment, emergency medical defibrillators and electrical power tools, used in locations without a convenient power source, extension cords are used to carry the electric current up to hundreds of feet away from an outlet.
In North America, the National Electrical Manufacturers Association develops standards for electrical plugs and receptacles and cables.
International power cords and plug adapters are used with electrical appliances in countries other than those in which they were designed to operate. Besides a cord with one end compatible with the receptacles or devices of one country and the other end compatible with the receptacles or devices of another country, a voltage converter is usually also necessary to protect travelers' electronic devices, such as laptops, from the differing voltages between, for example, the United States and Europe or Africa.
North American lamp cords have two single-insulated conductors designed for low-current applications. The insulator covering one of the conductors is ribbed (parallel to wire) for the entire length of the cord, while the other conductor's insulator is smooth. The smooth one is hot and the ribbed one is neutral.
Connectors
IEC 60320 power cables come in normal and high-temperature variants, as well as various rated currents. The connectors have slightly different shapes to ensure that it is not possible to substitute a cable with a lower temperature or current rating, but that it is possible to use an overrated cable. Cords also have different types of exterior jackets available to accommodate environmental variables such as moisture, temperature, oils, sunlight, flexibility, and heavy wear. For example, a heating appliance may come with a cord designed to withstand accidental contact with heated surfaces.
Worldwide, more than a dozen different types of AC power plugs and sockets are used for fixed building wiring. Products sold in many different markets can use a standardized IEC 60320 connector and then use a detachable power cord to match the local electrical outlets. This simplifies safety approvals, factory testing, and production since the power cord is a low-cost item available as a commodity. Since the same types of appliance-side connectors are used with both 120 V and 230 V power cables, the user must ensure the connected equipment will operate with the available voltage. Some devices have a slide-switch to adapt to different voltages, or wide-ranging power supplies.
Standards
National electrical codes may apply to power cords and related items. For example, in the United States, power cords must meet UL Standards 62 and 817.
Power supplies
Cord sets must be distinguished from AC adapters, where the connector also contains a transformer, and possibly rectifiers, filters and regulators. Unwary substitution of a standard mains-voltage cord set for the power supply would apply full line voltage to the connected device, destroying it and possibly causing fire or personal injury.
See also
AC power plugs and sockets
Extension cord
Power strip
Power supply
References
Power cables
Consumer electronics
Electrical wiring
Mains power connectors | Power cord | Physics,Engineering | 1,084 |
5,632,598 | https://en.wikipedia.org/wiki/Quotientable%20automorphism | In mathematics, in the realm of group theory, a quotientable automorphism of a group is an automorphism that takes every normal subgroup to within itself. As a result, it gives a corresponding automorphism for every quotient group.
All family automorphisms are quotientable, and particularly, all class automorphisms and power automorphisms are. As well, all inner automorphisms are quotientable, and more generally, any automorphism defined by an algebraic formula is quotientable.
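A brief sketch, in standard group-theoretic notation that does not appear in the article, of why such an automorphism descends to each quotient group:

```latex
% Let \sigma be a quotientable automorphism of G and N a normal subgroup,
% so \sigma(N) \subseteq N. Define the induced map on the quotient by
\bar{\sigma}\colon G/N \to G/N, \qquad \bar{\sigma}(gN) = \sigma(g)N.
% Well-defined: if gN = hN then h^{-1}g \in N, hence
% \sigma(h)^{-1}\sigma(g) = \sigma(h^{-1}g) \in \sigma(N) \subseteq N,
% so \sigma(g)N = \sigma(h)N. The map is a homomorphism because \sigma is,
% and it is surjective because \sigma is surjective on G.
```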
Group automorphisms | Quotientable automorphism | Mathematics | 114 |
36,782,365 | https://en.wikipedia.org/wiki/Boletus%20curtisii | Boletus curtisii is a species of fungus in the family Boletaceae. It produces small- to medium-sized fruit bodies (mushrooms) with a convex cap up to wide atop a slender stem that can reach a length of . In young specimens, the cap and stem are bright golden yellow, although the color dulls to brownish when old. Both the stem and cap are slimy or sticky when young. On the underside of the cap are small circular to angular pores. The mushroom is edible, but not appealing. It is found in eastern and southern North America, where it grows in a mycorrhizal association with hardwood and conifer trees. Once classified as a species of Pulveroboletus, the yellow color of B. curtisii is a result of pigments chemically distinct from those responsible for the yellow coloring of Pulveroboletus.
Taxonomy
The species was first described scientifically by English mycologist Miles Joseph Berkeley in 1853. The specific epithet curtisii honors Moses Ashley Curtis, who collected the type material from South Carolina.
American mycologist William Murrill called it Ceriomyces curtisii in 1909, but Ceriomyces (as defined by Murrill in 1909) has since been subsumed into Boletus. In his 1947 monograph on boletes of Florida, Rolf Singer transferred the species to the genus Pulveroboletus, and made it the type of his newly described section Cartilaginei, which featured species with a glutinous or sticky stem, and a leather-colored to brownish hymenophore. Species in Pulveroboletus are characterized by the presence of pigments based on the chemical structure of pulvinic acid, a yellow-orange compound found in some species of Boletales. The pigments responsible for the color of B. curtisii are, however, entirely different from the pulvinic acid compounds found in Pulveroboletus species, which invalidates the chemotaxonomical rationale for generic placement in Pulveroboletus. Otto Kuntze once placed the species in Suillus, but it lacks the partial veil and glandular dots associated with that genus. William Chambers Coker and Alma Beers considered Charles Horton Peck's Boletus inflexus (described from New York in 1895) as well as Henry Curtis Beardslee's 1915 B. carolinensis to be the same species as B. curtisii. Coker and Beers's suggested synonymy, however, is not recognized by the taxonomical authorities MycoBank or Index Fungorum.
Wally Snell once considered Boletus carolinensis to be the same species as B. curtisii. He claimed that the former species was then considered distinct from the latter by virtue of an even rather than reticulate (netlike) stem, although the two were otherwise quite similar in appearance and in spore size and shape. Snell explained that although neither the English nor the Latin text of Berkeley's original description mentioned a reticulated stem, a later (1872) description by Berkeley characterized the stem as reticulato. Snell thought that this might have been an error in transcription, or an error in the species account, as herbarium specimens that he had examined lacked this feature. He changed his mind a couple of years later, when he found a small amount of reticulation in material collected by Peck.
Description
The cap is wide, and initially obtuse to convex in shape before becoming broadly convex to nearly flat when mature. The cap margin has a narrow band of sterile tissue that in young fruit bodies is curved inwards. The cap surface is somewhat sticky when fresh, smooth, and bright yellow to orange-yellow, sometimes with brownish tints or whitish areas in age. The whitish flesh does not change color when exposed to air, and has no distinctive odor or taste. On the underside of the cap, the pore surface is initially whitish to buff or pale yellow, but becomes duller and darker at maturity, often depressed near the stem in age. Unlike some other boletes, B. curtisii does not turn blue when bruised or injured. The pores are circular to angular, and there are 2–3 per mm; the tubes are 6–12 mm deep. Young fruit bodies usually have droplets of golden yellow liquid on the pore surface (sometimes abundantly so), although this is rarely observed in older specimens.
The stem is long, thick, and roughly equal in width throughout. Its surface is sticky and glutinous when fresh, somewhat scurfy near the apex (covered with loose scales) but smooth below. It is pale yellow to yellow down to the base, which is sheathed with a cottony white mycelium. The stem can be either solid or hollow. The mushroom lacks a partial veil and a ring. The spore print is olive-brown. The mushroom is edible, but not appealing.
Spores are 9.5–17 by 4–6 μm, ellipsoid to somewhat ventricose (inflated on one side), smooth, and yellowish. The basidia (spore-bearing cells) are four-spored, measuring 25–32 by 6–10.8 μm. The cystidia lining the inside of the tubes are shaped like setae (i.e., thick-walled and thornlike) and have dimensions of 43–86 by 6.5–11 μm. All hyphae lack clamp connections.
Similar species
Retiboletus retipes is somewhat similar in appearance, but is distinguished by a more orange to orange-yellow color, a lack of sliminess, and a distinctly reticulated stalk.
Habitat and distribution
The fruit bodies of B. curtisii grow singly, scattered, or in small groups on the ground in coniferous or mixed woods, often with pines. Fruit bodies generally appear from August to November. The geographical distribution of the fungus is limited to eastern and southern North America. In the United States, it occurs from New England south to Florida, and west to Texas. The species was newly reported from Mexico in 2001.
Pigments
The fruit bodies of Boletus curtisii contain a unique series of derivatives of the molecule canthin-6-one. Before this discovery, canthin-6-one alkaloids were only known from higher plants. Among the canthin-6-one derivatives are the pigments that give the mushroom its bright yellow color, including two optically active sulfoxides named curtisin and 9-deoxycurtisin. Spraying a fruit body with methanol causes the pigments to dissolve and makes the color wash away—a phenomenon unknown in other bolete mushrooms. Additionally, spraying fruit bodies with acetone results in a green-yellow fluorescence visible in daylight.
See also
List of Boletus species
List of North American boletes
References
External links
Mushroom Expert Description and images
Mushroom Observer Images
curtisii
Fungi described in 1853
Fungi of North America
Edible fungi
Taxa named by Miles Joseph Berkeley
Fungus species | Boletus curtisii | Biology | 1,453 |
22,018,823 | https://en.wikipedia.org/wiki/LOBSTER | LOBSTER was a European network monitoring system, based on passive monitoring of traffic on the internet. Its functions were to gather traffic information as a basis for improving internet performance, and to detect security incidents.
Objectives
To build an advanced pilot European Internet traffic monitoring infrastructure based on passive network monitoring sensors.
To develop novel performance and security monitoring applications, enabled by the availability of the passive network monitoring infrastructure, and to develop the appropriate data anonymisation tools for prohibiting unauthorised access or tampering of the original traffic data.
History
The project originated from SCAMPI, a European project active in 2004–5, aiming to develop a scalable monitoring platform for the Internet. LOBSTER was funded by the European Commission and ceased in 2007. It fed into "IST 2.3.5 Research Networking testbeds", which aimed to contribute to improving internet infrastructure in Europe.
36 LOBSTER sensors were deployed in nine countries across Europe by several organisations. At any one time the system could monitor traffic across 2.3 million IP addresses. It was claimed that more than 400,000 Internet attacks were detected by LOBSTER.
Passive monitoring
LOBSTER was based on passive network traffic monitoring. Instead of collecting flow-level traffic summaries or actively probing the network, passive network monitoring records all IP packets (both headers and payloads) that flow through the monitored link. This enables passive monitoring methods to record complete information about the actual traffic of the network, which allows for tackling monitoring problems more accurately compared to methods based on flow-level statistics or active monitoring.
The passive monitoring applications running on the sensors were developed on top of MAPI (Monitoring Application Programming Interface), an expressive programming interface for building network monitoring applications, developed in the context of the SCAMPI and LOBSTER projects. MAPI enables application programmers to express complex monitoring needs, choose only the amount of information they are interested in, and therefore balance the monitoring overhead with the amount of the received information. Furthermore, MAPI gives the ability for building remote and distributed passive network monitoring applications that can receive monitoring data from multiple remote monitoring sensors.
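As an illustration of the programming model described above, the following is a hypothetical Python-style sketch of a MAPI-like workflow; the class, function, and argument names are invented for this example and are not the project's actual C API.

```python
# Hypothetical, illustrative sketch of a MAPI-style workflow (not the real C API).
# The point is the model described above: a monitoring application declares a
# "flow" on a monitored link, attaches only the processing functions it needs
# (balancing monitoring overhead against the information received), then starts
# capture and reads results.

class MonitoringFlow:
    """Toy stand-in for a passive-monitoring flow on one sensor link."""
    def __init__(self, link):
        self.link = link          # e.g. a capture interface on a LOBSTER sensor
        self.functions = []       # processing applied to every captured packet

    def apply_function(self, name, *args):
        # e.g. a BPF filter, a packet counter, a byte counter, an anonymiser
        self.functions.append((name, args))
        return len(self.functions) - 1    # function id, used when reading results

    def connect(self):
        # in the real system this would start packet capture on the sensor
        print(f"capturing on {self.link} with {len(self.functions)} functions")

# Count HTTP packets and bytes on one monitored link, and nothing else, so the
# sensor does no unnecessary work and exports no unnecessary data.
flow = MonitoringFlow("eth0")
flt = flow.apply_function("BPF_FILTER", "tcp port 80")
pkts = flow.apply_function("PKT_COUNTER")
byts = flow.apply_function("BYTE_COUNTER")
flow.connect()
```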
Developed applications
The LOBSTER sensors operated by the various organisations monitored the network traffic using different measurement applications. All applications were developed within the LOBSTER project using MAPI, according to the needs of each organisation.
Appmon, an application for Accurate Per-Application Network Traffic Classification.
Stager, a system for aggregating and presenting network statistics.
ABW, an application written on top of LOBSTER DiMAPI (Distributed Monitoring Application Interface) and the tracklib library.
References
Computer networking | LOBSTER | Technology,Engineering | 514 |
51,050,926 | https://en.wikipedia.org/wiki/%28523794%29%202015%20RR245 | , provisional designation , is a large trans-Neptunian object of the Kuiper belt in the outermost regions of the Solar System. It was discovered on 9 September 2015, by the Outer Solar System Origins Survey at Mauna Kea Observatories on the Big island of Hawaii, in the United States. The object is in a rare 2:9 resonance with Neptune and measures approximately 600 kilometers in diameter. was suspected to have a satellite according to a study announced by Noyelles et al. in a European Planetary Science Congress meeting in 2019.
Discovery
A first precovery of 2015 RR245 was taken at the Cerro Tololo Observatory in Chile on 15 October 2004. It was first observed by a research team led by Michele Bannister while poring over images that the Canada–France–Hawaii Telescope in Hawaii took in September 2015 as part of the Outer Solar System Origins Survey (OSSOS), and later identified in images taken at Sloan Digital Sky Survey and Pan-STARRS between 2008 and 2016. The discovery was formally announced in a Minor Planet Electronic Circular on 10 July 2016.
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 25 September 2018. As of 2021, it has not been named.
Orbit and classification
As of 2018, 2015 RR245 has a reasonably well-defined orbit with an uncertainty of 3. It orbits the Sun at a distance of 33.8–128.6 AU once every 731 years and 6 months (for reference, Neptune's orbit is at 30 AU). Its orbit has an eccentricity of 0.58 and an inclination of 8° with respect to the ecliptic.
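As a rough consistency check (a worked example, not taken from the article's sources), the quoted period follows from the quoted perihelion and aphelion distances via Kepler's third law:

```python
# Kepler's third law for an object orbiting the Sun: P (years) = a (AU) ** 1.5.
perihelion_au = 33.8
aphelion_au = 128.6
a = (perihelion_au + aphelion_au) / 2                               # ~81.2 AU
e = (aphelion_au - perihelion_au) / (aphelion_au + perihelion_au)   # ~0.58
period_years = a ** 1.5                                             # ~731.7 years

print(f"a = {a:.1f} AU, e = {e:.2f}, P = {period_years:.1f} yr")
# Output is close to the article's "731 years and 6 months" and eccentricity 0.58.
```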
2015 RR245 is among the most distant known Solar System objects. As of 2018, it is 63 AU from the Sun. It will make its closest approach to the Sun in 2093, when it will reach an apparent magnitude of 21.2.
2:9 resonance
Additional precovery astrometry from the Sloan Digital Sky Survey and the Pan-STARRS1 survey shows that 2015 RR245 is a resonant trans-Neptunian object, securely trapped in a 2:9 mean-motion resonance with Neptune, meaning that this minor planet orbits the Sun twice in the same amount of time it takes Neptune to complete 9 orbits. The object is unlikely to have been trapped in the 2:9 resonance for the age of the Solar System. It is much more likely that it has been hopping between various resonances and got trapped in the 2:9 resonance in the last 100 million years.
Physical characteristics
Diameter and albedo
Its exact size is uncertain, but the best estimate is around 600 km in diameter, assuming an albedo of 0.12 (within a wider range of 500 to 870 km, based on albedos of 0.21 to 0.07). For comparison, Pluto, the largest object in the Kuiper belt, is about 2,380 km in diameter. Astronomer Michael Brown assumes an albedo of 0.11 and calculates a diameter of 626 km, while the Johnston's Archive gives a diameter of 500 kilometers for the primary and 275 km for the satellite, based on an assumed equal albedo of 0.135.
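These figures are tied together by the standard relation between a minor planet's diameter D, geometric albedo p, and absolute magnitude H, D ≈ (1329 km / √p) · 10^(−H/5). The formula itself is not stated in the article, so the sketch below should be read as an illustration of how the quoted numbers relate, not as the authors' own calculation; the absolute magnitude is inferred from Brown's figures rather than taken from a source.

```python
import math

# Standard diameter-albedo-absolute magnitude relation for minor planets:
#   D (km) = (1329 / sqrt(albedo)) * 10 ** (-H / 5)
# H is not given in the article, so it is inferred here from Michael Brown's
# figures quoted above (626 km at an assumed albedo of 0.11).

def diameter_km(albedo, abs_mag):
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_mag / 5.0)

implied_h = -5.0 * math.log10(626.0 * math.sqrt(0.11) / 1329.0)   # ~4.0

for albedo in (0.21, 0.12, 0.07):
    print(f"albedo {albedo:.2f}: ~{diameter_km(albedo, implied_h):.0f} km")
# Prints roughly 450, 600 and 790 km, broadly consistent with the ~600 km
# best estimate and the wider 500-870 km range quoted above.
```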
Possible satellite
2015 RR245 is suspected to be binary. If this moon exists and significantly contributes to the observed brightness of the primary, the size of the primary may therefore be substantially smaller than estimates that assumed the system's total brightness came from a single object; the system may be similar to that of 174567 Varda. Once the orbit of the satellite is determined, the mass and density of the system can be determined. 2015 RR245 was observed by the Hubble Space Telescope in 2020, but the observations did not detect the putative satellite.
References
External links
MPEC 2016-N67 : 2015 RR245, Minor Planet Electronic Circular – Minor Planet Center
New Dwarf Planet, Canada–France–Hawaii Telescope
Kuiper Belt's Big, New, Far-Out Object, Sky & Telescope
Outer Solar System Origins Survey (OSSOS)
Discovery announcement
523794
523794
523794
20150909 | (523794) 2015 RR245 | Physics,Astronomy | 816 |
77,555,928 | https://en.wikipedia.org/wiki/Exidia%20zelleri | Exidia zelleri is a species of fungus in the family Auriculariaceae. Basidiocarps (fruit bodies) are gelatinous, pale violaceous grey to grey-brown, button-shaped at first then coalescing and becoming irregularly effused. It grows on dead branches of broadleaved trees and is known from north-western North America.
Taxonomy
The species was first described in 1920 from Oregon by mycologist Curtis Gates Lloyd.
Description
The basidiocarps of E. zelleri are gelatinous, button-shaped to top-shaped and attached to the wood at a point, sometimes coalescing to form effused, irregular masses up to 8 cm across. They are pale violaceous grey to grey-brown, darkening with age. The surface is sparsely to densely covered in small papillae (pimples).
Microscopic characters
The translucent hyphae are thin-walled and form clamp connections. Basidia are elliptical and consist of four longitudinally septate cells. Basidiospores are allantoid (sausage shaped), 16 to 19 by 5 to 6 μm, with thin, smooth walls.
Similar species
Fruit bodies of Exidia crenata and Exidia recisa also occur on broadleaved trees in North America, but are typically reddish to orange-brown and lack papillae on the surface. Exidia glandulosa has papillae, but is typically blackish brown to black.
Distribution and habitat
Exidia zelleri was originally described from Oregon and has also been recorded from California and British Columbia. It was originally collected on dead wood of Sambucus, but has subsequently been reported on other broadleaved trees.
References
Auriculariales
Fungi of North America
Fungus species
Taxa named by Curtis Gates Lloyd
Fungi described in 1920 | Exidia zelleri | Biology | 377 |
5,204,834 | https://en.wikipedia.org/wiki/Gyrodyne | A gyrodyne is a type of VTOL aircraft with a helicopter rotor-like system that is driven by its engine for takeoff and landing only, and includes one or more conventional propeller or jet engines to provide thrust during cruising flight. During forward flight the rotor is unpowered and free-spinning, like an autogyro (but unlike a compound helicopter), and lift is provided by a combination of the rotor and conventional wings. The gyrodyne is one of a number of similar concepts which attempt to combine helicopter-like low-speed performance with conventional fixed-wing high-speeds, including tiltrotors and tiltwings.
In response to a Royal Navy request for a helicopter, Dr. James Allan Jamieson Bennett designed the gyrodyne whilst serving as the chief engineer of the Cierva Autogiro Company. The gyrodyne was envisioned as an intermediate type of rotorcraft, its rotor operating parallel to the flightpath to minimize axial flow, with one or more propellers providing propulsion. Bennett's patent covered a variety of designs, which has led to some confusion over terminology; other sources of confusion include the trademarked name of the Gyrodyne Company of America and the Federal Aviation Administration (FAA) classification of rotorcraft.
In recent years, a related concept has been promoted under the name heliplane. Originally used to market gyroplanes built by two different companies, the term has been adopted to describe a Defense Advanced Research Projects Agency (DARPA) program to develop advances in rotorcraft technology with the goal of overcoming the current limitations of helicopters in both speed and payload.
Principles of operation
Where a conventional helicopter has a powered rotor which provides both lift and forward thrust, and is capable of true VTOL performance, a gyroplane or autogyro has a free-spinning rotor which relies on independent powered thrust to provide forward airspeed and keep it spinning. The gyrodyne combines aspects of each. It has an independent thrust system like the autogyro, but can also drive the rotor to allow vertical takeoff and landing; it then changes to free spinning like an autogyro during cruising flight.
In the helicopter, the spinning rotor blades draw air down through the rotor disc; to obtain forward thrust, the rotor disc tilts forward so that air is also blown backwards. In the autogyro the rotor disc is by contrast tilted backwards; as the main thrust drives the craft forwards, air flows through the rotor disc from below, causing it to spin and create lift. The gyrodyne is capable of transitioning between these two modes of flight.
Typically a gyrodyne also has fixed wings which provide some of the lift during forward flight, allowing the rotor to be offloaded. A computer simulation has suggested an optimum distribution of lift of 9% for the rotor, and 91% for the wing. However if the rotor is too lightly loaded it can become susceptible to uncontrolled flapping.
History
In Britain, Dr. James Allan Jamieson Bennett, Chief Engineer of the Cierva Autogiro Company, conceived an intermediate type of rotorcraft in 1936, which he named "gyrodyne" and which was tendered to the British Government in response to an Air Ministry specification. In 1939, Bennett was issued a patent from the UK Patent Office, assigned to the Cierva Autogiro Company. On 23 August 1940 the Autogiro Company of America, licensees of the Cierva Autogiro Company, Ltd., filed a corresponding patent application in the United States. On 27 April 1943, US patent #2,317,340 was issued to the Autogiro Company of America. The patents describe a gyrodyne as:
Bennett's concept described a shaft-driven rotor, with anti-torque and propulsion for translational flight provided by one or more propellers mounted on stub wings. With thrust being provided by the propellers at cruise speeds, power would be provided to the rotor only to overcome the profile drag of the rotor, operating in a more efficient manner than the freewheeling rotor of an autogyro in autorotation. Bennett described this flight regime of the gyrodyne as an "intermediate state", requiring power to be supplied to both the rotor and the propulsion system.
Early development
The Cierva Autogiro Company's pre-WW2 C.41 gyrodyne design study was updated and built by Fairey Aviation as the FB-1 Gyrodyne, commencing in 1945. Fairey's development efforts were initially led by Bennett, followed by his successor Dr. George S. Hislop. George B.L. Ellis and Frederick L. Hodgess, engineers from the pre-WW2 Cierva Autogiro Company, Ltd., joined Bennett at Fairey Aviation. The first Fairey Gyrodyne prototype crashed during a test flight, killing the crew. The second Gyrodyne prototype was rebuilt as the Jet Gyrodyne and used to develop the pressure-jet rotor drive system later employed on the Rotodyne transport compound gyroplane. At the tip of each stub wing were rearward-facing propellers which provided both yaw control and propulsion in forward flight. The Jet Gyrodyne flew in 1954, and made a true transition from vertical to horizontal flight in March 1955.
This led to the prototype Fairey Rotodyne, which was developed to combine the efficiency of a fixed-wing aircraft at cruise with the VTOL capability of a helicopter to provide short haul airliner service from city centres to airports. It had short wings that carried two Napier Eland turboprop engines for forward propulsion and up to 40% of the aircraft's weight in forward flight. The rotor was driven by tip jets for takeoff and landing and translational flight up to 80 mph. Despite considerable commercial and military interest worldwide in the prototype Type Y Rotodyne for air transport, British orders were not forthcoming and British Government financial support was terminated in 1962. The division's new parent Westland Helicopters did not see good cause for further investment and the project was stopped. With the end of the Fairey Aviation programs, gyrodyne development came to a halt, although several similar concepts continued to be developed.
Similar developments
In 1954, the McDonnell XV-1 was developed as a rotorcraft with tip jets to provide vertical takeoff capability. The aircraft also had wings and a propeller mounted on the rear of the fuselage between twin tailbooms with two small rotors mounted at the end for yaw control. The second prototype of XV-1 became the world's first rotorcraft to exceed 200 mph in level flight on 10 October 1956. No more were built and the XV-1 project was terminated in 1957.
Compound autogyro
In 1998, Carter Aviation Technologies successfully flew its technology demonstrator aircraft. The aircraft is a compound autogyro with a high-inertia rotor and wings optimized for high-speed flight. In 2005, the aircraft demonstrated flight at mu-1, with the rotor tip having airspeed equal to the aircraft's forward airspeed, without any vibration or control issues occurring. The high-inertia rotor allowed the aircraft to hover for a brief moment during landing, even though the rotor is unpowered, and a prerotating gearbox allows the rotor to be accelerated for an autogyro-style jump takeoff.
Heliplane
In 1954, KYB built an aircraft named the Heliplane. The Heliplane was a Cessna 170B with the wings reduced to stubs, and a rotor powered by tip ramjets.
DARPA was funding a project under the "Heliplane" name to develop the gyrodyne concept around 2007. Aircraft developed for the project would use a rotor for takeoff and landing vertically, and hovering, together with substantial wings to provide most of the required lift at cruise, combining the large cargo capacity, fuel efficiency, and high cruise speed of fixed-wing aircraft with the hovering capabilities of a helicopter. The project was "…a multi-year $40-million, four-phase program. Groen Brothers Aviation is working on phase one of that program, a 15-month effort… (it) combines the "gyroplane"… with a fixed-wing business jet. The team was using the Adam A700, in the very-light-jet class…" There were issues with tip jet noise, and the program was cancelled in 2008.
An industry magazine describes the gradual evolution of traditional helicopters as "slow" and lacking revolutionary steps, and non-traditional compounds are still not widespread.
Trademark
"Gyrodyne" was granted as a trademark to the Gyrodyne Company of America in 1950. The company was not involved in gyrodyne development, but instead produced a turbine-engined, remotely piloted drone helicopter, with coaxial rotors, for the United States Navy, designated as the QH-50 DASH.
Examples
Fairey FB-1 Gyrodyne
Fairey Jet Gyrodyne
Fairey Rotodyne (1957)
Flettner Fl 185
Kamov Ka-22 (1959)
Kayaba Heliplane (Japan)
McDonnell XV-1
References
Notes
Bibliography
"The Fairey Gyrodyne." J.A.J. Bennett. Journal of the Royal Aeronautical Society, 1949, Vol. 53
"Aerodynamics of the Helicopter". Alfred Gessow & Garry C. Myers, Jr. Frederick Ungar Publishing Company, NY. 1952, republished 1962.
"Principles of Helicopter Aerodynamics". J. Gordon Leishman, Cambridge University Press, N.Y. 2000, reprinted 2005.
"Principles of Helicopter Engineering". Jacob Shapiro, Temple Press Ltd., London, 1955.
"Development of the Autogiro: A Technical Perspective": J. Gordon Leishman: Hofstra University, New York, 2003.
From Autogiro to Gyroplane: The Amazing Survival of an Aviation Technology: Bruce H. Charnov, 2003.
External links
VSTOL.org Wheel of Misfortune
Charnov, Bruce H. The Fairey Rotodyne: An Idea Whose Time Has Come – Again?
Gyrodyne and Heliplane concepts (2005)
Heliplane concept
Jet Gyrodyne (1954)
Hirschberg, Mike (with Robb, Raymond L.) Hybrid helicopters: Compounding the quest for speed, Vertiflite. Summer 2006. American Helicopter Society.
Newman, Simon. The Compound Helicopter Configuration and the Helicopter Speed Trap Aircraft Engineering and Aerospace Technology. 69(5): pp 407–413
Aircraft configurations | Gyrodyne | Engineering | 2,179 |
813,176 | https://en.wikipedia.org/wiki/AI%20takeover | An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
Types
Automation of the economy
The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis. Many small and medium size businesses may also be driven out of business if they cannot afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.
Technologies that may displace workers
AI technologies have been widely adopted in recent years. While these technologies have replaced some traditional workers, they also create new opportunities. Industries that are most susceptible to AI takeover include transportation, retail, and the military. AI military technologies, for example, allow soldiers to work remotely without risk of injury. A 2024 study highlights that AI's ability to perform routine and repetitive tasks poses significant risks of job displacement, especially in sectors such as manufacturing and administrative support. Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. AI will likely displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable.
Computer-integrated manufacturing
Computer-integrated manufacturing uses computers to control the production process. This allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.
White-collar machines
The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research, and journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.
Autonomous cars
An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017, automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who at a moment's notice can take control of the vehicle. Among the obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, the first human was killed by an autonomous vehicle in Tempe, Arizona by an Uber self-driving car.
AI-generated content
The use of automated content has become relevant since the technological advancements in artificial intelligence models such as ChatGPT, DALL-E, and Stable Diffusion. In most cases, AI-generated content such as imagery, literature, and music is produced through text prompts, and these AI models have been integrated into other creative programs. Artists are threatened by displacement from AI-generated content because these models sample from other creative works, producing results sometimes indistinguishable from man-made content. This complication has become widespread enough that other artists and programmers are creating software and utility programs to interfere with these text-to-image models' ability to produce accurate outputs from their work. While some industries in the economy benefit from artificial intelligence through new jobs, this issue does not create new jobs and threatens replacement entirely. It has recently made headlines: in February 2024, Willy's Chocolate Experience in Glasgow, Scotland, became an infamous children's event whose imagery and scripts were created using artificial intelligence models, to the dismay of the children, parents, and actors involved. There is an ongoing lawsuit against OpenAI from The New York Times claiming copyright infringement due to the sampling methods its artificial intelligence models use for their outputs.
Eradication
Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains". Scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it poses a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.
In fiction
AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing its goals. The idea is seen in Karel Čapek's R.U.R., which introduced the word robot in 1921, and can be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.
According to Toby Ord, the idea that an AI takeover requires robots is a misconception driven by the media and Hollywood. He argues that the most damaging humans in history were not physically the strongest, but that they used words instead to convince people and gain control of large parts of the world. He writes that a sufficiently intelligent AI with an access to the internet could scatter backup copies of itself, gather financial and human resources (via cyberattacks or blackmails), persuade people on a large scale, and exploit societal vulnerabilities that are too subtle for humans to anticipate.
The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt. HAL 9000 (1968) and the original Terminator (1984) are two iconic examples of hostile AI in pop culture.
Contributing factors
Advantages of superhuman intelligence over humans
Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to getting even better at being able to reprogram itself, the result could be a recursive intelligence explosion in which it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans:
Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology
Strategizing: A superintelligence might be able to simply outwit human opposition
Social manipulation: A superintelligence might be able to recruit human support, or covertly incite a war between humans
Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the Artificial General Intelligence (AGI) to run a copy of itself on their systems
Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans
Sources of AI advantage
According to Bostrom, a computer program that faithfully emulates a human brain, or that runs algorithms that are as powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.
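A back-of-the-envelope comparison of the figures just quoted (illustrative arithmetic only; raw speed is of course not the same thing as intelligence):

```python
# Ratios implied by the figures quoted above.
neuron_hz = 200                      # typical biological neuron firing rate
cpu_hz = 2_000_000_000               # modern microprocessor clock rate
axon_m_per_s = 120                   # action potential propagation speed
light_m_per_s = 299_792_458          # computer signals travel near light speed

print(f"switching-rate ratio: {cpu_hz / neuron_hz:,.0f}x")             # 10,000,000x
print(f"signal-speed ratio:   {light_m_per_s / axon_m_per_s:,.0f}x")   # ~2,500,000x
```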
A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".
More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.
Possibility of unfriendly AI preceding friendly AI
Is strong AI inherently dangerous?
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo instrumental convergence in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.
Odds of conflict
Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans.
The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal. According to AI researcher Steve Omohundro, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources—would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans.
Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence. In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.
Precautions
The AI control problem is the issue of how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.
Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligence AI could be successfully confined in an "AI box". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.
Warnings
Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race". Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."
Arthur C. Clarke's Odyssey series and Charles Stross's Accelerando relate to humanity's narcissistic injuries in the face of powerful artificial intelligences threatening humanity's self-perception.
Prevention through AI alignment
See also
Philosophy of artificial intelligence
Artificial intelligence arms race
Autonomous robot
Industrial robot
Mobile robot
Self-replicating machine
Cyberocracy
Effective altruism
Existential risk from artificial general intelligence
Future of Humanity Institute
Global catastrophic risk (existential risk)
Government by algorithm
Human extinction
Machine ethics
Machine learning/Deep learning
Transhumanism
Self-replication
Technophobia
Technological singularity
Intelligence explosion
Superintelligence
Superintelligence: Paths, Dangers, Strategies
Notes
References
External links
TED talk: "Can we build AI without losing control over it?" by Sam Harris
Doomsday scenarios
Future problems
Science fiction themes
Existential risk from artificial general intelligence
Technophobia | AI takeover | Technology | 3,474 |
63,442,371 | https://en.wikipedia.org/wiki/Regulation%20of%20algorithms | Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary to both encourage AI and manage associated risks, but challenging. Another emerging topic is the regulation of blockchain algorithms (Use of the smart contracts must be regulated) and is mentioned along with regulation of AI algorithms. Many countries have enacted regulations of high frequency trades, which is shifting due to technological progress into the realm of AI algorithms.
The motivation for regulation of algorithms is the apprehension of losing control over algorithms whose impact on human life is increasing. Multiple countries have already introduced regulations for automated credit score calculation: a right to explanation is mandatory for those algorithms. For example, the IEEE has begun developing a new standard to explicitly address ethical issues and the values of potential future users. Concerns about bias, transparency, and ethics have emerged with respect to the use of algorithms in diverse domains ranging from criminal justice to healthcare; many fear that artificial intelligence could replicate existing social inequalities along race, class, gender, and sexuality lines.
Regulation of artificial intelligence
Public discussion
In 2016, Joy Buolamwini founded Algorithmic Justice League after a personal experience with biased facial detection software in order to raise awareness of the social implications of artificial intelligence through art and research.
In 2017 Elon Musk advocated regulation of algorithms in the context of the existential risk from artificial general intelligence. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation."
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that artificial intelligence is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest to rather develop common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. One suggestion has been for the development of a global governance board to regulate AI development. In 2020, the European Union published its draft strategy paper for promoting and regulating AI.
Algorithmic tacit collusion is a legally dubious antitrust practice committed by means of algorithms, which the courts are not able to prosecute. This danger concerns scientists and regulators in the EU, US and beyond. European Commissioner Margrethe Vestager mentioned an early example of algorithmic tacit collusion in her speech on "Algorithms and Collusion" on March 16, 2017, described as follows:
"A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival’s price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy."
In 2018, the Netherlands employed an algorithmic system SyRI (Systeem Risico Indicatie) to detect citizens perceived to be at high risk of committing welfare fraud, which quietly flagged thousands of people to investigators. This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).
In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm." This protest was successful and the grades were taken back.
Implementation
AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national, and international levels and in fields from public service management to law enforcement, the financial sector, robotics, the military, and international law. There are many concerns that there is not enough visibility and monitoring of AI in these sectors. In the United States financial sector, for example, there have been calls for the Consumer Financial Protection Bureau to more closely examine source code and algorithms when conducting audits of financial institutions' non-public data.
In the United States, on January 7, 2019, following an Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI.
In April 2016, for the first time in more than two decades, the European Parliament adopted a set of comprehensive regulations for the collection, storage, and use of personal information, the General Data Protection Regulation (GDPR) (European Union, Parliament and Council 2016). The GDPR's policy on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design.
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation. In the United States, steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.
In 2017, the U.K. Vehicle Technology and Aviation Bill imposed liability on the owner of an uninsured automated vehicle when driving itself and made provisions for cases where the owner has made “unauthorized alterations” to the vehicle or failed to update its software. Further ethical issues arise when, e.g., a self-driving car swerves to avoid a pedestrian and causes a fatal accident.
In 2021, the European Commission proposed the Artificial Intelligence Act.
Algorithm certification
There is a concept of algorithm certification emerging as a method of regulating algorithms. Algorithm certification involves auditing whether the algorithm used during the life cycle 1) conforms to the protocoled requirements (e.g., for correctness, completeness, consistency, and accuracy); 2) satisfies the standards, practices, and conventions; and 3) solves the right problem (e.g., correctly models physical laws) and satisfies the intended use and user needs in the operational environment.
Regulation of blockchain algorithms
Blockchain systems provide transparent and fixed records of transactions and hereby contradict the goal of the European GDPR, which is to give individuals full control of their private data.
By implementing the Decree on Development of Digital Economy, Belarus has become the first-ever country to legalize smart contracts. Belarusian lawyer Denis Aleinikov is considered to be the author of the smart contract legal concept introduced by the decree. There are strong arguments that existing US state laws already provide a sound basis for the enforceability of smart contracts; nevertheless, Arizona, Nevada, Ohio and Tennessee have amended their laws specifically to allow for the enforceability of blockchain-based contracts.
Regulation of robots and autonomous algorithms
There have been proposals to regulate robots and autonomous algorithms. These include:
the South Korean Government's proposal in 2007 of a Robot Ethics Charter;
a 2011 proposal from the U.K. Engineering and Physical Sciences Research Council of five ethical “principles for designers, builders, and users of robots”;
the Association for Computing Machinery's seven principles for algorithmic transparency and accountability, published in 2017.
In popular culture
In 1942, author Isaac Asimov addressed regulation of algorithms by introducing the fictional Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The main alternative to regulation is a ban, and the banning of algorithms is presently highly unlikely. However, in Frank Herbert's Dune universe, thinking machines (a collective term for artificial intelligence) were completely destroyed and banned after a revolt known as the Butlerian Jihad:
JIHAD, BUTLERIAN: (see also Great Revolt) — the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as "Thou shalt not make a machine in the likeness of a human mind."
See also
Algorithmic transparency
Algorithmic accountability
Artificial intelligence
Artificial intelligence arms race
Artificial intelligence in government
Ethics of artificial intelligence
Government by algorithm
Privacy law
References
Computer law
Existential risk from artificial general intelligence
Algorithms
Blockchains
Regulation of technologies
Regulation of artificial intelligence | Regulation of algorithms | Mathematics,Technology | 1,983 |
1,920,638 | https://en.wikipedia.org/wiki/Al-Dhira%27 | Al-Dhira' and similar spellings (e.g. "Alderaan", "Al-Dhirá'án", "Aldryan") is a disused name for the two pairs of stars α and β Canis Minoris (Procyon and Gomeisa) and α and β Geminorum (Castor and Pollux).
The name was taken from Arabic al-dhirā`ain الذراعين (meaning "the two forearms" or "the two front paws" or "the two cubit measuring rods"). It may refer to a Bedouin asterism of an enlarged rampant Lion centered on Leo and stretching over a quarter of the sky with its forepaws at these two pairs of stars. However, it may originally have referred to the "measuring rods" meaning, with an astronomer whose native language was not Arabic supposing that it literally meant "the two forepaws" and inventing the enlarged Lion constellation.
References
Asterisms (astronomy)
Gemini (constellation)
Canis Minor | Al-Dhira' | Astronomy | 218 |
175,285 | https://en.wikipedia.org/wiki/Relational%20algebra | In database theory, relational algebra is a theory that uses algebraic structures for modeling data and defining queries on it with well founded semantics. The theory was introduced by Edgar F. Codd.
The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Relational databases store tabular data represented as relations. Queries over relational databases often likewise return tabular data represented as relations.
The main purpose of relational algebra is to define operators that transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results).
Unary operators accept a single relation as input. Examples include operators to filter certain attributes (columns) or tuples (rows) from an input relation. Binary operators accept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth.
Introduction
Relational algebra received little attention outside of pure mathematics until the publication of E.F. Codd's relational model of data in 1970. Codd proposed such an algebra as a basis for database query languages.
Relational algebra operates on homogeneous sets of n-tuples, where m is commonly interpreted as the number of rows (tuples) in a table and n as the number of columns. All entries in each column have the same type.
A relation also has a unique tuple called the header, which gives each column a unique name (or attribute) inside the relation. Attributes are used in projections and selections.
Set operators
The relational algebra uses set union, set difference, and Cartesian product from set theory, and adds additional constraints to these operators to create new ones.
For set union and set difference, the two relations involved must be union-compatible—that is, the two relations must have the same set of attributes. Because set intersection is defined in terms of set union and set difference, the two relations involved in set intersection must also be union-compatible.
For the Cartesian product to be defined, the two relations involved must have disjoint headers—that is, they must not have a common attribute name.
In addition, the Cartesian product is defined differently from the one in set theory in the sense that tuples are considered to be "shallow" for the purposes of the operation. That is, the Cartesian product of a set of n-tuples with a set of m-tuples yields a set of "flattened" (n + m)-tuples (whereas basic set theory would have prescribed a set of 2-tuples, each containing an n-tuple and an m-tuple). More formally, R × S is defined as follows:
R × S = {(r1, r2, ..., rn, s1, s2, ..., sm) | (r1, r2, ..., rn) ∈ R, (s1, s2, ..., sm) ∈ S}
The cardinality of the Cartesian product is the product of the cardinalities of its factors, that is, |R × S| = |R| × |S|.
Projection
A projection (π) is a unary operation written as πa1,...,an(R), where a1, ..., an is a set of attribute names. The result of such projection is defined as the set that is obtained when all tuples in R are restricted to the set {a1, ..., an}.
Note: when implemented in SQL standard the "default projection" returns a multiset instead of a set, and the projection to eliminate duplicate data is obtained by the addition of the DISTINCT keyword.
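As a rough illustration (not tied to any particular database engine), a relation can be modelled in Python as a set of tuples, each tuple being a frozenset of (attribute, value) pairs so that set semantics apply; projection then restricts every tuple to the chosen attribute names, collapsing any duplicates. The attribute names and sample rows below are assumptions echoing the address-book example used later in this article.

def project(relation, attributes):
    # Restrict every tuple to the given attribute names; duplicates collapse
    # automatically because the result is a set (as in the set-based algebra).
    return {frozenset((a, v) for a, v in row if a in attributes) for row in relation}

address_book = {
    frozenset({("name", "Ann"), ("isFriend", True), ("isBusinessContact", False)}),
    frozenset({("name", "Bob"), ("isFriend", False), ("isBusinessContact", True)}),
}
print(project(address_book, {"name", "isFriend"}))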
Selection
A generalized selection (σ) is a unary operation written as σφ(R), where φ is a propositional formula that consists of atoms as allowed in the normal selection and the logical operators ∧ (and), ∨ (or) and ¬ (negation). This selection selects all those tuples in R for which φ holds.
To obtain a listing of all friends or business associates in an address book, the selection might be written as σisFriend = true ∨ isBusinessContact = true(addressBook). The result would be a relation containing every attribute of every unique record where isFriend is true or where isBusinessContact is true.
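A corresponding sketch of generalized selection, reusing the frozenset-of-pairs representation from the projection sketch above; the propositional formula is passed as an ordinary Python predicate over the row, and the sample relation is an assumed stand-in for the address book mentioned in the text.

def select(relation, predicate):
    # Keep exactly those tuples for which the formula (predicate) holds.
    return {row for row in relation if predicate(dict(row))}

address_book = {
    frozenset({("name", "Ann"), ("isFriend", True), ("isBusinessContact", False)}),
    frozenset({("name", "Bob"), ("isFriend", False), ("isBusinessContact", True)}),
    frozenset({("name", "Cid"), ("isFriend", False), ("isBusinessContact", False)}),
}
friends_or_business = select(address_book, lambda r: r["isFriend"] or r["isBusinessContact"])
print(friends_or_business)  # Ann and Bob, but not Cid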
Rename
A rename (ρ) is a unary operation written as ρa/b(R), where the result is identical to R except that the b attribute in all tuples is renamed to an a attribute. This is commonly used to rename the attribute of a relation for the purpose of a join.
To rename the "isFriend" attribute to "isBusinessContact" in a relation, might be used.
There is also the ρx(A1,...,An)(R) notation, where R is renamed to x and the attributes {a1, ..., an} are renamed to {A1, ..., An}.
Joins and join-like operators
Natural join
Natural join (⨝) is a binary operator that is written as (R ⨝ S) where R and S are relations. The result of the natural join is the set of all combinations of tuples in R and S that are equal on their common attribute names. For an example consider the tables Employee and Dept and their natural join:
Note that neither the employee named Mary nor the Production department appear in the result. Mary does not appear in the result because Mary's Department, "Human Resources", is not listed in the Dept relation and the Production department does not appear in the result because there are no tuples in the Employee relation that have "Production" as their DeptName attribute.
This can also be used to define composition of relations. For example, the composition of Employee and Dept is their join as shown above, projected on all but the common attribute DeptName. In category theory, the join is precisely the fiber product.
The natural join is arguably one of the most important operators since it is the relational counterpart of the logical AND operator. Note that if the same variable appears in each of two predicates that are connected by AND, then that variable stands for the same thing and both appearances must always be substituted by the same value (this is a consequence of the idempotence of the logical AND). In particular, natural join allows the combination of relations that are associated by a foreign key. For example, in the above example a foreign key probably holds from Employee.DeptName to Dept.DeptName and then the natural join of Employee and Dept combines all employees with their departments. This works because the foreign key holds between attributes with the same name. If this is not the case such as in the foreign key from Dept.Manager to Employee.Name then these columns must be renamed before taking the natural join. Such a join is sometimes also referred to as an equijoin.
More formally the semantics of the natural join are defined as follows:
R ⋈ S = { t ∪ s : t ∈ R ∧ s ∈ S ∧ Fun(t ∪ s) }
where Fun(t) is a predicate that is true for a relation t (in the mathematical sense) iff t is a function (that is, t does not map any attribute to multiple values). It is usually required that R and S must have at least one common attribute, but if this constraint is omitted, and R and S have no common attributes, then the natural join becomes exactly the Cartesian product.
The natural join can be simulated with Codd's primitives as follows. Assume that c1,...,cm are the attribute names common to R and S, r1,...,rn are the
attribute names unique to R and s1,...,sk are the
attribute names unique to S. Furthermore, assume that the attribute names x1,...,xm are neither in R nor in S. In a first step the common attribute names in S can be renamed:
T := ρx1/c1(ρx2/c2(...ρxm/cm(S)...))
Then we take the Cartesian product and select the tuples that are to be joined:
P := σc1=x1(σc2=x2(...σcm=xm(R × T)...))
Finally we take a projection to get rid of the renamed attributes:
U := πr1,...,rn,c1,...,cm,s1,...,sk(P)
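A sketch of the natural join itself under the same frozenset-of-pairs representation used earlier: every pair of tuples that agrees on all shared attribute names is merged. The Employee and Dept rows below are assumed stand-ins for the example tables referred to in the text, which are not reproduced here; note that with no shared attributes the loop degenerates to the Cartesian product, as stated above.

def natural_join(r, s):
    out = set()
    for t1 in r:
        for t2 in s:
            d1, d2 = dict(t1), dict(t2)
            shared = d1.keys() & d2.keys()
            if all(d1[a] == d2[a] for a in shared):   # agree on every common attribute
                out.add(frozenset({**d1, **d2}.items()))
    return out

employee = {frozenset({("Name", "Sally"), ("EmpId", 2241), ("DeptName", "Sales")}),
            frozenset({("Name", "Mary"), ("EmpId", 1257), ("DeptName", "Human Resources")})}
dept = {frozenset({("DeptName", "Sales"), ("Manager", "Harriet")}),
        frozenset({("DeptName", "Production"), ("Manager", "Charles")})}
print(natural_join(employee, dept))  # only Sally's row joins; Mary and Production drop out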
θ-join and equijoin
Consider tables Car and Boat which list models of cars and boats and their respective prices. Suppose a customer wants to buy a car and a boat, but she does not want to spend more money for the boat than for the car. The θ-join (⋈θ) on the predicate CarPrice ≥ BoatPrice produces the flattened pairs of rows which satisfy the predicate. When using a condition where the attributes are equal, for example Price, then the condition may be specified as Price = Price or alternatively as (Price) itself.
In order to combine tuples from two relations where the combination condition is not simply the equality of shared attributes it is convenient to have a more general form of join operator, which is the θ-join (or theta-join). The θ-join is a binary operator that is written as R ⋈a θ b S or R ⋈a θ v S, where a and b are attribute names, θ is a binary relational operator in the set {<, ≤, =, ≠, ≥, >}, v is a value constant, and R and S are relations. The result of this operation consists of all combinations of tuples in R and S that satisfy θ. The result of the θ-join is defined only if the headers of S and R are disjoint, that is, do not contain a common attribute.
The simulation of this operation in the fundamental operations is therefore as follows:
R ⋈θ S = σθ(R × S)
In case the operator θ is the equality operator (=) then this join is also called an equijoin.
Note, however, that a computer language that supports the natural join and selection operators does not need θ-join as well, as this can be achieved by selection from the result of a natural join (which degenerates to Cartesian product when there are no shared attributes).
In SQL implementations, joining on a predicate is usually called an inner join, and the on keyword allows one to specify the predicate used to filter the rows. It is important to note: forming the flattened Cartesian product then filtering the rows is conceptually correct, but an implementation would use more sophisticated data structures to speed up the join query.
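The identity R ⋈θ S = σθ(R × S) can be written directly; the sketch below forms the flattened Cartesian product and filters it with the predicate, exactly as the definition states (a real engine would use better data structures, as noted above). The Car and Boat rows and prices are assumptions standing in for the example tables.

def theta_join(r, s, theta):
    # Headers of r and s are assumed disjoint, as the θ-join definition requires.
    out = set()
    for t1 in r:
        for t2 in s:
            combined = {**dict(t1), **dict(t2)}   # flattened pair of rows
            if theta(combined):
                out.add(frozenset(combined.items()))
    return out

car = {frozenset({("CarModel", "CarA"), ("CarPrice", 20000)}),
       frozenset({("CarModel", "CarB"), ("CarPrice", 50000)})}
boat = {frozenset({("BoatModel", "Boat1"), ("BoatPrice", 10000)}),
        frozenset({("BoatModel", "Boat2"), ("BoatPrice", 40000)})}
print(theta_join(car, boat, lambda row: row["CarPrice"] >= row["BoatPrice"]))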
Semijoin
The left semijoin (⋉, with the mirrored ⋊ for the right semijoin) is a join similar to the natural join, written as R ⋉ S, where R and S are relations. The result is the set of all tuples in R for which there is a tuple in S that is equal on their common attribute names. The difference from a natural join is that the other columns of S do not appear. For example, consider the tables Employee and Dept and their semijoin:
More formally the semantics of the semijoin can be defined as follows:
R ⋉ S = { t : t ∈ R ∧ ∃s ∈ S, Fun(t ∪ s) }
where Fun(t ∪ s) is as in the definition of natural join.
The semijoin can be simulated using the natural join as follows. If a1, ..., an are the attribute names of R, then
R ⋉ S = πa1,...,an(R ⋈ S)
Since we can simulate the natural join with the basic operators it follows that this also holds for the semijoin.
In Codd's 1970 paper, semijoin is called restriction.
Antijoin
The antijoin (▷), written as R ▷ S where R and S are relations, is similar to the semijoin, but the result of an antijoin is only those tuples in R for which there is no tuple in S that is equal on their common attribute names.
For an example consider the tables Employee and Dept and their
antijoin:
The antijoin is formally defined as follows:
R ▷ S = { t : t ∈ R ∧ ¬∃s ∈ S, Fun(t ∪ s) }
or
R ▷ S = { t : t ∈ R, there is no tuple s of S that satisfies Fun(t ∪ s) }
where Fun(t ∪ s) is as in the definition of natural join.
The antijoin can also be defined as the complement of the semijoin, as follows:
R ▷ S = R − (R ⋉ S)
Given this, the antijoin is sometimes called the anti-semijoin, and the antijoin operator is sometimes written as semijoin symbol with a bar above it, instead of ▷.
In the case where the relations have the same attributes (union-compatible), antijoin is the same as minus.
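The semijoin and antijoin defined in the last two sections can be sketched together, with the antijoin written via the complement-of-semijoin identity above. The two-row Employee relation and one-row Dept relation are assumed illustrative data.

def agrees(t1, t2):
    d1, d2 = dict(t1), dict(t2)
    return all(d1[a] == d2[a] for a in d1.keys() & d2.keys())

def semijoin(r, s):
    # Tuples of r that join with at least one tuple of s, keeping only r's columns.
    return {t1 for t1 in r if any(agrees(t1, t2) for t2 in s)}

def antijoin(r, s):
    # R ▷ S = R − (R ⋉ S)
    return r - semijoin(r, s)

employee = {frozenset({("Name", "Sally"), ("DeptName", "Sales")}),
            frozenset({("Name", "Mary"), ("DeptName", "Human Resources")})}
dept = {frozenset({("DeptName", "Sales"), ("Manager", "Harriet")})}
print(semijoin(employee, dept))  # Sally only
print(antijoin(employee, dept))  # Mary only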
Division
The division (÷) is a binary operation that is written as R ÷ S. Division is not implemented directly in SQL. The result consists of the restrictions of tuples in R to the attribute names unique to R, i.e., in the header of R but not in the header of S, for which it holds that all their combinations with tuples in S are present in R.
Example
If DBProject contains all the tasks of the Database project, then the result of the division above contains exactly the students who have completed both of the tasks in the Database project.
More formally the semantics of the division is defined as follows:
R ÷ S = { t[a1,...,an] : t ∈ R ∧ ∀s ∈ S ( (t[a1,...,an] ∪ s) ∈ R ) }
where {a1,...,an} is the set of attribute names unique to R and t[a1,...,an] is the restriction of t to this set. It is usually required that the attribute names in the header of S are a subset of those of R because otherwise the result of the operation will always be empty.
The simulation of the division with the basic operations is as follows. We assume that a1,...,an are the attribute names unique to R and b1,...,bm are the attribute names of S. In the first step we project R on its unique attribute names and construct all combinations with tuples in S:
T := πa1,...,an(R) × S
In the prior example, T would represent a table such that every Student (because Student is the unique key / attribute of the Completed table) is combined with every given Task. So Eugene, for instance, would have two rows, Eugene → Database1 and Eugene → Database2 in T.
EG: First, let's pretend that "Completed" has a third attribute called "grade". It's unwanted baggage here, so we must project it off always. In fact in this step we can drop "Task" from R as well; the multiply puts it back on.
T := πStudent(R) × S // This gives us every possible desired combination, including those that don't actually exist in R, and excluding others (eg Fred | compiler1, which is not a desired combination)
In the next step we subtract R from the T relation:
U := T − R
In U we have the possible combinations that "could have" been in R, but weren't.
EG: Again with projections — T and R need to have identical attribute names/headers.
U := T − πStudent,Task(R) // This gives us a "what's missing" list.
So if we now take the projection on the attribute names unique to R
then we have the restrictions of the tuples in R for which not
all combinations with tuples in S were present in R:
V := πa1,...,an(U)
EG: Project U down to just the attribute(s) in question (Student)
V := πStudent(U)
So what remains to be done is take the projection of R on its
unique attribute names and subtract those in V:
W := πa1,...,an(R) − V
EG: W := πStudent(R) − V.
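The four steps T, U, V, W above can be transcribed almost literally. The Completed and DBProject rows below are assumed reconstructions of the worked example (which is not reproduced in the text); with them, the division returns the students who completed every task of the Database project.

completed = {("Fred", "Database1"), ("Fred", "Database2"), ("Fred", "Compiler1"),
             ("Eugene", "Database1"), ("Eugene", "Compiler1"),
             ("Sarah", "Database1"), ("Sarah", "Database2")}
db_project = {"Database1", "Database2"}

students = {s for s, _ in completed}                       # πStudent(R)
T = {(s, task) for s in students for task in db_project}   # πStudent(R) × S
U = T - completed                                          # combinations "missing" from R
V = {s for s, _ in U}                                      # students missing at least one task
W = students - V                                           # R ÷ S
print(W)  # {'Fred', 'Sarah'}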
Common extensions
In practice the classical relational algebra described above is extended with various operations such as outer joins, aggregate functions and even transitive closure.
Outer joins
Whereas the result of a join (or inner join) consists of tuples formed by combining matching tuples in the two operands, an outer join contains those tuples and additionally some tuples formed by extending an unmatched tuple in one of the operands by "fill" values for each of the attributes of the other operand. Outer joins are not considered part of the classical relational algebra discussed so far.
The operators defined in this section assume the existence of a null value, ω, which we do not define, to be used for the fill values; in practice this corresponds to the NULL in SQL. In order to make subsequent selection operations on the resulting table meaningful, a semantic meaning needs to be assigned to nulls; in Codd's approach the propositional logic used by the selection is extended to a three-valued logic, although we elide those details in this article.
Three outer join operators are defined: left outer join, right outer join, and full outer join. (The word "outer" is sometimes omitted.)
Left outer join
The left outer join (⟕) is written as R ⟕ S where R and S are relations. The result of the left outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition (loosely speaking) to tuples in R that have no matching tuples in S.
For an example consider the tables Employee and Dept and their left outer join:
In the resulting relation, tuples in S which have no common values in common attribute names with tuples in R take a null value, ω.
Since there are no tuples in Dept with a DeptName of Finance or Executive, ωs occur in the resulting relation where tuples in Employee have a DeptName of Finance or Executive.
Let r1, r2, ..., rn be the attributes of the relation R and let {(ω, ..., ω)} be the singleton
relation on the attributes that are unique to the relation S (those that are not attributes of R). Then the left outer join can be described in terms of the natural join (and hence using basic operators) as follows:
R ⟕ S = (R ⋈ S) ∪ ((R − πr1,...,rn(R ⋈ S)) × {(ω, ..., ω)})
Right outer join
The right outer join (⟖) behaves almost identically to the left outer join, but the roles of the tables are switched.
The right outer join of relations R and S is written as R ⟖ S. The result of the right outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R.
For example, consider the tables Employee and Dept and their right outer join:
In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω.
Since there are no tuples in Employee with a DeptName of Production, ωs occur in the Name and EmpId attributes of the resulting relation where tuples in Dept had DeptName of Production.
Let s1, s2, ..., sn be the attributes of the relation S and let {(ω, ..., ω)} be the singleton
relation on the attributes that are unique to the relation R (those that are not attributes of S). Then, as with the left outer join, the right outer join can be simulated using the natural join as follows:
R ⟖ S = (R ⋈ S) ∪ ({(ω, ..., ω)} × (S − πs1,...,sn(R ⋈ S)))
Full outer join
The outer join (⟗) or full outer join in effect combines the results of the left and right outer joins.
The full outer join is written as R ⟗ S where R and S are relations. The result of the full outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R and tuples in R that have no matching tuples in S in their common attribute names.
For an example consider the tables Employee and Dept and their full outer join:
In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω. Tuples in S which have no common values in common attribute names with tuples in R also take a null value, ω.
The full outer join can be simulated using the left and right outer joins (and hence the natural join and set union) as follows:
R ⟗ S = (R ⟕ S) ∪ (R ⟖ S)
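A compact sketch of the outer joins, using Python's None for the null marker ω and composing the full outer join from the two one-sided joins exactly as in the identity above; natural_join is repeated here so the snippet stands alone, and the sample rows are assumed illustrative data.

def natural_join(r, s):
    return {frozenset({**dict(t1), **dict(t2)}.items())
            for t1 in r for t2 in s
            if all(dict(t1)[a] == dict(t2)[a] for a in dict(t1).keys() & dict(t2).keys())}

def attrs(rel):
    return set().union(*(dict(t).keys() for t in rel)) if rel else set()

def left_outer_join(r, s, null=None):
    joined = natural_join(r, s)
    matched = {frozenset((a, v) for a, v in t if a in attrs(r)) for t in joined}
    pad = {a: null for a in attrs(s) - attrs(r)}   # fill values ω for S's extra columns
    return joined | {frozenset({**dict(t), **pad}.items()) for t in r if t not in matched}

def full_outer_join(r, s):
    return left_outer_join(r, s) | left_outer_join(s, r)   # right outer join = left with roles switched

employee = {frozenset({("Name", "Harry"), ("DeptName", "Finance")}),
            frozenset({("Name", "Sally"), ("DeptName", "Sales")})}
dept = {frozenset({("DeptName", "Sales"), ("Manager", "Harriet")}),
        frozenset({("DeptName", "Production"), ("Manager", "Charles")})}
for row in full_outer_join(employee, dept):
    print(dict(row))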
Operations for domain computations
There is nothing in relational algebra introduced so far that would allow computations on the data domains (other than evaluation of propositional expressions involving equality). For example, it is not possible using only the algebra introduced so far to write an expression that would multiply the numbers from two columns, e.g. a unit price with a quantity to obtain a total price. Practical query languages have such facilities, e.g. the SQL SELECT allows arithmetic operations to define new columns in the result SELECT unit_price * quantity AS total_price FROM t, and a similar facility is provided more explicitly by Tutorial D's EXTEND keyword. In database theory, this is called extended projection.
Aggregation
Furthermore, computing various functions on a column, like the summing up of its elements, is also not possible using the relational algebra introduced so far. There are five aggregate functions that are included with most relational database systems. These operations are Sum, Count, Average, Maximum and Minimum. In relational algebra the aggregation operation over a schema (A1, A2, ... An) is written as follows:
G1, G2, ..., Gm g f1(A1'), f2(A2'), ..., fk(Ak') (r)
where each Aj', 1 ≤ j ≤ k, is one of the original attributes Ai, 1 ≤ i ≤ n.
The attributes preceding the g are grouping attributes, which function like a "group by" clause in SQL. Then there are an arbitrary number of aggregation functions applied to individual attributes. The operation is applied to an arbitrary relation r. The grouping attributes are optional, and if they are not supplied, the aggregation functions are applied across the entire relation to which the operation is applied.
Let's assume that we have a table named Account with three columns, namely Account_Number, Branch_Name and Balance. We wish to find the maximum balance of each branch. This is accomplished by Branch_NameGMax(Balance)(Account). To find the highest balance of all accounts regardless of branch, we could simply write GMax(Balance)(Account).
Grouping is often written as Branch_NameɣMax(Balance)(Account) instead.
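A sketch of grouped aggregation in the spirit of Branch_NameGMax(Balance)(Account), using a plain list-of-dicts representation for brevity; the Account rows are assumed sample data.

from collections import defaultdict

def aggregate(relation, group_by, attr, fn):
    groups = defaultdict(list)
    for row in relation:
        groups[tuple(row[g] for g in group_by)].append(row[attr])
    # One output tuple per group: the grouping values followed by the aggregate.
    return {key + (fn(values),) for key, values in groups.items()}

account = [{"Account_Number": 1, "Branch_Name": "Downtown", "Balance": 500},
           {"Account_Number": 2, "Branch_Name": "Downtown", "Balance": 900},
           {"Account_Number": 3, "Branch_Name": "Uptown",   "Balance": 700}]
print(aggregate(account, ["Branch_Name"], "Balance", max))  # {('Downtown', 900), ('Uptown', 700)}
print(aggregate(account, [], "Balance", max))               # {(900,)} when no grouping attributes are supplied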
Transitive closure
Although relational algebra seems powerful enough for most practical purposes, there are some simple and natural operators on relations that cannot be expressed by relational algebra. One of them is the transitive closure of a binary relation. Given a domain D, let binary relation R be a subset of D×D. The transitive closure R+ of R is the smallest subset of D×D that contains R and satisfies the following condition:
∀x ∀y ∀z ( (x, y) ∈ R+ ∧ (y, z) ∈ R+ ⇒ (x, z) ∈ R+ )
It can be proved that there is no relational algebra expression E(R) taking R as a variable argument that produces R+.
SQL however officially supports such fixpoint queries since 1999, and it had vendor-specific extensions in this direction well before that.
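The transitive closure can be computed by a simple fixpoint iteration outside the algebra, which is essentially what SQL:1999 recursive queries do; the sketch below treats a binary relation as a set of pairs.

def transitive_closure(r):
    closure = set(r)
    while True:
        # join the closure with r one more time
        step = {(x, w) for (x, y) in closure for (z, w) in r if y == z}
        if step <= closure:        # fixpoint reached: nothing new was added
            return closure
        closure |= step

print(transitive_closure({(1, 2), (2, 3), (3, 4)}))
# {(1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)}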
Implementations
The first query language to be based on Codd's algebra was Alpha, developed by Dr. Codd himself. Subsequently, ISBL was created, and this pioneering work has been acclaimed by many authorities as having shown the way to make Codd's idea into a useful language. Business System 12 was a short-lived industry-strength relational DBMS that followed the ISBL example.
In 1998 Chris Date and Hugh Darwen proposed a language called Tutorial D intended for use in teaching relational database theory, and its query language also draws on ISBL's ideas. Rel is an implementation of Tutorial D. Bmg is an implementation of relational algebra in Ruby which closely follows the principles of Tutorial D and The Third Manifesto.
Even the query language of SQL is loosely based on a relational algebra, though the operands in SQL (tables) are not exactly relations and several useful theorems about the relational algebra do not hold in the SQL counterpart (arguably to the detriment of optimisers and/or users). The SQL table model is a bag (multiset), rather than a set. For example, some expressions that are theorems for relational algebra on sets are not theorems for relational algebra on bags.
See also
Cartesian product
Codd's theorem
D4 (programming language) (an implementation of D)
Data modeling
Database
Datalog
Logic of relatives
Object-role modeling
Projection (mathematics)
Projection (relational algebra)
Projection (set theory)
Relation
Relation (database)
Relation algebra
Relation composition
Relation construction
Relational calculus
Relational database
Relational model
SQL
Theory of relations
Triadic relation
Tuple relational calculus
Notes
References
Further reading
(For relationship with cylindric algebras).
External links
RAT Relational Algebra Translator Free software to convert relational algebra to SQL
Lecture Videos: Relational Algebra Processing - An introduction to how database systems process relational algebra
Lecture Notes: Relational Algebra – A quick tutorial to adapt SQL queries into relational algebra
Relational – A graphic implementation of the relational algebra
Query Optimization This paper is an introduction into the use of the relational algebra in optimizing queries, and includes numerous citations for more in-depth study.
Relational Algebra System for Oracle and Microsoft SQL Server
Pireal – An experimental educational tool for working with Relational Algebra
DES – An educational tool for working with Relational Algebra and other formal languages
RelaX - Relational Algebra Calculator (open-source software available as an online service without registration)
RA: A Relational Algebra Interpreter
Translating SQL to Relational Algebra
Relational model
Database management systems | Relational algebra | Mathematics | 5,054 |
23,443,332 | https://en.wikipedia.org/wiki/C15H24 | {{DISPLAYTITLE:C15H24}}
The molecular formula C15H24 (molar mass : 204.35 g/mol) may refer to:
Sesquiterpenes
Acyclic
Farnesene
Monocyclic
Bergamotene
Bisabolene
Elemene
Germacrene
Humulene
Zingiberene
Bicyclic
Amorpha-4,11-diene
Aristolochene
Cadinene
Caryophyllene
Guaiene
Selinene
Thujopsene
Valencene
Tricyclic
Capnellene
Cedrene
Copaene
Cubebene
Gurjunene
Isocomene
Longifolene | C15H24 | Chemistry | 142 |
34,183,088 | https://en.wikipedia.org/wiki/SEA%20Native%20Peptide%20Ligation | Protein chemical synthesis by native peptide ligation of unprotected peptide segments is an interesting complement and potential alternative to the use of living systems for producing proteins.
The synthesis of proteins requires efficient native peptide ligation methods, which enable the chemoselective formation of a native peptide bond in aqueous solution between unprotected peptide segments.
The most frequently used technique for synthesizing proteins is Native chemical ligation (NCL). However, alternatives are emerging,
one of which is SEA Native Peptide Ligation.
Overview
The SEA group belongs to the N,S-acyl shift systems because its reactivity is dictated by the intramolecular nucleophilic addition of one SEA thiol group on the C-terminal carbonyl group of the peptide segment. This results in the migration of the peptide chain from the nitrogen to the sulfur. The overall process of SEA native peptide ligation involves first an N,S-acyl shift for the in situ formation of a peptide thioester, and later on, after thiol-thioester exchange, an S,N-acyl shift for formation of the peptide bond.
Description of the reaction
SEA is an abbreviation of bis(2-sulfanylethyl)amido (Scheme 1). SEA ligation involves the reaction of a peptide featuring a C-terminal bis(2-sulfanylethyl)amido group with a Cys peptide. This reaction proceeds probably through the formation of a transient thioester intermediate, obtained by intramolecular attack of one SEA thiol on the peptide C-terminal carbonyl group as shown in Scheme 1. Then, the thioester undergoes a series of thiol-thioester exchanges, including with exogeneous thiols present in the ligation mixture such as mercaptophenyl acetic acid (MPAA). Exchange with the cysteine thiol group of the second peptide segment results in a transient thioester intermediate, which as for Native Chemical Ligation, rearranges by intramolecular S,N-acyl shift migration into a native peptide bond.
Publication
The first peer-reviewed publication describing SEA native peptide ligation was published in Organic Letters by Melnyk, O. et al. (Ollivier, N.; Dheur, J.; Mhidia, R.; Blanpain, A.; Melnyk, O., Bis(2-sulfanylethyl)amino native peptide ligation. Org. Lett. 2010, 12, (22), 5238–41; Publication Date (Web): October 21, 2010).
A few weeks later, the same reaction was published in the same journal by Liu, C. F (Hou, W.; Zhang, X.; Li, F.; Liu, C. F., Peptidyl N,N-Bis(2-mercaptoethyl)-amides as Thioester Precursors for Native Chemical Ligation. Org. Lett. 2011, 13, 386–389; Publication Date (Web): December 22, 2010).
SEA on/off concept
SEA on/off concept exploits the redox properties of SEA group.
Oxidation of SEA on results in a cyclic disulfide called SEA off, which is a self-protected form of SEA on. SEA off and SEA on can be easily interconverted by reduction/oxidation as shown in Scheme 2.
References
Peptides
Chemical reactions | SEA Native Peptide Ligation | Chemistry | 719 |
68,553,796 | https://en.wikipedia.org/wiki/Genetic%20vaccine | A genetic vaccine (also gene-based vaccine) is a vaccine that contains nucleic acids such as DNA or RNA that lead to protein biosynthesis of antigens within a cell. Genetic vaccines thus include DNA vaccines, RNA vaccines and viral vector vaccines.
Properties
Most vaccines other than live attenuated vaccines and genetic vaccines are not taken up by MHC-I-presenting cells, but act outside of these cells, producing only a strong humoral immune response via antibodies. In the case of intracellular pathogens, an exclusive humoral immune response is ineffective. Genetic vaccines are based on the principle of uptake of a nucleic acid into cells, whereupon a protein is produced according to the nucleic acid template. This protein is usually the immunodominant antigen of the pathogen or a surface protein that enables the formation of neutralizing antibodies that inhibit the infection of cells. Subsequently, the protein is broken down at the proteasome into short fragments (peptides) that are imported into the endoplasmic reticulum via the transporter associated with antigen processing, allowing them to bind to MHCI-molecules that are subsequently secreted to the cell surface. The presentation of the peptides on MHC-I complexes on the cell surface is necessary for a cellular immune response. As a result, genetic vaccines and live vaccines generate cytotoxic T-cells in addition to antibodies in the vaccinated individual. In contrast to live vaccines, only parts of the pathogen are used, which means that a reversion to an infectious pathogen cannot occur as it happened during the polio vaccinations with the Sabin vaccine.
Administration
Genetic vaccines are most commonly administered by injection (intramuscular or subcutaneous) or infusion, and less commonly and for DNA, by gene gun or electroporation. While viral vectors have their own mechanisms to be taken up into cells, DNA and RNA must be introduced into cells via a method of transfection. In humans, the cationic lipids SM-102, ALC-0159 and ALC-0315 are used in conjunction with electrically neutral helper lipids. This allows the nucleic acid to be taken up by endocytosis and then released into the cytosol.
Applications
Examples of genetic vaccines approved for use in humans include the RNA vaccines tozinameran and mRNA-1273, the DNA vaccine ZyCoV-D as well as the viral vectors AZD1222, Ad26.COV2.S, Ad5-nCoV, and Sputnik V. In addition, genetic vaccines are being investigated against proteins of various infectious agents, protein-based toxins, as cancer vaccines, and as tolerogenic vaccines for hyposensitization of type I allergies.
History
The first use of a viral vector for vaccination – a Modified Vaccinia Ankara Virus expressing HBsAg – was published by Bernard Moss and colleagues. DNA was used as a vaccine by Jeffrey Ulmer and colleagues in 1993. The first use of RNA for vaccination purposes was described in 1993 by Frédéric Martinon, Pierre Meulien and colleagues and in 1994 by X. Zhou, Peter Liljeström, and colleagues in mice. Martinon demonstrated that a cellular immune response was induced by vaccination with an RNA vaccine. In 1995, Robert Conry and colleagues described that a humoral immune response was also elicited after vaccination with an RNA vaccine. DNA vaccines were more frequently researched in the early years due to their ease of production, low cost, and high stability to degrading enzymes, but they sometimes produced low vaccine responses despite containing immunostimulatory CpG sites; more research was later conducted on RNA vaccines, whose immunogenicity was often better due to inherent adjuvants and which, unlike DNA vaccines, cannot insert into the genome of the vaccinated. Accordingly, the first RNA- and DNA-based vaccines approved for humans were COVID-19 vaccines. Viral vectors had previously been approved as Ebola vaccines.
References
Vaccines
Nucleic acid vaccines
Gene delivery | Genetic vaccine | Chemistry,Biology | 847 |
7,737,880 | https://en.wikipedia.org/wiki/Twister%20supersonic%20separator | The Twister supersonic separator is a compact tubular device which is used for removing water and/or hydrocarbon dewpointing of natural gas. The principle of operation is similar to the near isentropic Brayton cycle of a turboexpander. The gas is accelerated to supersonic velocities within the tube using a De Laval nozzle and inlet guide vanes spin the gas around an inner-body which creates the "ballerina effect" and centrifugally separates the water and liquids in the tube. Hydrates do not form in the Twister tube due to the very short residence time of the gas in the tube (around 2 milliseconds). A secondary separator treats the liquids and slip gas and also acts as a hydrate control vessel. Twister is able to dehydrate to typical pipeline dewpoint specifications and relies on a pressure drop from the inlet of about 25%, dependent on the performance required. The fundamental mathematics behind supersonic separation can be found in the Society of Petroleum Engineers paper (number 100442) entitled "Selective Removal of Water from Supercritical Natural Gas". The closed Twister system enables gas treatment subsea .
It is a product of Twister BV, a Dutch firm acquired by WAEP Coöperatief U.A.
References
External links
Company website
Offshore Engineer Annular Twister takes subsea turn
Commercial Supersonic Separator Starts Up In Nigeria
Supersonic Separator Gains Market Acceptance
Supersonic Separation in onshore natural gas dew point plant - May 2012
Natural gas
Chemical equipment | Twister supersonic separator | Chemistry,Engineering | 323 |
7,445,076 | https://en.wikipedia.org/wiki/Extra-pair%20copulation | Extra-pair copulation (EPC) is a mating behaviour in monogamous species. Monogamy is the practice of having only one sexual partner at any one time, forming a long-term bond and combining efforts to raise offspring together; mating outside this pairing is extra-pair copulation. Across the animal kingdom, extra-pair copulation is common in monogamous species, and only a very few pair-bonded species are thought to be exclusively sexually monogamous. EPC in the animal kingdom has mostly been studied in birds and mammals. Possible benefits of EPC can be investigated within non-human species, such as birds.
For males, a number of theories are proposed to explain extra-pair copulations. One such hypothesis is that males maximise their reproductive success by copulating with as many females as possible outside of a pair bond relationship because their parental investment is lower, meaning they can copulate and leave the female with minimum risk to themselves. Females, on the other hand, have to invest a lot more in their offspring; extra-pair copulations produce a greater cost because they put the resources that their mate can offer at risk by copulating outside the relationship. Despite this, females do seek out extra pair copulations, and, because of the risk, there is more debate about the evolutionary benefits for females.
In human males
Extra-pair copulation in men has been explained as being partly due to parental investment. Research has suggested that copulation poses more of a risk to future investment for women, as they have the potential of becoming pregnant, and consequently require a large parental investment of the gestation period, and then further rearing of the offspring. Contrastingly, men are able to copulate and then abandon their mate as there is no risk of pregnancy for themselves, meaning there is a smaller risk of parental investment in any possible offspring. It has been suggested that, due to having such low parental investment, it is evolutionarily adaptive for men to copulate with as many women as possible. This will allow males to spread their genes with little risk of future investment but it does come with the increased risk of sexually transmitted infections.
Various factors can increase the probability of EPC in males. Firstly, males with low levels of fluctuating asymmetry are more likely to have EPCs. This may be due to the fact that signals of low fluctuating asymmetry suggest that the males have "good genes", making females more likely to copulate with them as it will enhance the genes of their offspring, even if they do not expect long-term commitment from the male. Psychosocial stress early on in life, including behaviours such as physical violence and substance abuse, can predict EPC in later life. This has been explained as being due to Life History Theory, which argues that individuals who are reared in environments where resources are scarce and life expectancy is low, are more likely to engage in reproductive behaviours earlier in life in order to ensure the proliferation of their genes. Individuals reared in these environments are said to have short life histories. With respect to Life History Theory, these finding have been explained by suggesting that males who experienced psychosocial stress early in life have short life histories, making them more likely to try and reproduce as much as possible by engaging in EPC to avoid gene extinction.
However, men may also choose not to have EPCs for multiple reasons. One reason may be that long-term monogamous relationships can help form environments that will aid the successful rearing of offspring, as the male is present to help raise them, leading to an increased probability of the male's genes surviving to the next generation. A second reason that EPCs may be avoided by a male is that it can be costly to them; their EPC may be discovered, leading to the dissolution of the long-term relationship with their partner and, in some cases, to their partner assaulting or even killing them. Men may also avoid EPCs to minimize the increased risk of STD transmission that can accompany EPCs. The partners in the EPC may be promiscuous as well, leading to a higher statistical probability of contracting venereal diseases; this would counter the lower incidence of STD transmission among exclusively monogamous sexually active couples. Spousal homicide is more likely to be committed by males than by females.
In human females
From an evolutionary perspective, females have to invest a lot more in their offspring than males due to prolonged pregnancy and child rearing, and a child has a better chance of survival and development with two parents involved in child-rearing. Therefore, extra-pair copulations have a greater cost for women because they put the support and resources that their mate can offer at risk by copulating outside the relationship. There is also the increased risk of sexually transmitted infections, which is suggested as a possible evolutionary reason for the transition from polygamous to monogamous relationships in humans. Despite this, females do seek out extra-pair copulation, with some research finding that women's levels of infidelity are equal to that of men's, although this evidence is mixed. Due to the increased risk, there is more confusion about the evolutionary benefits of extra-pair copulation for females.
The most common theory is that women mate outside of the monogamous relationship to acquire better genetic material for their offspring. A female in a relationship with a male with 'poorer genetic quality' may try to enhance the fitness of her children and therefore the continuation of her own genes by engaging in extra-pair copulation with better quality males. A second theory is that a woman will engage in extra-pair copulation to seek additional resources for herself or her offspring. This is based on observations from the animal world in which females may copulate outside of their pair-bond relationship with neighbours to gain extra protection, food or nesting materials. Finally, evolutionary psychologists have theorized that extra-pair copulation is an indirect result of selection on males. The alleles in males that promote extra-pair copulation as an evolutionary strategy to increase reproductive success is shared between sexes leading to this behaviour being expressed in females.
There are also social factors involved in extra-pair copulation. Both males and females have been found to engage in more sexual behaviour outside of the monogamous relationship when experiencing sexual dissatisfaction in the relationship, although how this links to evolutionary theory is unclear. Surveys have found cultural differences in attitudes towards infidelity, though it is relatively consistent that female attitudes are less favorable toward infidelity than male attitudes.
Other animals
As well as humans, EPC has been found in many other socially monogamous species. When EPC occurs in animals which show sustained female-male social bonding, this can lead to extra-pair paternity (EPP), in which the female reproduces with an extra-pair male, and hence produces EPO (extra-pair offspring).
Due to the obvious reproductive success benefits for males, it used to be thought that males exclusively controlled EPCs. However, it is now known that females also seek EPC in some situations.
In birds
Extra-pair copulation is common in birds. For example, zebra finches, although socially monogamous, are not sexually monogamous and hence do engage in extra-pair courtship and attempts at copulation. In a laboratory study, female zebra finches copulated over several days, many times with one male and only once with another male. Results found that significantly more eggs were fertilised by the extra-pair male than expected proportionally from just one copulation versus many copulations with the other male. EPC proportion varies between different species of birds. For example, in eastern bluebirds, studies have shown that around 35% of offspring is due to EPC. Some of the highest levels of EPP are found in the New Zealand hihi/stitchbird (Notiomystis cincta), in which up to 79% of offspring are sired by EPC. EPC can have significant consequences for parental care, as shown in azure-winged magpie (Cyanopica cyanus).
In socially polygynous birds, EPC is only half as common as in socially monogamous birds. Some ethologists consider this finding to be support for the 'female choice' hypothesis of mating systems in birds.
In mammals
EPC has been shown in monogamous mammals, such as the white-handed gibbon. A study of one group found 88% in-pair copulation and 12% extra-pair copulation. However, there is much variability in rates of EPC in mammals. One study found that this disparity in EPC is better predicted by the differing social structures of different mammals, rather than differing types of pair bonding. For example, EPC was lower in species who live in pairs compared to those who live in solitary or family structures.
Reasons for evolution
Some argue that EPC is one way in which sexual selection is operating for genetic benefits which is why the extra-pair males involved in EPC seem to be a non-random subset. There is some evidence for this in birds. For example, in swallows, males with longer tails are involved in EPC more than those with shorter tails. Also female swallows with a shorter-tailed within-pair mates are more likely to conduct EPC than those whose mates have longer tails. A similar pattern has been found for black-capped chickadees, in which all extra-pair males had higher rank than the within-pair males. But some argue that genetic benefits for offspring is not the reason females participate in EPC. A meta-analysis of genetic benefits of EPC in 55 bird species found that extra-pair offspring were not more likely to survive than within-pair offspring. Also, extra-pair males did not show significantly better 'good-genes' traits than within-pair males, except for being slightly larger overall.
Another potential explanation for the occurrence of EPC in organisms where females solicit EPC is that the alleles controlling such behaviour are intersexually pleiotropic. Under the hypothesis of intersexual antagonistic pleiotropy, the benefit males get from EPC cancels out the negative effects of EPC for females. Thus, the allele that controls EPC in both organisms would persist, even if it would be detrimental to the fitness of females. Similarly, according to the hypothesis of intrasexual antagonistic pleiotropy, the allele that controls EPC in females also controls a behaviour that is under positive selection, such as receptiveness towards within-pair copulation.
References
Developmental biology
Mating
Reproduction in animals
Sexuality
Promiscuity | Extra-pair copulation | Biology | 2,239 |
63,441,722 | https://en.wikipedia.org/wiki/Microsoft%20MACRO-80 | Microsoft MACRO-80 (often shortened to M80) is a relocatable macro assembler for Intel 8080 and Zilog Z80 microcomputer systems.
The complete MACRO-80 package includes the MACRO-80 Assembler, the LINK-80 Linking Loader, and the CREF-80 Cross Reference Facility. The LIB-80 Library Manager is included in CP/M versions only.
The list price at the time was $200.
Overview
A MACRO-80 source program consists of a series of statements. Each statement must follow a predefined format. Source lines up to 132 characters in length are supported. M80 accepts source files almost identical to files for Intel-compatible assemblers. It also supports several switches in the command string. Some can be used to control the format of the source file. A switch can be set to allow support for Z80 mnemonics.
MACRO-80 runs on Digital Research CP/M, Intel ISIS-II, Tandy TRSDOS, Tektronix TEKDOS, and Microsoft MSX-DOS.
See also
Microsoft Macro Assembler
Assembly language
High-level assembler
Comparison of assemblers
References
External links
CP/M-80 Information and Download Page
Assemblers
MACRO-80
MSX-DOS | Microsoft MACRO-80 | Technology | 261 |
10,889,413 | https://en.wikipedia.org/wiki/Key%20distribution%20in%20wireless%20sensor%20networks | Key distribution is an important issue in wireless sensor network (WSN) design. WSNs are networks of small, battery-powered, memory-constraint devices named sensor nodes, which have the capability of wireless communication over a restricted area. Due to memory and power constraints, they need to be well arranged to build a fully functional network.
Key distribution schemes
Key predistribution is the method of distribution of keys onto nodes before deployment. Therefore, the nodes build up the network using their secret keys after deployment, that is, when they reach their target position.
Key predistribution schemes are various methods that have been developed by researchers for better key management in WSNs. Basically, a key predistribution scheme has three phases:
Key distribution
Shared key discovery
Path-key establishment
During these phases, secret keys are generated, placed in sensor nodes, and each sensor node searches the area in its communication range to find another node to communicate. A secure link is established when two nodes discover one or more common keys (this differs in each scheme), and communication is done on that link between those two nodes. Afterwards, paths are established connecting these links, to create a connected graph. The result is a wireless communication network functioning in its own way, according to the key predistribution scheme used in creation.
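As an illustration of the first two phases, the sketch below implements random key predistribution in the spirit of the Eschenauer-Gligor scheme (named here only as an example; the text does not commit to one scheme), and estimates local connectivity empirically. The pool size, ring size, and node count are assumed parameters.

import random
from itertools import combinations

POOL_SIZE, RING_SIZE, NUM_NODES = 1000, 50, 100   # assumed parameters

pool = range(POOL_SIZE)
# Phase 1 (key distribution): load each node with a random key ring before deployment.
key_rings = [set(random.sample(pool, RING_SIZE)) for _ in range(NUM_NODES)]

def can_link(i, j):
    # Phase 2 (shared key discovery): a secure link needs at least one common key.
    return bool(key_rings[i] & key_rings[j])

pairs = list(combinations(range(NUM_NODES), 2))
local_connectivity = sum(can_link(i, j) for i, j in pairs) / len(pairs)
print(f"estimated local connectivity: {local_connectivity:.2f}")  # roughly 0.9 with these parameters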
There are a number of aspects of WSNs on which key predistribution schemes are competing to achieve a better result. The most critical ones are: local and global connectivity, and resiliency.
Local connectivity means the probability that any two sensor nodes have a common key with which they can establish a secure link to communicate.
Global connectivity is the fraction of nodes that are in the largest connected graph over the number of all nodes.
Resiliency is the number of links that cannot be compromised when a number of nodes(therefore keys in them) are compromised. So it is basically the quality of resistance against the attempts to hack the network. Apart from these, two other critical issues in WSN design are computational cost and hardware cost. Computational cost is the amount of computation done during these phases. Hardware cost is generally the cost of the memory and battery in each node.
Keys may be generated randomly and then the nodes determine mutual connectivity. A structured approach based on matrices that establishes keys in a pair-wise fashion is due to Rolf Blom. Many variations to Blom's scheme exist. Thus the scheme of Du et al. combines Blom's key pre-distribution scheme with the random key pre-distribution method with it, providing better resiliency.
See also
Wireless sensor networks
Key distribution
Blom's scheme
References
External links
List of publications for Key Management in WSN
Key management | Key distribution in wireless sensor networks | Technology | 556 |
76,454,547 | https://en.wikipedia.org/wiki/Lentilactobacillus%20kefiri | Lentilactobacillus kefiri is a species of rod-shaped nonmotile bacteria. It is one of the main lactic acid bacteria species found in kefir and kefir grains. It can be bought and used as a probiotic.
Colonies on MRS agar are grayish, smooth, flat and 2 to 4 mm in diameter. It is obligately heterofermentative, and can ferment lactose, maltose, melibiose, ribose, as well as sucrose, mannitol and trehalose to a weaker extent. It is not known to be pathogenic.
References
Bacteria and humans
Lactobacillaceae
Bacteria described in 1983 | Lentilactobacillus kefiri | Biology | 149 |
22,981,865 | https://en.wikipedia.org/wiki/Quantum%20tunnelling%20composite | Quantum tunnelling composites (QTCs) are composite materials of metals and non-conducting elastomeric binder, used as pressure sensors. They use quantum tunnelling: without pressure, the conductive elements are too far apart to conduct electricity; when pressure is applied, they move closer and electrons can tunnel through the insulator. The effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 10^12 between pressured and unpressured states.
Quantum tunneling composites hold multiple designations in the specialized literature, such as: conductive/semi-conductive polymer composite, piezo-resistive sensor and force-sensing resistor (FSR). However, in some cases force-sensing resistors may operate predominantly under a percolation regime; this implies that the composite resistance grows with an incremental applied stress or force.
Introduction
QTCs were discovered in 1996 by technician David Lussey while he was searching for a way to develop an electrically conductive adhesive. Lussey founded Peratech Ltd, a company devoted to research work and usage of QTCs. Peratech Ltd. and other companies are working on developing quantum tunneling composites to improve touch technology. Currently, there is restricted use of QTC due to its high cost, but eventually this technology is expected to become available to the general user. Quantum tunneling composites are combinations of a polymer with elastic, rubber-like properties (an elastomer) and metal particles (nickel). Because there is no air gap in the sensor, contamination of or interference between the contact points is impossible. There is also little to no chance of arcing (electrical sparks between contact points). In the QTC's inactive state, the conductive elements are too far from one another to pass electron charges. Thus, current does not flow when there is no pressure on the quantum-tunneling composite. A characteristic feature of a QTC is its spiky, silicon-covered surface. The spikes do not actually touch, but when a force is applied to the QTC, the spikes move closer to each other and a quantum tunnelling effect occurs as a high concentration of electrons flows from one spike tip to the next. The electric current stops when the force is taken away.
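A numerical illustration of why tunnelling gives such a large on/off ratio compared with a classical ohmic path; the gap widths and decay constant below are illustrative assumptions, not measured values for any particular QTC formulation.

import math

def tunnelling_resistance(gap_nm, r0=1.0, decay_per_nm=10.0):
    # Tunnelling resistance rises exponentially with the insulating gap width.
    return r0 * math.exp(decay_per_nm * gap_nm)

def ohmic_resistance(gap_nm, r0=1.0):
    # A classical conductor's resistance scales only linearly with length.
    return r0 * gap_nm

unpressed, pressed = 3.0, 0.2   # assumed gap widths in nanometres
print(tunnelling_resistance(unpressed) / tunnelling_resistance(pressed))  # ~1.4e12
print(ohmic_resistance(unpressed) / ohmic_resistance(pressed))            # 15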
Types
QTCs come in different forms and each form is used differently but has a similar resistance change when deformed. QTC pills are the most commonly used type of QTC. Pills are pressure sensitive variable resistors. The amount of electric current passed is exponentially proportionate to the amount of pressure applied. QTC pills can be used as input sensors which respond to an applied force. These pills can also be used in devices to control higher currents than QTC sheets. QTC sheets are composed of three layers: a thin layer of QTC material, a conductive material and a plastic insulator. QTC sheets allow a quick switch from high to low resistance and vice versa.
Applications
In February 2008 the newly formed company QIO Systems Inc gained, in a deal with Peratech, the worldwide exclusive license to the intellectual property and design rights for the electronics and textile touchpads based on QTC technology and for the manufacture and sale of ElekTex (QTC-based) textile touchpads for use in both consumer and commercial applications.
QTCs were used to provide fingertip sensitivity in NASA's Robonaut in 2012. Robonaut was able to survive and send detailed feedback from space. The sensors on the human-like robot were able to tell how hard and where it was gripping something.
Quantum tunneling composites are relatively new and are still being researched and developed.
QTC has been implemented within clothing to make “smart”, touchable membrane control panels to control electronic devices within clothing, e.g. mp3 players or mobile phones. This allows equipment to be operated without removing clothing layers or opening fastenings and makes standard equipment usable in extreme weather or environmental conditions such as Arctic/Antarctic exploration or spacesuits.
The following are possible uses of QTCs:
Sporting materials such as training dummies or fencing jackets can be covered in QTC material. Sensors on the material can relay information on the force of an impact.
Mirror and window operation such as gesture, stroke, or swipe can be used in automotive applications. Depending on the amount of pressure applied from the gesture, the car parts will adjust to the desired setting at either a fast speed or a slow speed. The more pressure is applied, the faster the operation will be.
Blood pressure cuffs: QTCs in blood pressure cuffs reduce inaccurate readings from improper cuff attachment. The sensors tell how much tension is needed to read a person's blood pressure.
References
Electrical components
Quantum electronics | Quantum tunnelling composite | Physics,Materials_science,Technology,Engineering | 1,003 |
35,957,282 | https://en.wikipedia.org/wiki/Mori%20domain | In algebra, a Mori domain, named after Yoshiro Mori, is an integral domain satisfying the ascending chain condition on integral divisorial ideals. Noetherian domains and Krull domains both have this property. A commutative ring is a Krull domain if and only if it is a Mori domain and completely integrally closed. A polynomial ring over a Mori domain need not be a Mori domain. Also, the complete integral closure of a Mori domain need not be a Mori (or, equivalently, Krull) domain.
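In symbols (a standard restatement of the defining condition, included here for clarity): every ascending chain $I_1 \subseteq I_2 \subseteq I_3 \subseteq \cdots$ of integral divisorial ideals of the domain stabilizes, i.e. there is an index $n$ with $I_n = I_{n+1} = \cdots$.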
Notes
References
Commutative algebra | Mori domain | Mathematics | 121 |
2,724,706 | https://en.wikipedia.org/wiki/Statistically%20improbable%20phrase | A statistically improbable phrase (SIP) is a phrase or set of words that occurs more frequently in a document (or collection of documents) than in some larger corpus. Amazon.com uses this concept in determining keywords for a given book or chapter, since keywords of a book or chapter are likely to appear disproportionately within that section. Christian Rudder has also used this concept with data from online dating profiles and Twitter posts to determine the phrases most characteristic of a given race or gender in his book Dataclysm. SIPs of two or three words in patterns such as adjective-adjective-noun or adverb-adverb-verb can signal the author's attitude, premise or conclusions to the reader, or express an important idea.
Another use of SIPs is as a detection tool for plagiarism. (Almost) unique combinations of words can be searched for online, and if they have appeared in a published text, the search will identify where. This method only checks those texts that have been published and that have been digitized online.
For example, a submission by, say, a student that contained the phrase "garden style, praising irregularity in design" could be searched for using Google and would yield the original Wikipedia article about Sir William Temple, English political figure and essayist.
Example
In a document about computers, the most common word is likely to be the word "the", but since "the" is the most commonly used word in the English language, it is probable that any given document will use "the" very frequently. However, a phrase like "explicit Boolean algorithm" might occur in the document at a much higher rate than its average rate in the English language. Hence, it is a phrase unlikely to occur in any given document, yet one that did occur in this particular document. "Explicit Boolean algorithm" would be a statistically improbable phrase.
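For illustration only, a short Python sketch of one way such a score could be computed, comparing each three-word phrase's relative frequency in a document with its relative frequency in a larger reference corpus; the three-word window and the add-one smoothing are arbitrary choices, not Amazon's actual method:

from collections import Counter

def phrases(text, n=3):
    # Collect all n-word phrases (n-grams) in the text.
    words = text.lower().split()
    return Counter(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))

def sip_scores(document, corpus_counts, corpus_size, n=3):
    # Score = (relative frequency in the document) divided by the smoothed
    # relative frequency in the reference corpus, so phrases that are rare
    # or absent in the corpus score highest. corpus_size is the total
    # number of n-grams in the corpus.
    doc_counts = phrases(document, n)
    doc_size = max(sum(doc_counts.values()), 1)
    return {p: (c / doc_size) / ((corpus_counts.get(p, 0) + 1) / corpus_size)
            for p, c in doc_counts.items()}

Phrases with the highest scores would be reported as the document's statistically improbable phrases.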
Statistically improbable phrases of Darwin's On the Origin of Species could be: temperate productions, genera descended, transitional gradations, unknown progenitor, fossiliferous formations, our domestic breeds, modified offspring, doubtful forms, closely allied forms, profitable variations, enormously remote, transitional grades, very distinct species and mongrel offspring.
See also
Collocation – Any series of words that co-occur more often than would be expected by chance
Googlewhack – A pair of words occurring on a single webpage, as indexed by Google
tf-idf – A statistic used in information retrieval and text mining
Complex specified information – a concept used to argue for the "intelligent design" theory
References
Amazon (company)
Bookselling
Information retrieval systems
Computational linguistics | Statistically improbable phrase | Technology | 556 |
24,567,574 | https://en.wikipedia.org/wiki/Health%20Canada%20Sodium%20Working%20Group | On October 25, 2007, the Minister of Health announced that the Government of Canada would establish an expert Sodium Working Group to explore options for reducing sodium intake and cardiovascular disease among Canadians.
In announcing the creation of the Working Group, the Minister of Health said, "Through the formation of this working group, our Government is taking a major step in helping Canadians improve their health, and the health of their families."
Reactions
Dr. Norm Campbell, a salt-reduction activist, member of the international salt reduction advocacy group WASH (World Action on Salt and Health) and president of Blood Pressure Canada, said: "This is a wonderful demonstration of the government's leadership in forming collaborations to improve the health of Canadians to prevent stroke, heart and kidney disease -- three of the major causes of death and disability in Canada. Here we have everyone working together for common cause."
Focus
In establishing the Sodium Working Group, Health Canada included representatives from food manufacturing and food service industry groups, health-focused non-governmental organizations, the scientific community, consumer advocacy groups, health professional organizations and government representatives. The mandate of the Working Group was to develop and oversee the implementation of a strategy for reducing dietary sodium intake among Canadians.
Members
The Working Group has met on several occasions to establish a common knowledge base and to develop strategies for reducing dietary sodium consumption among Canadians. The process that Health Canada is following is patterned after that carried out by the Food Standards Agency in the UK – that is, no discussion of the science, but rather an immediate move to sodium reduction programs and policies. The concerns over salt are chiefly based upon its ability to affect blood pressure.
Debate
There is some debate on the impact of sodium reduction upon blood pressure. The salt industry and some food and beverage producers emphasize the heterogeneous impact of sodium on individuals. For example, they observe that when sodium intake is reduced, about 30% of normotensive individuals experience a drop in blood pressure, while about 20% experience an increase in blood pressure, with the remaining population showing no effect. As a consequence, some argue that programs to reduce salt will not hold the same benefits for everyone and that policies to arbitrarily promote salt reduction will discriminate against a certain segment of the population. They argue that an across-the-board reduction in dietary sodium may not be the right approach and that the outcome may lead to unintended consequences for Canadian consumers.
On the other hand, groups concerned with cardiovascular health and nutrition emphasize the overall negative effects of high levels of sodium in the North American diet. Based upon a study carried out in the US in 1991 on a total of 62 people, the presumption is that most of the sodium Canadians consume (77%) comes from processed foods sold in grocery stores and in food service outlets. Only about 11% is added during preparation or at the table, with the remainder occurring naturally in foods. And while the individual benefits of reducing sodium intake are variable, it has been theorized that dietary sodium reduction could eliminate hypertension for over a million Canadians, with a resulting savings of at least 430 million dollars annually in direct high blood pressure management costs (although this has never been confirmed through clinical trials). In other words, while not all Canadians need to reduce their intake of dietary sodium, many have been urged to. Moreover, theoretical estimates have projected further possible savings through a reduced burden on tax-supported health care.
Disbandment
On February 4, 2011, the Ottawa Citizen reported that the Health Canada Sodium Working Group had been disbanded. The Group had been charged with tracking whether companies were reducing the level of salt in processed foods over the next five years. This follows actions in the United Kingdom to abolish the dietary mandate of the FSA (Food Standards Agency), the government unit most actively involved in salt reduction advocacy.
References
External links
Member list
Medical and health organizations based in Canada
Edible salt | Health Canada Sodium Working Group | Chemistry | 788 |
599,709 | https://en.wikipedia.org/wiki/Karyogamy | Karyogamy is the final step in the process of fusing together two haploid eukaryotic cells, and refers specifically to the fusion of the two nuclei. Before karyogamy, each haploid cell has one complete copy of the organism's genome. In order for karyogamy to occur, the cell membrane and cytoplasm of each cell must fuse with the other in a process known as plasmogamy. Once within the joined cell membrane, the nuclei are referred to as pronuclei. Once the cell membranes, cytoplasm, and pronuclei fuse, the resulting single cell is diploid, containing two copies of the genome. This diploid cell, called a zygote or zygospore can then enter meiosis (a process of chromosome duplication, recombination, and division, to produce four new haploid cells), or continue to divide by mitosis. Mammalian fertilization uses a comparable process to combine haploid sperm and egg cells (gametes) to create a diploid fertilized egg.
The term karyogamy comes from the Greek karyo- (from κάρυον karyon) 'nut' and γάμος gamos 'marriage'.
Importance in haploid organisms
Haploid organisms such as fungi, yeast, and algae can have complex cell cycles, in which the choice between sexual or asexual reproduction is fluid, and often influenced by the environment. Some organisms, in addition to their usual haploid state, can also exist as diploid for a short time, allowing genetic recombination to occur. Karyogamy can occur within either mode of reproduction: during the sexual cycle or in somatic (non-reproductive) cells.
Thus, karyogamy is the key step in bringing together two sets of different genetic material which can recombine during meiosis. In haploid organisms that lack sexual cycles, karyogamy can also be an important source of genetic variation during the process of forming somatic diploid cells. Formation of somatic diploids circumvents the process of gamete formation during the sexual reproduction cycle and instead creates variation within the somatic cells of an already developed organism, such as a fungus.
Role in sexual reproduction
The role of karyogamy in sexual reproduction can be demonstrated most simply by single-celled haploid organisms such as the algae of genus Chlamydomonas or the yeast Saccharomyces cerevisiae. Such organisms exist normally in a haploid state, containing only one set of chromosomes per cell. However, the mechanism remains largely the same among all haploid eukaryotes.
When subjected to environmental stress, such as nitrogen starvation in the case of Chlamydomonas, cells are induced to form gametes. Gamete formation in single-celled haploid organisms such as yeast is called sporulation, resulting in many cellular changes that increase resistance to stress. Gamete formation in multicellular fungi occurs in the gametangia, an organ specialized for such a process, usually by meiosis. When opposite mating types meet, they are induced to leave the vegetative cycle and enter the mating cycle. In yeast, there are two mating types, a and α. In fungi, there can be two, four, or even up to 10,000 mating types, depending on the species. Mate recognition in the simplest eukaryotes is achieved through pheromone signaling, which induces shmoo formation (a projection of the cell) and begins the process of microtubule organization and migration. Pheromones used in mating type recognition are often peptides, but sometimes trisporic acid or other molecules, recognized by cellular receptors on the opposite cell. Notably, pheromone signaling is absent in higher fungi such as mushrooms.
The cell membranes and cytoplasm of these haploid cells then fuse together in a process known as plasmogamy. This results in a single cell with two nuclei, known as pronuclei. The pronuclei then fuse together in a well regulated process known as karyogamy. This creates a diploid cell known as a zygote, or a zygospore, which can then enter meiosis, a process of chromosome duplication, recombination, and cell division, to create four new haploid gamete cells. One possible advantage of sexual reproduction is that it results in more genetic variability, providing the opportunity for adaptation through natural selection. Another advantage is efficient recombinational repair of DNA damages during meiosis. Thus, karyogamy is the key step in bringing together a variety of genetic material in order to ensure recombination in meiosis.
The Amoebozoa is a large group of mostly single-celled species that have recently been determined to have the machinery for karyogamy and meiosis. Since the Amoebozoa branched off early from the eukaryotic family tree, this finding suggests that karyogamy and meiosis were present early in eukaryotic evolution.
Cellular mechanisms
Pronuclear migration
The ultimate goal of karyogamy is fusion of the two haploid nuclei. The first step in this process is the movement of the two pronuclei toward each other, which occurs directly after plasmogamy. Each pronucleus has a spindle pole body that is embedded in the nuclear envelope and serves as an attachment point for microtubules. Microtubules, an important fiber-like component of the cytoskeleton, emerge at the spindle pole body. The attachment point to the spindle pole body marks the minus end, and the plus end extends into the cytoplasm. The plus end has normal roles in mitotic division, but during nuclear congression, the plus ends are redirected. The microtubule plus ends attach to the opposite pronucleus, resulting in the pulling of the two pronuclei toward each other.
Microtubule movement is mediated by a family of motor proteins known as kinesins, such as Kar3 in yeast. Accessory proteins, such as Spc72 in yeast, act as a glue, connecting the motor protein, spindle pole body and microtubule in a structure known as the half-bridge. Other proteins, such as Kar9 and Bim1 in yeast, attach to the plus end of the microtubules. They are activated by pheromone signals to attach to the shmoo tip. A shmoo is a projection of the cellular membrane which is the site of initial cell fusion in plasmogamy. After plasmogamy, the microtubule plus ends continue to grow towards the opposite pronucleus. It is thought that the growing plus end of the microtubule attaches directly to the motor protein of the opposite pronucleus, triggering a reorganization of the proteins at the half-bridge. The force necessary for migration occurs directly in response to this interaction.
Two models of nuclear congression have been proposed: the sliding cross-bridge, and the plus end model. In the sliding cross-bridge model, the microtubules run antiparallel to each other for the entire distance between the two pronuclei, forming cross-links to each other, and each attaching to the opposite nucleus at the plus end. This is the favored model. The alternative model proposes that the plus ends contact each other midway between the two pronuclei and only overlap slightly. In either model, it is believed that microtubule shortening occurs at the plus end and requires Kar3p (in yeast), a member of a family of kinesin-like proteins.
Microtubule organization in the cytoskeleton has been shown to be essential for proper nuclear congression during karyogamy. Defective microtubule organization causes total failure of karyogamy, but does not totally interrupt meiosis and spore production in yeast. The failure occurs because the process of nuclear congression cannot occur without functional microtubules. Thus, the pronuclei do not approach close enough to each other to fuse together, and their genetic material remains separated.
Pronuclear fusion (karyogamy)
Merging of the nuclear envelopes of the pronuclei occurs in three steps: fusion of the outer membrane, fusion of the inner membrane, and fusion of the spindle pole bodies. In yeast, several members of the Kar family of proteins, as well as a protamine, are required for the fusion of nuclear membranes. The protamine Prm3 is located on the outer surface of each nuclear membrane, and is required for the fusion of the outer membrane. The exact mechanism is not known. Kar5, a kinesin-like protein, is necessary to expand the distance between the outer and inner membranes in a phenomenon known as bridge expansion. Kar8 and Kar2 are thought to be necessary for the fusing of the inner membranes.
As described above, the reorganization of accessory and motor proteins during pronuclear migration also serves to orient the spindle pole bodies in the correct direction for efficient nuclear congression. Nuclear congression can still take place without this pre-orientation of spindle pole bodies, but it is slower. Ultimately the two pronuclei combine the contents of their nucleoplasms and form a single envelope around the result.
Role in somatic diploids
Although fungi are normally haploid, diploid cells can arise by two mechanisms. The first is a failure of the mitotic spindle during regular cell division, and does not involve karyogamy. The resulting cell can only be genetically homozygous since it is produced from one haploid cell. The second mechanism, involving karyogamy of somatic cells, can produce heterozygous diploids if the two nuclei differ in genetic information. The formation of somatic diploids is generally rare, and is thought to occur because of a mutation in the karyogamy repressor gene (KR).
There are, however, a few fungi that exist mostly in the diploid state. One example is Candida albicans, a fungus that lives in the gastrointestinal tracts of many warm blooded animals, including humans. Although usually innocuous, C. albicans can turn pathogenic and is a particular problem in immunosuppressed patients. Unlike with most other fungi, diploid cells of different mating types fuse to create tetraploid cells which subsequently return to the diploid state by losing chromosomes.
Similarities to and differences from mammalian fertilization
Mammals, including humans, also combine genetic material from two sources - father and mother - in fertilization. This process is similar to karyogamy. As with karyogamy, microtubules play an important part in fertilization and are necessary for the joining of the sperm and egg (oocyte) DNA. Drugs such as griseofulvin that interfere with microtubules prevent the fusion of the sperm and egg pronuclei. The gene KAR2, which plays a large role in karyogamy, has a mammalian analog called BiP/GRP78. In both cases, genetic material is combined to create a diploid cell that has greater genetic diversity than either original source. Instead of fusing in the same way as lower eukaryotes do in karyogamy, the sperm nucleus vesiculates and its DNA decondenses. The sperm centriole acts as a microtubule organizing center and forms an aster which extends throughout the egg until contacting the egg's nucleus. The two pronuclei migrate toward each other and then fuse to form a diploid cell.
See also
Sexual reproduction
Polyploid
Fungi
References
Cell biology | Karyogamy | Biology | 2,479 |
18,119,717 | https://en.wikipedia.org/wiki/Web%20typography | Web typography, like typography generally, is the design of pages: their layout and typeface choices. Unlike traditional print-based typography (where the page is fixed once typeset), pages intended for display on the World Wide Web have additional technical challenges and, given the Web's ability to change the presentation dynamically, additional opportunities. Early web page designs were very simple due to technology limitations; modern designs use Cascading Style Sheets (CSS), JavaScript and other techniques to deliver the typographer's and the client's vision.
When HTML was first created, typefaces and styles were controlled exclusively by the settings of each web browser. There was no mechanism for individual Web pages to control font display until Netscape introduced the font element in 1995, which was then standardized in the HTML 3.2 specification. However, the computer font specified by the font element had to be installed on the user's computer or a fallback font, such as a browser's default sans-serif or monospace font, would be used. The first CSS specification was published in 1996 and provided the same capabilities.
The CSS2 specification was released in 1998 and attempted to improve the font selection process by adding font matching, synthesis and download. These techniques did not gain much use, and were removed in the CSS2.1 specification. However, Internet Explorer added support for the font downloading feature in version 4.0, released in 1997. Font downloading was later included in the CSS3 fonts module, and has since been implemented in Safari 3.1, Opera 10 and Mozilla Firefox 3.5. This has subsequently increased interest in Web typography, as well as the use of font downloading.
CSS1
In the first CSS specification, authors specified font characteristics via a series of properties:
All fonts were identified solely by name. Beyond the properties mentioned above, designers had no way to style fonts, and no mechanism existed to select fonts not present on the client system.
Web-safe fonts
Web-safe fonts are computer fonts that may reasonably be expected to be present on a wide range of computer systems, and used by Web content authors to increase the likelihood that content displays in their chosen font. If a visitor to a Web site does not have the specified font, their browser tries to select a similar alternative, based on the author-specified fallback fonts and generic families or it uses font substitution defined in the visitor's operating system.
Microsoft's Core fonts for the Web
To ensure that all Web users had a basic set of fonts, Microsoft started the Core fonts for the Web initiative in 1996 (terminated in 2002). Released fonts include Arial, Courier New, Times New Roman, Comic Sans, Impact, Georgia, Trebuchet, Webdings and Verdana—under an EULA that made them freely distributable but also limited some rights to their use. Their high penetration rate has made them a staple for Web designers. However, most Linux distributions don't include these fonts by default.
CSS2
CSS2 attempted to increase the tools available to Web developers by adding font synthesis, improved font matching and the ability to download remote fonts.
Some CSS2 font properties were removed from CSS2.1 and later included in CSS3.
Fallback fonts
The CSS specification allows for multiple fonts to be listed as fallback fonts. In CSS, the font-family property accepts a list of comma-separated font faces to use, like so:
font-family: "Nimbus Sans L", Helvetica, Arial, sans-serif;
The first font specified is the preferred font. If this font is not available, the Web browser attempts to use the next font in the list. If none of the fonts specified are found, the browser displays its default font. This same process also happens on a per-character basis if the browser tries to display a character not present in the specified font.
Generic font families
To give Web designers some control over the appearance of fonts on their Web pages, even when the specified fonts are not available, the CSS specification allows the use of several generic font families. These families are designed to split fonts into several categories based on their general appearance. They are commonly specified as the last in a series of fallback fonts, as a last resort in the event that none of the fonts specified by the author are available. For several years, there were five generic families:
Sans-serif
Fonts that do not have decorative markings, or serifs, on their letters. These fonts are often considered easier to read on screens.
Serif
Fonts that have decorative markings, or serifs, present on their characters. These fonts are traditionally used in printed books.
Monospace
Fonts in which all characters are equally wide.
Cursive
Fonts that resemble cursive writing. These fonts may have a decorative appearance, but they can be difficult to read at small sizes, so they are generally used sparingly.
Fantasy
Fonts that may contain symbols or other decorative properties, but still represent the specified character.
CSS fonts working draft 4 with lesser browser support
The CSS Working Group of W3C proposes that systems specify a default font using tags; these are not yet widely supported.
Default fonts on a given system: the purpose of this option is to allow web content to integrate with the look and feel of the native OS.
Default fonts on a given system in a serif style
Default fonts on a given system in a sans-serif style
Default fonts on a given system in a monospace style
Default fonts on a given system in a rounded style
Fonts using emoji
Fonts for complex mathematical formula and expressions.
Chinese typefaces that are between serif Song and cursive Kai forms. This style is often used for government documents.
Web fonts
History
A technique to refer to and automatically download remote fonts was first specified in the CSS2 specification, which introduced the @font-face construct. At the time, fetching font files from the web was controversial because fonts meant to be used only for certain web pages could also be downloaded and installed in breach of the font license.
Microsoft first added support for downloadable EOT fonts in Internet Explorer 4 in 1997. Authors had to use the proprietary WEFT tool to create a subsetted font file for each page. EOT showed that webfonts could work and the format saw some use in writing systems not supported by common operating systems. However, the format never gained widespread acceptance and was ultimately rejected by W3C.
In 2006, Håkon Wium Lie started a campaign against using EOT and in favour of having web browsers support commonly used font formats. Support for the commonly used TrueType and OpenType font formats has since been implemented in Safari 3.1, Opera 10, Mozilla Firefox 3.5 and Internet Explorer 9.
In 2010, the WOFF compression method for TrueType and OpenType fonts was submitted to W3C by the Mozilla Foundation, Opera Software and Microsoft, and browsers have since added support.
Google Fonts was launched in 2010 to serve webfonts under open-source licenses. By 2016, more than 800 webfont families were available.
Webfonts have become an important tool for web designers and as of 2016 a majority of sites use webfonts.
File formats
By using a specific CSS @font-face embedding technique it is possible to embed fonts such that they work with IE4+, Firefox 3.5+, Safari 3.1+, Opera 10+ and Chrome 4.0+. This allows the vast majority of Web users to access this functionality. Some commercial foundries object to the redistribution of their fonts. For example, Hoefler & Frere-Jones says that, while they "...enthusiastically [support] the emergence of a more expressive Web in which designers can safely and reliably use high-quality fonts online," the current delivery of fonts using @font-face is considered "illegal distribution" by the foundry and is not permitted. Instead, Hoefler & Co. offer a proprietary font delivery system rooted in the cloud. Many other commercial type foundries address the redistribution of their fonts by offering a specific license, known as a web font license, which permits the use of the font software to display content on the web, a use normally prohibited by basic desktop licenses. Naturally this does not interfere with fonts and foundries under free licences.
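As an illustrative sketch (the family name and file paths below are placeholders, not any foundry's actual files), a basic @font-face rule declares a downloadable font, and an ordinary font-family declaration then uses it ahead of fallback fonts:

@font-face {
  font-family: "ExampleWebFont"; /* placeholder family name */
  src: url("/fonts/examplewebfont.woff") format("woff"),
       url("/fonts/examplewebfont.ttf") format("truetype");
}
body { font-family: "ExampleWebFont", Georgia, serif; }

Whether serving such files is permitted depends on the font's license, as discussed above.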
TrueDoc
TrueDoc, while not specifically a webfont specification, was the first standard for embedding fonts. It was developed by the type foundry Bitstream in 1994, and became natively supported in Netscape Navigator 4 in 1996. Due to open-source license restrictions, with Netscape unable to release Bitstream's source code, native support for the technology ended when Netscape Navigator 6 was released. An ActiveX plugin was available to add support for TrueDoc to Internet Explorer, but the technology had to compete against Microsoft's Embedded OpenType fonts, which had been natively supported in Internet Explorer since version 4.0. Another impediment was the lack of an open-source or free tool to create webfonts in the TrueDoc format, whereas Microsoft made available a free Web Embedding Fonts Tool to create webfonts in its format.
Embedded OpenType
Internet Explorer has supported font embedding through the proprietary Embedded OpenType standard since version 4.0. It uses digital rights management techniques to help prevent fonts from being copied and used without a license. A simplified subset of EOT has been formalized under the name CWT (Compatibility Web Type, formerly EOT-Lite).
Scalable Vector Graphics
Web typography applies to SVG in two ways:
All versions of the SVG 1.1 specification, including the SVGT subset, define a font module allowing the creation of fonts within an SVG document. Safari introduced support for many of these properties in version 3. Opera added preliminary support in version 8.0, with support for more properties in 9.0.
The SVG specification lets CSS apply to SVG documents in a similar manner to HTML documents, and the @font-face rule can be applied to text in SVG documents. Opera added support for this in version 10, and WebKit since version 325 also supports this method using SVG fonts only.
Scalable Vector Graphics Fonts
SVG fonts were a W3C standard for fonts described using SVG graphics that later became a subset of OpenType fonts. The format allowed multicolor or animated fonts. It was first a subset of the SVG 1.1 specification but has been deprecated in the SVG 2.0 specification. SVG fonts as an independent format are supported by most browsers apart from IE and Firefox, and are deprecated in Chrome (and Chromium). The format is now generally deprecated; the standard that most browser vendors agreed on is the SVG font subset included in OpenType (and in the WOFF superset, see below), called SVG OpenType fonts. Firefox has supported SVG OpenType since Firefox 26.
TrueType/OpenType
Linking to industry-standard TrueType (TTF) and OpenType (TTF/OTF) fonts is supported by Mozilla Firefox 3.5+, Opera 10+, Safari 3.1+, and Google Chrome 4.0+. Internet Explorer 9+ supports only those fonts with embedding permissions set to installable.
Web Open Font Format
The Web Open Font Format (WOFF) is essentially OpenType or TrueType with compression and additional metadata. WOFF is supported by Mozilla Firefox 3.6+, Google Chrome 5+, Opera Presto, and Internet Explorer 9 (since March 14, 2011). Support is available on Mac OS X Lion's Safari from release 5.1.
Unicode fonts
A Unicode font is a computer font that maps glyphs to code points defined in the Unicode Standard. The term has become redundant since the vast majority of modern computer fonts use Unicode mappings, even those fonts which only include glyphs for a single writing system, or even only support the basic Latin alphabet. Fonts which support a wide range of Unicode scripts and Unicode symbols are sometimes referred to as "pan-Unicode fonts", although as the maximum number of glyphs that can be defined in a TrueType font is restricted to 65,535 (glyph indices are 16-bit values), it is not possible for a single font to provide individual glyphs for all defined Unicode characters.
Only two fonts available by default on the Windows platform, Microsoft Sans Serif and Lucida Sans Unicode, provide a wide Unicode character repertoire.
On free and open-source software platforms such as Linux, GNU Unifont and GNU FreeFont provide a wide range of characters. On ChromeOS, Google's Noto fonts support (or are planned to support) all the scripts encoded in the Unicode standard.
Alternatives
A common hurdle in Web design is the design of mockups that include fonts that are not Web-safe. There are a number of solutions for situations like this. One common solution is to replace the text with a similar Web-safe font or use a series of similar-looking fallback fonts.
Another technique is image replacement. This practice involves overlaying text with an image containing the same text written in the desired font. This is good for aesthetic purposes, but prevents text selection, increases bandwidth use, is bad for search engine optimization, and makes the text inaccessible for users with disabilities.
In the past, Flash-based solutions such as sIFR were used. This is similar to image replacement techniques, though the text is selectable and rendered as a vector. However, this method requires the presence of a proprietary plugin on a client's system.
Another solution is using JavaScript to replace the text with VML (for Internet Explorer) or SVG (for all other browsers).
See also
Scalable Inman Flash Replacement
Notes
References
External links
W3C CSS Fonts Specification
List of RFCs as mentioned in WOFF (draft of 2009-10-23):
ZLIB Compressed Data Format Specification
Key words for use in RFCs to Indicate Requirement Levels
Matching of Language Tags
Digital typography
Web design | Web typography | Engineering | 3,027 |
24,508,920 | https://en.wikipedia.org/wiki/Gymnopilus%20josserandii | Gymnopilus josserandii is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus josserandii at Index Fungorum
josserandii
Fungi of North America
Fungus species | Gymnopilus josserandii | Biology | 59 |
3,189,819 | https://en.wikipedia.org/wiki/History%20of%20candle%20making | Candle making was developed independently in a number of countries around the world.
Candles were primarily made from tallow and beeswax in Europe from the Roman period until the modern era; spermaceti (from sperm whales) was used in the 18th and 19th centuries, and purified animal fats (stearin) and paraffin wax from the 19th century onward. In China, textual evidence suggests that candles may have been made from whale fat in the Qin dynasty (221–206 BCE). Chinese candles may be made from beeswax, stillingia tallow from the Chinese tallow tree, or Chinese wax derived from insects, while the Japanese may use Japan wax from the Japanese wax tree. In India, wax from boiling cinnamon was used for temple candles.
In Europe, a number of techniques were used to make candles in the early periods. These included dipping or drawing a wick through molten wax or tallow, shaping a candle by hand by rolling soft wax around a wick, or pouring wax or tallow over the wick. Moulds were used later, and in the 19th century large-scale industrial manufacturing techniques were introduced for the mass production of candles. Candle use declined with the arrival of other methods of lighting such as electric light, although candles are still being made.
Antiquity
Before candles were invented, ancient people used open fire, torches, splinters of resinous wood, and lamps to provide light at night. Primitive oil lamps in which a lit wick rested in a pool of oil or fat were used from the Paleolithic period, and pottery and stone lamps from the Neolithic period have been found. Candles may have been produced after the early Bronze Age, but it is unclear when and where candles were first used. Objects that could possibly be candle holders have been found in Babylonian and middle Minoan cultures, as well as in the tomb of Tutankhamun, and a possible depiction of a lit candle can be seen in the tomb of Amenemhat. However, the candles used in the early periods may not resemble current forms and were likely made from plant materials dipped in animal fat. Ancient Greeks offered moon-shaped honey cakes, said to be lit by little torches or candles, to the moon goddess Artemis, and this has been proposed as the origin of the tradition of putting candles on birthday cakes. However, cakes with any resemblance to modern Western birthday cakes only arose by around 1600 in Europe. Ancient Greece itself used torches and oil lamps, and may have adopted candle use from Rome only in a later period.
It is often believed that the use of wicked candles developed in Italy in the Etruscan period; a picture of a candlestick exists in an Etruscan tomb at Orvieto, and the earliest Etruscan candlestick may date from the 7th century BC. Candles may have evolved from tapers (long thin candles) with wicks of oakum and other plant fibre soaked in fat, pitch or oil. Candles of antiquity were made from various forms of natural fat, tallow, and wax, and Romans made true dipped candles from tallow and beeswax. Beeswax candles were expensive and their use was limited to the wealthy. Oil lamps were the most widely used source of illumination in Roman Italy, but candles were common and regularly given as gifts during Saturnalia.
In Christian churches, candles gained significance in their decorative, symbolic and ceremonial uses. Wax candles, or candelae cereae recorded at the end of the 3rd century, were documented as Easter candles in Spain and Italy in the 4th century, the Christian festival Candlemas was named after the candle, and Pope Sergius I instituted the procession of lighted candles. Papal bulls decreed that tallow be excluded from use in altar candles, and high beeswax content was necessary for the candles of the high altar.
Beeswax was a byproduct of honey collection, and it was collected after honey had been extracted, and purified by boiling it in seawater a few times. The early candles were produced using a number of methods: dipping or drawing the wick in molten fat or wax repeatedly until it reached the desired size, building the candle by hand by rolling soft wax around a wick, or pouring fat or wax onto a wick to build up the candle. The use of moulds was a 14th-century development.
In China, the mausoleum of Qin Shi Huang (259–210 BC), first emperor of China, was said by historian Sima Qian to contain candles made from whale fat. The word zhú was used for "candle" during the Warring States period (403–221 BCE); some excavated bronzewares from that era feature a pricket thought to hold a candle.
The Han dynasty (202 BC – 220 AD) Jizhupian dictionary of about 40 BC hints at candles being made of beeswax, while the Book of Jin (compiled in 648) covering the Jin dynasty (266–420) makes a solid reference to the beeswax candle in regard to its use by the statesman Zhou Yi (d. 322). An excavated earthenware bowl from the 4th century AD, located at the Luoyang Museum, has a hollowed socket where traces of wax were found.
There is a fish called the eulachon or "candlefish", a type of smelt which is found from Oregon to Alaska. Native Americans from this region used oil from this fish for illumination. A simple candle could be made by putting the dried fish on a forked stick and then lighting it.
Middle Ages
After the collapse of the Roman empire, trading disruptions made olive oil, the most common fuel for oil lamps, unavailable throughout much of Europe. As a consequence, candles became more widely used.
Candles were commonplace throughout Europe in the Middle Ages. Candle makers (known as chandlers) made candles from fats saved from the kitchen or sold their own candles from within their shops. The trade of the chandler is also recorded by the more picturesque name of "smeremongere", since they oversaw the manufacture of sauces, vinegar, soap and cheese. The popularity of candles is shown by their use in Candlemas and in Saint Lucy festivities.
Tallow, fat from cows or sheep, became the standard material used in candles in Europe. The unpleasant smell of tallow candles is due to the glycerine they contain. The smell of the manufacturing process was so unpleasant that it was banned by ordinance in several European cities. Beeswax was the preferred substance for the production of candles without the unpleasant odour, but its use remained restricted to the rich, and for churches and royal events, due to their great expense.
In England and France, candle making had become a guild craft by the 13th century, and a French guild was documented as early as 1061. The Tallow Chandlers Company of London was formed in about 1300 in London, and in 1456 was granted a coat of arms. The Wax Chandlers Company existed prior to 1330 and acquired its charter in 1484. By 1415, tallow candles were used in street lighting. The first candle mould comes from the 15th century in Paris. Sieur de Brez of Paris introduced the technique of using a mould to England, although candles had a tendency to stick to the mould and break when it was being removed from the mould. Real improvement for the efficient production of candles with mould was only achieved in the 19th century.
In the Middle East, during the Abbasid and Fatimid Caliphates, beeswax was the dominant material used for candle making. Beeswax was often imported from long distances; for example, candle makers from Egypt used beeswax from Tunis. As in Europe, these candles were fairly expensive, and most commoners used oil lamps instead. Elites, though, could afford to spend large sums on expensive candles. For example, the Abbasid caliph al-Mutawakkil spent 1.2 million silver dirhams annually on candles for his royal palaces.
In early modern Syria, candles were in high demand by all socioeconomic classes because they were customarily lit during marriage ceremonies. There were candle makers' guilds in the Safavid capital of Isfahan during the 1500s and 1600s. However, candle makers had a relatively low social position in Safavid Iran, comparable to barbers, bathhouse workers, fortune tellers, bricklayers, and porters.
In China, beeswax candles were common in the Tang and Sung dynasties. Stillingia tallow, a plant wax from the Chinese tallow tree, could be used together with beeswax to make candles. Stillingia tallow has a low melting point, and it therefore may be encased in the harder beeswax or Chinese wax. The Chinese may have started cultivating the tallow tree in the Yangtze Delta region in the 7th century. Wax from the plant was commonly used to make Buddhist ceremonial candles.
Another type of wax, Chinese wax, which is derived from insects and resembles the best spermaceti, may also be used. The production of Chinese wax was mastered by the Yuan dynasty. One type of Chinese candle has a bamboo rod as its core, onto which paper is wound spirally with rush pith as a wick; this is then repeatedly dipped in melted wax or fats and cooled until the desired size is reached. The candles may be coloured and sometimes decorated with characters.
The Japanese have candle-making techniques similar to those of the Chinese, but they also developed a method of moulding candles using paper tubes. They may use Japan wax from the Japanese wax tree for making candles.
Wax from boiling cinnamon was used for temple candles in India. Yak butter was used for candles in Tibet.
Modern era
With the growth of the whaling industry in the 18th century, spermaceti, an oil that comes from a cavity in the head of the sperm whale, became a widely used substance for candle making. The wax was made by crystallizing the oil, and was the first candle substance to become available in mass quantities. Like beeswax, spermaceti wax did not create a repugnant odor when burned, and produced a significantly brighter light. It was also harder than either tallow or beeswax, so it would not soften or bend in the summer heat. The first "standard candles" were made from spermaceti wax.
By 1800, an even cheaper alternative was discovered. Colza oil, derived from Brassica campestris, and a similar oil derived from rapeseed, yielded candles that produce clear, smokeless flames. The French chemists Michel Eugène Chevreul (1786–1889) and Joseph-Louis Gay-Lussac (1778–1850) patented stearin in 1825. Like tallow, this was derived from animals, but had no glycerine content.
Industrialization
The manufacture of candles became an industrialised mass market in the mid 19th century. In 1834, Joseph Morgan, a pewterer from Manchester, England, patented a machine that revolutionised candle making. It allowed continuous production of molded candles, using a cylinder with a moveable piston to eject candles as they solidified. This method produced about 1,500 candles per hour (according to his patent, "with three men and five boys [the machine] will manufacture two tons of candle in twelve hours"). Poorer people could now easily afford candles.
At this time, candlemakers also began to fashion wicks out of tightly braided (rather than simply twisted) strands of cotton. This technique makes wicks curl over as they burn, maintaining the height of the wick and therefore the flame. Because much of the excess wick is incinerated, these are referred to as "self-trimming" or "self-consuming" wicks.
In 1848 James Young established the world's first oil refinery at the Alfreton Ironworks in Riddings, Derbyshire. Two paraffin wax candles were made from the naturally occurring paraffin wax present in the oil, and these candles illuminated a lecture at the Royal Institution by Lyon Playfair. In the mid-1850s, James Young succeeded in distilling paraffin wax from coal and oil shales at Bathgate in West Lothian, and developed a commercially viable method of production. The paraffin wax was processed by distilling residue left after crude petroleum was refined.
Paraffin could be used to make inexpensive candles of high quality. It was a bluish-white wax, burned cleanly, and left no unpleasant odor, unlike tallow candles. A drawback to the substance was that early coal- and petroleum-derived paraffin waxes had a very low melting point. The introduction of stearin, discovered by Michel Eugène Chevreul, solved this problem. Stearin is hard and durable, with a convenient melting range. By the end of the 19th century, most candles being manufactured consisted of paraffin and stearic acid.
By the late 19th century, Price's Candles, based in London, was the largest candle manufacturer in the world. The company traced its origins back to 1829, when William Wilson invested in a coconut plantation in Sri Lanka. His aim was to make candles from coconut oil. Later he tried palm oil from palm trees. An accidental discovery swept all his ambitions aside when his son George Wilson, a talented chemist, distilled the first petroleum oil in 1854. George also pioneered the implementation of the technique of steam distillation, and was thus able to manufacture candles from a wide range of raw materials, including skin fat, bone fat, fish oil and industrial greases.
In America, Syracuse, New York developed into a global center for candle manufacturing from the mid-nineteenth century. Manufacturers included Will & Baumer, Mack Miller, Muench Kruezer, and Cathedral Candle Company.
Decline of the candle industry
Despite advances in candle making, the candle industry declined rapidly upon the introduction of superior methods of lighting, including kerosene lamps, and from 1879 the incandescent light bulb.
In the 20th century, candles came to be marketed as more of a decorative item. Candles, however, retain their unique symbolic significance, for instance as votive offerings. Candles became available in a broad array of sizes, shapes and colors, and consumer interest in scented candles began to grow. During the 1990s, new types of candle waxes were being developed due to an unusually high demand for candles. Paraffin, a by-product of oil, was quickly replaced by new waxes and wax blends owing to rising costs.
Candle manufacturers looked at waxes such as soy, palm and flax-seed oil, often blending them with paraffin to achieve the performance of paraffin with the price benefits of the other waxes. The creation of unique wax blends, now requiring different fragrance chemistries and loads, encouraged candle wick manufacturers to innovate to meet performance needs with the often tougher-to-burn formulations.
Gallery
References
Bibliography
External links
History and Psychology of Candles
Candles
Candle making
Candle making
Articles containing video clips | History of candle making | Technology | 3,080 |
46,770 | https://en.wikipedia.org/wiki/Fixed-wing%20aircraft | A fixed-wing aircraft is a heavier-than-air aircraft, such as an airplane, which is capable of flight using aerodynamic lift. Fixed-wing aircraft are distinct from rotary-wing aircraft (in which a rotor mounted on a spinning shaft generates lift), and ornithopters (in which the wings oscillate to generate lift). The wings of a fixed-wing aircraft are not necessarily rigid; kites, hang gliders, variable-sweep wing aircraft, and airplanes that use wing morphing are all classified as fixed wing.
Gliding fixed-wing aircraft, including free-flying gliders and tethered kites, can use moving air to gain altitude. Powered fixed-wing aircraft (airplanes) that gain forward thrust from an engine include powered paragliders, powered hang gliders and ground effect vehicles. Most fixed-wing aircraft are operated by a pilot, but some are unmanned and controlled either remotely or autonomously.
History
Kites
Kites were used approximately 2,800 years ago in China, where kite building materials were available. Leaf kites may have been flown earlier in what is now Sulawesi, based on interpretations of cave paintings on nearby Muna Island. Paper kites were flying by at least 549 AD, when a paper kite was recorded as being used to carry a message for a rescue mission. Ancient and medieval Chinese sources report kites used for measuring distances, testing the wind, lifting men, signaling, and communication for military operations.
Kite stories were brought to Europe by Marco Polo towards the end of the 13th century, and kites were brought back by sailors from Japan and Malaysia in the 16th and 17th centuries. Although initially regarded as curiosities, by the 18th and 19th centuries kites were used for scientific research.
Gliders and powered devices
Around 400 BC in Greece, Archytas was reputed to have designed and built the first self-propelled flying device, shaped like a bird and propelled by a jet of what was probably steam, and said to have flown some distance. This machine may have been suspended during its flight.
One of the earliest attempts with gliders was by 11th-century monk Eilmer of Malmesbury, which failed. A 17th-century account states that 9th-century poet Abbas Ibn Firnas made a similar attempt, though no earlier sources record this event.
In 1799, Sir George Cayley laid out the concept of the modern airplane as a fixed-wing machine with systems for lift, propulsion, and control. Cayley was building and flying models of fixed-wing aircraft as early as 1803, and built a successful passenger-carrying glider in 1853. In 1856, Frenchman Jean-Marie Le Bris made the first powered flight by having his glider L'Albatros artificiel towed by a horse along a beach. In 1884, American John J. Montgomery made controlled flights in a glider as a part of a series of gliders he built between 1883 and 1886. Other aviators who made similar flights at that time were Otto Lilienthal, Percy Pilcher, and protégés of Octave Chanute.
In the 1890s, Lawrence Hargrave conducted research on wing structures and developed a box kite that lifted the weight of a man. His designs were widely adopted. He also developed a type of rotary aircraft engine, but did not create a powered fixed-wing aircraft.
Powered flight
Sir Hiram Maxim built a craft that weighed 3.5 tons, with a 110-foot (34-meter) wingspan powered by two 360-horsepower (270-kW) steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising. The test showed that it had enough lift to take off. The craft was uncontrollable, and Maxim abandoned work on it.
The Wright brothers' flights in 1903 with their Flyer I are recognized by the Fédération Aéronautique Internationale (FAI), the standard setting and record-keeping body for aeronautics, as "the first sustained and controlled heavier-than-air powered flight". By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial periods.
In 1906, Brazilian inventor Alberto Santos Dumont designed, built and piloted an aircraft that set the first world record recognized by the Aéro-Club de France by flying the 14 bis in less than 22 seconds. The flight was certified by the FAI.
The Bleriot VIII design of 1908 was an early aircraft design that had the modern monoplane tractor configuration. It had movable tail surfaces controlling both yaw and pitch, a form of roll control supplied either by wing warping or by ailerons and controlled by its pilot with a joystick and rudder bar. It was an important predecessor of his later Bleriot XI Channel-crossing aircraft of the summer of 1909.
World War I
World War I initiated the use of aircraft as weapons and observation platforms. The earliest known aerial victory with a synchronized machine gun-armed fighter aircraft occurred in 1915 and was achieved by German Luftstreitkräfte Lieutenant Kurt Wintgens. Fighter aces appeared; the greatest (by number of air victories) was Manfred von Richthofen.
Alcock and Brown crossed the Atlantic non-stop for the first time in 1919. The first commercial flights traveled between the United States and Canada in 1919.
Interwar aviation; the "Golden Age"
The so-called Golden Age of Aviation occurred between the two World Wars, a period during which designers produced updated interpretations of earlier breakthroughs. Innovations included Hugo Junkers' all-metal airframes from 1915, leading to multi-engine aircraft with wingspans of 60 meters or more by the early 1930s; adoption of the mostly air-cooled radial engine as a practical aircraft power plant alongside V-12 liquid-cooled aviation engines; and longer and longer flights – as with the U.S. Navy's NC-4 transatlantic flight in 1919, followed weeks later by the first non-stop Atlantic crossing in a Vickers Vimy; culminating in May 1927 with Charles Lindbergh's solo trans-Atlantic flight in the Spirit of St. Louis, which spurred ever-longer flight attempts.
World War II
Airplanes had a presence in the major battles of World War II. They were an essential component of military strategies, such as the German Blitzkrieg or the American and Japanese aircraft carrier campaigns of the Pacific.
Military gliders were developed and used in several campaigns, but were limited by the high casualty rate encountered. The Focke-Achgelis Fa 330 Bachstelze (Wagtail) rotor kite of 1942 was notable for its use by German U-boats.
Before and during the war, British and German designers worked on jet engines. The first jet aircraft to fly, in 1939, was the German Heinkel He 178. In 1943, the first operational jet fighter, the Messerschmitt Me 262, went into service with the German Luftwaffe. Later in the war the British Gloster Meteor entered service, but never saw action – top air speeds for that era were set by the early July 1944 unofficial record flight of the German Me 163B V18 rocket fighter prototype.
Postwar
In October 1947, the Bell X-1 was the first aircraft to exceed the speed of sound, flown by Chuck Yeager.
In 1948–49, aircraft transported supplies during the Berlin Blockade. New aircraft types, such as the B-52, were produced during the Cold War.
The first jet airliner, the de Havilland Comet, was introduced in 1952, followed by the Soviet Tupolev Tu-104 in 1956. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 50 years, from 1958 to 2010. The Boeing 747 was the world's largest passenger aircraft from 1970 until it was surpassed by the Airbus A380 in 2005. The most successful aircraft is the Douglas DC-3 and its military version, the C-47, a medium sized twin engine passenger or transport aircraft that has been in service since 1936 and is still used throughout the world. Some of the hundreds of versions found other purposes, like the AC-47, a Vietnam War era gunship, which is still used in the Colombian Air Force.
Types
Airplane/aeroplane
An airplane (aeroplane or plane) is a powered fixed-wing aircraft propelled by thrust from a jet engine or propeller. Planes come in many sizes, shapes, and wing configurations. Uses include recreation, transportation of goods and people, military, and research.
Seaplane
A seaplane (hydroplane) is capable of taking off and landing (alighting) on water. Seaplanes that can also operate from dry land are a subclass called amphibian aircraft. Seaplanes and amphibians divide into two categories: float planes and flying boats.
A float plane is similar to a land-based airplane. The fuselage is not specialized. The wheels are replaced, or enclosed, by floats, allowing the craft to remain afloat for water take-offs and landings.
A flying boat is a seaplane with a watertight hull for the lower (ventral) areas of its fuselage. The fuselage lands and then rests directly on the water's surface, held afloat by the hull. It does not need additional floats for buoyancy, although small underwing floats or fuselage-mounted sponsons may be used to stabilize it. Large seaplanes are usually flying boats, embodying most classic amphibian aircraft designs.
Powered gliders
Many forms of glider may include a small power plant. These include:
Motor glider – a conventional glider or sailplane with an auxiliary power plant that may be used when in flight to increase performance.
Powered hang glider – a hang glider with a power plant added.
Powered parachute – a paraglider type of parachute with an integrated air frame, seat, undercarriage and power plant hung beneath.
Powered paraglider or paramotor – a paraglider with a power plant suspended behind the pilot.
Ground effect vehicle
A ground effect vehicle (GEV) flies close to the terrain, making use of the ground effect – the interaction between the wings and the surface. Some GEVs are able to fly higher out of ground effect (OGE) when required – these are classed as powered fixed-wing aircraft.
Glider
A glider is a heavier-than-air craft whose free flight does not require an engine. A sailplane is a fixed-wing glider designed for soaring – gaining height from updrafts of air in order to fly for long periods.
Gliders are mainly used for recreation but have found use for purposes such as aerodynamics research, warfare and spacecraft recovery.
Motor gliders are equipped with a limited propulsion system for takeoff, or to extend flight duration.
As is the case with planes, gliders come in diverse forms with varied wings, aerodynamic efficiency, pilot location, and controls.
Large gliders are most commonly borne aloft by a tow-plane or by a winch. Military gliders have been used in combat to deliver troops and equipment, while specialized gliders have been used in atmospheric and aerodynamic research. Rocket-powered aircraft and spaceplanes have made unpowered landings similar to a glider.
Gliders and sailplanes that are used for the sport of gliding have high aerodynamic efficiency. The highest lift-to-drag ratio is 70:1, though 50:1 is common. After take-off, further altitude can be gained through the skillful exploitation of rising air. Flights of thousands of kilometers at average speeds over 200 km/h have been achieved.
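As a rough, illustrative calculation (not taken from the source), the lift-to-drag ratio quoted above translates directly into still-air glide range per unit of height lost:

```latex
% Illustrative only: still-air glide range from the lift-to-drag (glide) ratio.
\text{range} \approx \frac{L}{D}\,\Delta h,
\qquad\text{e.g. } \frac{L}{D} = 50,\ \Delta h = 1000\,\text{m}
\;\Rightarrow\; \text{range} \approx 50{,}000\,\text{m} = 50\,\text{km}.
```

In practice the distance actually achieved also depends on wind and on any rising air encountered along the way.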
One small-scale example of a glider is the paper airplane. An ordinary sheet of paper can be folded into an aerodynamic shape fairly easily; its low mass relative to its surface area reduces the required lift for flight, allowing it to glide some distance.
Gliders and sailplanes share many design elements and aerodynamic principles with powered aircraft. For example, the Horten H.IV was a tailless flying wing glider, and the delta-winged Space Shuttle orbiter glided during its descent phase. Many gliders adopt similar control surfaces and instruments as airplanes.
Types
The main application of modern glider aircraft is sport and recreation.
Sailplane
Gliders were developed in the 1920s for recreational purposes. As pilots began to understand how to use rising air, sailplane gliders were developed with a high lift-to-drag ratio. These allowed the craft to glide to the next source of "lift", increasing their range. This gave rise to the popular sport of gliding.
Early gliders were built mainly of wood and metal, later replaced by composite materials incorporating glass, carbon or aramid fibers. To minimize drag, these types have a streamlined fuselage and long narrow wings incorporating a high aspect ratio. Single-seat and two-seat gliders are available.
Initially, training was done by short "hops" in primary gliders, which have no cockpit and minimal instruments. Since shortly after World War II, training is done in two-seat dual control gliders, but high-performance two-seaters can make long flights. Originally skids were used for landing, later replaced by wheels, often retractable. Gliders known as motor gliders are designed for unpowered flight, but can deploy piston, rotary, jet or electric engines. Gliders are classified by the FAI for competitions into glider competition classes mainly on the basis of wingspan and flaps.
A class of ultralight sailplanes, including some known as microlift gliders and some known as airchairs, has been defined by the FAI based on weight. They are light enough to be transported easily, and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some crash safety as the pilot can strap into an upright seat within a deform-able structure. Landing is usually on one or two wheels which distinguishes these craft from hang gliders. Most are built by individual designers and hobbyists.
Military gliders
Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by transport planes, e.g. C-47 Dakota, or by one-time bombers that had been relegated to secondary activities, e.g. Short Stirling. The advantages over paratroopers were that heavy equipment could be landed and that troops were quickly assembled rather than dispersed over a parachute drop zone. The gliders were treated as disposable, constructed from inexpensive materials such as wood, though a few were re-used. By the time of the Korean War, transport aircraft had become larger and more efficient so that even light tanks could be dropped by parachute, rendering military gliders obsolete.
Research gliders
Even after the development of powered aircraft, gliders continued to be used for aviation research. The NASA Paresev Rogallo flexible wing was developed to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible-wing airfoil for hang gliders.
Initial research into many types of fixed-wing craft, including flying wings and lifting bodies was also carried out using unpowered prototypes.
Hang glider
A hang glider is a glider aircraft in which the pilot hangs in a harness suspended from the airframe, and exercises control by shifting body weight in opposition to a control frame. Hang gliders typically have an aluminum-alloy or composite frame covered with a fabric wing. Pilots can soar for hours, gain thousands of meters of altitude in thermal updrafts, perform aerobatics, and glide cross-country for hundreds of kilometers.
Paraglider
A paraglider is a lightweight, free-flying, foot-launched glider with no rigid body. The pilot is suspended in a harness below a hollow fabric wing whose shape is formed by its suspension lines. Air entering vents in the front of the wing and the aerodynamic forces of the air flowing over the outside power the craft. Paragliding is most often a recreational activity.
Unmanned gliders
A paper plane is a toy aircraft (usually a glider) made out of paper or paperboard.
Model glider aircraft are models of aircraft using lightweight materials such as polystyrene and balsa wood. Designs range from simple glider aircraft to accurate scale models, some of which can be very large.
Glide bombs are bombs with aerodynamic surfaces to allow a gliding flight path rather than a ballistic one. This enables stand-off aircraft to attack a target from a distance.
Kite
A kite is a tethered aircraft held aloft by wind blowing over its wing(s). The wing deflects the airflow downwards, producing lift together with horizontal drag in the direction of the wind. The resultant force vector from the lift and drag components is opposed by the tension of the tether.
Kites are mostly flown for recreational purposes, but have many other uses. Early pioneers such as the Wright Brothers and J.W. Dunne sometimes flew an aircraft as a kite in order to confirm its flight characteristics, before adding an engine and flight controls.
Applications
Military
Kites have been used for signaling, for delivery of munitions, and for observation, by lifting an observer above the field of battle, and by using kite aerial photography.
Science and meteorology
Kites have been used for scientific purposes, such as Benjamin Franklin's famous experiment proving that lightning is electricity. Kites were precursors to traditional aircraft and were instrumental in the development of early flying craft. Alexander Graham Bell experimented with large man-lifting kites, as did the Wright brothers and Lawrence Hargrave. Kites had a historical role in lifting scientific instruments to measure atmospheric conditions for weather forecasting.
Radio aerials and light beacons
Kites can be used to carry radio antennas. This method was used for the reception station of the first transatlantic transmission by Marconi. Captive balloons may be more convenient for such experiments, because kite-carried antennas require strong wind, which may not always be available when heavy equipment and a ground conductor must be supported.
Kites can be used to carry light sources such as light sticks or battery-powered lights.
Kite traction
Kites can be used to pull people and vehicles downwind. Efficient foil-type kites such as power kites can also be used to sail upwind under the same principles as used by other sailing craft, provided that lateral forces on the ground or in the water are redirected as with the keels, center boards, wheels and ice blades of traditional sailing craft. In the last two decades, kite sailing sports have become popular, such as kite buggying, kite landboarding, kite boating and kite surfing. Snow kiting is also popular.
Kite sailing opens several possibilities not available in traditional sailing:
Wind speeds are greater at higher altitudes
Kites may be maneuvered dynamically, which dramatically increases the available force
Mechanical structures are not needed to withstand bending forces; vehicles/hulls can be light or eliminated.
Power generation
Research and development projects investigate kites for harnessing high altitude wind currents for electricity generation.
Cultural uses
Kite festivals are a popular form of entertainment throughout the world. They include local events, traditional festivals and major international festivals.
Designs
Bermuda kite
Bowed kite, e.g. Rokkaku
Cellular or box kite
Chapi-chapi
Delta kite
Foil, parafoil or bow kite
Malay kite (see also wau bulan)
Tetrahedral kite
Types
Expanded polystyrene kite
Fighter kite
Indoor kite
Inflatable single-line kite
Kytoon
Man-lifting kite
Rogallo parawing kite
Stunt (sport) kite
Water kite
Characteristics
Air frame
The structural element of a fixed-wing aircraft is the air frame. It varies according to the aircraft's type, purpose, and technology. Early airframes were made of wood with fabric wing surfaces. When engines became available for powered flight, their mounts were made of metal. As speeds increased, metal became more common, until by the end of World War II all-metal (and glass) aircraft were common. In modern times, composite materials have become more common.
Typical structural elements include:
One or more mostly horizontal wings, often with an airfoil cross-section. The wing deflects air downward as the aircraft moves forward, generating lifting force to support it in flight. The wing also provides stability in roll to keep the aircraft level in steady flight. Other roles are to hold the fuel and mount the engines.
A fuselage, typically a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically slippery. The fuselage joins the other parts of the air frame and contains the payload, and flight systems.
A vertical stabilizer or fin is a rigid surface mounted at the rear of the plane and typically protruding above it. The fin stabilizes the plane's yaw (turning left or right) and mounts the rudder, which controls its rotation about that axis.
A horizontal stabilizer, usually mounted at the tail near the vertical stabilizer. The horizontal stabilizer is used to stabilize the plane's pitch (tilt up or down) and mounts the elevators that provide pitch control.
Landing gear, a set of wheels, skids, or floats that support the plane while it is not in flight. On seaplanes, the bottom of the fuselage or floats (pontoons) support it while on the water. On some planes, the landing gear retracts during the flight to reduce drag.
Wings
The wings of a fixed-wing aircraft are static planes extending to either side of the aircraft. When the aircraft travels forwards, air flows over the wings that are shaped to create lift.
Structure
Kites and some lightweight gliders and airplanes have flexible wing surfaces that are stretched across a frame and made rigid by the lift forces exerted by the airflow over them. Larger aircraft have rigid wing surfaces.
Whether flexible or rigid, most wings have a strong frame to give them shape and to transfer lift from the wing surface to the rest of the aircraft. The main structural elements are one or more spars running from root to tip, and ribs running from the leading (front) to the trailing (rear) edge.
Early airplane engines had little power and light weight was critical. Also, early airfoil sections were thin and could not support a strong frame. Until the 1930s, most wings were too lightly built to be self-supporting, so external bracing struts and wires were added. As engine power increased, wings could be made heavy and strong enough that bracing was unnecessary. Such an unbraced wing is called a cantilever wing.
Configuration
The number and shape of wings vary widely. Some designs blend the wing with the fuselage, while left and right wings separated by the fuselage are more common.
Occasionally more wings have been used, such as the three-winged triplane from World War I. Four-winged quadruplanes and other multiplane designs have had little success.
Most planes are monoplanes, with one or two parallel wings. Biplanes and triplanes stack one wing above the other. Tandem wings place one wing behind the other, possibly joined at the tips. When the available engine power increased during the 1920s and 1930s and bracing was no longer needed, the unbraced or cantilever monoplane became the most common form.
The planform is the shape of the wing when seen from above or below. To be aerodynamically efficient, a wing should be straight with a long span but a short chord (a high aspect ratio). To be structurally efficient, and hence lightweight, the span should be as small as possible while still offering enough area to provide lift.
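For reference, the aspect ratio mentioned here has a standard definition in terms of span and wing area; this formula is supplied for clarity and is not quoted from the article:

```latex
% Standard definition of wing aspect ratio from span b and planform area S.
AR = \frac{b^{2}}{S}
\qquad\text{(for a rectangular wing of chord } c,\ S = b\,c\ \text{so } AR = b/c\text{)}.
```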
To travel at transonic speeds, variable geometry wings change orientation, angling backward to reduce drag from supersonic shock waves. The variable-sweep wing transforms between an efficient straight configuration for takeoff and landing, to a low-drag swept configuration for high-speed flight. Other forms of variable planform have been flown, but none have gone beyond the research stage. The swept wing is a straight wing swept backward or forwards.
The delta wing is a triangular shape that serves various purposes. As a flexible Rogallo wing, it allows a stable shape under aerodynamic forces, and is often used for kites and other ultralight craft. It is supersonic capable, combining high strength with low drag.
Wings are typically hollow, often also serving as fuel tanks. They may be fitted with flaps, which increase lift and drag for take-off and landing, and with ailerons, which act in opposition on either side to roll the aircraft and change direction.
Fuselage
The fuselage is typically long and thin, usually with tapered or rounded ends to make its shape aerodynamically smooth. Most fixed-wing aircraft have a single fuselage. Others may have multiple fuselages, or the fuselage may be fitted with booms on either side of the tail to allow the extreme rear of the fuselage to be utilized.
The fuselage typically carries the flight crew, passengers, cargo, and sometimes fuel and engine(s). Gliders typically omit fuel and engines, although some variations such as motor gliders and rocket gliders have them for temporary or optional use.
Pilots of manned fixed-wing aircraft control them from inside a cockpit within the fuselage, typically located at the front or top, equipped with controls, windows, and instruments, and in commercial aircraft separated from passengers by a secure door. In small aircraft, the passengers typically sit behind the pilot(s) in the cabin; occasionally, a passenger may sit beside or in front of the pilot. Larger passenger aircraft have a separate passenger cabin, or occasionally cabins, physically separated from the cockpit.
Aircraft often have two or more pilots, with one in overall command (the "pilot") and one or more "co-pilots". On larger aircraft a navigator may also be seated in the cockpit, and some military or specialized aircraft carry other flight crew members there as well.
Wings vs. bodies
Flying wing
A flying wing is a tailless aircraft that has no distinct fuselage; the crew, payload, and equipment are housed inside the main wing structure.
The flying wing configuration was studied extensively in the 1930s and 1940s, notably by Jack Northrop and Cheston L. Eshelman in the United States, and Alexander Lippisch and the Horten brothers in Germany. After the war, numerous experimental designs were based on the flying wing concept. General interest continued into the 1950s, but designs did not offer a great advantage in range and presented technical problems. The flying wing is most practical for designs in the slow-to-medium speed range, and drew continual interest as a tactical airlifter design.
Interest in flying wings reemerged in the 1980s due to their potentially low radar cross-sections. Stealth technology relies on shapes that reflect radar waves only in certain directions, thus making the aircraft harder to detect. This approach eventually led to the Northrop B-2 Spirit stealth bomber. In this application the flying wing's aerodynamic qualities were not the primary concern; computer-controlled fly-by-wire systems compensated for many of the aerodynamic drawbacks, enabling an efficient and stable long-range aircraft.
Blended wing body
Blended wing body aircraft have a flattened, airfoil-shaped body that produces most of the lift to keep the craft aloft, together with distinct wing structures that are smoothly blended into the body.
Blended wing bodied aircraft incorporate design features from both fuselage and flying wing designs. The purported advantages of the blended wing body approach are efficient, high-lift wings and a wide, airfoil-shaped body. This enables the entire craft to contribute to lift generation with potentially increased fuel economy.
Lifting body
A lifting body is a configuration in which the body produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic, and hypersonic flight, or, spacecraft re-entry. All of these flight regimes pose challenges for flight stability.
Lifting bodies were a major area of research in the 1960s and 1970s as a means to build small and lightweight manned spacecraft. The US built lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles. Interest waned as the US Air Force lost interest in the manned mission, and major development ended during the Space Shuttle design process when it became clear that highly shaped fuselages made it difficult to fit fuel tanks.
Empennage and foreplane
The classic airfoil section wing is unstable in flight. Flexible-wing planes often rely on an anchor line or the weight of a pilot hanging beneath to maintain the correct attitude. Some free-flying types use an adapted airfoil that is stable, or other mechanisms including electronic artificial stability.
In order to achieve trim, stability, and control, most fixed-wing types have an empennage comprising a vertical fin and rudder, which control yaw, and a horizontal tailplane and elevator, which control pitch. This arrangement is so common that it is known as the conventional layout. Sometimes two or more fins are spaced out along the tailplane.
Some types have a horizontal "canard" foreplane ahead of the main wing, instead of behind it. This foreplane may contribute to the trim, stability or control of the aircraft, or to several of these.
Aircraft controls
Kite control
Kites are controlled by one or more tethers.
Free-flying aircraft controls
Gliders and airplanes have sophisticated control systems, especially if they are piloted.
The controls allow the pilot to direct the aircraft in the air and on the ground. Typically these are:
The yoke or joystick controls rotation of the plane about the pitch and roll axes. A yoke resembles a steering wheel. The pilot can pitch the plane down by pushing on the yoke or joystick, and pitch the plane up by pulling on it. Rolling the plane is accomplished by turning the yoke in the direction of the desired roll, or by tilting the joystick in that direction.
Rudder pedals control rotation of the plane about the yaw axis. Two pedals pivot so that when one is pressed forward the other moves backward, and vice versa. The pilot presses on the right rudder pedal to make the plane yaw to the right, and pushes on the left pedal to make it yaw to the left. The rudder is used mainly to balance the plane in turns, or to compensate for winds or other effects that push the plane about the yaw axis.
On powered types, an engine stop control ("fuel cutoff", for example) and, usually, a throttle or thrust lever, along with other controls such as a fuel-mixture control (to compensate for air density changes with altitude).
Other common controls include:
Flap levers, which are used to control the deflection position of flaps on the wings.
Spoiler levers, which are used to control the position of spoilers on the wings, and to arm their automatic deployment in planes designed to deploy them upon landing. The spoilers reduce lift for landing.
Trim controls, which usually take the form of knobs or wheels and are used to adjust pitch, roll, or yaw trim. These are often connected to small airfoils on the trailing edge of the control surfaces and are called "trim tabs". Trim is used to reduce the control forces the pilot must hold to maintain a steady course.
On wheeled types, brakes are used to slow and stop the plane on the ground, and sometimes for turns on the ground.
A craft may have two pilot seats with dual controls, allowing two to take turns.
The control system may allow full or partial automation, such as an autopilot, a wing leveler, or a flight management system. An unmanned aircraft has no pilot and is controlled remotely or via gyroscopes, computers/sensors or other forms of autonomous control.
Cockpit instrumentation
On manned fixed-wing aircraft, instruments provide information to the pilots, including flight, engines, navigation, communications, and other aircraft systems that may be installed.
The six basic instruments, sometimes referred to as the six pack, are:
The airspeed indicator (ASI) shows the speed at which the plane is moving through the air.
The attitude indicator (AI), sometimes called the artificial horizon, indicates the exact orientation of the aircraft about its pitch and roll axes.
The altimeter indicates the altitude or height of the plane above mean sea level (AMSL).
The vertical speed indicator (VSI), or variometer, shows the rate at which the plane is climbing or descending.
The heading indicator (HI), sometimes called the directional gyro (DG), shows the magnetic compass orientation of the fuselage. The direction is affected by wind conditions and magnetic declination.
The turn coordinator (TC), or turn and bank indicator, helps the pilot to control the plane in a coordinated attitude while turning.
Other cockpit instruments include:
A two-way radio, to enable communications with other planes and with air traffic control.
A horizontal situation indicator (HSI) indicates the position and movement of the plane as seen from above with respect to the ground, including course/heading and other information.
Instruments showing the status of the plane's engines (operating speed, thrust, temperature, and other variables).
Combined display systems such as primary flight displays or navigation aids.
Information displays such as onboard weather radar displays.
A radio direction finder (RDF), to indicate the direction to one or more radio beacons, which can be used to determine the plane's position.
A satellite navigation (satnav) system, to provide an accurate position.
Some or all of these instruments may appear on a computer display and be operated by touch, in the manner of a phone touchscreen.
See also
Aircraft flight mechanics
Airliner
Aviation
Aviation and the environment
Aviation history
Fuel efficiency
List of altitude records reached by different aircraft types
Maneuvering speed
Rotorcraft
References
Notes
In 1903, when the Wright brothers used the word, "aeroplane" (a British English term that can also mean airplane in American English) meant wing, not the whole aircraft. See text of their patent. Patent 821,393 – Wright brothers' patent for "Flying Machine"
Citations
Bibliography
Blatner, David. The Flying Book: Everything You've Ever Wondered About Flying on Airplanes.
External links
The airplane centre
Airliners.net
Aerospaceweb.org
How Airplanes Work – Howstuffworks.com
Smithsonian National Air and Space Museum's How Things Fly website
"Hops and Flights – a Roll Call of Early Powered Take-offs" a 1959 Flight article
Aircraft configurations
Articles containing video clips | Fixed-wing aircraft | Engineering | 7,028 |
16,191,366 | https://en.wikipedia.org/wiki/Boletus%20barrowsii | Boletus barrowsii, also known in English as the white king bolete after its pale colored cap, is an edible and highly regarded fungus in the genus Boletus that inhabits western North America. Found under ponderosa pine and live oak in autumn, it was considered a color variant of the similarly edible B. edulis for many years.
Description
The cap is in diameter, initially convex in shape before flattening, with a smooth or slightly tomentose surface, and gray-white, white or buff color. The thick flesh is white and does not turn blue when bruised. The pores are initially whitish, later yellow. The spore print is olive brown, the spores are elliptical to spindle-shaped and 13–15 x 4–5 μm in dimensions. The stout stipe is white with a brown reticulated pattern, and may be high with an apical diameter of 2–6 cm (1–2 in). Like B. edulis, it is often found eaten by maggots. It has a strong odor while drying.
Similar species
In addition to B. edulis, the species could also be confused with the similarly pale-capped B. satanas, though the flesh of the latter stains blue when cut or bruised, and it has a reddish stem and pores. The latter species is poisonous when raw.
Taxonomy
The species was officially described by American mycologists Harry D. Thiers and Alexander H. Smith in 1976 from a specimen collected near Jacob Lake, Arizona, on August 21, 1971, by amateur mycologist Charles "Chuck" Barrows, who had studied the mushroom in New Mexico. It was previously held to be a white colour form of B. edulis. A 2010 molecular study found that B. barrowsii was sister to a lineage that gave rise to the species B. quercophilus of Costa Rica and B. nobilissimus of eastern North America.
Distribution and habitat
The white king bolete is ectomycorrhizal, found under ponderosa pine (Pinus ponderosa) inland, and coast live oak (Quercus agrifolia) closer to the west coast. Fruit bodies appear after rain, and will be more abundant if this occurs in early autumn rather than later in the year through to winter. It is abundant in the warmer parts of its range, namely Arizona and New Mexico, but also occurs in Colorado, west into California and north to British Columbia. It has been recorded from the San Marcos Foothills in Santa Barbara County.
Uses
The species is edible and highly regarded in New Mexico, Arizona, and Colorado, and was eaten for many years while assumed to be a form of B. edulis.
See also
List of Boletus species
List of North American boletes
References
Edible fungi
barrowsii
Fungi described in 1976
Fungi of North America
Taxa named by Harry Delbert Thiers
Taxa named by Alexander H. Smith
Fungus species | Boletus barrowsii | Biology | 605 |
94,620 | https://en.wikipedia.org/wiki/Saeculum | A is a length of time roughly equal to the potential lifetime of a person or, equivalently, the complete renewal of a human population.
Background
Originally it meant the time from the moment that something happened (for example the founding of a city) until the point in time that all people who had lived at the first moment had died. At that point a new saeculum would start. According to legend, the gods had allotted a certain number of saecula to every people or civilization; the Etruscans, for example, had been given ten saecula.
By the 2nd century BC, Roman historians were using the saeculum to periodize their chronicles and track wars. At the time of the reign of emperor Augustus, the Romans decided that a saeculum was 110 years. In 17 BC, Caesar Augustus organized Ludi saeculares ("saecular games") for the first time to celebrate the "fifth saeculum of Rome". Augustus aimed to link the saeculum with imperial authority.
Emperors such as Claudius and Septimius Severus celebrated the passing of saecula with games at irregular intervals. In 248, Philip the Arab combined Ludi saeculares with the 1,000th anniversary of the founding of Rome. The new millennium that Rome entered was called the saeculum novum, a term that received a metaphysical connotation in Christianity, referring to the worldly age (hence "secular").
Roman emperors legitimised their political authority by referring to the saeculum in various media, linked to a golden age of imperial glory. In response, Christian writers began to define the saeculum as referring to 'this present world', as opposed to the expectation of eternal life in the 'world to come'. This results in the modern sense of 'secular' as 'belonging to the world and its affairs'.
The English word secular, which as an adjective can also describe something occurring once in an age, is derived from the Latin saeculum. The descendants of Latin saeculum in the Romance languages generally mean "century" (i.e., 100 years): French siècle, Spanish siglo, Portuguese século, Italian secolo, etc.
See also
Aeon, comparable Greek concept
Century
Generation
In saecula saeculorum
New world order (politics)
Social cycle theory
Strauss–Howe generational theory
Saeculum obscurum
References
Units of time
Ageing
Latin words and phrases | Saeculum | Physics,Mathematics | 484 |
1,163,049 | https://en.wikipedia.org/wiki/Lumen%20%28unit%29 | The lumen (symbol: lm) is the unit of luminous flux, a measure of the perceived power of visible light emitted by a source, in the International System of Units (SI). Luminous flux differs from power (radiant flux), which encompasses all electromagnetic waves emitted, including non-visible ones such as thermal radiation (infrared). By contrast, luminous flux is weighted according to a model (a "luminosity function") of the human eye's sensitivity to various wavelengths; this weighting is standardized by the CIE and ISO.
The lumen is defined as equivalent to one candela-steradian (symbol cd·sr):
1 lm = 1 cd·sr.
A full sphere has a solid angle of 4π steradians (≈ 12.56637 sr), so an isotropic light source (that uniformly radiates in all directions) with a luminous intensity of one candela has a total luminous flux of
1 cd × 4π sr = 4π lm ≈ 12.57 lm.
One lux is one lumen per square metre.
Explanation
If a light source emits one candela of luminous intensity uniformly across a solid angle of one steradian, the total luminous flux emitted into that angle is one lumen (1 cd·1 sr = 1 lm).
Alternatively, an isotropic one-candela light-source emits a total luminous flux of exactly 4π lumens. If the source were partly covered by an ideal absorbing hemisphere, that system would radiate half as much luminous flux—only 2π lumens. The luminous intensity would still be one candela in those directions that are not obscured.
The lumen can be thought of casually as a measure of the total amount of visible light in some defined beam or angle, or emitted from some source. The number of candelas or lumens from a source also depends on its spectrum, via the nominal response of the human eye as represented in the luminosity function.
The difference between the units lumen and lux is that the lux takes into account the area over which the luminous flux is spread. A flux of 1,000 lumens, concentrated into an area of one square metre, lights up that square metre with an illuminance of 1,000 lux. The same 1,000 lumens, spread out over ten square metres, produces a dimmer illuminance of only 100 lux. In equation form: 1 lx = 1 lm/m².
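The relationship just described is a simple division of flux by area; the following minimal Python sketch (illustrative values only, taken from the example above) reproduces the two figures:

```python
def illuminance_lux(luminous_flux_lm: float, area_m2: float) -> float:
    """Illuminance in lux: luminous flux (lumens) divided by the illuminated area (square metres)."""
    return luminous_flux_lm / area_m2

# The two cases from the text: 1,000 lm over 1 m^2 and over 10 m^2.
print(illuminance_lux(1000, 1))   # 1000.0 lux
print(illuminance_lux(1000, 10))  # 100.0 lux
```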
A source radiating a power of one watt of light in the color for which the eye is most efficient (a wavelength of 555 nm, in the green region of the optical spectrum) has luminous flux of 683 lumens. So a lumen represents at least 1/683 watts of visible light power, depending on the spectral distribution.
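The spectral weighting referred to throughout this article is conventionally written as an integral of the spectral radiant flux against the luminosity function; the standard CIE photopic relation is shown here for context (the symbols Φv, Φe and V(λ) are not defined elsewhere in this article):

```latex
% Standard photopic relation between luminous flux (lm) and spectral radiant flux (W/nm).
\Phi_v \;=\; 683\ \frac{\text{lm}}{\text{W}} \int_{0}^{\infty} V(\lambda)\,
             \frac{d\Phi_e}{d\lambda}\, d\lambda ,
```

where V(λ) is the photopic luminosity function, normalized to 1 near 555 nm, so monochromatic light at that wavelength recovers the 683 lumens per watt quoted above.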
Lighting
Lamps used for lighting are commonly labelled with their light output in lumens; in many jurisdictions, this is required by law.
A 23 W spiral compact fluorescent lamp emits about 1,400–1,600 lm. Many compact fluorescent lamps and other alternative light sources are labelled as being equivalent to an incandescent bulb with a specific power. Below is a table that shows typical luminous flux for common incandescent bulbs and their equivalents.
The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt.
On 1 September 2010, European Union legislation came into force mandating that lighting equipment must be labelled primarily in terms of luminous flux (lm), instead of electric power (W). That change is a result of the EU's Eco-design Directive for Energy-using Products (EuP). For example, according to the European Union standard, an energy-efficient bulb that claims to be the equivalent of a 60 W tungsten bulb must have a minimum light output of 700-810 lm.
Projector output
ANSI lumens
The light output of projectors (including video projectors) is typically measured in lumens. A standardized procedure for testing projectors has been established by the American National Standards Institute, which involves averaging together several measurements taken at different positions. For marketing purposes, the luminous flux of projectors that have been tested according to this procedure may be quoted in "ANSI lumens", to distinguish them from those tested by other methods. ANSI lumen measurements are in general more accurate than the other measurement techniques used in the projector industry. This allows projectors to be more easily compared on the basis of their brightness specifications.
The method for measuring ANSI lumens is defined in the IT7.215 document which was created in 1992. First the projector is set up to display an image in a room at a temperature of . The brightness and contrast of the projector are adjusted so that on a full white field, it is possible to distinguish between a 5% screen area block of 95% peak white, and two identically sized 100% and 90% peak white boxes at the center of the white field. The light output is then measured on a full white field at nine specific locations around the screen and averaged. This average is then multiplied by the screen area to give the brightness of the projector in "ANSI lumens".
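As the procedure indicates, the ANSI figure is the average of the nine illuminance readings multiplied by the screen area. A minimal Python sketch of that arithmetic, using hypothetical readings rather than measured data:

```python
def ansi_lumens(lux_readings: list[float], screen_area_m2: float) -> float:
    """Average the nine-point illuminance measurements (lux) and scale by the screen area (m^2)."""
    return sum(lux_readings) / len(lux_readings) * screen_area_m2

# Hypothetical nine-point readings (lux) for a projected image of 1.5 m^2.
readings = [820.0, 850.0, 830.0, 810.0, 860.0, 840.0, 800.0, 825.0, 835.0]
print(ansi_lumens(readings, 1.5))  # 1245.0 ANSI lumens
```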
Peak lumens
Peak lumens is a measure of light output normally used with CRT video projectors. The testing uses a test pattern typically at either 10 and 20 percent of the image area as white at the center of the screen, the rest as black. The light output is measured just in this center area. Limitations with CRT video projectors result in them producing greater brightness when just a fraction of the image content is at peak brightness. For example, the Sony VPH-G70Q CRT video projector produces 1200 "peak" lumens but just 200 ANSI lumens.
Color light output
Brightness (white light output) measures the total amount of light projected in lumens. The color brightness specification Color Light Output measures red, green, and blue each on a nine-point grid, using the same approach as that used to measure brightness.
SI photometric units
See also
André Blondel
Brightness
Foot-candle, a non-SI unit of luminous flux
Luminous efficacy
Nit (unit)
Notes
References
External links
International Lighting Vocabulary 2nd Edition (online searchable version of international standard CIE S 017:2020
Units of luminous flux
SI derived units | Lumen (unit) | Mathematics | 1,305 |
22,511,732 | https://en.wikipedia.org/wiki/Boylston%20Street%20Fishweir | In archeological literature, the name Boylston Street Fishweir refers to ancient fishing structures first discovered in 1913, buried below Boylston Street in Boston, Massachusetts. Reports written in 1942 and 1949 describe what was thought to be remains of one large fishweir, 2,500 years old, made of up to 65,000 wooden stakes distributed over an estimated of the former mud flat and marshland in what is now the Back Bay section of Boston. A different interpretation of these findings is offered by new evidence and contemporary archeological research techniques.
Fish weir description and use
Throughout the world, fish weirs – wooden fence-like structures built to catch fish – are used in tidal waters and rivers as a passive method of trapping fish during the cycle from low to high tide, or in river flow. Fish weirs built in places with a large tidal range between ebb and flow are constructed with vertical support poles holding woven nets. Fish weirs in shallow estuary waters, or in small streams, may be built with vertical stakes and a horizontal structure, called wattling, made of brushwork to form a rough barrier at mid-tide depth.
Fish weirs have been used in coastal areas by indigenous peoples in all parts of the world. Fish weirs have been discovered dating back to 7,500 years BP. In some locations, such as in Yap, Federated States of Micronesia, fish weirs are still built and used today. Along the coast of developed areas of North America and Europe permits are now required to build a fish weir. Depending on fish populations in an area, and local maritime use, fish weir construction may be prohibited entirely. This has been an issue of concern to Native American tribal groups along the New England coast.
History of discovery in Boston
In 1913, subway workers tunneling under Boylston Street to extend Boston’s early subway system discovered wooden stakes in the blue-gray glacial clay, below street level. Workers destroyed many of the stakes, but enough evidence was gathered at the time that researchers thought they had found one large fish weir, thought to have been built 2,000 years earlier. This discovery was first described in a report by the Boston Transit Commission in June 1913.
Fish weir discovery continued in 1939, with archeological investigations led by Frederick Johnson during foundation excavation for the New England Mutual Life Building at 501 Boylston Street. Long sequences of wooden stakes, buried under tidal silt and an additional of 19th-century Back Bay fill, were found passing through the site and continuing on under surrounding streets. Maps were drawn that described a fish weir covering more than of the former marshland below Boston's Back Bay – suggesting the existence of one very large fish weir with over 65,000 wooden stakes. The imagined scale of this fish weir led scholars to speculate that it was built at one time by a community of appreciable size. This fish weir was described as the earliest known large-scale engineering effort in North America. Drawings and models were made based on the findings and show the fish weir built in deep water, maintained by men working from mishoons (log canoes). This interpretation may have been informed by the type of fish weirs known to be still in use in the 1940s by Native peoples in the Canadian Bay of Fundy.
Archeological research continued in 1946 during the construction of the John Hancock Building. At this site, vertical wood stakes, long, were found in parallel linear orientations. Researchers thought they were seeing remains of long wooden structures built across streams on ancient tidal flats. This evidence suggested weirs built to trap seasonal spawning fish in shallow water tidal areas. Harvesting of fish was now thought to have been done by hand, by wading out from shore, or waiting until low tide to collect the stranded fish.
A new interpretation
New research, begun in 1985 during excavations for the construction of a building at 500 Boylston Street, suggests a different understanding of the previous fish weir evidence. Radiocarbon dating, refined pollen sample analysis, and accurate surveys allowed the fish weir stakes to be understood to straddle many different stratigraphic layers. Rather than one large weir built at one moment in history, this new evidence suggests that fish weir remains discovered in this and previous excavations were parts of many smaller weirs, built in different locations, over a 1,500-year time span. Lead archeologist Dena Dincauze describes the fish weirs as short structures designed to harvest herring and other small fish that spawn in the late spring in the gentle waters of the intertidal zone. These weirs were most likely built and used by family clans of 35 to 50 people, who each spring would migrate from inland hunting camps to the coast, following the best seasonal food resources. The harvested fish were used for both food and to nourish the soil prior to planting.
Climate change in Boston area
Research on climate change and evidence from study of fish weirs and sediments under the Back Bay indicate the ocean level in the Boston area has risen more than ten feet in the last 6,000 years. Wooden stakes uncovered during the 500 Boylston Street excavation show the fish weirs were located close to the changing shoreline edge. These weirs were rebuilt seasonally at increasingly higher locations, as the ocean level continued to rise. Dendrochronological research documents that the wood species used for these weirs—sassafras, hickory, dogwood, beech, oak and alder—changed with the climate fluctuation. Analysis of tree rings and bark of recovered fishweir stakes reveals that the wood was often cut in the late winter and construction work on the weirs undertaken in the spring.
During the period when the fish weirs were in use the difference between high and low tide was only about , allowing easy construction and maintenance of the wooden structures, and direct access to the trapped fish by walking from the shore. The most accurate radiocarbon dating of these weirs suggests that the earliest were built almost 5,200 years BP, and then rebuilt time and again, essentially maintained for over 1,500 years. By about 3,700 years before present, the daily tidal height change and water flow had increased, and the ocean level had risen to the point that tidal weirs made of small wooden stakes were no longer effective in the Back Bay location. The Native people remained, developing other fishing and planting methods. The descendants of these early people may be members of the Massachuset tribe today.
Future research
New building construction in Boston’s Back Bay will most likely uncover more fish weir evidence. Collected samples of weir stakes, and survey information exists from archeological work in 1985 and from earlier efforts. More research is needed to assemble a complete and comprehensive study of the fish weir history and to more fully understand the life of the early people who lived for thousands of years in the place we now call Boston. The Ancient Fishweir Project, an annual public event on Boston Common, honors the early history with the construction of a fishweir within two blocks of the still-buried fishweir remains.
See also
Arlington (MBTA station)#Artwork
References
External links
Boylston Street fishweir revisited
Fish Weir
Geoarcheology
Weirs
Native American history of Massachusetts
History of Boston
Back Bay, Boston | Boylston Street Fishweir | Environmental_science | 1,460 |
34,709,019 | https://en.wikipedia.org/wiki/Denjoy%E2%80%93Luzin%20theorem | In mathematics, the Denjoy–Luzin theorem, introduced independently by and
states that if a trigonometric series converges absolutely on a set of positive measure, then the sum of its coefficients converges absolutely, and in particular the trigonometric series converges absolutely everywhere.
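In symbols, the statement above is usually written as follows (a standard formulation supplied for clarity; the notation aₙ, bₙ, E is assumed rather than taken from this article, and the constant term is omitted):

```latex
% If a trigonometric series converges absolutely on a set E of positive measure,
% then its coefficients are absolutely summable.
\sum_{n=1}^{\infty} \left| a_n \cos nx + b_n \sin nx \right| < \infty
\quad \text{for all } x \in E,\ \ \mu(E) > 0
\qquad \Longrightarrow \qquad
\sum_{n=1}^{\infty} \left( |a_n| + |b_n| \right) < \infty .
```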
References
Fourier series
Theorems in analysis | Denjoy–Luzin theorem | Mathematics | 65 |
51,865,176 | https://en.wikipedia.org/wiki/Generalized-strain%20mesh-free%20formulation | The generalized-strain mesh-free (GSMF) formulation is a local meshfree method in the field of numerical analysis, completely integration free, working as a weighted-residual weak-form collocation. This method was first presented by Oliveira and Portela (2016), in order to further improve the computational efficiency of meshfree methods in numerical analysis. Local meshfree methods are derived through a weighted-residual formulation which leads to a local weak form that is the well known work theorem of the theory of structures. In an arbitrary local region, the work theorem establishes an energy relationship between a statically-admissible stress field and an independent kinematically-admissible strain field. Based on the independence of these two fields, this formulation results in a local form of the work theorem that is reduced to regular boundary terms only, integration-free and free of volumetric locking.
Advantages over finite element methods are that GSMF does not rely on a grid, and is more precise and faster when solving two-dimensional problems. When compared to other meshless methods, such as the rigid-body displacement mesh-free (RBDMF) formulation, the element-free Galerkin (EFG) and the meshless local Petrov-Galerkin finite volume method (MLPG FVM), GSMF proved to be superior not only regarding computational efficiency, but also regarding accuracy.
The moving least squares (MLS) approximation of the elastic field is used on this local meshless formulation.
Formulation
In the local form of the work theorem, the displacement field was assumed to be a continuous function, leading to a regular integrable function as the kinematically-admissible strain field. However, this continuity assumption, enforced in the local form of the work theorem, is not strictly required and can be relaxed for convenience, provided the strain field can be treated as a generalized function in the sense of the theory of distributions; see Gelfand and Shilov. Hence, this formulation considers the displacement field to be a piecewise continuous function, defined in terms of the Heaviside step function, so that the corresponding strain field is a generalized function defined in terms of the Dirac delta function.
For the sake of simplicity, in dealing with Heaviside and Dirac delta functions in a two-dimensional coordinate space, consider a scalar function representing the absolute value of the distance between a field point and a particular reference point in the local domain assigned to the field node. By definition this distance is positive, or zero whenever the two points are coincident.
For a scalar coordinate, the Heaviside step function can be defined with its discontinuity assumed at the origin; consequently, the Dirac delta function is defined as the distributional derivative of the Heaviside step function, with unit integral. The derivative of the Heaviside function of the distance, with respect to a coordinate, can then be expressed through the Dirac delta function. Since the result of this expression is not affected by any particular value of the constant that arises, this constant will be conveniently redefined later on.
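As a sketch of the standard definitions this passage refers to – the precise expressions used in the original formulation may differ in detail – the one-dimensional Heaviside step function, its distributional derivative the Dirac delta, and a distance function of the kind described can be written as:

```latex
% Standard forms assumed for illustration; the paper's own notation may differ.
H(s) = \begin{cases} 1, & s \ge 0 \\ 0, & s < 0, \end{cases}
\qquad
\int_{-\infty}^{\infty} \delta(s)\, ds = 1,
\qquad
\delta(s) = \frac{dH(s)}{ds},
\qquad
d(\mathbf{x}) = \lVert \mathbf{x} - \mathbf{x}_Q \rVert \ge 0 .
```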
Consider distance functions of this kind defined for corresponding collocation points. The displacement field can then be conveniently defined in terms of Heaviside functions of these distances, with coefficients involving the metric of the orthogonal directions and the numbers of collocation points, respectively, on the local interior boundary, on the local static boundary, and in the local domain. This assumed displacement field represents a discrete rigid-body unit displacement defined at the collocation points, and the corresponding strain field follows as a generalized function.
Having defined the displacement and the strain components of the kinematically-admissible field, the local work theorem can be written as
Taking into account the properties of the Heaviside step function and Dirac delta function, this equation simply leads to
Discretization of this equation can be carried out with the MLS approximation for the local domain, in terms of the nodal unknowns, thus leading to a system of linear algebraic equations that can be written as
or simply
This formulation states the equilibrium of tractions and body forces, defined pointwise at collocation points; it is the pointwise version of the Euler-Cauchy stress principle. This is the equation used in the Generalized-Strain Mesh-Free (GSMF) formulation which, therefore, is free of integration. Since the work theorem is a weighted-residual weak form, it can be easily seen that this integration-free formulation is nothing other than a weighted-residual weak-form collocation. The weighted-residual weak-form collocation readily overcomes the well-known difficulties posed by the weighted-residual strong-form collocation, regarding accuracy and stability of the solution.
See also
Moving least squares
Finite element method
Boundary element method
Meshfree methods
Numerical analysis
Computational Solid Mechanics
References
Numerical analysis | Generalized-strain mesh-free formulation | Mathematics | 997 |
16,019,630 | https://en.wikipedia.org/wiki/Adil%20Shamoo | Adil E. Shamoo (born August 1, 1941) is an Iraqi biochemist with an interest in biomedical ethics and foreign policy. He is currently a professor at the Department of Biochemistry and Molecular Biology at the University of Maryland.
Professional
In 1998, he founded the journal Accountability in Research, and has served as its editor-in-chief since its inception. He is on the editorial boards of several other journals, including the Drug Information Journal. From 2000 to 2002, he served on the National Human Research Protections Advisory Committee. Although he has an extensive list of publications in the fields of biochemistry and microbiology, much of his current work is as an analyst for Foreign Policy In Focus, a project of the Institute for Policy Studies, a think tank, to which he has been contributing since 2005. Shamoo has also authored and co-authored many op-eds on U.S. foreign policy that have been published in newspapers across the country.
Shamoo is also currently occupied with his work in the field of ethics. Since 1991, he has taught a graduate course at the University of Maryland entitled "Responsible Conduct of Research". In 1995, he co-founded the human rights organization, Citizens for Responsible Care and Research (CIRCARE). In 2003, he chaired a Special Issue GlaxoSmithKline Pharmaceuticals' Ethics Advisory Group. Shamoo was then appointed to the Armed Forces Epidemiological Board (AFEB) of the United States Department of Defense as ethics consultant (2003–2004). Having chaired nine international conferences on ethics in research and human research protection, he was asked to testify before a congressional committee and the National Bioethics Advisory Commission. Since 2006, he has served on the Defense Health Board, and from 2006 to 2007 Shamoo was a member of the new Maryland Governor's Higher Education Transition Working Group. He was an invited participant and presenter in the 2007 New Year Renaissance Weekend.
Shamoo has held visiting professorships at the Institute for Political Studies in Paris, France and at East Carolina University.
Shamoo has been cited and/or appeared frequently in local and national media both print and television. He has published numerous articles and books.
Personal
Shamoo currently resides in Columbia, MD, with his wife and occasional co-author, Bonnie Bricker, his daughter, and stepdaughter. He has two sons and another stepdaughter, who all reside in the Washington Metropolitan Area.
Early life and education
Shamoo was born and raised in Baghdad, Iraq. He is an ethnic Assyrian. He attended the University of Baghdad and graduated with a degree in physics in 1962. In 1966, he earned a Master of Science in physics from the University of Louisville. Four years later, in 1970, he completed his Ph.D. in biology at the City University of New York.
References
External links
Shamoo's Faculty Page at University of Maryland
CIRCARE Website
US Department of Health and Human Services Bio for Shamoo
American people of Iraqi-Assyrian descent
Biochemists
East Carolina University faculty
Bioethicists
Living people
1941 births
Iraqi emigrants to the United States
University of Baghdad alumni
University of Louisville alumni
CUNY Graduate Center alumni
University of Maryland, College Park faculty | Adil Shamoo | Chemistry,Biology | 658 |
6,587,798 | https://en.wikipedia.org/wiki/Wildlife%20of%20Brazil | The wildlife of Brazil comprises all naturally occurring animals, plants, and fungi in the South American country. Home to 60% of the Amazon Rainforest, which accounts for approximately one-tenth of all
species in the world, Brazil is considered to have the greatest biodiversity of any country on the planet. It has the most known species of plants (60,000), freshwater fish (3,000), amphibians (1,188), snakes (430), insects (90,000) and mammals (775). It also ranks third on the list of countries with the most bird species (1,971) and third with the most reptile species (848). The number of fungal species is unknown, with more than 3,300 species recorded. Approximately two-thirds of all species worldwide are found in tropical areas, often coinciding with developing countries such as Brazil. Brazil is second only to Indonesia as the country with the most endemic species.
Biodiversity
In the animal kingdom, there is general consensus that Brazil has the highest number of both terrestrial vertebrates and invertebrates of any country in the world. This high diversity of fauna can be explained in part by the sheer size of Brazil and the great variation in ecosystems such as Amazon Rainforest, Atlantic Forest, Cerrado, Pantanal, Pampas and the Caatinga. The numbers published about Brazil's fauna diversity vary from source to source, as taxonomists sometimes disagree about species classifications, and information can be incomplete or out-of-date. Also, new species continue to be discovered and some species go extinct in the wild. Brazil has the highest diversity of primates (131 species) and freshwater fish (over 3150 species) of any country in the world. It also claims the highest number of mammals with 775 species, the third highest number of butterflies with 3,150 species, the third highest number of birds with 1,982 species, and third highest number of reptiles with 848 species. There is a high number of endangered species, many of which live in threatened habitats such as the Atlantic Forest or the Amazon Rainforest.
Scientists have described between 96,660 and 128,843 invertebrate species in Brazil. According to a 2005 estimate by Thomas M. Lewinsohn and Paulo I. Prado, Brazil is home to around 9.5% of all the species and 13.1% of biota found in the world; these figures are likely to be underestimates according to the authors.
Enough is known about Brazilian fungi to say with confidence that the number of native species must be very high and very diverse: in work almost entirely limited to the state of Pernambuco, during the 1950s, 1960s and early 1970s, more than 3,300 species were observed by a single group of mycologists. Given that current best estimates suggest only about 7% of the world's true diversity of fungal species has so far been discovered, with most of the known species having been described from temperate regions, the number of fungal species occurring in Brazil is likely to be far higher.
Because it encompasses many species-rich ecosystems for animals, fungi and plants, Brazil houses many thousands of species, with many (if not most) of them still undiscovered. Due to the relatively explosive economic and demographic rise of the country in the last century, Brazil's ability to protect its environmental habitats has increasingly come under threat. Extensive logging in the nation's forests, particularly the Amazon, both official and unofficial, destroys an area the size of a small country each year, along with a potentially diverse variety of plants and animals. However, because various species possess special characteristics or are built in interesting ways, some of their capabilities are being copied for use in technology (see bionics), and this profit potential may help slow deforestation.
Ecoregions
Brazil's immense area is subdivided into different ecoregions in several kinds of biomes. Because of the wide variety of habitats in Brazil, from the jungles of the Amazon Rainforest and the Atlantic Forest (which includes Atlantic Coast restingas), to the tropical savanna of the Cerrado, to the xeric shrubland of the Caatinga, to the world's largest wetland area, the Pantanal, there exists a wide variety of wildlife as well.
Animals
Terrestrial mammals and reptiles
The wild canids found in Brazil are the maned wolf, bush dog, hoary fox, short-eared dog, crab-eating fox and pampas fox. The felines found in Brazil are the jaguar, the puma, the margay, the ocelot, the oncilla, and the jaguarundi. Other notable animals include the giant anteater, several varieties of sloths and armadillos, the coati, giant river otter, tapir, peccaries, marsh deer, Pampas deer, and capybara (the world's largest living rodent). There are around 131 primate species (as of 2022), including the howler monkey, capuchin monkey, squirrel monkey, marmoset, and tamarin.
Brazil is home to the anaconda, frequently described, controversially, as the largest snake on the planet. This water boa has been measured up to long, but historical reports note that native peoples and early European explorers claim anacondas from 50 to long.
Invertebrates
There are 1107 known species of non-marine molluscs living in the wild in Brazil.
The second largest spider in the world, the Goliath birdeater (Theraphosa blondi), can be found in some regions of Brazil.
Insects
Brazil is calculated to have more insect species than any other country in the world: over 70,000 described species, with some estimates of the true number ranging up to 15 million, and more being discovered almost daily. One 1996 report estimated between 50,000 and 60,000 species of insects and spiders in a single hectare of rainforest. About 520 thysanopteran species, belonging to six families and 139 genera, are found in Brazil.
Birds
Brazil ranks third on the list of countries, behind Colombia and Peru, with the largest number of distinct bird species, having 1,622 identified species, including over 70 species of parrots alone. It has 191 endemic birds. The variety of birds is vast as well, ranging from brightly colored parrots, toucans, and trogons to flamingos, ducks, vultures, hawks, eagles, owls, swans, and hummingbirds. Species of penguins have also been found in Brazil.
The largest bird found in Brazil is the rhea, a flightless ratite bird, similar to the emu.
Aquatic and amphibian
Brazil has over 3,000 identified species of freshwater fish and over 500 species of amphibians. As elsewhere in South America, the majority of the freshwater fish species are characiforms (tetras and allies) and siluriforms (catfish), but there are also many species from other groups such as the cyprinodontiforms and cichlids. While the majority of Brazil's fish species are native to the Amazon, the Paraná–Paraguay and the São Francisco river basins, the country also has an unusually high number of troglobitic fish, with 25 species (15% of the total in the world) known so far. The most well-known fish in Brazil is the piranha.
Other aquatic and amphibian animals found in Brazil include the pink dolphin (the world's largest river dolphin), the caimans (such as the black caiman), and the pirarucu (one of the world's largest river fish). Also familiar are the brightly colored poison dart frogs.
Fungi
The diversity of Brazil's fungi, even the small fraction known so far to scientists, is astonishing. Using only conventional microscopy, and examining living leaves collected from various plants, the mycologist Batista and his colleagues, working in Pernambuco in the 1950s, 1960s and 1970s, regularly recorded more than one fungal species, and sometimes up to ten, on a single leaf. Although information about fungi worldwide remains very fragmented, a preliminary estimate based only on the work of Batista shows that the number of potentially endemic fungal species in Brazil already exceeds 2,000. Fungi are also very commonly observed throughout Brazil.
Plants
Brazil has 55,000 recorded plant species, the highest number of any country. About 30% of these species are endemic to Brazil. The Atlantic Forest region is home to tropical and subtropical moist forests, tropical dry forests, tropical savannas, and mangrove forests. The Pantanal region is a wetland, and home to a known 3,500 species of plants. The Cerrado is biologically the most diverse savanna in the world.
In Brazil forest cover is around 59% of the total land area, equivalent to 496,619,600 hectares (ha) of forest in 2020, down from 588,898,000 hectares (ha) in 1990. In 2020, naturally regenerating forest covered 485,396,000 hectares (ha) and planted forest covered 11,223,600 hectares (ha). Of the naturally regenerating forest 44% was reported to be primary forest (consisting of native tree species with no clearly visible indications of human activity) and around 30% of the forest area was found within protected areas. For the year 2015, 56% of the forest area was reported to be under public ownership and 44% private ownership.
The pau-brasil tree (also known as brazilwood and the origin of the country's name) was a common plant found along the Atlantic coast of Brazil. But excessive logging of the prized timber and red dye from the bark pushed the pau-brasil towards extinction. However, since the inception of synthetic dyes, the pau-brasil has been harvested less.
All over Brazil, in all biomes, are hundreds of species of orchids, including those in the genera Cattleya, Oncidium, and Laelia.
Along the border with Venezuela lies Monte Roraima, home to many carnivorous plants. The plants evolved to digest insects due to the oligotrophic (low level of nutrients) soil of the tepui.
List of plants by ecoregion:
List of plants of Amazon Rainforest vegetation of Brazil
List of plants of Atlantic Forest vegetation of Brazil
List of plants of Caatinga vegetation of Brazil
List of plants of Cerrado vegetation of Brazil
List of plants of Pantanal vegetation of Brazil
Threats to wildlife
More than one-fifth of the Amazon Rainforest in Brazil has been completely destroyed, and more than 70 mammals are endangered. The threat of extinction comes from several sources, including deforestation and poaching. Extinction is even more problematic in the Atlantic Forest, where nearly 93% of the forest has been cleared. Of the 202 endangered animals in Brazil, 171 are in the Atlantic Forest.
Currently, 15.8 million acres of tropical ecosystem have been completely eliminated to farm sugarcane for ethanol production, and an additional 4.5 million acres are planned to be planted during the next four years. 70-85% of Brazil's transportation energy is derived from ethanol or various mixtures of ethanol and petroleum-based fuels; only about 15-20% comes from imported petroleum. This massive national biofuel program has been devastating to tropical wildlife diversity and to the global climate and environment.
With its acquisition of BioEnergia, BP (British Petroleum) is planning to further expand Brazil's ethanol program.
National emblems
See also
List of endangered flora of Brazil
List of ministers of natural environment of Brazil
Movimento dos Atingidos por Barragens
Biomes in Brazil
References
Sources
Costa, L.P. et al. (2005). Mammal Conservation in Brazil. Conservation Biology 19(3): 672–679.
Comitê Brasileiro de Registros Ornitológicos. 2010. Lista das aves do Brasil. 9th edition (18 October 2010). Available at <http://www.cbro.org.br>, accessed 28 December 2010.
Further reading
External links
BrazilianFauna.com, a not-for profit educational website
Brazil Nature: Ecosystem
List of Brazilian animals on Encyclopedia of Life
Brazil
Biota of Brazil
Natural history of Brazil | Wildlife of Brazil | Biology | 2,558 |
1,964,571 | https://en.wikipedia.org/wiki/Number%20needed%20to%20treat | The number needed to treat (NNT) or number needed to treat for an additional beneficial outcome (NNTB) is an epidemiological measure used in communicating the effectiveness of a health-care intervention, typically a treatment with medication. The NNT is the average number of patients who need to be treated to prevent one additional bad outcome. It is defined as the inverse of the absolute risk reduction, and computed as NNT = 1/(I_c − I_t), where I_c is the incidence in the control (unexposed) group and I_t is the incidence in the treated (exposed) group. This calculation implicitly assumes monotonicity, that is, no individual can be harmed by treatment. The modern approach, based on counterfactual conditionals, relaxes this assumption and yields bounds on NNT.
A type of effect size, the NNT was described in 1988 by McMaster University's Laupacis, Sackett and Roberts. While theoretically the ideal NNT is 1, where everyone improves with treatment and no one improves with control, in practice the NNT is always rounded up to the nearest whole number, so even an NNT of 1.1 becomes an NNT of 2. A higher NNT indicates that the treatment is less effective.
NNT is similar to number needed to harm (NNH), where NNT usually refers to a therapeutic intervention and NNH to a detrimental effect or risk factor. A combined measure, the number needed to treat for an additional beneficial or harmful outcome (NNTB/H), is also used.
Relevance
The NNT is an important measure in pharmacoeconomics. If a clinical endpoint is devastating enough (e.g. death, heart attack), drugs with a high NNT may still be indicated in particular situations. If the endpoint is minor, health insurers may decline to reimburse drugs with a high NNT. NNT is significant to consider when comparing possible side effects of a medication against its benefits. For medications with a high NNT, even a small incidence of adverse effects may outweigh the benefits. Even though NNT is an important measure in a clinical trial, it is infrequently included in medical journal articles reporting the results of clinical trials. There are several important problems with the NNT, involving bias and lack of reliable confidence intervals, as well as difficulties in excluding the possibility of no difference between two treatments or groups.
NNT may vary substantially over time, and hence convey different information as a function of the specific time-point of its calculation. Snapinn and Jiang showed examples where the information conveyed by the NNT may be incomplete or even contradictory compared to the traditional statistics of interest in survival analysis. A comprehensive research on adjustment of the NNT for explanatory variables and accommodation to time-dependent outcomes was conducted by Bender and Blettner, Austin, and Vancak et al.
Explanation of NNT in practice
There are a number of factors that can affect the meaning of the NNT depending on the situation. The treatment may be a drug in the form of a pill or injection, a surgical procedure, or many other possibilities. The following examples demonstrate how NNT is determined and what it means. In this example, it is important to understand that every participant has the condition being treated, so there are only "diseased" patients who received the treatment or did not. This is typically a type of study that would occur only if both the control and the tested treatment carried significant risks of serious harm, or if the treatment was unethical for a healthy participant (for example, chemotherapy drugs or a new method of appendectomy - surgical removal of the appendix). Most drug trials test both the control and the treatment on both healthy and "diseased" participants. Or, if the treatment's purpose is to prevent a condition that is fairly common (an anticoagulant to prevent heart attack for example), a prospective study may be used. A study which starts with all healthy participants is termed a prospective study, and is in contrast to a retrospective study, in which some participants already have the condition in question. Prospective studies produce much higher quality evidence, but are much more difficult and time-consuming to perform.
Two quantities are compared: the probability of seeing no improvement after receiving the treatment (1 minus the probability of seeing improvement with the treatment), which applies only to the treated group, and the probability of seeing no improvement after receiving the control (1 minus the probability of seeing improvement with the control alone), which applies only to the control (unexposed) group. The control group may receive a placebo treatment, or, in cases where the goal is to find evidence that a new treatment is more effective than an existing treatment, the control group will receive the existing treatment. The meaning of the NNT therefore depends on whether the control group received a placebo or an existing treatment, and, in cases where a placebo is given, the NNT is also affected by the quality of the placebo (i.e., whether the placebo is completely indistinguishable from the tested treatment for participants).
Real-life example
The manufacturer-sponsored ASCOT-LLA study addressed the benefit of atorvastatin 10 mg (a cholesterol-lowering drug) in patients with hypertension (high blood pressure) but no previous cardiovascular disease (primary prevention). The trial ran for 3.3 years, and during this period the relative risk of a "primary event" (heart attack) was reduced by 36% (relative risk reduction, RRR). The absolute risk reduction (ARR), however, was much smaller, because the study group did not have a very high rate of cardiovascular events over the study period: 2.67% in the control group, compared to 1.65% in the treatment group. Taking atorvastatin for 3.3 years, therefore, would lead to an ARR of only 1.02% (2.67% minus 1.65%). The number needed to treat to prevent one cardiovascular event would then be 98.04 over 3.3 years.
Numerical example
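As a minimal worked illustration (a sketch, not taken from any source; the helper function name is ours, and the event rates are the ASCOT-LLA figures quoted above):

```python
import math

# Minimal sketch of the classical NNT definition: NNT = 1 / absolute risk reduction.
# Assumes monotonicity, as discussed above. Event rates are illustrative.

def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    arr = control_event_rate - treated_event_rate  # absolute risk reduction
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr

nnt = number_needed_to_treat(0.0267, 0.0165)  # 2.67% vs 1.65% over 3.3 years
print(round(nnt, 2))   # ~98.04
print(math.ceil(nnt))  # 99 when conventionally rounded up
```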
Modern Approach to NNT
The above calculations for NNT are valid under monotonicity, where treatment can't have a negative effect on any individual. However, in the case where the treatment may benefit some individuals and harm others, the NNT as defined above cannot be estimated from a Randomized Controlled Trial (RCT) alone. The inverse of the absolute risk reduction then only provides an upper bound, i.e., NNT ≤ 1/(I_c − I_t).
The modern approach defines NNT literally, as the number of patients one needs to treat (on average) before saving one. However, since "saving" is a counterfactual notion (a patient must recover if treated and not recover if not treated), the logic of counterfactuals must be invoked to estimate this quantity from experimental or observational studies. The probability of "saving" is captured by the Probability of Necessity and Sufficiency (PNS), the probability that a patient would recover if treated and would not recover if not treated. Once PNS is estimated, NNT is given as NNT = 1/PNS. However, due to the counterfactual nature of PNS, only bounds can be computed from an RCT, rather than a precise estimate. Tian and Pearl have derived tight bounds on PNS, based on multiple data sources, and Pearl showed that a combination of observational and experimental data may sometimes make the bounds collapse to a point estimate. Mueller and Pearl provide a conceptual interpretation for this phenomenon and illustrate its impact on both individual and policy-makers' decisions.
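To make the bounding argument concrete, the sketch below applies the experimental-data-only form of the Tian-Pearl bounds on PNS (lower bound max(0, ARR), upper bound min(P(recovery if treated), P(no recovery if untreated))) and converts them into bounds on NNT. Function names and the example recovery rates are illustrative; combining observational data, as noted above, can tighten these bounds further.

```python
# Sketch: bounds on NNT = 1/PNS from RCT recovery rates alone (Tian-Pearl bounds).

def pns_bounds(p_recover_treated: float, p_recover_control: float):
    lower = max(0.0, p_recover_treated - p_recover_control)   # ARR
    upper = min(p_recover_treated, 1.0 - p_recover_control)
    return lower, upper

def nnt_bounds(p_recover_treated: float, p_recover_control: float):
    pns_lo, pns_hi = pns_bounds(p_recover_treated, p_recover_control)
    nnt_lo = 1.0 / pns_hi if pns_hi > 0 else float("inf")
    nnt_hi = 1.0 / pns_lo if pns_lo > 0 else float("inf")
    return nnt_lo, nnt_hi

# e.g. 49% of treated patients recover vs 21% of controls
print(nnt_bounds(0.49, 0.21))  # roughly (2.04, 3.57); 1/ARR sits at the upper end
```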
See also
Population Impact Measures
Number needed to vaccinate
Number needed to harm
Pharmacoeconomics
References
External links
Number Needed to Treat (NNT) Calculator
EBEM's Calculator for NNT
Drug discovery
Epidemiology
Medical statistics | Number needed to treat | Chemistry,Biology,Environmental_science | 1,608 |
2,911,654 | https://en.wikipedia.org/wiki/Data%20conversion | Data conversion is the conversion of computer data from one format to another. Throughout a computer environment, data is encoded in a variety of ways. For example, computer hardware is built on the basis of certain standards, which requires that data contains, for example, parity bit checks. Similarly, the operating system is predicated on certain standards for data and file handling. Furthermore, each computer program handles data in a different manner. Whenever any one of these variables is changed, data must be converted in some way before it can be used by a different computer, operating system or program. Even different versions of these elements usually involve different data structures. For example, the changing of bits from one format to another, usually for the purpose of application interoperability or of the capability of using new features, is merely a data conversion. Data conversions may be as simple as the conversion of a text file from one character encoding system to another; or more complex, such as the conversion of office file formats, or the conversion of image formats and audio file formats.
There are many ways in which data is converted within the computer environment. This may be seamless, as in the case of upgrading to a newer version of a computer program. Alternatively, the conversion may require processing by the use of a special conversion program, or it may involve a complex process of going through intermediary stages, or involving complex "exporting" and "importing" procedures, which may include converting to and from a tab-delimited or comma-separated text file. In some cases, a program may recognize several data file formats at the data input stage and then is also capable of storing the output data in several different formats. Such a program may be used to convert a file format. If the source format or target format is not recognized, then at times a third program may be available which permits the conversion to an intermediate format, which can then be reformatted using the first program. There are many possible scenarios.
Information basics
Before any data conversion is carried out, the user or application programmer should keep a few basics of computing and information theory in mind. These include:
Information can easily be discarded by the computer, but adding information takes effort.
The computer can add information only in a rule-based fashion.
Upsampling the data or converting to a more feature-rich format does not add information; it merely makes room for that addition, which usually a human must do.
Data stored in an electronic format can be quickly modified and analyzed.
For example, a true color image can easily be converted to grayscale, while the opposite conversion is a painstaking process. Converting a Unix text file to a Microsoft (DOS/Windows) text file involves adding characters, but this does not increase the entropy since it is rule-based; whereas the addition of color information to a grayscale image cannot be reliably done programmatically, as it requires adding new information, so any attempt to add color would require estimation by the computer based on previous knowledge. Converting a 24-bit PNG to a 48-bit one does not add information to it, it only pads existing RGB pixel values with zeroes, so that a pixel with a value of FF C3 56, for example, becomes FF00 C300 5600. The conversion makes it possible to change a pixel to have a value of, for instance, FF80 C340 56A0, but the conversion itself does not do that, only further manipulation of the image can. Converting an image or audio file in a lossy format (like JPEG or Vorbis) to a lossless (like PNG or FLAC) or uncompressed (like BMP or WAV) format only wastes space, since the same image with its loss of original information (the artifacts of lossy compression) becomes the target. A JPEG image can never be restored to the quality of the original image from which it was made, no matter how much the user tries the "JPEG Artifact Removal" feature of his or her image manipulation program.
Automatic restoration of information that was lost through a lossy compression process would probably require important advances in artificial intelligence.
Because of these realities of computing and information theory, data conversion is often a complex and error-prone process that requires the help of experts.
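Returning to the PNG example above, a tiny sketch shows how widening 8-bit colour channels to 16 bits merely pads them with zeroes rather than adding information (the pixel value is the one used above; the code is illustrative):

```python
# Widening 8-bit channels to 16 bits: room for finer values, but no new information.
pixel_8bit = (0xFF, 0xC3, 0x56)
pixel_16bit = tuple(channel << 8 for channel in pixel_8bit)  # pad the low byte with zeroes

print([f"{c:04X}" for c in pixel_16bit])  # ['FF00', 'C300', '5600']
```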
Pivotal conversion
Data conversion can occur directly from one format to another, but many applications that convert between multiple formats use an intermediate representation by way of which any source format is converted to its target. For example, it is possible to convert Cyrillic text from KOI8-R to Windows-1251 using a lookup table between the two encodings, but the modern approach is to convert the KOI8-R file to Unicode first and from that to Windows-1251. This is a more manageable approach; rather than needing lookup tables for all possible pairs of character encodings, an application needs only one lookup table for each character set, which it uses to convert to and from Unicode, thereby scaling the number of tables down from hundreds to a few tens.
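A minimal sketch of this pivotal approach, using Python's built-in codecs (the sample text is illustrative):

```python
# Pivotal conversion: KOI8-R bytes -> Unicode string -> Windows-1251 bytes.
koi8_bytes = "Привет, мир".encode("koi8_r")   # stand-in for the source file's bytes

text = koi8_bytes.decode("koi8_r")            # source encoding -> Unicode (the pivot)
cp1251_bytes = text.encode("cp1251")          # Unicode -> target encoding

print(cp1251_bytes.decode("cp1251"))          # round-trips to "Привет, мир"
```

The same two-step pattern works for any pair of encodings the codec library knows about, which is exactly why the pivot keeps the number of conversion tables small.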
Pivotal conversion is similarly used in other areas. Office applications, when employed to convert between office file formats, use their internal, default file format as a pivot. For example, a word processor may convert an RTF file to a WordPerfect file by converting the RTF to OpenDocument and then that to WordPerfect format. An image conversion program does not convert a PCX image to PNG directly; instead, when loading the PCX image, it decodes it to a simple bitmap format for internal use in memory, and when commanded to convert to PNG, that memory image is converted to the target format. An audio converter that converts from FLAC to AAC decodes the source file to raw PCM data in memory first, and then performs the lossy AAC compression on that memory image to produce the target file.
Lost and inexact data conversion
The objective of data conversion is to maintain all of the data, and as much of the embedded information as possible. This can only be done if the target format supports the same features and data structures present in the source file. Conversion of a word processing document to a plain text file necessarily involves loss of formatting information, because plain text format does not support word processing constructs such as marking a word as boldface. For this reason, conversion from one format to another which does not support a feature that is important to the user is rarely carried out, though it may be necessary for interoperability, e.g. converting a file from one version of Microsoft Word to an earlier version to enable transfer and use by other users who do not have the same later version of Word installed on their computer.
Loss of information can be mitigated by approximation in the target format. There is no way of converting a character like ä to ASCII, since the ASCII standard lacks it, but the information may be retained by approximating the character as ae. Of course, this is not an optimal solution, and can impact operations like searching and copying; and if a language makes a distinction between ä and ae, then that approximation does involve loss of information.
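A rough sketch of such an approximation; the mapping table here is a small, hypothetical subset, and a real transliterator would need to be language-aware (German ä becomes ae, while other languages may prefer a plain a):

```python
# Lossy approximation of non-ASCII characters; unmapped characters are simply dropped.
APPROXIMATIONS = {"ä": "ae", "ö": "oe", "ü": "ue", "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss"}

def to_ascii_approx(text: str) -> str:
    mapped = "".join(APPROXIMATIONS.get(ch, ch) for ch in text)
    return mapped.encode("ascii", errors="ignore").decode("ascii")

print(to_ascii_approx("Präsident Müßiggang"))  # "Praesident Muessiggang"
```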
Data conversion can also suffer from inexactitude, the result of converting between formats that are conceptually different. The WYSIWYG paradigm, extant in word processors and desktop publishing applications, versus the structural-descriptive paradigm, found in SGML, XML and many applications derived therefrom, like HTML and MathML, is one example. Using a WYSIWYG HTML editor conflates the two paradigms, and the result is HTML files with suboptimal, if not nonstandard, code. In the WYSIWYG paradigm a double linebreak signifies a new paragraph, as that is the visual cue for such a construct, but a WYSIWYG HTML editor will usually convert such a sequence to <BR><BR>, which is structurally no new paragraph at all. As another example, converting from PDF to an editable word processor format is a tough chore, because PDF records the textual information like engraving on stone, with each character given a fixed position and linebreaks hard-coded, whereas word processor formats accommodate text reflow. PDF does not know of a word space character—the space between two letters and the space between two words differ only in quantity. Therefore, a title with ample letter-spacing for effect will usually end up with spaces in the word processor file, for example INTRODUCTION with spacing of 1 em as I N T R O D U C T I O N on the word processor.
Open vs. secret specifications
Successful data conversion requires thorough knowledge of the workings of both source and target formats. In the case where the specification of a format is unknown, reverse engineering will be needed to carry out conversion. Reverse engineering can achieve close approximation of the original specifications, but errors and missing features can still result.
Electronics
Data format conversion can also occur at the physical layer of an electronic communication system. Conversion between line codes such as NRZ and RZ can be accomplished when necessary.
See also
Character encoding
Comparison of programming languages (basic instructions)#Data conversions
Data migration
Data transformation
Data wrangling
Transcoding
Distributed Data Management Architecture (DDM)
Code conversion (computing)
Source-to-source translation
Presentation layer
References
Computer data | Data conversion | Technology | 1,922 |
13,433,266 | https://en.wikipedia.org/wiki/Gadolinium%28III%29%20nitrate | Gadolinium(III) nitrate is an inorganic compound of gadolinium. This salt is used as a water-soluble neutron poison in nuclear reactors. Gadolinium nitrate, like all nitrate salts, is an oxidizing agent.
The most common form of this substance is the hexahydrate, Gd(NO3)3·6H2O, with a molecular weight of 451.36 g/mol and CAS number 19598-90-4.
Use
Gadolinium nitrate was used at the Savannah River Site heavy water nuclear reactors and had to be separated from the heavy water for storage or reuse.
The Canadian CANDU reactor, a pressurized heavy water reactor, also uses gadolinium nitrate as a water-soluble neutron poison in heavy water.
Gadolinium nitrate is also used as a raw material in the production of other gadolinium compounds, for production of specialty glasses and ceramics and as a phosphor.
References
Gadolinium compounds
Nitrates
Neutron poisons | Gadolinium(III) nitrate | Chemistry | 211 |
41,282,668 | https://en.wikipedia.org/wiki/Global%20Drifter%20Program | The Global Drifter Program (GDP) (formerly known as the Surface Velocity Program (SVP)) was conceived by Prof. Peter Niiler, with the objective of collecting measurements of surface ocean currents, sea surface temperature and sea-level atmospheric pressure using drifters. It is the principal component of the Global Surface Drifting Buoy Array, a branch of NOAA's Global Ocean Observations and a scientific project of the Data Buoy Cooperation Panel (DBCP). The project originated in February 1979 as part of the TOGA/Equatorial Pacific Ocean Circulation Experiment (EPOCS) and the first large-scale deployment of drifters was in 1988 with the goal of mapping the tropical Pacific Ocean's surface circulation. The current goal of the project is to use 1250 satellite-tracked surface drifting buoys to make accurate and globally dense in-situ observations of mixed layer currents, sea surface temperature, atmospheric pressure, winds and salinity, and to create a system to process the data. Horizontal transports in the oceanic mixed layer measured by the GDP are relevant to biological and chemical processes as well as physical ones.
Drifters
SVP project drifter deployments began in 1979; the design continued to develop until reaching its current form in 1992. Each drifter consists of a spherical surface buoy tethered to a weighted nylon drogue that allows it to track the horizontal motion of water at a depth of 15 meters. If the drogue breaks off, the wind pushes the surface buoy through the water, creating erroneous current observations. A tether strain gauge has been added to monitor tension of the buoy-drogue connection to resolve this issue. The original drifters are heavy, bulky (40 cm diameter), and expensive relative to the newer "mini" drifters, which are smaller (30.5 cm diameter), cheaper, and lighter because the hull contains fewer batteries. The surface float contains alkaline batteries, a satellite transmitter, a thermistor for sub-skin sea surface temperature, and sometimes other instruments that measure pressure, wind speed and direction, or salinity.
The drifters are deployed from research vessels, volunteer ships, and through air deployment. They typically transmit their data hourly and had an average lifetime of ~485 days in 2001. Presently, enough data is gathered to observe currents at a horizontal resolution of one degree (~100 km). Single drifters can be tracked with the name of the drifter.
Applications
The data from the GDP have been used by oceanographers to derive maps of lateral diffusivity and Lagrangian length- and time-scales across the Pacific. Other uses include studies of plastic accumulation in the ocean and climatological models that simulate equatorial ocean currents, among many others.
Organization and collaborators
The GDP consists of three components. The component at NOAA's Atlantic Oceanographic and Meteorological Laboratory (AOML) manages deployments, processes and archives the data, maintains META files describing each drifter deployed, develops and distributes data-based products, and updates the GDP website. The Lagrangian Drifter Laboratory at the Scripps Institution of Oceanography (SIO) leads the engineering aspects of the Lagrangian drifter technology, improves the existing designs, develops new drifters, manages the real-time data stream, including posting the drifter data to the Global Telecommunication System, supervises the industry, purchases and fabricates most drifters, and develops enhanced data sets. The third component is the manufacturers in private industry, who build drifters according to specifications. The GDP collaborates with partners from numerous countries including Argentina, Australia, Brazil, Canada, France, India, Italy, Republic of Korea, Mexico, New Zealand, South Africa, Spain, United Kingdom, and the United States.
References
Oceanography | Global Drifter Program | Physics,Environmental_science | 780 |
3,221,992 | https://en.wikipedia.org/wiki/Banked%20turn | A banked turn (or banking turn) is a turn or change of direction in which the vehicle banks or inclines, usually towards the inside of the turn. For a road or railroad this is usually due to the roadbed having a transverse down-slope towards the inside of the curve. The bank angle is the angle at which the vehicle is inclined about its longitudinal axis with respect to the horizontal.
Turn on flat surfaces
If the bank angle is zero, the surface is flat and the normal force is vertically upward. The only force keeping the vehicle turning on its path is friction, or traction. This must be large enough to provide the centripetal force, a relationship that can be expressed as an inequality, assuming a car of mass m driving at speed v in a circle of radius r: μmg ≥ mv²/r, where μ is the coefficient of friction and g the acceleration due to gravity.
The expression on the right hand side is the centripetal acceleration multiplied by mass, the force required to turn the vehicle. The left hand side is the maximum frictional force, which equals the coefficient of friction multiplied by the normal force. Rearranging, the maximum cornering speed is v = √(μgr).
Note that μ can be the coefficient for static or dynamic friction. In the latter case, where the vehicle is skidding around a bend, the friction is at its limit and the inequality becomes an equation. This also ignores effects such as downforce, which can increase the normal force and cornering speed.
Frictionless banked turn
As opposed to a vehicle riding along a flat circle, inclined edges add an additional force that keeps the vehicle in its path and prevents a car from being "dragged into" or "pushed out of" the circle (or a railroad wheel from moving sideways so as to nearly rub on the wheel flange). This force is the horizontal component of the vehicle's normal force (N). In the absence of friction, the normal force is the only one acting on the vehicle in the direction of the center of the circle. Therefore, as per Newton's second law, we can set the horizontal component of the normal force equal to mass multiplied by centripetal acceleration: N sin θ = mv²/r.
Because there is no motion in the vertical direction, the sum of all vertical forces acting on the system must be zero. Therefore, we can set the vertical component of the vehicle's normal force equal to its weight: N cos θ = mg.
Solving the above equation for the normal force and substituting this value into our previous equation, we get: (mg/cos θ) sin θ = mv²/r.
This is equivalent to: tan θ = v²/(rg).
Solving for velocity we have: v = √(rg tan θ).
This provides the velocity that, in the absence of friction and with a given angle of incline and radius of curvature, will ensure that the vehicle remains on its designated path. The magnitude of this velocity is also known as the "rated speed" (or "balancing speed" for railroads) of a turn or curve. Notice that the rated speed of the curve is the same for all massive objects, and a curve that is not inclined will have a rated speed of 0.
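A small sketch of the rated-speed formula just derived, v = √(rg tan θ), with illustrative numbers (not from the article):

```python
import math

def rated_speed(radius_m: float, bank_angle_deg: float, g: float = 9.81) -> float:
    """Speed at which no friction is needed to hold the curve."""
    theta = math.radians(bank_angle_deg)
    return math.sqrt(g * radius_m * math.tan(theta))

print(round(rated_speed(200.0, 12.0), 1))  # ~20.4 m/s for a 200 m curve banked at 12 degrees
```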
Banked turn with friction
When considering the effects of friction on the system, once again we need to note which way the friction force is pointing. When calculating a maximum velocity for our automobile, friction will point down the incline and towards the center of the circle. Therefore, we must add the horizontal component of friction to that of the normal force. The sum of these two forces is our new net force in the direction of the center of the turn (the centripetal force): N sin θ + μ_s N cos θ = mv²/r.
Once again, there is no motion in the vertical direction, allowing us to set all opposing vertical forces equal to one another. These forces include the vertical component of the normal force pointing upwards and both the car's weight and the vertical component of friction pointing downwards: N cos θ = μ_s N sin θ + mg.
By solving the above equation for mass (m = N(cos θ − μ_s sin θ)/g) and substituting this value into our previous equation we get: N sin θ + μ_s N cos θ = N(cos θ − μ_s sin θ) v²/(gr).
Solving for v we get: v_max = √( rg(tan θ + μ_s) / (1 − μ_s tan θ) ).
This expression is valid for bank angles below the critical angle θ_c, defined such that tan θ_c = 1/μ_s; at that angle the denominator vanishes and the maximum velocity grows without bound. This equation provides the maximum velocity for the automobile with the given angle of incline, coefficient of static friction and radius of curvature. By a similar analysis of minimum velocity, the following equation is rendered: v_min = √( rg(tan θ − μ_s) / (1 + μ_s tan θ) ).
Notice that v_min is real only when tan θ ≥ μ_s; at shallower bank angles friction alone can hold a stationary vehicle on the incline, so the minimum speed is effectively zero. The difference in the latter analysis comes when considering the direction of friction for the minimum velocity of the automobile (towards the outside of the circle). Consequently, opposite operations are performed when inserting friction into the equations for forces in the centripetal and vertical directions.
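A short sketch of the two friction-limited speed limits derived above (illustrative inputs; the maximum-speed formula applies only below the critical angle, where 1 − μ_s tan θ > 0):

```python
import math

def cornering_speed_range(radius_m: float, bank_angle_deg: float, mu_s: float, g: float = 9.81):
    t = math.tan(math.radians(bank_angle_deg))
    v_max = math.sqrt(g * radius_m * (t + mu_s) / (1.0 - mu_s * t))
    # Below this bank angle friction alone holds a stationary car, so the minimum is zero.
    v_min = 0.0 if mu_s >= t else math.sqrt(g * radius_m * (t - mu_s) / (1.0 + mu_s * t))
    return v_min, v_max

print(cornering_speed_range(200.0, 12.0, 0.7))  # (0.0, ~45.9) in m/s
```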
Improperly banked road curves increase the risk of run-off-road and head-on crashes. A 2% deficiency in superelevation (say, 4% superelevation on a curve that should have 6%) can be expected to increase crash frequency by 6%, and a 5% deficiency will increase it by 15%. Up until now, highway engineers have been without efficient tools to identify improperly banked curves and to design relevant mitigating road actions. A modern profilograph can provide data of both road curvature and cross slope (angle of incline). A practical demonstration of how to evaluate improperly banked turns was developed in the EU Roadex III project. See the linked referenced document below.
Banked turn in aeronautics
When a fixed-wing aircraft is making a turn (changing its direction) the aircraft must roll to a banked position so that its wings are angled towards the desired direction of the turn. When the turn has been completed the aircraft must roll back to the wings-level position in order to resume straight flight.
When any moving vehicle is making a turn, it is necessary for the forces acting on the vehicle to add up to a net inward force, to cause centripetal acceleration. In the case of an aircraft making a turn, the force causing centripetal acceleration is the horizontal component of the lift acting on the aircraft.
In straight, level flight, the lift acting on the aircraft acts vertically upwards to counteract the weight of the aircraft, which acts downwards. If the aircraft is to continue in level flight (i.e. at constant altitude), the vertical component must continue to equal the weight of the aircraft, so the pilot must pull back on the stick to apply the elevators to pitch the nose up, increasing the angle of attack and generating an increase in the lift of the wing. The total (now angled) lift is then greater than the weight of the aircraft; its horizontal component is the net force causing the aircraft to accelerate inward and execute the turn.
Because centripetal acceleration is a = v²/r, a horizontal force of mv²/r is required to hold the aircraft in the turn.
During a balanced turn where the angle of bank is θ, the lift acts at an angle θ away from the vertical. It is useful to resolve the lift into a vertical component and a horizontal component.
Newton's second law in the horizontal direction can be expressed mathematically as: L sin θ = mv²/r
where:
L is the lift acting on the aircraft
θ is the angle of bank of the aircraft
m is the mass of the aircraft
v is the true airspeed of the aircraft
r is the radius of the turn
In straight level flight, lift is equal to the aircraft weight. In turning flight the lift exceeds the aircraft weight, and is equal to the weight of the aircraft (mg) divided by the cosine of the angle of bank: L = mg / cos θ
where g is the gravitational field strength.
The radius of the turn can now be calculated: r = v² / (g tan θ).
This formula shows that the radius of turn is proportional to the square of the aircraft's true airspeed. With a higher airspeed the radius of turn is larger, and with a lower airspeed the radius is smaller.
This formula also shows that the radius of turn decreases with the angle of bank. With a higher angle of bank the radius of turn is smaller, and with a lower angle of bank the radius is greater.
In a banked turn at constant altitude, the load factor is equal to 1/cos θ. We can see that the load factor in straight and level flight is 1, since cos 0° = 1, and to generate sufficient lift to maintain constant altitude, the load factor must approach infinity as the bank angle approaches 90° and cos θ approaches 0. This is physically impossible, because structural limitations of the aircraft or physical endurance of the occupants will be exceeded well before then.
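The following minimal sketch evaluates the turn-radius and load-factor relations above, r = v²/(g tan θ) and load factor = 1/cos θ, for illustrative values (not from the article):

```python
import math

def turn_radius_m(true_airspeed_mps: float, bank_angle_deg: float, g: float = 9.81) -> float:
    return true_airspeed_mps ** 2 / (g * math.tan(math.radians(bank_angle_deg)))

def load_factor(bank_angle_deg: float) -> float:
    return 1.0 / math.cos(math.radians(bank_angle_deg))

print(round(turn_radius_m(100.0, 30.0)))  # ~1766 m at 100 m/s with 30 degrees of bank
print(round(load_factor(60.0), 2))        # 2.0 g in a level 60-degree banked turn
```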
Banked turn in athletics
Most indoor track and field venues have banked turns since the tracks are smaller than outdoor tracks. The tight turns on these small tracks are usually banked to allow athletes to lean inward and neutralize the centrifugal force as they race around the curve; the lean is especially noticeable on sprint events.
See also
Camber angle
Cant (road/rail)
Coriolis force (perception)
Centripetal force
g-force
Oval track racing
References
Further reading
Surface vehicles
Serway, Raymond. Physics for Scientists and Engineers. Cengage Learning, 2010.
Health and Safety Issues, the EU Roadex III project on health and safety issues raised by poorly maintained road networks.
Aeronautics
Kermode, A.C. (1972) Mechanics of Flight, Chapter 8, 10th Edition, Longman Group Limited, London
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London
Hurt, H.H. Jr, (1960), Aerodynamics for Naval Aviators, A National Flightshop Reprint, Florida
External links
Surface vehicles
http://hyperphysics.phy-astr.gsu.edu/hbase/mechanics/imgmech/carbank.gif
https://web.archive.org/web/20051222173550/http://whitts.alioth.net/
http://www.batesville.k12.in.us/physics/PHYNET/Mechanics/Circular%20Motion/banked_no_friction.htm
Aeronautics
NASA: Guidance on banking turns
aerospaceweb.org: Bank Angle and G's (math)
Pilot’s Handbook of Aeronautical Knowledge
Aerodynamics
Aerial maneuvers
Mechanics
Transportation engineering
https://edu-physics.com/2021/05/08/how-banking-of-road-will-help-the-vehicle-to-travel-along-a-circular-path-2/ | Banked turn | Physics,Chemistry,Engineering | 2,019 |
15,553,696 | https://en.wikipedia.org/wiki/Volta%20potential | The Volta potential (also called Volta potential difference, contact potential difference, outer potential difference, Δψ, or "delta psi") in electrochemistry, is the electrostatic potential difference between two metals (or one metal and one electrolyte) that are in contact and are in thermodynamic equilibrium. Specifically, it is the potential difference between a point close to the surface of the first metal and a point close to the surface of the second metal (or electrolyte).
The Volta potential is named after Alessandro Volta.
Volta potential between two metals
When two metals are electrically isolated from each other, an arbitrary potential difference may exist between them. However, when two different neutral metal surfaces are brought into electrical contact (even indirectly, say, through a long electro-conductive wire), electrons will flow from the metal with the higher Fermi level to the metal with the lower Fermi level until the Fermi levels in the two phases are equal.
Once this has occurred, the metals are in thermodynamic equilibrium with each other (the actual number of electrons that passes between the two phases is usually small).
Just because the Fermi levels are equal, however, does not mean that the electric potentials are equal. The electric potential outside each material is controlled by its work function, and so dissimilar metals can show an electric potential difference even at equilibrium.
The Volta potential is not an intrinsic property of the two bulk metals under consideration, but rather is determined by work function differences between the metals' surfaces. Just like the work function, the Volta potential depends sensitively on surface state, contamination, and so on.
Measurement of Volta potential (Kelvin probe)
The Volta potential can be significant (of order 1 volt) but it cannot be measured directly by an ordinary voltmeter.
A voltmeter does not measure vacuum electrostatic potentials, but instead the difference in Fermi level between the two materials, a difference that is exactly zero at equilibrium.
The Volta potential, however, corresponds to a real electric field in the spaces between and around the two metal objects, a field generated by the accumulation of charges at their surfaces. The total charge Q over each object's surface depends on the capacitance C between the two objects, through the relation Q = CΔV, where ΔV is the Volta potential. It follows therefore that the value of the potential can be measured by varying the capacitance between the materials by a known amount (e.g., by moving the objects further from each other) and measuring the displaced charge that flows through the wire that connects them.
The Volta potential difference between a metal and an electrolyte can be measured in a similar fashion.
The Volta potential of a metal surface can be mapped on very small scales by use of a Kelvin probe force microscope, based on atomic force microscopy. Over larger areas on the order of millimeters to centimeters, a scanning Kelvin probe (SKP), which uses a wire probe of tens to hundreds of microns in size, can be used. In either case the capacitance change is not known—instead, a compensating DC voltage is added to cancel the Volta potential so that no current is induced by the change in capacitance. This compensating voltage is the negative of the Volta potential.
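To make the nulling idea concrete, here is a purely illustrative sketch: the vibrating probe's induced current is proportional to (Volta potential + applied bias) times the rate of change of capacitance, so sweeping the bias and picking the value that nulls the signal recovers the negative of the Volta potential. The 0.45 V potential, the 80 Hz vibration and the capacitance amplitude are invented numbers, not values from the article.

```python
import math

VOLTA_POTENTIAL = 0.45  # volts; the "unknown" we want to recover in this toy model

def current_amplitude(bias_v: float, dC_amp: float = 1e-12, omega: float = 2 * math.pi * 80) -> float:
    # i(t) = (V_volta + bias) * dC/dt, so the signal amplitude scales with omega * dC_amp
    return abs(VOLTA_POTENTIAL + bias_v) * omega * dC_amp

biases = [b / 1000.0 for b in range(-1000, 1001)]  # sweep -1.000 V .. +1.000 V
null_bias = min(biases, key=current_amplitude)     # bias that cancels the signal
print(null_bias)                                   # ~ -0.45 V, the negated Volta potential
```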
See also
Electrode potential
Absolute electrode potential
Electric potential
Galvani potential
Potential difference (voltage)
Band bending
Volt
Volta effect
References
Electrochemical concepts
Electrochemical potentials
Alessandro Volta | Volta potential | Chemistry | 707 |
9,753,794 | https://en.wikipedia.org/wiki/Reticulum%20%28anatomy%29 | The reticulum is the second chamber in the four-chamber alimentary canal of a ruminant mammal. Anatomically it is the smaller portion of the reticulorumen along with the rumen. Together these two compartments make up 84% of the volume of the total stomach.
The reticulum is colloquially referred to as the honeycomb, bonnet, or kings-hood. When cleaned and used for food, it is called "tripe".
Heavy or dense feed and foreign objects will settle here. It is the site of hardware disease in cattle and because of the proximity to the heart this disease can be life-threatening.
Anatomy
The internal mucosa has a honeycomb shape. When looking at the reticulum with ultrasonography it is a crescent-shaped structure with a smooth contour. The reticulum is adjacent to the diaphragm, lungs, abomasum, rumen and liver. The heights of the reticular crests and depth of the structures vary across ruminant animal species. Grazing ruminants have higher crests than browsers. However, general reticulum size is fairly constant across ruminants of differing body size and feeding type.
The honeycomb shape of the internal mucosa suits the reticulum's main functions: storing higher-density particles and generating the biphasic contractions, aided by the large amount of water in the reticulum, that separate food particles. The hexagonal cells resemble the classical benzene ring of organic chemistry, a structure that recurs throughout nature. A hexagon has six sides, which allows it to distribute forces evenly across its structure, making it stable and resistant to deformation; its evenly spaced 120-degree angles help spread stress and prevent weak points. Because a hexagon encloses the most area for the least material, higher-density particles tend to remain in the reticulum. The hexagon is considered one of the strongest shapes in nature due to its ability to distribute force evenly and efficiently, and hexagonal structures are also thought to help water molecules (each with one atom of oxygen and two of hydrogen) group together efficiently, assisting in the separation of food particles.
In a mature cow, the reticulum can hold around 5 gallons of liquid. The rumen and reticulum are very close in structure and function and can be considered as one organ. They are separated only by a muscular fold of tissue.
In immature ruminants, a reticular groove is formed by the muscular fold of the reticulum. This allows milk to pass by the reticulorumen straight into the abomasum.
Role in digestion
The fluid contents of the reticulum play a role in particle separation. This is true both in domestic and wild ruminants. The separation takes place through biphasic contractions. In the first contraction, large particles are sent back into the rumen while the reticulo-omasal orifice allows the passage of finer particles. In the second contraction, the reticulum contracts completely so the empty reticulum can refill with contents from the rumen. These contents are then sorted in the next biphasic contraction. The contractions occur at regular intervals. High density particles may settle into the honeycomb structures and can be found after death. It is during the contractions of the reticulum that sharp objects can penetrate the wall and make their way to the heart. Some ruminants, such as goats, also have monophasic contractions in addition to the biphasic contractions.
See also
Methanogens in digestive tract of ruminants
References
Mammal anatomy
Digestive system
Ruminants | Reticulum (anatomy) | Biology | 854 |
46,177 | https://en.wikipedia.org/wiki/Epidemic%20typhus | Epidemic typhus, also known as louse-borne typhus, is a form of typhus so named because the disease often causes epidemics following wars and natural disasters where civil life is disrupted. Epidemic typhus is spread to people through contact with infected body lice, in contrast to endemic typhus which is usually transmitted by fleas.
Though typhus has been responsible for millions of deaths throughout history, it is still considered a rare disease that occurs mainly in populations suffering from extreme, unhygienic overcrowding. It is rarest in industrialized countries, occurring primarily in the colder, mountainous regions of central and east Africa, as well as Central and South America. The causative organism is Rickettsia prowazekii, transmitted by the human body louse (Pediculus humanus corporis). Untreated typhus cases have a fatality rate of approximately 40%.
Epidemic typhus should not be confused with murine typhus, which is more endemic to the United States, particularly Southern California and Texas. This form of typhus has similar symptoms but is caused by Rickettsia typhi, is less deadly, and has different vectors for transmission.
Signs and symptoms
Symptoms of this disease typically begin within 2 weeks of contact with the causative organism. Signs/Symptoms may include:
Fever
Chills
Headache
Confusion
Cough
Rapid Breathing
Body/Muscle Aches
Rash
Nausea
Vomiting
After 5–6 days, a macular skin eruption develops: first on the upper trunk and spreading to the rest of the body (rarely to the face, palms, or soles of the feet, however).
Brill–Zinsser disease, first described by Nathan Brill in 1913 at Mount Sinai Hospital in New York City, is a mild form of epidemic typhus that recurs in someone after a long period of latency (similar to the relationship between chickenpox and shingles). This recurrence often arises in times of relative immunosuppression, which is often in the context of a person suffering malnutrition or other illnesses. In combination with poor sanitation and hygiene in times of social chaos and upheaval, which enable a greater density of lice, this reactivation is why typhus generates epidemics in such conditions.
Complications
Complications are as follows
Myocarditis
Endocarditis
Mycotic aneurysm
Pneumonia
Pancreatitis
Kidney or bladder infections
Acute renal failure
Meningitis
Encephalitis
Myelitis
Septic shock
Transmission
Feeding on a human who carries the bacterium infects the louse. R. prowazekii grows in the louse's gut and is excreted in its feces. The louse transmits the disease by biting an uninfected human, who scratches the louse bite (which itches) and rubs the feces into the wound. The incubation period is one to two weeks. R. prowazekii can remain viable and virulent in the dried louse feces for many days. Typhus will eventually kill the louse, though the bacteria will remain viable for many weeks in the dead louse.
Epidemic typhus has historically occurred during times of war and deprivation. For example, typhus killed millions of prisoners in German Nazi concentration camps during World War II. The unhygienic conditions in camps such as Auschwitz, Theresienstadt, and Bergen-Belsen allowed diseases such as typhus to flourish. Situations in the twenty-first century with potential for a typhus epidemic would include refugee camps during a major famine or natural disaster. In the periods between outbreaks, when human-to-human transmission occurs less often, the flying squirrel serves as a zoonotic reservoir for the Rickettsia prowazekii bacterium.
In 1916, Henrique da Rocha Lima proved that the bacterium Rickettsia prowazekii was the agent responsible for typhus. He named it after his colleague Stanislaus von Prowazek, who, like Rocha Lima, had become infected with typhus while investigating an outbreak and subsequently died, and after H. T. Ricketts, who had likewise died from typhus while investigating it. Once these crucial facts were recognized, Rudolf Weigl in 1930 was able to fashion a practical and effective vaccine production method. He ground up the insides of infected lice that had been drinking blood. It was, however, very dangerous to produce, and carried a high likelihood of infection to those who were working on it.
A safer mass-production-ready method using egg yolks was developed by Herald R. Cox in 1938. This vaccine was widely available and used extensively by 1943.
Diagnosis
Diagnosis can be confirmed by IFA, ELISA, or PCR, which become positive after about 10 days.
Treatment
The infection is treated with antibiotics. Intravenous fluids and oxygen may be needed to stabilize the patient. There is a significant disparity between the untreated mortality and treated mortality rates: 10-60% untreated versus close to 0% treated with antibiotics within 8 days of initial infection. Tetracycline, chloramphenicol, and doxycycline are commonly used.
Some of the simplest methods of prevention and treatment focus on preventing infestation of body lice. Completely changing the clothing, washing the infested clothing in hot water, and in some cases also treating recently used bedsheets all help to prevent typhus by removing potentially infected lice. Clothes left unworn and unwashed for 7 days also result in the death of both lice and their eggs, as they have no access to a human host. Another form of lice prevention requires dusting infested clothing with a powder consisting of 10% DDT, 1% malathion, or 1% permethrin, which kill lice and their eggs.
Other preventive measures for individuals are to avoid unhygienic, extremely overcrowded areas where the causative organisms can jump from person to person. In addition, they are warned to keep a distance from larger rodents that carry lice, such as rats, squirrels, or opossums.
History
History of outbreaks
Before 19th century
During the second year of the Peloponnesian War (430 BC), the city-state of Athens in ancient Greece had an epidemic, known as the Plague of Athens, which killed, among others, Pericles and his two elder sons. The plague returned twice more, in 429 BC and in the winter of 427/6 BC. Epidemic typhus is proposed as a strong candidate for the cause of this disease outbreak, supported by both medical and scholarly opinions.
The first description of typhus was probably given in 1083 at La Cava abbey near Salerno, Italy. In 1546, Girolamo Fracastoro, a Florentine physician, described typhus in his famous treatise on viruses and contagion, De Contagione et Contagiosis Morbis.
Typhus was carried to mainland Europe by soldiers who had been fighting on Cyprus. The first reliable description of the disease appears during the siege of the Emirate of Granada by the Catholic Monarchs in 1489 during the Granada War. These accounts include descriptions of fever and red spots over arms, back and chest, progressing to delirium, gangrenous sores, and the stench of rotting flesh. During the siege, the Catholics lost 3,000 men to enemy action, but an additional 17,000 died of typhus.
Typhus was also common in prisons (and in crowded conditions where lice spread easily), where it was known as gaol fever or jail fever. Gaol fever often occurred when prisoners were frequently huddled together in dark, filthy rooms. Imprisonment until the next term of court was often equivalent to a death sentence. Typhus was so infectious that prisoners brought before the court sometimes infected the court itself. Following the Black Assize of Oxford in 1577, over 510 people died from epidemic typhus, including Speaker Robert Bell, Lord Chief Baron of the Exchequer. The outbreak that followed, between 1577 and 1579, killed about 10% of the English population.
During the Lent assize held at Taunton (1730), typhus caused the death of the Lord Chief Baron of the Exchequer, the High Sheriff of Somerset, the sergeant, and hundreds of other persons. During a time when there were 241 capital offences, more prisoners died from 'gaol fever' than were put to death by all the public executioners in the realm. In 1759 an English authority estimated that each year a quarter of the prisoners had died from gaol fever. In London, typhus frequently broke out among the ill-kept prisoners of Newgate Gaol and moved into the general city population.
19th century
Epidemics occurred in the British Isles and throughout Europe, for instance, during the English Civil War, the Thirty Years' War, and the Napoleonic Wars. Many historians believe that the typhus outbreak among Napoleon's troops is the real reason why he stalled his military campaign into Russia, rather than starvation or the cold. A major epidemic occurred in Ireland between 1816 and 1819, and again in the late 1830s. Another major typhus epidemic occurred during the Great Irish Famine between 1846 and 1849. The Irish typhus spread to England, where it was sometimes called "Irish fever" and was noted for its virulence. It killed people of all social classes since lice were endemic and inescapable, but it hit particularly hard in the lower or "unwashed" social strata. It was carried to North America by the many Irish refugees who fled the famine. In Canada, the 1847 North American typhus epidemic killed more than 20,000 people, mainly Irish immigrants in fever sheds and other forms of quarantine, who had contracted the disease aboard coffin ships. As many as 900,000 deaths have been attributed to the typhus fever during the Crimean War in 1853–1856, and 270,000 to the 1866 Finnish typhus epidemic.
In the United States, a typhus epidemic struck Philadelphia in 1837. A son of Franklin Pierce died of typhus in Concord, New Hampshire, during an 1843 epidemic. Several epidemics occurred in Baltimore, Memphis, and Washington, D.C. between 1865 and 1873. Typhus fever was also a significant killer during the American Civil War, although typhoid fever was the more prevalent cause of US Civil War "camp fever"; typhoid is a completely different disease from typhus. On both sides, more men typically died of disease than of wounds.
Rudolph Carl Virchow, a physician, anthropologist, and historian attempted to control an outbreak of typhus in Upper Silesia and wrote a 190-page report about it. He concluded that the solution to the outbreak did not lie in individual treatment or by providing small changes in housing, food or clothing, but rather in widespread structural changes to directly address the issue of poverty. Virchow's experience in Upper Silesia led to his observation that "Medicine is a social science". His report led to changes in German public health policy.
20th century
Typhus was endemic in Poland and several neighboring countries prior to World War I (1914–1918). During and shortly after the war, epidemic typhus caused up to three million deaths in Russia, and several million citizens also died in Poland and Romania. From 1914 onward, many troops, prisoners, and even doctors were infected; at least 150,000 people died from typhus in Serbia, 50,000 of them prisoners. Delousing stations were established for troops on the Western Front, but the disease ravaged the armies of the Eastern Front. Fatalities were generally between 10 and 40 percent of those infected, and the disease was a major cause of death for those nursing the sick. During World War I and the Russian Civil War between the White and Red armies, typhus caused 2–3 million deaths out of 20–30 million cases in Russia between 1918 and 1922.
Typhus caused hundreds of thousands of deaths during World War II. It struck the German Army during Operation Barbarossa, the invasion of Russia, in 1941. In 1942 and 1943 typhus hit French North Africa, Egypt and Iran particularly hard. Typhus epidemics killed inmates in the Nazi concentration camps and death camps such as Auschwitz, Dachau, Theresienstadt, and Bergen-Belsen. Footage shot at Bergen-Belsen concentration camp shows the mass graves for typhus victims. Anne Frank, at age 15, and her sister Margot both died of typhus in the camps. Even larger epidemics in the post-war chaos of Europe were averted only by the widespread use of the newly discovered DDT to kill lice on the millions of refugees and displaced persons.
Following the development of a vaccine during World War II, Western Europe and North America have been able to prevent epidemics. These have usually occurred in Eastern Europe, the Middle East, and parts of Africa, particularly Ethiopia. Naval Medical Research Unit Five worked there with the government on research to attempt to eradicate the disease.
In one of its first major outbreaks since World War II, epidemic typhus reemerged in 1995 in a jail in N'Gozi, Burundi. This outbreak followed the start of the Burundian Civil War in 1993, which caused the displacement of 760,000 people. Refugee camps were crowded and unsanitary, and often far from towns and medical services.
21st century
A 2005 study found seroprevalence of R. prowazekii antibodies in homeless populations in two shelters in Marseille, France. The study noted the "hallmarks of epidemic typhus and relapsing fever".
History of vaccines
Major developments for typhus vaccines started during World War I, as typhus caused high mortality and threatened the health and readiness of soldiers on the battlefield. Vaccines for typhus, like other vaccines of the time, were classified as either live or killed vaccines. Live vaccines typically consisted of an injection of the live agent, whereas killed vaccines were cultures of the agent chemically inactivated prior to use.
French researchers attempted to create a live vaccine against classical, louse-borne typhus, but these efforts proved unsuccessful, and they turned to murine typhus to develop a live vaccine. At the time, murine typhus was viewed as a less severe alternative to classical typhus. Four versions of a live vaccine cultivated from murine typhus were tested on a large scale in 1934.
While the French were making advancements with live vaccines, other European countries were working to develop killed vaccines. During World War II, there were three kinds of potentially useful killed vaccines. All three killed vaccines relied on the cultivation of Rickettsia prowazekii, the organism responsible for typhus. The first attempt at a killed vaccine was developed by Germany, using the Rickettsia prowazekii found in louse feces. The vaccine was tested extensively in Poland between the two world wars and used by the Germans for their troops during their attacks on the Soviet Union.
A second method of growing Rickettsia prowazekii was discovered using the yolk sac of chick embryos. The Germans tried several times to use this technique for growing Rickettsia prowazekii, but none of these efforts was pursued very far.
The last technique was an extended development of the previously known method of growing murine typhus in rodents. It was discovered that rabbits could be infected, by a similar process, and contract classical typhus instead of murine typhus. Again, while proven to produce suitable Rickettsia prowazekii for vaccine development, this method was not used to produce wartime vaccines.
During WWII, the two major vaccines available were the killed vaccine grown in lice and the live vaccine from France. Neither was used much during the war. The killed, louse-grown vaccine was difficult to manufacture in large enough quantities, and the French vaccine was not believed to be safe enough for use.
The Germans worked to develop their own live vaccine from the urine of typhus victims. While developing a live vaccine, Germany used live Rickettsia prowazekii to test multiple possible vaccines' capabilities. They gave live Rickettsia prowazekii to concentration camp prisoners, using them as a control group for the vaccine tests.
The use of DDT as an effective means of killing lice, the main carrier of typhus, was discovered in Naples.
Society and culture
Biological weapon
Typhus was one of more than a dozen agents that the United States researched as potential biological weapons before President Richard Nixon suspended all non-defensive aspects of the U.S. biological weapons program in 1969.
Poverty and displacement
The CDC lists the Andean regions of South America and some parts of Africa as active foci of human epidemic typhus; in the United States, it recognizes only an active enzootic cycle involving flying squirrels. Though epidemic typhus is commonly thought to be restricted to areas of the developing world, serological examination of homeless persons in Houston found evidence of exposure to the bacterial pathogens that cause epidemic typhus and murine typhus. A study involving 930 homeless people in Marseille, France, found high rates of seroprevalence to R. prowazekii and a high prevalence of louse-borne infections in the homeless.
Typhus has been increasingly discovered in homeless populations in developed nations. Typhus among homeless populations is especially prevalent as these populations tend to migrate across states and countries, spreading the risk of infection with their movement. The same risk applies to refugees, who travel across country lines, often living in close proximity and unable to maintain necessary hygienic standards to avoid being at risk for catching lice possibly infected with typhus.
Because the typhus-infected lice live in clothing, the prevalence of typhus is also affected by weather, humidity, poverty and lack of hygiene. Lice, and therefore typhus, are more prevalent during colder months, especially winter and early spring. In these seasons, people tend to wear multiple layers of clothing, giving lice more places to go unnoticed by their hosts. This is particularly a problem for poverty-stricken populations as they often do not have multiple sets of clothing, preventing them from practicing good hygiene habits that could prevent louse infestation.
Due to fear of an outbreak of epidemic typhus, the US Government put a typhus quarantine in place in 1917 across the entirety of the US-Mexican border. Sanitation plants were constructed that required immigrants to be thoroughly inspected and bathed before crossing the border. Those who routinely crossed back and forth across the border for work were required to go through the sanitation process weekly, updating their quarantine card with the date of the next week's sanitation. These sanitation border stations remained active over the next two decades, even after the typhus threat had disappeared. This fear of typhus and the resulting quarantine and sanitation protocols dramatically hardened the border between the US and Mexico, fostering scientific and popular prejudices against Mexicans. This ultimately intensified racial tensions and fueled efforts to ban immigrants to the US from the Southern Hemisphere because the immigrants were associated with the disease.
Literature
(1847) In Jane Eyre by Charlotte Brontë, an outbreak of typhus occurs in Jane's school Lowood, highlighting the unsanitary conditions the girls live in.
(1862) In Fathers and Sons by Ivan Turgenev, Evgeny Bazarov dissects a local peasant and dies after contracting typhus.
(1886) In the short story "Excellent People" by Anton Chekhov, typhus kills a Russian provincial.
(1886) In The Strange Adventures of Captain Dangerous by George Augustus Henry Sala: "We Convicts were all had to the Grate, for the Knight and Alderman would not venture further in, for fear of the Gaol Fever;"
(1890) In How the Other Half Lives by Jacob Riis, the effects of typhus fever and smallpox on "Jewtown" are described.
(1935) Hans Zinsser's Rats, Lice and History, although a touch outdated on the science, contains many useful cross-references to classical and historical impact of typhus.
(1940) In The Don Flows Home to the Sea by Mikhail Sholokhov, numerous characters contract typhus during the Russian Civil War.
(1946) In Viktor Frankl's Man's Search for Meaning, Frankl, a Nazi concentration camp prisoner and trained psychiatrist, treats fellow prisoners for delirium due to typhus, while being occasionally affected with the disease himself.
(1955) In Vladimir Nabokov's Lolita, Humbert Humbert's childhood sweetheart, Annabel Leigh, dies of typhus.
(1956) In Doctor Zhivago by Boris Pasternak, the main character contracts epidemic typhus in the winter following the Russian Revolution, while living in Moscow.
(1964) In the novel Nacht by Edgar Hilsenrath, characters imprisoned in a ghetto in Transnistria during World War II are portrayed infected with and dying of epidemic typhus.
(1978) In Patrick O'Brian's novel Desolation Island, an outbreak of "gaol-fever" strikes the crew while sailing aboard the Leopard.
(1980–1991) In Maus by Art Spiegelman, Vladek Spiegelman contracts typhus during his imprisonment at the Dachau concentration camp.
(1982) A typhus epidemic in Chile is graphically described in The House of the Spirits by Isabel Allende.
(1996) In Andrea Barrett's novella Ship Fever, the characters struggle with a typhus outbreak at the Canadian Grosse Isle Quarantine Station during 1847.
(2001) Lynn and Gilbert Morris' novel Where Two Seas Met portrays an outbreak of typhus on the island of Bequia in the Grenadines, in 1869.
(2004) In Neal Stephenson's The System Of The World, a fictionalized Sir Isaac Newton dies of "gaol fever" before being resurrected by Daniel Waterhouse.
See also
Globalization and disease
List of epidemics
Weil-Felix test
References
Bacterium-related cutaneous conditions
Zoonoses
Insect-borne diseases
Biological agents
Rodent-carried diseases
Typhus | Epidemic typhus | Biology,Environmental_science | 4,566 |
6,447,481 | https://en.wikipedia.org/wiki/Rhizopus%20arrhizus | Rhizopus arrhizus is a fungus of the family Mucoraceae, characterized by sporangiophores that arise from nodes at the point where the rhizoids are formed and by a hemispherical columella. It is the most common cause of mucormycosis in humans and occasionally infects other animals.
The spores of Rhizopus arrhizus contain ribosomes as part of their ultrastructure.
Metabolism in the fungus switches between aerobic respiration and fermentation at various points in its life cycle.
Nutrition
R. arrhizus produces siderophores that are also usable by adjacent plants. Holzberg & Artis 1983 found a hydroxamate siderophore, and Shenker et al. 1992 provide a method for detection of a carboxylate siderophore.
Plant diseases
See:
List of almond diseases
List of apricot diseases
List of beet diseases
List of carrot diseases
List of mango diseases
List of maize diseases
List of peach and nectarine diseases
List of sunflower diseases
R. arrhizus does not infect grape first or alone but is instead a secondary invader. The fungus colonizes grape after another pathogen has begun degrading tissues, and as such it is better diagnosed by molecular methods, especially in early stages when single and complex infections cannot be distinguished visually.
Management
Howell & Stipanovic 1983 found that gliovirin is not effective against R. arrhizus.
Uses
Rhizopus arrhizus can be used for bioremediation; for example, it is useful in treating uranium- and thorium-contaminated soils.
References
Fungal plant pathogens and diseases
Stone fruit tree diseases
Food plant pathogens and diseases
Carrot diseases
Mango tree diseases
Maize diseases
Sunflower diseases
Mucoraceae
Fungus species | Rhizopus arrhizus | Biology | 368 |
37,196,530 | https://en.wikipedia.org/wiki/Out%20of%20the%20box%20%28feature%29 | An out-of-the-box feature or functionality (also called OOTB or off the shelf), particularly in software, is a native feature or built-in functionality of a product that comes directly from the vendor and works immediately when the product is placed in service. In the context of software, out-of-the-box features and functionality are available for all users by default and do not require customization, modification, configuration, scripting, add-ons, modules, third-party tools, or additional fees in order to be used.
See also
Convention over configuration
Commercial off-the-shelf
Government off-the-shelf
Commodity computing
Out-of-box experience
References
Software features | Out of the box (feature) | Technology | 144 |
33,187,484 | https://en.wikipedia.org/wiki/Baker%27s%20technique | In theoretical computer science, Baker's technique is a method for designing polynomial-time approximation schemes (PTASs) for problems on planar graphs. It is named after Brenda Baker, who announced it in a 1983 conference and published it in the Journal of the ACM in 1994.
The idea for Baker's technique is to break the graph into layers, such that the problem can be solved optimally on each layer, then combine the solutions from each layer in a reasonable way that will result in a feasible solution. This technique has given PTASs for the following problems: subgraph isomorphism, maximum independent set, minimum vertex cover, minimum dominating set, minimum edge dominating set, maximum triangle matching, and many others.
The bidimensionality theory of Erik Demaine, Fedor Fomin, MohammadTaghi Hajiaghayi, and Dimitrios Thilikos, and its offshoot of simplifying decompositions, generalizes and greatly expands the applicability of Baker's technique to a vast set of problems on planar graphs and, more generally, to graphs excluding a fixed minor, such as bounded-genus graphs, as well as to other classes of graphs not closed under taking minors, such as the 1-planar graphs.
Example of technique
The example that we will use to demonstrate Baker's technique is the maximum weight independent set problem.
Algorithm
INDEPENDENT-SET(G, w, k)
  choose an arbitrary vertex r of G
  find the breadth-first search levels of G rooted at r: V0, V1, ..., Vn
  for ℓ = 0, ..., k
    find the components G1, G2, ... of G after deleting every level Vi with i ≡ ℓ (mod k + 1)
    for each component Gj
      compute Sj, the maximum-weight independent set of Gj
    let Sℓ be the union of the sets Sj
  let S* be the solution of maximum weight among S0, S1, ..., Sk
  return S*
Notice that the above algorithm is feasible because each Sℓ is the union of independent sets of vertex-disjoint components, and is therefore itself an independent set of G.
Dynamic programming
Dynamic programming is used to compute the maximum-weight independent set of each component Gj. This dynamic program works because each Gj is a k-outerplanar graph. Many NP-complete problems can be solved with dynamic programming on k-outerplanar graphs. Baker's technique can be interpreted as covering the given planar graph with subgraphs of this type, finding the solution to each subgraph using dynamic programming, and gluing the solutions together.
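The following is a minimal Python sketch of this layering scheme for maximum-weight independent set, written against NetworkX. It follows the pseudocode above but, for brevity, solves each component by brute force rather than by the k-outerplanar dynamic program; the function and variable names are illustrative and not taken from the article or any library, and the input graph is assumed to be connected and planar.

    import itertools
    import networkx as nx

    def exact_mwis(G, weight):
        # Brute-force maximum-weight independent set; only suitable for small
        # components. The real algorithm replaces this step with dynamic
        # programming on k-outerplanar graphs.
        nodes = list(G.nodes)
        best, best_w = set(), 0.0
        for size in range(len(nodes) + 1):
            for subset in itertools.combinations(nodes, size):
                if any(G.has_edge(a, b) for a, b in itertools.combinations(subset, 2)):
                    continue  # not an independent set
                w = sum(weight[v] for v in subset)
                if w > best_w:
                    best, best_w = set(subset), w
        return best

    def baker_independent_set(G, weight, k, root):
        # Baker's layering scheme on a connected planar graph G with vertex weights.
        level = nx.single_source_shortest_path_length(G, root)  # BFS levels V0, V1, ...
        best, best_w = set(), 0.0
        for ell in range(k + 1):
            # Delete every level congruent to ell mod (k+1); the remainder splits
            # into bands of at most k consecutive levels, i.e. k-outerplanar pieces.
            kept = [v for v in G if level[v] % (k + 1) != ell]
            solution = set()
            for comp in nx.connected_components(G.subgraph(kept)):
                solution |= exact_mwis(G.subgraph(comp), weight)
            w = sum(weight[v] for v in solution)
            if w > best_w:
                best, best_w = solution, w
        return best

Each choice of ℓ discards at most a 1/(k+1) fraction of the optimal solution's weight, which is why the best of the k+1 candidates is a (k/(k+1))-approximation of the optimum.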
References
1983 in computing
Planar graphs
Approximation algorithms | Baker's technique | Mathematics | 447 |
32,541,936 | https://en.wikipedia.org/wiki/Ligand%20efficiency | Ligand efficiency is a measurement of the binding energy per atom of a ligand to its binding partner, such as a receptor or enzyme.
Ligand efficiency is used in drug discovery research programs to assist in narrowing focus to lead compounds with optimal combinations of physicochemical properties and pharmacological properties.
Mathematically, ligand efficiency (LE) can be defined as the ratio of Gibbs free energy (ΔG) to the number of non-hydrogen atoms of the compound:
LE = -(ΔG)/N
where ΔG = RT·ln(Ki) is the (negative) Gibbs free energy of binding and N is the number of non-hydrogen atoms. It can be transformed to the equation:
LE = 1.4(−log IC50)/N
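As a quick check of these definitions, the following Python sketch computes LE either from Ki (via ΔG = RT·ln Ki in kcal/mol) or from IC50 using the 1.4·pIC50/N approximation; the function names and the example compound are illustrative only.

    import math

    R_KCAL = 0.001987  # gas constant in kcal/(mol·K)

    def le_from_ki(ki_molar, n_heavy, temperature=300.0):
        # LE = -ΔG / N with ΔG = RT·ln(Ki); result in kcal/mol per heavy atom.
        delta_g = R_KCAL * temperature * math.log(ki_molar)  # negative for sub-molar Ki
        return -delta_g / n_heavy

    def le_from_ic50(ic50_molar, n_heavy):
        # LE ≈ 1.4 · pIC50 / N, since 2.303·R·T ≈ 1.4 kcal/mol near room temperature.
        return 1.4 * (-math.log10(ic50_molar)) / n_heavy

    # A hypothetical 10 nM inhibitor with 25 heavy atoms:
    print(le_from_ki(10e-9, 25))    # ≈ 0.44
    print(le_from_ic50(10e-9, 25))  # ≈ 0.45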
Other metrics
Some suggest that better metrics for ligand efficiency are the percentage/potency efficiency index (PEI), the binding efficiency index (BEI), and the surface-binding efficiency index (SEI), because they are easier to calculate and take into account the differences between elements in different rows of the periodic table. It is important to note that PEI is a relative measure for comparing compounds tested under the same conditions (e.g. a single-point assay); values obtained at different inhibitor concentrations are not comparable. Likewise, BEI and SEI require that similar measurements be used (e.g. always using pKi).
PEI = (% inhibition at a given compound concentration as fraction: 0 – 1.0) / (molecular weight, kDa)
BEI = (pKi, pKd, or pIC50) / (molecular weight, kDa)
SEI = (pKi, pKd, or pIC50) / (PSA/100 Å²)
where pKi, pKd, and pIC50 are defined as −log(Ki), −log(Kd), and −log(IC50), respectively, with Ki, Kd, and IC50 expressed in mol/L.
The authors suggest plotting compounds' SEI and BEI values on a plane and optimizing compounds towards the diagonal, thereby improving both SEI and BEI, which together incorporate potency, molecular weight, and PSA.
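A minimal sketch of that plot, assuming the definitions above (molecular weight in Da converted to kDa, PSA in Å²); the compound data and names here are purely illustrative.

    import matplotlib.pyplot as plt

    def bei(p_activity, mol_weight_da):
        # Binding efficiency index: pKi/pKd/pIC50 divided by molecular weight in kDa.
        return p_activity / (mol_weight_da / 1000.0)

    def sei(p_activity, psa_a2):
        # Surface-binding efficiency index: pKi/pKd/pIC50 divided by PSA in units of 100 Å².
        return p_activity / (psa_a2 / 100.0)

    # (pIC50, molecular weight in Da, PSA in Å²) for some made-up compounds
    compounds = {"cmpd-1": (8.0, 350.0, 90.0), "cmpd-2": (6.5, 280.0, 60.0)}
    xs = [sei(p, psa) for p, mw, psa in compounds.values()]
    ys = [bei(p, mw) for p, mw, psa in compounds.values()]
    plt.scatter(xs, ys)
    plt.xlabel("SEI")
    plt.ylabel("BEI")
    plt.show()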
There are other metrics which can be useful during hit-to-lead optimization: group efficiency (GE), lipophilic efficiency/lipophilic ligand efficiency (LipE/LLE), ligand lipophilicity index (LLEAT), ligand-efficiency-dependent lipophilicity (LELP), fit-quality-scaled ligand efficiency (LEscale), and size-independent ligand efficiency (SILE).
Group efficiency (GE) is a metric used to estimate the binding efficiency of groups added to a ligand. Unlike ligand efficiency, which evaluates the efficiency of the entire molecule, group efficiency measures the relative change in the Gibbs free energy (ΔΔG) caused by addition or modification of groups, normalized by the change in the number of heavy atoms in those groups (ΔN), using the equation:
GE = -(ΔΔG)/ΔN
See also
Drug design
Drug discovery hit to lead
References
Drug discovery
Medicinal chemistry | Ligand efficiency | Chemistry,Biology | 619 |
39,430,307 | https://en.wikipedia.org/wiki/Benedikt%20L%C3%B6we | Benedikt Löwe (born 1972) is a German mathematician and logician working at the universities of Hamburg and Cambridge. He is known for his work on mathematical logic and the foundations of mathematics, as well as for initiating the interdisciplinary conference series Foundations of the Formal Sciences (FotFS; 1999–2013) and Computability in Europe (CiE; since 2005).
Biography
Löwe studied mathematics and philosophy at the universities of Hamburg, Tübingen, HU Berlin, and Berkeley. In 2001, he completed his PhD with a thesis on determinacy entitled Blackwell Determinacy, under the supervision of Donald A. Martin and Ronald Björn Jensen.
He worked at the Institute for Logic, Language and Computation of the University of Amsterdam from 2003 to 2023. In 2009, he was appointed professor for mathematical logic and interdisciplinary applications of logic at the University of Hamburg, and he is also an extraordinary fellow at Churchill College of the University of Cambridge.
Löwe was Managing Editor of the journal Mathematical Logic Quarterly from 2011 to 2022.
He was the President of the German Association for Mathematical Logic and for Basic Research in the Exact Sciences (DVMLG) from 2012 to 2022 and the Secretary General of the Division for Logic, Methodology and Philosophy of Science and Technology of the International Union of History and Philosophy of Science and Technology from 2015 to 2023.
Since 2023, he has been one of the Vice Presidents of the International Council for Philosophy and Humanistic Studies (CIPSH).
He is a member of the International Academy for Philosophy of Science, the Academia Europaea (MAE), and the Akademie der Wissenschaften in Hamburg, and a Fellow of the International Science Council (FISC).
Co-edited Volumes (a selection)
2006. Logical approaches to computational barriers : Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30 – July 5, 2006; proceedings. Co-edited with Arnold Beckmann, Ulrich Berger and John V. Tucker.
2008. Games, scales, and Suslin cardinals. Co-edited with Alexander S. Kechris and John R. Steel. Cambridge: Cambridge University Press
2008. Logic and theory of algorithms : 4th Conference on Computability in Europe, CiE 2008, Athens, Greece, June 15 – 20, 2008; proceedings. Co-edited with Arnold Beckmann and Costas Dimitracopoulos. Berlin, Heidelberg: Springer
2011. Wadge Degrees and Projective Ordinals The Cabal Seminar Volume II. Co-edited with Alexander S. Kechris and John R. Steel.
References
1972 births
Living people
German logicians
German set theorists
Mathematical logicians
20th-century German philosophers
21st-century German philosophers
German male writers
21st-century German mathematicians
Fellows of Churchill College, Cambridge
Academic staff of the University of Amsterdam
Academics of the University of Cambridge
Academic staff of the University of Hamburg
Humboldt University of Berlin alumni | Benedikt Löwe | Mathematics | 586 |
53,974,293 | https://en.wikipedia.org/wiki/Teufe | Teufe is the German mining term for depth. The Teufe (hT) indicates how deep a given point lies below the surface of an open-cast pit or below the ground level in the area surrounding the pit. By contrast, the height, h, refers to its distance from a reference surface 'above'. The vertical distance between the surface and a point in the mine (Grubengebäude), i.e. the vertical depth, was formerly also called the Seigerteufe. This difference is no longer made today. The terms Teufe and Seigerteufe are synonymous.
Reference points
The Teufe is always given from a reference point. In earlier times, during the construction of galleries (Stollen), there was the concept of "gallery depth" or Stollenteufe. For this purpose, the survey engineer or Markscheider determined a fixed reference point from which measurements were made. The gallery was either above or below the reference point. If the gallery lay above the Markscheider's reference point, it had a positive "vertical height difference" (Seigerteufe); if it lay below the reference point, it had a negative vertical height difference. With the emergence of mining engineering, the upper surface of the terrain was used as the reference point, usually at the headgear.
Today the reference surface for mining in Germany is standard sea level (Normalhöhennull, NHN, formerly Normalnull). As a result there are negative and positive heights; however, these heights are not themselves an indication of depth. If a point is below NHN, it is given a minus sign (−); if it is above, a plus sign (+). A height based on the NHN has the symbol H.
References
External links
Measurement
Mining in Germany | Teufe | Physics,Mathematics | 380 |
40,699,009 | https://en.wikipedia.org/wiki/Alkylbenzene%20sulfonate | Alkylbenzene sulfonates are a class of anionic surfactants, consisting of a hydrophilic sulfonate head-group and a hydrophobic alkylbenzene tail-group. Along with sodium laureth sulfate, they are one of the oldest and most widely used synthetic detergents and may be found in numerous personal-care products (soaps, shampoos, toothpaste etc.) and household-care products (laundry detergent, dishwashing liquid, spray cleaner etc.).
They were introduced in the 1930s in the form of branched alkylbenzene sulfonates (BAS). However, following environmental concerns, these were replaced with linear alkylbenzene sulfonates (LAS) during the 1960s. Since then production has increased significantly, from about one million tons in 1980 to around 3.5 million tons in 2016, making them the most produced anionic surfactant after soaps.
Branched alkylbenzene sulfonates
Branched alkylbenzene sulfonates (BAS) were introduced in the early 1930s and saw significant growth from the late 1940s onwards, in early literature these synthetic detergents are often abbreviated as syndets. They were prepared by the Friedel–Crafts alkylation of benzene with 'propylene tetramer' (also called tetrapropylene) followed by sulfonation. Propylene tetramer being a broad term for a mixture of compounds formed by the oligomerization of propene, its use gave a mixture of highly branched structures.
Compared to traditional soaps, BAS offered superior tolerance to hard water and better foaming. However, the highly branched tail made it difficult to biodegrade. BAS was widely blamed for the formation of large expanses of stable foam in areas of wastewater discharge such as lakes, rivers and coastal areas (sea foams), as well as foaming problems encountered in sewage treatment and contamination of drinking water. As such, BAS was phased out of most detergent products during the 1960s, being replaced with linear alkylbenzene sulfonates (LAS), which biodegrade much more rapidly. BAS is still important in certain agrochemical and industrial applications, where rapid biodegradability is of reduced importance. For instance, inhibiting asphaltene deposition from crude oil.
Linear alkylbenzene sulfonates
Linear alkylbenzene sulfonates (LAS) are prepared industrially by the sulfonation of linear alkylbenzenes (LABs), which can themselves be prepared in several ways. In the most common route benzene is alkylated by long chain monoalkenes (e.g. dodecene) using hydrogen fluoride as a catalyst. The purified dodecylbenzenes (and related derivatives) are then sulfonated with sulfur trioxide to give the sulfonic acid. The sulfonic acid is subsequently neutralized with sodium hydroxide.
The term "linear" refers to the starting alkenes rather than the final product, perfectly linear addition products are not seen, in-line with Markovnikov's rule. Thus, the alkylation of linear alkenes, even 1-alkenes such as 1-dodecene, gives several isomers of phenyldodecane.
Structure property relationships
Under ideal conditions the cleaning power of BAS and LAS is very similar, however LAS performs slightly better in normal use conditions, due to it being less affected by hard water.
Within LAS itself the detergency of the various isomers are fairly similar, however their physical properties (Krafft point, foaming etc.) are noticeably different.
In particular the Krafft point of the high 2-phenyl product (i.e. the least branched isomer) remains below 0 °C up to 25% LAS whereas the low 2-phenyl cloud point is ~15 °C. This behavior is often exploited by producers to create either clear or cloudy products.
Environmental fate
The biodegradability of alkylbenzene sulfonates has been well studied, and is affected by isomerization, in this case, branching. The salt of the linear material has an LC50 of 2.3 mg/liter for fish, about four times more toxic than the branched compound; however, the linear compound biodegrades far more quickly, making it the safer choice over time. It is biodegraded rapidly under aerobic conditions with a half-life of approximately 1–3 weeks; oxidative degradation initiates at the alkyl chain. Under anaerobic conditions it degrades very slowly or not at all, causing it to exist in high concentrations in sewage sludge, but this is not thought to be a cause for concern as it will rapidly degrade once returned to an oxygenated environment.
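To put that half-life in perspective, a first-order decay model gives the fraction of LAS remaining after a given time; the sketch below assumes simple exponential decay, which is only a rough approximation of real environmental degradation.

    def fraction_remaining(days, half_life_days):
        # First-order decay: fraction of the original amount left after `days`.
        return 0.5 ** (days / half_life_days)

    # With a 2-week (14-day) half-life, roughly 23% remains after 30 days
    # and about 5% after 60 days.
    print(fraction_remaining(30, 14))  # ≈ 0.23
    print(fraction_remaining(60, 14))  # ≈ 0.05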
References
Organic sodium salts
Cleaning product components
Anionic surfactants
Sulfonates
Glycine receptor agonists
Alkyl-substituted benzenes | Alkylbenzene sulfonate | Chemistry,Technology | 1,052 |
21,123,339 | https://en.wikipedia.org/wiki/Covering%20graph | In the mathematical discipline of graph theory, a graph C is a covering graph of another graph G if there is a covering map from the vertex set of C to the vertex set of G. A covering map f is a surjection and a local isomorphism: the neighbourhood of a vertex v in C is mapped bijectively onto the neighbourhood of f(v) in G.
The term lift is often used as a synonym for a covering graph of a connected graph.
Though it may be misleading, there is no (obvious) relationship between covering graph and vertex cover or edge cover.
The combinatorial formulation of covering graphs is immediately generalized to the case of multigraphs. A covering graph is a special case of a covering complex. Both covering complexes and multigraphs (viewed as 1-dimensional cell complexes) are examples of covering spaces of topological spaces, so the terminology of the theory of covering spaces is available; for example, covering transformation group, universal covering, abelian covering, and maximal abelian covering.
Definition
Let G = (V1, E1) and C = (V2, E2) be two graphs, and let f: V2 → V1 be a surjection. Then f is a covering map from C to G if for each v ∈ V2, the restriction of f to the neighbourhood of v is a bijection onto the neighbourhood of f(v) in G. Put otherwise, f maps edges incident to v one-to-one onto edges incident to f(v).
If there exists a covering map from C to G, then C is a covering graph, or a lift, of G. An h-lift is a lift such that the covering map f has the property that for every vertex v of G, its fiber f−1(v) has exactly h elements.
Examples
In the following figure, the graph C is a covering graph of the graph H.
The covering map f from C to H is indicated with the colours. For example, both blue vertices of C are mapped to the blue vertex of H. The map f is a surjection: each vertex of H has a preimage in C. Furthermore, f maps bijectively each neighbourhood of a vertex v in C onto the neighbourhood of the vertex f(v) in H.
For example, let v be one of the purple vertices in C; it has two neighbours in C, a green vertex u and a blue vertex t. Similarly, let v′ be the purple vertex in H; it has two neighbours in H, the green vertex u′ and the blue vertex t′. The mapping f restricted to {t, u, v} is a bijection onto {t′, u′, v′}. This is illustrated in the following figure:
Similarly, we can check that the neighbourhood of a blue vertex in C is mapped one-to-one onto the neighbourhood of the blue vertex in H:
Double cover
In the above example, each vertex of H has exactly 2 preimages in C. Hence C is a 2-fold cover or a double cover of H.
For any graph G, it is possible to construct the bipartite double cover of G, which is a bipartite graph and a double cover of G. The bipartite double cover of G is the tensor product of graphs G × K2:
If G is already bipartite, its bipartite double cover consists of two disjoint copies of G. A graph may have many different double covers other than the bipartite double cover.
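The tensor-product construction described above can be checked directly in a few lines of NetworkX; the choice of the Petersen graph as the example is arbitrary.

    import networkx as nx

    # Bipartite double cover as the tensor product G × K2.
    G = nx.petersen_graph()
    double_cover = nx.tensor_product(G, nx.complete_graph(2))

    # Every vertex v of G has two preimages (v, 0) and (v, 1), and every edge
    # {u, v} of G lifts to the two edges {(u,0),(v,1)} and {(u,1),(v,0)}.
    assert double_cover.number_of_nodes() == 2 * G.number_of_nodes()
    assert double_cover.number_of_edges() == 2 * G.number_of_edges()
    assert nx.is_bipartite(double_cover)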
Universal cover
For any connected graph G, it is possible to construct its universal covering graph. This is an instance of the more general universal cover concept from topology; the topological requirement that a universal cover be simply connected translates in graph-theoretic terms to a requirement that it be acyclic and connected; that is, a tree.
The universal covering graph is unique (up to isomorphism). If G is a tree, then G itself is the universal covering graph of G. For any other finite connected graph G, the universal covering graph of G is a countably infinite (but locally finite) tree.
The universal covering graph T of a connected graph G can be constructed as follows. Choose an arbitrary vertex r of G as a starting point. Each vertex of T is a non-backtracking walk that begins from r, that is, a sequence w = (r, v1, v2, ..., vn) of vertices of G such that
vi and vi+1 are adjacent in G for all i, i.e., w is a walk
vi-1 ≠ vi+1 for all i, i.e., w is non-backtracking.
Then, two vertices of T are adjacent if one is a simple extension of another: the vertex (r, v1, v2, ..., vn) is adjacent to the vertex (r, v1, v2, ..., vn-1). Up to isomorphism, the same tree T is constructed regardless of the choice of the starting point r.
The covering map f maps the vertex (r) in T to the vertex r in G, and a vertex (r, v1, v2, ..., vn) in T to the vertex vn in G.
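The construction above translates almost directly into code. The sketch below builds the universal covering tree only up to a chosen walk length, since for graphs that are not trees the full tree is infinite; the function name and the truncation parameter are illustrative.

    import networkx as nx

    def truncated_universal_cover(G, r, max_length):
        # Vertices are non-backtracking walks from r, up to the given length.
        # The covering map sends a walk to its last vertex.
        T = nx.Graph()
        T.add_node((r,))
        frontier = [(r,)]
        for _ in range(max_length):
            next_frontier = []
            for walk in frontier:
                last = walk[-1]
                prev = walk[-2] if len(walk) > 1 else None
                for nb in G.neighbors(last):
                    if nb == prev:
                        continue  # walks must be non-backtracking
                    extended = walk + (nb,)
                    T.add_edge(walk, extended)  # a walk is adjacent to its one-step extensions
                    next_frontier.append(extended)
            frontier = next_frontier
        return T

    # For a 3-regular graph this produces (a truncation of) the infinite 3-regular tree.
    T = truncated_universal_cover(nx.petersen_graph(), 0, 4)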
Examples of universal covers
The following figure illustrates the universal covering graph T of a graph H; the colours indicate the covering map.
For any k, all k-regular graphs have the same universal cover: the infinite k-regular tree.
Topological crystal
An infinite-fold abelian covering graph of a finite (multi)graph is called a topological crystal, an abstraction of crystal structures. For example, the diamond crystal as a graph is the maximal abelian covering graph of the four-edge dipole graph. This view combined with the idea of "standard realizations" turns out to be useful in a systematic design of (hypothetical) crystals.
Planar cover
A planar cover of a graph is a finite covering graph that is itself a planar graph. The property of having a planar cover may be characterized by forbidden minors, but the exact characterization of this form remains unknown. Every graph with an embedding in the projective plane has a planar cover coming from the orientable double cover of the projective plane; in 1988, Seiya Negami conjectured that these are the only graphs with planar covers, but this remains unproven.
Voltage graphs
A common way to form covering graphs uses voltage graphs, in which the darts of the given graph G (that is, pairs of directed edges corresponding to the undirected edges of G) are labeled with inverse pairs of elements from some group. The derived graph of the voltage graph has as its vertices the pairs (v,x) where v is a vertex of G and x is a group element; a dart from v to w labeled with the group element y in G corresponds to an edge from (v,x) to (w,xy) in the derived graph.
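As a concrete illustration of the derived-graph construction, the following sketch works over the cyclic group Z_n with addition mod n; the input format and function name are made up for this example.

    import networkx as nx

    def derived_graph(voltage_darts, n):
        # Derived graph of a voltage graph over the cyclic group Z_n.
        # voltage_darts: iterable of (u, w, y), a dart from u to w labelled with
        # group element y; the reverse dart (w, u, -y mod n) is implied.
        D = nx.Graph()
        for u, w, y in voltage_darts:
            for x in range(n):
                # the dart from u to w labelled y gives an edge (u, x) -- (w, x + y)
                D.add_edge((u, x), (w, (x + y) % n))
        return D

    # A single loop at one vertex with voltage 1 over Z_5 derives the 5-cycle.
    C5 = derived_graph([("a", "a", 1)], 5)
    assert sorted(d for _, d in C5.degree()) == [2, 2, 2, 2, 2]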
The universal cover can be seen in this way as a derived graph of a voltage graph in which the edges of a spanning tree of the graph are labeled by the identity element of the group, and each remaining pair of darts is labeled by a distinct generating element of a free group. The bipartite double can be seen in this way as a derived graph of a voltage graph in which each dart is labeled by the nonzero element of the group of order two.
Notes
References
Graph theory | Covering graph | Mathematics | 1,526 |
40,111,892 | https://en.wikipedia.org/wiki/Delvotest | Delvotest is a broad spectrum microbial inhibition test used in dairy for testing antibiotic residue in milk and milk products. It is used to ensure milk safety and is produced by the global company DSM.
The dairy sector has a responsibility to prevent the presence of antibiotic residues in milk for health, processing, and economic reasons, as well as a legal responsibility to adhere to the Maximum Residue Levels defined by law.
Delvotest is a testing method that is used throughout the dairy value chain by laboratories, dairy companies and farmers.
The test is validated by:
France – CNIEL The French Dairy Industry Organization - Ministere de l’Agriculture, de l’Agroalimentaire et de la Forêt
NL – Qlip – Dutch Quality and Assurance Organization for the Dairy Industry
UK - CSL Central Laboratory and Executive Agency, UK Department for Environment Food and Rural Affairs (DEFRA)
Brazil – Agricultural Research Corporation, Ministry of Agriculture, Livestock and Food supply
Nestle Research Center - validation of 27 antibiotic residues in raw cow's milk and milk-based products, Lausanne, Switzerland
Nestlé Factory Laboratory, Shuangcheng, China; Centro Referenza Nazionale Qualità Latte Bovino IZSLER, Brescia, Italy (2015)
Italy – AGRIS Agency for Agriculture Development in Sardegna
Belgium - ILVO The Institute for Agricultural and Fisheries Research – government affiliated
Poland - PIWET- The National Veterinary Research Institute, part of ministry of Agriculture
References
Antibiotics
Milk
Food analysis | Delvotest | Chemistry,Biology | 302 |
49,209 | https://en.wikipedia.org/wiki/Carburetor | A carburetor (also spelled carburettor or carburetter) is a device used by a gasoline internal combustion engine to control and mix air and fuel entering the engine. The primary method of adding fuel to the intake air is through the Venturi effect or Bernoulli's principle in the main metering circuit, though various other components are also used to provide extra fuel or air in specific circumstances.
Since the 1990s, carburetors have been largely replaced by fuel injection for cars and trucks, but carburetors are still used by some small engines (e.g. lawnmowers, generators, and concrete mixers) and motorcycles. In addition, they are still widely used on piston-engine–driven aircraft. Diesel engines have always used fuel injection instead of carburetors, as the compression-based combustion of diesel requires the greater precision and pressure of fuel injection.
Etymology
The term carburetor is derived from the verb carburet, which means "to combine with carbon", or, in particular, "to enrich a gas by combining it with carbon or hydrocarbons". Thus a carburetor mixes intake air with hydrocarbon-based fuel, such as petrol or autogas (LPG).
The name is spelled carburetor in American English and carburettor in British English. Colloquial abbreviations include carb in the UK and North America or carby in Australia.
Operating principle
Air from the atmosphere enters the carburetor (usually via an air cleaner), has fuel added within the carburetor, passes into the inlet manifold, then through the inlet valve(s), and finally into the combustion chamber. Most engines use a single carburetor shared between all of the cylinders, though some high-performance engines historically had multiple carburetors.
The simplest carburetors work on Bernoulli's principle: at higher speeds the static pressure of the intake air falls relative to the pressure in the float chamber, which is vented to ambient air, and the pressure difference then forces more fuel into the airstream. In most cases (except for the accelerator pump), the driver pressing the throttle pedal does not directly increase the fuel entering the engine. Instead, the airflow through the carburetor increases, which in turn increases the amount of fuel drawn into the intake mixture.
Bernoulli's principle applies (neglecting friction, viscosity, etc.) to both the air and the fuel: the pressure reduction in the air flow tends to be proportional to the square of the intake airspeed, while the fuel leaving the main jets reaches a speed proportional to the square root of that pressure reduction, so the two flows remain roughly proportional to each other. If the pressure reduction is taken as arising from a change of cross-sectional area along the air flow, rather than from ambient pressure to the fuel entry point, the effect can be described as the Venturi effect, but that is simply a derivation from the Bernoulli principle applied at two positions. The actual fuel and air flows are more complicated and need correction. This might be done variously at lower speeds or higher speeds, or over the whole range by a variable emulsion device that adds air to the fuel after the main jet(s). In SU and other variable-jet carburetors (e.g. Zenith-Stromberg), it was mainly controlled by varying the jet size.
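As a rough numerical illustration of these proportionalities (not a design calculation), the sketch below applies the ideal, incompressible form of Bernoulli's equation; the air and fuel densities and the throat velocity are assumed example values.

    import math

    RHO_AIR = 1.2     # kg/m^3, approximate air density
    RHO_FUEL = 740.0  # kg/m^3, approximate petrol density

    def venturi_pressure_drop(v_throat, v_inlet=0.0, rho=RHO_AIR):
        # Ideal Bernoulli pressure drop: Δp = ½·ρ·(v_throat² − v_inlet²).
        return 0.5 * rho * (v_throat**2 - v_inlet**2)

    def fuel_jet_speed(delta_p, rho=RHO_FUEL):
        # Ideal fuel speed through the jet: v = sqrt(2·Δp / ρ_fuel).
        return math.sqrt(2.0 * delta_p / rho)

    dp = venturi_pressure_drop(50.0)   # ≈ 1500 Pa at a 50 m/s throat speed
    v_fuel = fuel_jet_speed(dp)        # ≈ 2.0 m/s
    # Doubling the airspeed quadruples Δp but only doubles the fuel speed,
    # which is the proportionality described in the text.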
The orientation of the carburetor is a key design consideration. Older engines used updraft carburetors, where the air enters from below the carburetor and exits through the top. From the late 1930s, downdraft carburetors become more commonly used (especially in the United States), along with side draft carburetors (especially in Europe).
Fuel circuits
Main metering circuit
The main metering circuit usually consists of one or more barrels, each narrowing to a throat where the air is at its highest speed before widening again, forming a venturi. Fuel is introduced into the air stream through small tubes leading from the main jet, driven by the difference between the pressure there and the pressure in the float bowl.
Downstream of the venturi is a throttle (usually in the form of a butterfly valve) which is used to control the amount of air entering the carburetor. In a car, this throttle is usually mechanically connected to the vehicle's throttle pedal, which varies engine speed.
At lesser throttle openings, the air speed through the venturi may be insufficient to maintain the fuel flow, so then the fuel may be supplied by the carburetor's idle and off-idle circuits.
At greater throttle openings, the speed of air passing through the venturi increases, which lowers the pressure of the air and draws more fuel into the airstream. At the same time, the reduced manifold vacuum results in less fuel flow through the idle and off-idle circuits.
Choke
During cold weather fuel vaporizes less readily and tends to condense on the walls of the intake manifold, starving the cylinders of fuel and making cold starts difficult. Additional fuel is therefore required (for a given amount of air) to start and run the engine until it warms up, and this extra fuel is provided by the choke valve.
While the engine is warming up the choke valve is partially closed, restricting the flow of air at the entrance to the carburetor. This increases the vacuum in the main metering circuit, causing more fuel to be supplied to the engine via the main jets. Prior to the late 1950s the choke was manually operated by the driver, often using a lever or knob on the dashboard. Since then, automatic chokes became more commonplace. These either use a bimetallic thermostat to automatically regulate the choke based on the temperature of the engine's coolant liquid, an electrical resistance heater to do so, or air drawn through a tube connected to an engine exhaust source. A choke left closed after the engine has warmed up increases the engine's fuel consumption and exhaust gas emissions, and causes the engine to run rough and lack power due to an over-rich fuel mixture.
However, excessive fuel can flood an engine and prevent it from starting. To clear the excess fuel, many carburetors with automatic chokes allow the choke to be held open manually (by depressing the accelerator pedal to the floor and briefly holding it there while cranking the starter), admitting extra air into the engine until the excess fuel is cleared out.
Another method used by carburetors to improve the operation of a cold engine is a fast idle cam, which is connected to the choke and prevents the throttle from closing fully while the choke is in operation. The resulting increase in idle speed provides a more stable idle for a cold engine (by better atomizing the cold fuel) and helps the engine warm up quicker.
Idle circuit
The idle circuit is the system within a carburetor that meters fuel when the engine is running at low RPM. It is generally activated by vacuum near the (nearly closed) throttle plate, where the air speed increases to cause a low-pressure area in the idle passage/port, thus causing fuel to flow through the idle jet. The idle jet is set at some constant value by the carburetor manufacturer, thus flowing a specified amount of fuel.
Off-idle circuit
Many carburetors use an off-idle circuit, which includes an additional fuel jet which is briefly used as the throttle starts to open. This jet is located in a low-pressure area caused by the high air speed near the (partly closed) throttle. The additional fuel it provides is used to compensate for the reduced vacuum that occurs when the throttle is opened, thus smoothing the transition from the idle circuit to the main metering circuit.
Power valve
In a four-stroke engine it is often desirable to provide extra fuel to the engine at high loads (to increase the power output and reduce engine knocking). A 'power valve', which is a spring-loaded valve in the carburetor that is held shut by engine vacuum, is often used to do so. As the airflow through the carburetor increases the reduced manifold vacuum pulls the power valve open, allowing more fuel into the main metering circuit.
In a two-stroke engine, the carburetor power valve operates in the opposite manner: in most circumstances the valve allows extra fuel into the engine, then at a certain engine RPM it closes to reduce the fuel entering the engine. This is done in order to extend the engine's maximum RPM, since many two-stroke engines can temporarily achieve higher RPM with a leaner air-fuel ratio.
This is not to be confused with the unrelated exhaust power valve arrangements used on two-stroke engines.
Metering rod / step-up rod
A metering rod or step-up rod system is sometimes used as an alternative to a power valve in a four-stroke engine in order to supply extra fuel at high loads. One end of each rod is tapered and sits in a main metering jet, acting as a valve for fuel flow through the jet. At high engine loads, the rods are lifted away from the jets (either mechanically or using manifold vacuum), increasing the volume of fuel flow through the jet. These systems have been used in the Rochester Quadrajet and in Carter carburetors of the 1950s.
Accelerator pump
While the main metering circuit can adequately supply fuel to the engine in steady-state conditions, the inertia of fuel (being higher than that of air) causes a temporary shortfall as the throttle is opened. Therefore, an accelerator pump is often used to briefly provide extra fuel as the throttle is opened. When the driver presses the throttle pedal, a small piston or diaphragm pump injects extra fuel directly into the carburetor throat.
The accelerator pump can also be used to "prime" an engine with extra fuel prior to attempting a cold start.
Fuel supply
Float chamber
In order to ensure an adequate supply at all times, carburetors include a reservoir of fuel, called a "float chamber" or "float bowl". Fuel is delivered to the float chamber by a fuel pump, or by gravity if the fuel tank is located higher than the carburetor. A floating inlet valve regulates the fuel entering the float chamber, assuring a constant level. Some small engines instead omit the float chamber and simply use a fuel tank located just below the carburetor, relying on suction to supply the fuel.
Unlike in a fuel injected engine, the fuel system in a carbureted engine is not pressurized. For engines where the intake air travelling through the carburetor is pressurized (such as where the carburetor is downstream of a supercharger) the entire carburetor must be contained in an airtight pressurized box to operate. However, this is not necessary where the carburetor is upstream of the supercharger.
Problems of fuel boiling and vapor lock can occur in carbureted engines, especially in hotter climates. Since the float chamber is located close to the engine, heat from the engine (including for several hours after the engine is shut off) can cause the fuel to heat up to the point of vaporization. This causes air bubbles in the fuel (similar to the air bubbles that necessitate brake bleeding), which prevents the flow of fuel and is known as 'vapor lock'.
To avoid pressurizing the float chamber, vent tubes allow ambient air to enter and exit the float chamber. These tubes may instead extend into the carburetor air flow prior to where the fuel flows in, in order to use the Venturi effect to achieve suitable pressure difference rather than the Bernoulli principle which applies when the pressure difference is related to the ambient air pressure.
Diaphragm chamber
If an engine must be operated when the carburetor is not in an upright orientation (for example in a chainsaw or airplane), a float chamber and gravity activated float valve would not be suitable. Instead, a diaphragm chamber is typically used. This consists of a flexible diaphragm on one side of the fuel chamber, connected to a needle valve which regulates the fuel entering the chamber. As the flowrate of the air in the chamber (controlled by the throttling valve/butterfly valve) decreases, the diaphragm moves inward (downward), which closes the needle valve to admit less fuel. As the flowrate of the air in the chamber increases, the diaphragm moves outward (upward) which opens the needle valve to admit more fuel, allowing the engine to generate more power. A balanced state is reached which creates a steady fuel reservoir level, that remains constant in any orientation.
Other components
Other components that have been used on carburetors include:
Air bleeds allowing air into various portions of the fuel passages, to premix air and fuel, and minimise vaporization, and to largely correct air/fuel ratio over a large range, typically referred to as the emulsion system.
Fuel flow restrictors in aircraft engines, to prevent fuel starvation during inverted flight.
Heated vaporizers to assist with the atomization of the fuel, particularly for engines using kerosene, tractor vaporizing oil or in petrol-paraffin engines
Early fuel evaporators
Feedback carburetors, which adjusted the fuel/air mixture in response to signals from an oxygen sensor, in order to allow a catalytic converter to be used
Constant vacuum carburetors (also called variable choke carburetors), whereby the throttle cable is connected directly to the throttle cable plate. Pulling the cord caused raw gasoline to enter the carburetor, creating a large emission of hydrocarbons.
Constant velocity carburetors use a variable opening in the intake air stream after movement of the throttle plate from the accelerator pedal. This variable opening is controlled by pressure/vacuum at the variable opening itself. This pressure-controlled opening provides relatively even intake pressure throughout the engine's speed and load ranges.
Two-barrel and four-barrel designs
The basic design for a carburetor consists of a single venturi (main metering circuit), though designs with two or four venturi (two-barrel and four-barrel carburetors respectively) are also quite commonplace. Typically the barrels consist of "primary" barrel(s) used for lower load situations and secondary barrel(s) activating when required to provide additional air/fuel at higher loads. The primary and secondary venturi are often sized differently and incorporate different features to suit the situations in which they are used.
Many four-barrel carburetors use two primary and two secondary barrels. This arrangement was commonly used in V8 engines to conserve fuel at low engine speeds while still affording an adequate supply at high speeds.
The use of multiple carburetors (e.g., a carburetor for each cylinder or pair of cylinders) also results in the intake air being drawn through multiple venturi. Some high-performance engines have used multiple two-barrel or four-barrel carburetors, for example six two-barrel carburetors on Ferrari V12s.
History
In 1826, American engineer Samuel Morey received a patent for a "gas or vapor engine", which had a carburetor that mixed turpentine and air. The design did not reach production. In 1875 German engineer Siegfried Marcus produced a car powered by the first petrol engine (which also debuted the first magneto ignition system). Karl Benz introduced his single-cylinder four-stroke powered Benz Patent-Motorwagen in 1885.
All three of these engines used surface carburetors, which operated by moving air across the top of a vessel containing the fuel.
The first float-fed carburetor design, which used an atomizer nozzle, was introduced by German engineers Wilhelm Maybach and Gottlieb Daimler in their 1885 Grandfather Clock engine. The Butler Petrol Cycle car—built in England in 1888—also used a float-fed carburetor.
The first carburetor for a stationary engine was patented in 1893 by Hungarian engineers János Csonka and Donát Bánki.
The first four-barrel carburetors were the Carter Carburetor WCFB and the identical Rochester 4GC, introduced in various General Motors models for 1952. Oldsmobile referred to the new carburetor as the "Quadri-Jet" (original spelling) while Buick called it the "Airpower".
In the United States, carburetors were the common method of fuel delivery for most US-made gasoline (petrol) engines until the late 1980s, when fuel injection became the preferred method. One of the last motorsport users of carburetors was NASCAR, which switched to electronic fuel injection after the 2011 Sprint Cup series. NASCAR still uses the four-barrel carburetor in the NASCAR Xfinity Series.
In Europe, carburetors were largely replaced by fuel injection in the late 1980s, although fuel injection had been increasingly used in luxury cars and sports cars since the 1970s. EEC legislation required all vehicles sold and produced in member countries to have a catalytic converter after December 1992. This legislation had been in the pipeline for some time, with many cars becoming available with catalytic converters or fuel injection from around 1990.
Icing in aircraft engine carburetors
A significant concern for aircraft engines is the formation of ice inside the carburetor. The temperature of air within the carburetor can be reduced by up to 40 °C (72 °F), due to a combination of the reduced air pressure in the venturi and the latent heat of the evaporating fuel. The conditions during the descent to landing are particularly conducive to icing, since the engine is run at idle for a prolonged period with the throttle closed. Icing can also occur in cruise conditions at altitude.
A carburetor heat system is often used to prevent icing. This system consists of a secondary air intake which passes around the exhaust, in order to heat the air before it enters the carburetor. Typically, the system is operated by the pilot manually switching the intake air to travel via the heated intake path as required. The carburetor heat system reduces the power output (due to the lower density of heated air) and causes the intake air filter to be bypassed, therefore the system is only used when there is a risk of icing.
If the engine is operating at idle RPM, another method to prevent icing is to periodically open the throttle, which increases the air temperature within the carburetor.
Carburetor icing also occurs on other applications and various methods have been employed to solve this problem. On inline engines the intake and exhaust manifolds are on the same side of the head. Heat from the exhaust is used to warm the intake manifold and in turn the carburetor. On V configurations, exhaust gases were directed from one head through the intake cross over to the other head. One method for regulating the exhaust flow on the cross over for intake warming was a weighted eccentric butterfly valve called a heat riser that remained closed at idle and opened at higher exhaust flow. Some vehicles used a heat stove around the exhaust manifold. It was connected to the air filter intake via tubing and supplied warmed air to the air filter. A vacuum controlled butterfly valve pre heat tube on the intake horn of the air cleaner would open allowing cooler air when engine load increased.
See also
Bernoulli's principle
Fuel injection
Humidifier
List of auto parts
List of carburetor manufacturers
Venturi effect
References
German inventions
Carburettors
Engine fuel system technology
Engine components | Carburetor | Technology | 4,045 |
66,662,848 | https://en.wikipedia.org/wiki/Neoantrodia%20primaeva | Neoantrodia primaeva is a species of fungus belonging to the family Fomitopsidaceae.
Synonym:
Antrodia primaeva Renvall & Niemelä, 1992 (= basionym)
References
Fomitopsidaceae
Fungus species | Neoantrodia primaeva | Biology | 53 |
5,531,434 | https://en.wikipedia.org/wiki/Conservation-dependent%20species | A conservation-dependent species is a species which has been categorized as "Conservation Dependent" ("LR/cd") by the International Union for Conservation of Nature (IUCN), as dependent on conservation efforts to prevent it from becoming endangered. A species that is reliant on the conservation attempts of humans is considered conservation dependent. Such species must be the focus of a continuing species-specific and/or habitat-specific conservation program, the cessation of which would result in the species qualifying for one of the threatened categories within a period of five years. The determination of status is constantly monitored and can change.
This category is part of the IUCN 1994 Categories & Criteria (version 2.3), which is no longer used in evaluation of taxa, but persists in the IUCN Red List for taxa evaluated prior to 2001, when version 3.1 was first used. Using the 2001 (version 3.1) system these taxa are classed as near threatened, but those that have not been re-evaluated remain with the "Conservation Dependent" category.
Conservation-dependent species require maintenance beyond the protections of the United States Endangered Species Act of 1973, an act intended to protect species from extinction through conservation concern and action.
Challenges
Conservation-dependent species rely on the connection between human conservation efforts and animal populations to survive. That connection rests on the federal regulatory provisions that protect each species and its habitat. Habitats and species are difficult to conserve when they are not covered by the regulations in place, and existing laws and acts have flaws that leave gaps in their intent. The Endangered Species Act, for example, fails to account for ecosystem-level conservation and for all threats to a species' presence. Conservation under these conditions creates data gaps and leads to the depletion of species.
Funding of the federal provisions has proven to be a major concern in efforts to conserve species. Lawmakers who do not agree on where funding should go cause further harm to conservation-dependent species by making no effort toward restoration. Despite legal efforts to define restoration programs and set regulatory provisions, conservation-dependent species remain in danger.
Flora vs Fauna conservation methods
While conservation-dependent plants and animals fall under the same risk status, different methods are used to protect them. Conservation-dependent animals are typically protected by government recovery plans and conservation agreements. Conservation-dependent plants have less protection behind them, as the main method of conserving them is keeping their habitat healthy; keeping areas undeveloped and minimizing pollution emissions are typical measures. The main goal of these methods is to keep the flora (plants) and fauna (animals) of a region out of the conservation-dependent category.
Threatened categories
Species that are considered conservation dependent fall under the Lower Risk category of the IUCN Red List of Threatened Species. A species' category may change depending on its status in its environment. The Lower Risk section has three categories that a species may move between:
Near Threatened (NT)
Conservation Dependent (CD)
Least Concern (LC)
Conservation Attempts
In fisheries around the world, lists of rules that people must follow are in place as conservation efforts. The conservation-dependent listing of the scalloped hammerhead shark (Sphyrna lewini) under the EPBC Act is one major step for the conservation of endangered species; associated rules include:
Reporting catch by phone: fishers must report their catch of a shark to QDAF's automated interactive voice response.
Species specific catch and discard information in logbooks: all catches of sharks must be recorded in a log book.
Data validation: one hour after docking when there is a shark on board, fisheries officers are allowed to inspect the boat and catches.
Conservation-dependent animals
Examples of conservation-dependent species include the black caiman (Melanosuchus niger), the sinarapan, and the California ground cricket.
As of December 2015, there remained 209 conservation-dependent plant species and 29 conservation-dependent animal species.
As of September 2022, the IUCN still lists 20 conservation-dependent animal species and one conservation-dependent subpopulation or stock.
Reptiles
Black caiman
Mollusks
Bear paw clam
China clam
Maxima clam
Fluted giant clam
Arthropods
Mono Lake brine shrimp
Attheyella yemanjae
Canthocamptus campaneri
Metacyclops campestris
Murunducaris juneae
Muscocyclops bidentatus
Muscocyclops therasiae
California ground cricket
Ponticyclops boscoi
Spaniacris deserticola
Stenopelmatus nigrocapitatus
Thermocyclops parvus
EPBC Act
In Australia, the Environment Protection and Biodiversity Conservation Act 1999 still uses a "Conservation Dependent" category for classifying fauna and flora species. Species recognized as "Conservation Dependent" do not receive special protection, as they are not considered "matters of national environmental significance under the EPBC Act". Any assemblage of species may be listed as a "threatened ecological community" under the EPBC Act. Fauna may be classified under this category if its flora is directly threatened.
The legislation uses categories similar to those of the IUCN 1994 Categories & Criteria. It does not, however, have a near threatened category or any other "lower risk" categories.
As of December 2018, eight species of fishes have received the status under the act:
Orange roughy (Hoplostethus atlanticus)
Silver gemfish (Rexea solandri)
School shark (Galeorhinus galeus)
Southern bluefin tuna (Thunnus maccoyii)
Southern dogfish (Centrophorus zeehaani)
Dumb gulper shark (Centrophorus harrissoni)
Blue warehou (Seriolella brama)
Scalloped hammerhead (Sphyrna lewini)
No flora has been given the category under the EPBC Act.
See also
Conservation-reliant species
IUCN Red List conservation dependent species, ordered by taxonomic rank.
:Category:IUCN Red List conservation dependent species, ordered alphabetically.
References
IUCN Red List | Conservation-dependent species | Biology | 1,236 |
59,006,704 | https://en.wikipedia.org/wiki/Xiang%20Li%20%28hacker%29 | Xiang Li () is a Chinese computer hacker. He is serving a twelve-year sentence in federal prison in the United States.
Early life
Li was born in Chengdu, China in 1979.
Career
From Chengdu, he operated "CRACK99", a website that sold stolen software globally from 2008 until his arrest by U.S. authorities in 2011. During that time, he sold over $100 million in industrial-grade software, the access controls of which had been circumvented by software cracking. The software had civilian and military applications, including aerospace and aviation simulation and design, communications systems design, electromagnetic simulation, explosives simulation, intelligence analysis, precision tooling, oil field management, and manufacturing plant design.
Operation, arrest and prosecution
Investigation
One of the software titles for sale on CRACK99 was "Satellite Tool Kit 8.0" ("STK"), now known as Systems Tool Kit, designed by Analytical Graphics Incorporated (AGI) to enable the U.S. military to simulate missile launches and flight trajectories of aircraft and satellites. AGI brought this fact to the attention of U.S. Department of Homeland Security Investigations in December 2009. A team of prosecutors and agents from the Department of Justice, Homeland Security and the Defense Criminal Investigative Service initiated an undercover investigation in 2010. As part of that investigation, federal agents purchased STK software from the CRACK99 website, as well as other advanced software used in spacecraft design and programmable logic devices.
Lurement and arrest in Saipan
U.S. undercover agents posed as criminals who were reselling software obtained from CRACK99. Li and the agents engaged in lengthy email and Skype conversations about increasing sales by expanding the U.S. market. Ultimately, Li agreed to meet the agents in Saipan to discuss future business opportunities. On June 6, 2011, Li met with undercover agents in Saipan. Li provided agents with 20 gigabytes of proprietary data hacked from a defense contractor.
"It's the database," explained Li, "I was thinking [it] would be difficult to pass through the custom." This data included military and civilian aircraft image models, a software module containing data associated with the International Space Station, and a high resolution, 3-dimensional imaging program.
Li further advised the undercover agents: "Don’t just sell it … randomly! … Only the familiar and reliable customers… The products…are pretty…um…like confidential. [Don’t]… go and tell other people."
The agents asked if Xiang Li could get software in addition to what he had listed on CRACK99. "I mean as long as [you] can tell me the name," Li said, "I could find a way to get it ...." Xiang Li asked the agents: "I want to ask a question. … Will [your] customers be able to find me? Will [they] be also contacting me? …. Will [the customers] be able to locate me?"
Shortly thereafter, Li was arrested, waived his right to remain silent, and confessed to his crimes.
Conviction
A federal grand jury indicted Li on multiple federal charges involving the sale of more than $100 million in stolen software. The $100 million figure was based on the results from search warrants executed on Xiang Li's email accounts, which revealed about 600 illegal transactions between April 2008 and August 2010. In January 2013, the federal district court in Delaware accepted Li's guilty plea to one count each of conspiracy to commit criminal copyright infringement and conspiracy to commit wire fraud, exposing him to a maximum of 25 years of incarceration.
In June 2013, the court held a sentencing hearing. Li contended that software piracy was "prevalent" in China, opining that "[p]robably ten million people in China are doing things illegally with software." The U.S. government agreed that cyber theft is prevalent in China, but contended that the prevalence of Chinese piracy is not a defense, and pointed the court to a report estimating that China's illegal software market reached $9 billion in 2011, out of a total market of nearly $12 billion, thus setting a piracy rate of 77 percent. The government emphasized the advanced nature of the software sold by Li and the fact that many of the software products had military applications.
The court noted the extensive amount of crime that the defendant was engaged in, finding: "This was nothing less than a crime spree, and it was brazen." The court found that the software was "highly sophisticated" and "ended up with individuals and sometimes in countries that are not authorized to have those software materials". The court sentenced Li to 12 years in prison, the longest criminal copyright sentence ever imposed. The Xiang Li case was featured in the CNN series Declassified.
American customers
Li sold software worth over $600,000 to Dr. Ronald Best, the “Chief Scientist” of a U.S. defense contractor involved in applications such as radio communication, radar, and microwave technology. Best used the cracked software to design components for Patriot missiles and radar for Marine One (the President's helicopter) and the Army's Blackhawk helicopter.
Another U.S. customer was Cosburn Wedderburn, who purchased over $1,000,000 in stolen software. At the time, Wedderburn was a NASA engineer.
References
1977 births
Living people
People from Chengdu
Hackers
Cybercriminals
21st-century Chinese criminals
Chinese male criminals | Xiang Li (hacker) | Technology | 1,126 |
19,860,955 | https://en.wikipedia.org/wiki/Sar%20%28Unix%29 | System Activity Report (sar) is a Unix System V-derived system monitor command used to report on various system loads, including CPU activity, memory/paging, interrupts, device load, network and swap space utilization. Sar uses /proc filesystem for gathering information.
Platform support
Sar was originally developed for the Unix System V operating system; it is available in AIX, HP-UX, Solaris and other System V based operating systems but it is not available for macOS or FreeBSD. Prior to 2013 there was a bsdsar tool, but it is now deprecated.
Most Linux distributions provide sar utility through the sysstat package.
Syntax
sar [-flags] [ -e time ] [ -f filename ] [-i sec ] [ -s time ]
filename Uses filename as the data source for sar. The default is the current daily data file /var/adm/sa/sadd.
time Selects data up to time. The default is 18:00.
sec Selects data at intervals as close as possible to sec seconds.
Example
[user@localhost]$ sar # Displays current CPU activity.
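As a further illustration (a minimal sketch; the exact flags and their meanings vary between the System V, Solaris and Linux sysstat implementations), an interval and a count can be given to sample live activity, and -f can replay a recorded daily data file:
[user@localhost]$ sar -u 2 5 # Samples CPU utilization every 2 seconds, 5 times.
[user@localhost]$ sar -f /var/adm/sa/sa15 -s 08:00 -e 12:00 # Replays recorded data for the 15th between 08:00 and 12:00.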
Sysstat package
In addition to the sar command, the Linux sysstat package in Debian, Red Hat Enterprise Linux and SuSE provides additional reporting tools such as iostat, mpstat, pidstat and sadf.
See also
atopsar
Nmon
sag - "system activity graph" command
ksar - BSD-licensed Java-based application to create graphs of all parameters from the data collected by Unix sar utilities.
CURT, IBM AIX CPU Usage Reporting Tool
isag, tcl based command to plot sar/sysstat data
References
Easy system monitoring with SAR (IBM developerWorks)
System Activity Reporter (Softpanorama)
Article on sar at Computerhope
Footnotes
Job scheduling
Computer performance
System administration | Sar (Unix) | Technology | 366 |
66,249,878 | https://en.wikipedia.org/wiki/Arctic%20Wolf%20Networks | Arctic Wolf Networks is a cybersecurity company that provides security monitoring to detect and respond to cyber threats. The company monitors on-premises computers, networks and cloud-based information assets from malicious activity such as cybercrime, ransomware, and malicious software attacks.
History
Founded in 2012, Arctic Wolf focused on providing managed security services to small and mid-market organizations. The company was listed as a Gartner Cool Vendor in security for mid-sized enterprises in June 2018.
Acquisitions
In December 2018, Arctic Wolf announced the acquisition of the company RootSecure, and subsequently turned the RootSecure product offering into a vulnerability management service.
On February 1, 2022, Arctic Wolf acquired Tetra Defense.
In October 2023, Arctic Wolf acquired Revelstoke, a cybersecurity company.
Cylance was acquired from Blackberry Limited by Arctic Wolf in December 2024.
Funding
In March 2020, following a $60M Series D round of funding, the company announced a move of its headquarters from Sunnyvale, California, to Eden Prairie, Minnesota, which took place in October 2020.
In October 2020, Arctic Wolf announced a $200M Series E round of funding at a valuation of $1.3 billion.
On July 19, 2021, Arctic Wolf secured $150M at Series F, tripling its valuation to $4.3B.
References
External links
Software companies established in 2012
Network management
Software companies of the United States
American companies established in 2012
Computer security companies
Information technology companies of the United States
Security companies of the United States | Arctic Wolf Networks | Engineering | 302 |
27,012 | https://en.wikipedia.org/wiki/Software%20crisis | Software crisis is a term used in the early days of computing science for the difficulty of writing useful and efficient computer programs in the required time. The software crisis was due to the rapid increases in computer power and the complexity of the problems that could be tackled. With the increase in the complexity of the software, many software problems arose because existing methods were inadequate.
History
The term "software crisis" was coined by some attendees at the first NATO Software Engineering Conference in 1968 at Garmisch, Germany. Edsger Dijkstra's 1972 Turing Award Lecture makes reference to this same problem:
Causes
The causes of the software crisis were linked to the overall complexity of hardware and the software development process. The crisis manifested itself in several ways:
Projects running over-budget
Projects running over-time
Software was very inefficient
Software was of low quality
Software often did not meet requirements
Projects were unmanageable and code difficult to maintain
Software was never delivered
The main cause was that improvements in computing power had outpaced the ability of programmers to effectively use those capabilities. Various processes and methodologies have been developed over the last few decades to improve software quality management, such as procedural programming and object-oriented programming. However, software projects that are large, complicated, poorly specified, or involve unfamiliar aspects are still vulnerable to large, unanticipated problems.
See also
AI winter
List of failed and overbudget custom software projects
Fred Brooks
System accident
Technological singularity
References
External links
Edsger Dijkstra: The Humble Programmer (PDF file, 473kB)
Brian Randell: The NATO Software Engineering Conferences
Markus Bautsch: Cycles of Software Crises in: ENISA Quarterly on Secure Software (PDF file; 1,86MB)
Hoare 1996, "How Did Software Get So Reliable Without Proof?"
Software quality
History of software
1968 neologisms | Software crisis | Technology | 375 |
44,312,199 | https://en.wikipedia.org/wiki/Eww%20%28web%20browser%29 | Emacs Web Wowser (a backronym of "eww") is a lightweight web browser within the GNU Emacs text editor. Eww can only do basic rendering of HTML; there is no capability for executing JavaScript or handling the intricacies of CSS. It was developed by Lars Magne Ingebrigtsen, who also created the underlying HTML rendering library.
See also
w3m used with emacs-w3m interface
Emacs/W3
References
External links
GNU Emacs manual
Source code
Free web browsers
Emacs
Cross-platform free software
Free software programmed in Lisp
Software using the GNU General Public License
Emacs modes | Eww (web browser) | Technology | 145 |
12,534,519 | https://en.wikipedia.org/wiki/Amplitude%20adjusting | The Amplitude adjusting (also referred to as Amplitude control) enables the power control of electric loads, which are operated with AC voltage. A representative application is the heating control of industrial high temperature furnaces.
Functionality
In contrast to conventional phase-angle or full-wave control, amplitude control changes only the amplitude of the sinusoidal supply current. The level of the amplitude depends only on the power consumed; the sinusoidal waveform itself does not change.
Because current and voltage remain in phase, only real power is drawn from the mains under amplitude control, so the current drawn from the mains is considerably lower than with phase-angle operation.
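As a rough numerical illustration (a simplified sketch assuming an ideal, lossless converter, a purely resistive heating element R and a 230 V mains supply): the delivered real power for a sinusoidal output of amplitude Û is P = U_rms · I_rms = Û^2 / (2R), so power is set by adjusting Û while the power factor stays at cos(φ) ≈ 1. To deliver a power P, an amplitude-controlling converter therefore draws an approximately sinusoidal, in-phase mains current of about I ≈ P / U_mains, whereas a phase-angle controller passes the chopped load current I_rms = sqrt(P / R) through the mains. For a 1 kW element (R ≈ 53 Ω) throttled to 250 W, this gives roughly 1.1 A versus 2.2 A of mains current, consistent with the lower current consumption described above.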
Advantages
The continuous current flow causes a mild operation of the used heater elements and consequently significant longer lifetimes are realized. Depending on the ambient conditions the lifetime can be twice as long.
In particular, surface damage to the heater elements at switching thresholds can be reduced.
Amplitude control eliminates the flicker effects and harmonics that are typical of thyristor units, so that the standards EN 61000-3-2 and EN 61000-3-3 are observed.
Reactive power compensation is not required, reducing equipment costs.
Applications
Sinus units or IGBT power converters for power control of:
Resistance heatings
Silicon carbide (SC) - heater elements
Molybdenum disilicide (MoSi2) - heater elements
Infralight radiators
Literature
Manfred Schleicher, Winfried Schneider: Electronic power units (download as PDF).
Electric power | Amplitude adjusting | Physics,Engineering | 322 |
53,611,457 | https://en.wikipedia.org/wiki/NGC%204424 | NGC 4424 is a spiral galaxy located in the equatorial constellation of Virgo. It was discovered February 27, 1865 by German astronomer Heinrich Louis d'Arrest. This galaxy is located at a distance of 13.5 million light years and is receding with a heliocentric radial velocity of 442 km/s. It has a morphological class of SB(s)a, which normally indicates a spiral galaxy with a barred structure (SB), no inner ring feature (s), and tightly-wound spiral arms (a). The galactic plane is inclined at an angle of 62° to the line of sight from the Earth. It is a likely member of the Virgo Cluster of galaxies.
The galaxy NGC 4424 has a peculiar morphology with shells that give the appearance of a galaxy merger within the last half billion years. It has a long tail of hydrogen stretching to the south that is likely due to stripping from ram pressure. Because of the lack of gas, star formation has completely ceased in the outer parts of the galaxy, while there is still a mild amount occurring in the inner region. NGC 4424 will most likely end up as a lenticular galaxy by three billion years from now.
There is no indication of a compact source of X-ray emission in the nucleus, but there is an ionized tail stretching from the core. The velocity dispersion at the core suggests there is a supermassive black hole (SMBH) with a mass of . Hubble images of this galaxy show a tidally-stretched cluster located at a projected distance of from the nucleus. This is probably the core of a captured galaxy. It contains a compact source that is emitting X-rays and may be an active massive black hole. In time this may merge with the core SMBH of NGC 4424.
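The mass estimate from the central velocity dispersion relies on the empirical M–σ relation; as an illustrative form (published fits differ in their coefficients), M_BH ≈ 10^8 M_⊙ × (σ / 200 km s^−1)^α with α of roughly 4–5.5, so a measured dispersion translates directly into an approximate black-hole mass.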
Supernovae
Two supernovae have been observed in NGC 4424:
Max Wolf discovered SN 1895A (type unknown, mag. 12.5) on March 16, 1895.
Type Ia supernova designated SN 2012cg was discovered in this galaxy by the LOSS program from images taken May 17, 2012. It reached maximum light nine days later at the end of May, and was the brightest supernova of the year 2012. The supernova reached a peak absolute magnitude of in the B (blue) band and synthesized of the radioactive isotope nickel-56. The available observations favor a merger of double degenerate progenitors as the source for the event. The proximity of the galaxy made this one of the best studied supernova explosions to date.
References
External links
Spiral galaxies
Peculiar galaxies
4424
07561
040809
Virgo (constellation) | NGC 4424 | Astronomy | 542 |
4,243,339 | https://en.wikipedia.org/wiki/Russula%20vesca | Russula vesca, known by the common names of bare-toothed Russula or the flirt, is a basidiomycete mushroom of the genus Russula.
Taxonomy
Russula vesca was described, and named by the eminent Swedish mycologist Elias Magnus Fries (1794–1878). The specific epithet is the feminine of the Latin adjective vescus, meaning "edible".
Description
The skin of the cap typically does not reach the margins (resulting in the common names). The cap is 5–10 cm wide, flat, convex, or with a slightly depressed centre, weakly sticky, and brownish to dark brick-red in colour. The taste is mild. The gills are closely spaced and white. The stipe narrows toward the base, 2–7 cm long, 1.5–2.5 cm wide, white. It turns deep salmon when rubbed with iron salts (ferrous sulfate). The spore print is white.
Distribution and habitat
Russula vesca appears in summer or autumn, and grows primarily in deciduous forests in Europe, and North America.
Edibility
Russula vesca is considered edible and good, with a mild nutty flavour. In some countries, including Russia, Ukraine and Finland, it is considered entirely edible even in the raw state.
See also
List of Russula species
References
"Danske storsvampe. Basidiesvampe" [a key to Danish basidiomycetes] J.H. Petersen and J. Vesterholt eds. Gyldendal. Viborg, Denmark, 1990.
External links
vesca
Fungi described in 1836
Fungi of Europe
Fungi of North America
Edible fungi
Fungus species | Russula vesca | Biology | 336 |
32,046,954 | https://en.wikipedia.org/wiki/Marasmius%20sullivantii | Marasmius sullivantii is a species of fungus in the family Marasmiaceae. The species was originally described by the French botanist Jean Pierre François Camille Montagne in 1856.
References
External links
Fungi described in 1856
Fungi of North America
sullivantii
Taxa named by Camille Montagne
Fungus species | Marasmius sullivantii | Biology | 60 |
57,522,707 | https://en.wikipedia.org/wiki/Kathleen%20Madden | Kathleen Marie Madden is an American mathematician who works in dynamical systems. She was the dean of the School of Natural Sciences, Mathematics, and Engineering at California State University, Bakersfield. She won the George Pólya Award and is the co-author of the book Discovering Discrete Dynamical Systems.
Education and career
Madden did her undergraduate studies at the University of Colorado. She then spent two years with the Peace Corps teaching mathematics in Cameroon before returning to the US for graduate study.
She completed her Ph.D. in 1994 at the University of Maryland, College Park; her dissertation, On the Existence and Consequences of Exotic Cocycles, was supervised by Nelson G. Markley.
Before joining California State University, Bakersfield as associate dean in 2015,
she was a faculty member in the mathematics department at Lafayette College
and then at Drew University, where she also served as chair of the department and associate dean. At California State University, Bakersfield, she was appointed interim dean in 2016 and permanent dean in 2017. She served in this position until 2021, at which time she retired to a part-time position in the faculty.
Books and recognition
In 1998, Madden and Aimee Johnson won the George Pólya Award for their joint paper on aperiodic tiling, "Putting the Pieces Together: Understanding Robinson's Nonperiodic Tilings". In 2017, Madden, Johnson, and their co-author Ayşe Şahin published the textbook Discovering Discrete Dynamical Systems through the Mathematical Association of America.
References
Year of birth missing (living people)
Living people
20th-century American mathematicians
21st-century American mathematicians
Dynamical systems theorists
University of Colorado alumni
University of Maryland, College Park alumni
Lafayette College faculty
Drew University faculty
California State University, Bakersfield faculty
20th-century American women mathematicians
21st-century American women mathematicians | Kathleen Madden | Mathematics | 364 |
42,591,787 | https://en.wikipedia.org/wiki/Axial%20line%20%28dermatomes%29 | The axial line is the line between two adjacent dermatomes that are not represented by immediately adjacent spinal levels. Although dermatomes are shown to be discrete segments on dermatomal maps (like in the image opposite), they are in fact not; adjacent dermatomes overlap with one another. This is one of the reasons for the variety of different dermatomal maps proposed. However, at axial lines, adjacent dermatomes do not overlap. An example of an axial line would be the line between the S2 and L4 dermatomes on the calf.
References
Anatomy
Animal anatomy
Anatomical terminology | Axial line (dermatomes) | Biology | 130 |
3,973,930 | https://en.wikipedia.org/wiki/Electrophoretic%20mobility%20shift%20assay | An electrophoretic mobility shift assay (EMSA) or mobility shift electrophoresis, also referred as a gel shift assay, gel mobility shift assay, band shift assay, or gel retardation assay, is a common affinity electrophoresis technique used to study protein–DNA or protein–RNA interactions. This procedure can determine if a protein or mixture of proteins is capable of binding to a given DNA or RNA sequence, and can sometimes indicate if more than one protein molecule is involved in the binding complex. Gel shift assays are often performed in vitro concurrently with DNase footprinting, primer extension, and promoter-probe experiments when studying transcription initiation, DNA gang replication, DNA repair or RNA processing and maturation, as well as pre-mRNA splicing. Although precursors can be found in earlier literature, most current assays are based on methods described by Garner and Revzin and Fried and Crothers.
Principle
A mobility shift assay is electrophoretic separation of a protein–DNA or protein–RNA mixture on a polyacrylamide or agarose gel for a short period (about 1.5-2 hr for a 15- to 20-cm gel). The speed at which different molecules (and combinations thereof) move through the gel is determined by their size and charge, and to a lesser extent, their shape (see gel electrophoresis). The control lane (DNA probe without protein present) will contain a single band corresponding to the unbound DNA or RNA fragment. However, assuming that the protein is capable of binding to the fragment, the lane with the binding protein present will contain another band that represents the larger, less mobile complex of nucleic acid probe bound to protein, which is 'shifted' up on the gel (since it has moved more slowly).
Under the correct experimental conditions, the interaction between the DNA (or RNA) and protein is stabilized and the ratio of bound to unbound nucleic acid on the gel reflects the fraction of free and bound probe molecules as the binding reaction enters the gel. This stability is in part due to a "caging effect", in that the protein, surrounded by the gel matrix, is unable to diffuse away from the probe before they recombine. If the starting concentrations of protein and probe are known, and if the stoichiometry of the complex is known, the apparent affinity of the protein for the nucleic acid sequence may be determined. Unless the complex is very long lived under gel conditions, or dissociation during electrophoresis is taken into account, the number derived is an apparent Kd. If the protein concentration is not known but the complex stoichiometry is, the protein concentration can be determined by increasing the concentration of DNA probe until further increments do not increase the fraction of protein bound. By comparison with a set of standard dilutions of free probe run on the same gel, the number of moles of protein can be calculated.
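As a worked outline (a simplified sketch assuming a 1:1 protein–probe complex at equilibrium when the reaction enters the gel): the apparent dissociation constant is K_d = [P]_free [D]_free / [PD]. If the band intensities give the bound fraction θ = [PD] / [D]_total, and the protein is in large excess over the probe so that [P]_free ≈ [P]_total, then θ ≈ [P]_total / ([P]_total + K_d); fitting θ measured at several protein concentrations therefore yields the apparent K_d.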
Variants and additions
An antibody that recognizes the protein can be added to this mixture to create an even larger complex with a greater shift. This method is referred to as a supershift assay, and is used to unambiguously identify a protein present in the protein – nucleic acid complex.
Often, an extra lane is run with a competitor oligonucleotide to determine the most favorable binding sequence for the binding protein. The use of different oligonucleotides of defined sequence allows the identification of the precise binding site by competition (not shown in diagram). Variants of the competition assay are useful for measuring the specificity of binding and for measurement of association and dissociation kinetics. Thus, EMSA might also be used as part of a SELEX experiment to select for oligonucleotides that do actually bind a given protein.
Once DNA-protein binding is determined in vitro, a number of algorithms can narrow the search for identification of the transcription factor. Consensus sequence oligonucleotides for the transcription factor of interest will be able to compete for the binding, eliminating the shifted band, and must be confirmed by supershift. If the predicted consensus sequence fails to compete for binding, identification of the transcription factor may be aided by Multiplexed Competitor EMSA (MC-EMSA), whereby large sets of consensus sequences are multiplexed in each reaction, and where one set competes for binding, the individual consensus sequences from this set are run in a further reaction.
For visualization purposes, the nucleic acid fragment is usually labelled with a radioactive, fluorescent or biotin label. Standard ethidium bromide staining is less sensitive than these methods and can lack the sensitivity to detect the nucleic acid if small amounts of nucleic acid or single-stranded nucleic acid(s) are used in these experiments. When using a biotin label, streptavidin conjugated to an enzyme such as horseradish peroxidase is used to detect the DNA fragment. While isotopic DNA labeling has little or no effect on protein binding affinity, use of non-isotopic labels including fluorophores or biotin can alter the affinity and/or stoichiometry of the protein interaction of interest. Competition between fluorophore- or biotin-labeled probe and unlabeled DNA of the same sequence can be used to determine whether the label alters binding affinity or stoichiometry.
References
External links
Chemiluminescent Gel Shift Protocol
Genetics techniques
Molecular genetics
Molecular biology
Protein methods
Proteomics
Analytical chemistry
Laboratory techniques
Electrophoresis
Biological techniques and tools | Electrophoretic mobility shift assay | Chemistry,Engineering,Biology | 1,155 |
78,273,960 | https://en.wikipedia.org/wiki/PKS%200537-441 | PKS 0537-441 is a blazar located in the constellation of Pictor. It has a redshift of 0.896 and was discovered in 1973 by an American astronomer named Olin J. Eggen, who noted it as a luminous quasar. This is a BL Lacertae object in literature because of its featureless optical spectra as well as both a possible gravitational microlensing and a gravitationally lensed candidate. Its radio source is found compact and is characterized by a spectral peak in the gigahertz range, making it a gigahertz-peaked spectrum source (GPS).
Description
PKS 0537-441 is violently variable across the electromagnetic spectrum at all frequencies, and is a source of gamma-ray emission. Between December 2004 and March 2005, it underwent intense activity, varying by more than 4 magnitudes in the V filter in 50 days and by 2.5 magnitudes in 10 days. PKS 0537-441 is also known to have displayed two flaring episodes, one in July 2009 and one in March 2010, with its gamma-ray luminosity in the 0.1-100 GeV energy range reaching a peak value of 2.6 × 10^48 erg s^−1 on 3-day timescales at the end of the month. During these episodes, PKS 0537-441 shows both flux and color-index variability on a range of timescales.
PKS 0537-441 contains a radio structure. The source is found to be core dominated on arcsecond scales with a secondary bright component separated by 7".2 at a 305° positional angle (PA). However, according to 2.3 GHz observations conducted by the Southern Hemisphere VLBI Experimental program (SHEVE), the radio structure has a 4.2 Jansky core with a measured diameter of 1.1 mas. There is a jetlike component, confirmed to be an asymmetric core-jet structure according to a 5 GHz Very-long-baseline interferometry imaging. This component is located north of the compact core.
PKS 0537-441 shows gamma-ray and optical oscillations. During its high state between August 2008 and 2011, the periodogram of its gamma-ray light curve displays a peak at T0 ~ 280 days with a significance of 99.7%. A broad ionized magnesium emission line was also discovered at redshift z = 0.885, implying a mini low-ionization broad absorption-line quasar. This has led to speculation that PKS 0537-441 might be a binary quasar.
References
External links
PKS 0537-441 on SIMBAD
PKS 0537-441 on NASA/IPAC Database
Blazars
Pictor
2824444
Astronomical objects discovered in 1973
Quasars
BL Lacertae objects
Active galaxies | PKS 0537-441 | Astronomy | 581 |
47,941,274 | https://en.wikipedia.org/wiki/2%2C3-Xylidine | 2,3-Xylidine is the organic compound with the formula C6H3(CH3)2NH2. it is one of several isomeric xylidines. It is a colorless viscous liquid. The compound is used in the production of the drug mefenamic acid and the herbicide xylachlor.
References
Anilines | 2,3-Xylidine | Chemistry | 78 |
35,281,127 | https://en.wikipedia.org/wiki/.NET%20Gadgeteer | Microsoft .NET Gadgeteer is an open-source rapid-prototyping standard for building small electronic devices using the Microsoft .NET Micro Framework and Microsoft Visual Studio/Visual C# Express.
The Gadgeteer platform
The Gadgeteer platform centers around a Gadgeteer mainboard with a microcontroller running the .NET Micro Framework. Gadgeteer sets out rules about how hardware devices packaged as add-on modules may connect to the mainboard, using solderless push-on connectors. Gadgeteer includes a small class library to simplify the implementation details for integrating these add-on modules into a system. It is a way of assigning the plethora of functions that a microcontroller provides to sockets that have a standardized, small set of interfaces at the hardware level.
History and licensing
.NET Gadgeteer was created by the Sensors and Devices group at Microsoft Research Cambridge as a way to develop device ideas rapidly and iteratively. It quickly generated interest from hobbyists, teachers, and developers, who wanted a platform for building gadgets in a short time.
In response to outside interest, Microsoft then released Gadgeteer as an open source software project, describing the project as "an open collaboration between Microsoft, hardware manufacturers, and end users".
The core libraries are published under the Apache 2.0 License, while the hardware designs are under the Creative Commons 3.0 License. The core source code is publicly available from the CodePlex source repository.
Microsoft has stated plans to continue supporting and investing in the .NET Gadgeteer ecosystem, including hosting educational materials and working with companies to create compatible kits and modules.
Design and construction
.NET Gadgeteer projects consist of a mainboard and a series of modules connected via a standard 10-pin connector. The mainboard sockets can support one or more different types of modules, shown by a series of letters next to the socket. Each module has a letter showing its module type. (Connecting modules incorrectly does not harm the hardware, provided only one red power module is used.) Any module that supplies power (via USB, DC or battery) is coloured red to help prevent multiple power sources, which could potentially harm the devices.
The Gadgeteer library includes a layer of event-driven drivers and code generation, which integrates with Visual Studio. This enables developers to visually create a diagram in Visual Studio of which hardware modules (for instance, a camera module, button module and screen module) are connected to which sockets on the mainboard, and the Gadgeteer SDK then auto-generates code creating object instances for all the relevant hardware. In this way the developer can immediately begin writing .NET code targeting the connected hardware.
Many different modules are currently available from a range of hardware vendors, including modules for wireless transmission, environmental sensing and actuation, as well as custom community modules, resulting in a large ecosystem of projects.
Hardware
Any hardware manufacturer, builder or hobbyist can create .NET Gadgeteer-compatible hardware; currently multiple manufacturers participate.
GHI Electronics
Love Electronics
Micromint
Mountaineer Group
Seeed Studio
Sytech design
See also
Arduino
BASIC Stamp
Fritzing
Gumstix
ioBridge
Make Controller Kit
Maximite
mbed microcontroller
Minibloq
Netduino
OOPic
Parallax Propeller
PICAXE
Raspberry Pi
Simplecortex
Tinkerforge
References
NET Gadgeteer
NET Gadgeteer
NET Gadgeteer
Microsoft free software
Software using the Apache license | .NET Gadgeteer | Engineering | 705 |
68,597,933 | https://en.wikipedia.org/wiki/Zhilan%20Feng | Zhilan Julie Feng (born 1959) is a Chinese-American applied mathematician whose research topics include mathematical biology, population dynamics, and epidemiology. She is a professor of mathematics at Purdue University, and a program director in the Division of Mathematical Sciences at the National Science Foundation.
Education and career
Feng studied mathematics at Jilin University in China, earning a bachelor's degree in 1982 and a master's degree in 1985. She came to Arizona State University for graduate study, completing her Ph.D. in 1994. Her dissertation, A Mathematical Model for the Dynamics of Childhood Diseases Under the Impact of Isolation, was supervised by Horst R. Thieme.
After her postdoctoral study at Cornell University, she joined Purdue University as an assistant professor in 1996. She was promoted to full professor in 2005, and became a program director at the National Science Foundation in 2019.
Recognition
Feng was named a Fellow of the American Mathematical Society, in the 2022 class of fellows, "for contributions to applied mathematics, particularly in biology, ecology, and epidemiology".
Books
Feng's books include:
Disease Evolution: Models, Concepts, and Data Analyses (American Mathematical Society, 2006, edited with Ulf Dieckmann and Simon A. Levin)
Applications of Epidemiological Models to Public Health Policymaking: The role of heterogeneity in model predictions (World Scientific, 2014)
Mathematical Models of Plant-Herbivore Interactions (Chapman & Hall / CRC, 2018, with Donald DeAngelis)
Mathematical Models in Epidemiology (Springer, 2019, with Fred Brauer and Carlos Castillo-Chavez)
References
External links
Home page
1959 births
Living people
Chinese biologists
Chinese epidemiologists
Chinese mathematicians
Chinese women biologists
Chinese women mathematicians
American biologists
American epidemiologists
20th-century American mathematicians
21st-century American mathematicians
American women biologists
American women epidemiologists
Applied mathematicians
Theoretical biologists
Jilin University alumni
Cornell University alumni
Purdue University faculty
United States National Science Foundation officials
Fellows of the American Mathematical Society
20th-century American women mathematicians
21st-century American women mathematicians | Zhilan Feng | Mathematics,Biology | 425 |
3,371,392 | https://en.wikipedia.org/wiki/Infill | In urban planning, infill, or in-fill, is the rededication of land in an urban environment, usually open-space, to new construction. Infill also applies, within an urban polity, to construction on any undeveloped land that is not on the urban margin. The slightly broader term "land recycling" is sometimes used instead. Infill has been promoted as an economical use of existing infrastructure and a remedy for urban sprawl. Detractors view increased urban density as overloading urban services, including increased traffic congestion and pollution, and decreasing urban green-space. Many also dislike it for social and historical reasons, partly due to its unproven effects and its similarity with gentrification.
In the urban planning and development industries, infill has been defined as the use of land within a built-up area for further construction, especially as part of a community redevelopment or growth management program or as part of smart growth.
It focuses on the reuse and repositioning of obsolete or underutilized buildings and sites.
Urban infill projects can also be considered as a means of sustainable land development close to a city's urban core.
Redevelopment or land recycling are broad terms which include redevelopment of previously developed land. Infill development more specifically describes buildings that are constructed on vacant or underused property or between existing buildings. Terms describing types of redevelopment that do not involve using vacant land should not be confused with infill development. Infill development is commonly misunderstood to be gentrification, which is a different form of redevelopment.
Urban infill development vs. gentrification
Infill development is sometimes a part of gentrification, thus providing a source of confusion which may explain social opposition to infill development.
Gentrification is a term that is challenging to define because it manifests differently by location, and describes a process of gradual change in the identity of a neighborhood. Because gentrification represents a gradual change, scholars have struggled to draw a hard line between ordinary, natural changes in a neighborhood and special, unnatural ones based in larger socio-economic and political structures.
While the exact definition of gentrification varies by scholar, most can agree that gentrification redevelops a lower income neighborhood in a way that attracts higher income residents, or caters to their increasing presence. Peter Moskowitz, the author of How to Kill a City, has more specifically put gentrification into context by describing it as a process permitted by "decades of racist housing policy" and perpetuated through a "political system focused more on the creation and expansion of business opportunities than the well-being of its citizens." Gentrification is most common in urban neighborhoods, although it has also been studied in suburban and rural areas.
A defining feature of gentrification is the effect it has on residents. Specifically, gentrification results in the physical displacement of lower class residents by middle or upper class residents. The mechanism by which this displacement most traditionally occurs is through rental increases and increases in property values. As gentrifiers start moving into a neighborhood, developers make upgrades to the neighborhood that are catered to them. The initial influx of middle class gentry occurs due to the affordability of the neighborhood combined with attractive developments that have already been made in the neighborhood. In order to accommodate these new residents, local governments will change zoning codes and give out subsidies to encourage the development of new living spaces. Rental increases are then justified by the new capital and demand for housing coming into an area. Through increased rents for existing shops and rental units, long-time residents and shopkeepers are forced to move, making way for more new development.
The major difference between gentrification and infill development is that infill development does not always involve physical displacement whereas gentrification does. This is because infill development describes any development on unused or blighted land. When successful, infill development creates stable, mixed income communities. Gentrification is more strongly associated with the development of higher-end shopping centers, apartment complexes, and industrial sites. These structures are developed on used land, with the goal of attracting higher income residents to maximize the capital of a certain area. The mixed income communities seen during gentrification are inherently transitional (based on how gentrification is defined), whereas the mixed-income communities caused by infill development are ideally stable.
Despite their differences, similarities between gentrification and infill development are apparent. Infill development can involve the development of the same high-end residential and non-residential structures seen with gentrification (i.e. malls, grocery stores, industrial sites, and apartment complexes) and it often brings middle and upper-class residents into the neighborhoods being developed.
Social challenges
The similarities, and subsequent confusion, between gentrification and infill housing can be identified in John A. Powell’s broader scholarship on regional solutions to urban sprawl and concentrated poverty. This is particularly clear in his article titled Race, poverty, and urban sprawl: Access to opportunities through regional strategies. In this work, he argues that urban civil rights advocates must focus on regional solutions to urban sprawl and concentrated poverty. To make his point, Powell focuses on infill development, explaining that one of the major challenges to it is the lack of advocacy that it receives locally from urban civil rights advocates and community members. He cites that the concern within these groups is that infill development will bring in middle and upper-class residents and cause the eventual displacement of low-income residents. The fact that infill development "is mistakenly perceived as a gentrification process that will displace inner city residents from their existing neighborhoods," demonstrates that there exists confusion between the definitions of the terms.
Powell also acknowledges that there is historical merit to these concerns, citing how during the 1960s infill development proved to favor white residents over minorities and how white-flight to the suburbs occurred throughout the mid-to-late twentieth century. Many opponents to infill development are "inner-city residents of color." They often view "return by whites to the city as an effort to retake the city" that they had previously left. This alludes to the fear of cultural displacement, which has most often been associated with gentrification, but can also apply to infill development. Cultural displacement describes the “changes in the aspects of a neighborhood that have provided long-time residents with a sense of belonging and allowed residents to live their lives in familiar ways.” Due to white flight throughout the mid-to-late 20th century, minorities began to constitute the dominant group in inner-city communities. In the decades following, they developed distinct cultural identities and power within these communities. Powell suggests that it is unsurprising that they would not want to risk relinquishing this sense of belonging to an influx of upper class white people, especially considering the historical tensions leading up to white flight in urban areas across the country throughout the mid to late 20th century.
Benefits of infill development
Despite these concerns, Powell claims that, depending on the city, the benefits of infill development may outweigh the risks that such groups are concerned about. For example, poor cities with high levels of vacant land (such as Detroit) have much to gain through infill development. He also addresses the concern that minority groups will lose power in these communities by explaining how "cities like Detroit and Cleveland are far from being at risk of political domination by whites."
The ways that Powell believes infill development could help poor cities like Detroit and Cleveland are through the increase in middle class residents and the new buildings that are constructed in the neighborhoods. These new buildings are an attractive alternative to blight, so they can have the benefit of improving property values for lower-class homeowners. While increased property values can sometimes force non-homeowners to relocate, Powell suggests that in poor cities there are enough options for relocation that the displacement often remains "intra-jurisdictional." Another benefit of infill development is the raising of the tax base, which brings more revenue into the city and improves the city's ability to serve its residents. Infill development's ability to eradicate old industrial sites and city-wide blight also can improve the quality of life for residents and spark much-needed outside investment in cities.
Considering the confusion between gentrification and infill development, a major obstacle for advocates of infill development is to educate community members on the differences between infill development and gentrification. Doing so requires explaining that infill projects use vacant land and do not displace lower income residents, but instead benefit them in the creation of stable, mixed-income communities. Addressing the issue of cultural displacement is also paramount, as infill development still has the potential to shift the cultural identity of a neighborhood even if there is no physical displacement associated with it.
Logistical challenges
Although urban infill is an appealing tool for community redevelopment and growth management, it is often far more costly for developers to develop land within the city than it is to develop on the periphery, in suburban greenfield land. Costs for developers include acquiring land, removing existing structures, and testing for and cleaning up any environmental contamination.
Scholars have argued that infill development is more financially feasible for development when it occurs on a large plot of land, with several acres. Large-scale development benefits from what economists call economies of scale and reduces the surrounding negative influences of neighborhood blight, crime, or poor schools. However, large scale infill development is often difficult in a blighted neighborhood for several reasons, such as the difficulties in acquiring land and in gaining community support.
Amassing land is one challenge that infill development poses, but greenfield development does not. Neighborhoods that are targets for infill often have parcels of blighted land scattered among places of residence. Developers must be persistent to amass land parcel by parcel and often find resistance from landowners in the target area. One way to approach that problem is for city management to use eminent domain to claim land. However, that is often unpopular with city management and neighborhood residents. Developers must also deal with regulatory barriers, visit numerous government offices for permitting, interact with a city management that is frequently unwilling to use eminent domain to remove current residents, and generally engage in public-private partnerships with local government.
Developers also face social barriers when local officials and residents are not interested in the same type of development. Although citizen involvement has been found to facilitate the development of brownfield land, residents in blighted neighborhoods often want to convert vacant lots to parks or recreational facilities, while external actors seek to build apartment complexes, commercial shopping centers, or industrial sites.
Suburban infill
Suburban infill is the development of land in existing suburban areas that was left vacant during the development of the suburb. It is one of the tenets of New Urbanism and smart growth, trends that urge densification to reduce the need for automobiles, encourage walking, and ultimately save energy. In New Urbanism, an exception to infill is the practice of urban agriculture in which land in the urban or suburban area is retained to grow food for local consumption.
Infill housing
Infill housing is the insertion of additional housing units into an already-approved subdivision or neighborhood. They can be provided as additional units built on the same lot, by dividing existing homes into multiple units, or by creating new residential lots by further subdivision or lot line adjustments. Units may also be built on vacant lots.
Infill residential development does not require the subdivision of greenfield land, natural areas, or prime agricultural land, but it usually reduces green space. In some cases of residential infill, existing infrastructure may need expansion to provide enough utilities and other services: increased electrical and water usage, additional sewage, increased traffic control, and increased fire damage potential.
As with other new construction, structures built as infill may clash architecturally with older, existing buildings.
See also
Land recycling
Redevelopment
Urban consolidation
Gentrification
Urban sprawl
References
External links
DenverInfill.com, News and information about urban infill development in the Mile High City of Denver.
(examples of in-fill development in the greater Washington D.C. area)
Building engineering
Urban studies and planning terminology
Housing | Infill | Engineering | 2,527 |
47,807,559 | https://en.wikipedia.org/wiki/Oldenburger%20Computer-Museum | The Oldenburg Computer Museum (OCM) is a museum founded in 2008 in Oldenburg (Oldb), Lower-Saxony, Germany that is dedicated to the preservation and operational presentation of the history of home computing.
Overview
The museum presents computers, video game consoles, and arcade video game machines from the 1970s through the 1990s. The exhibits are functional and invite visitors to try out and use them. The museum was founded in 2008 by the non-profit association "Oldenburger Computer-Museum e. V.", which is organised and run by dedicated volunteers. The goal of the Oldenburger Computer Museum is the preservation of home computer culture as an interactive exhibition with fully functional exhibits. Everything on exhibit is equipped with software, which can - and should - be used, explored and experienced. In this way, visitors can get a sense of how computer technology has developed over time and how it relates to current technology, especially with regard to aspects like graphics, sound, speed, mass storage, and the reduction in size of components.
At the Oldenburger Computer Museum, visitors can play games on the Commodore 64, Atari 2600, and Amiga, write their own programs on the original machines, and experience the history of computing hands-on.
History
The museum grew out of the private collection of Thiemo Eddiks. In the beginning, small temporary exhibitions were presented at the OFFIS - Institute for Computer Science and the Carl von Ossietzky University in Oldenburg, as well as in other locations. In November 2008, the permanent exhibition was officially opened. In November 2009 the association "Oldenburger Computer-Museum e. V." was founded and recognized as a non-profit organization. Initially housed in a small space, the exhibition moved to its current, larger location in 2014. In addition to the permanent exhibition "Home computers of the 1970s and 80s", a classic video arcade hall has also been created.
Exhibition
The permanent exhibition, "Home computers of the 1970s and 1980s", showcases 23 functional computer systems, including the PDP-8, Commodore PET, Apple II, Osborne 1, Amstrad CPC 464, Apple Macintosh and Amiga 500, and is open every Tuesday from 6 pm until 9 pm.
Literature
References
External links
Oldenburger Computer Museum website
2014 establishments in Germany
Buildings and structures in Oldenburg (city)
Computer museums
Museums established in 2008
Museums in Lower Saxony
Technology museums in Germany
Tourist attractions in Oldenburg (city) | Oldenburger Computer-Museum | Technology | 505 |
66,443,279 | https://en.wikipedia.org/wiki/H3R26me2 | H3R26me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 26th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
Nomenclature
The name of this modification indicates dimethylation of arginine 26 on the histone H3 protein subunit: H3 denotes the histone H3 family, R26 the arginine residue at position 26 (counted from the N-terminal tail), and me2 the dimethyl group.
Arginine
Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases.
Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction, and transcriptional regulation.
Arginine methylation plays a major role in gene regulation because of the ability of the PRMTs to deposit key activating (histone H4R3me2, H3R2me2, H3R17me2a, H3R26me2) or repressive (H3R2me2, H3R8me2, H4R3me2) histone marks.
Histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin.
Mechanism and function of modification
Methylation of H3R26 is mediated by CARM1, which is recruited to promoters upon gene activation along with acetyltransferases and activates transcription. When CARM1 is recruited to transcriptional promoters, histone H3 is methylated (H3R17me2 and H3R26me2).
H3R26 lies close to H3K27, which is a repressive mark when methylated. There are several ways that H3R26 could change gene expression.
Epigenetic implications
The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterized by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. Examination of the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and enrichment was seen to localize in certain genomic regions.
The human genome is annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Clinical significance
CARM1 knockout mice are smaller and die shortly after birth. CARM1 is required for the epigenetic maintenance of pluripotency and self-renewal, as it methylates H3R17 and H3R26 at core pluripotency genes such as Oct4, SOX2, and Nanog.
It is possible that H3R26me2 levels are changed before pre-implantation of bovine embryos and their development.
Methods
The histone mark H3R26me2 can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation.
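A minimal, self-contained sketch can illustrate the enrichment idea behind ChIP-sequencing described in item 1 above. Nothing here comes from the article: the bin names, read counts, and library sizes are invented for illustration, and real analyses work from aligned reads and dedicated peak-calling software rather than a hand-written loop.

```python
# Hypothetical read counts in three genomic bins for a ChIP sample and its
# input (background) control, plus assumed total library sizes.
chip_counts = {"bin_001": 420, "bin_002": 35, "bin_003": 310}
input_counts = {"bin_001": 100, "bin_002": 40, "bin_003": 95}
chip_library_size, input_library_size = 20_000_000, 25_000_000

for genomic_bin in chip_counts:
    # Normalize each bin by its library size, then compare ChIP to input.
    chip_norm = chip_counts[genomic_bin] / chip_library_size
    input_norm = input_counts[genomic_bin] / input_library_size
    fold_enrichment = chip_norm / input_norm
    print(f"{genomic_bin}: fold enrichment = {fold_enrichment:.2f}")
```

Bins with fold enrichment well above 1 would be candidate regions carrying the mark; a genuine pipeline would also assess statistical significance.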
See also
Histone methylation
Histone methyltransferase
References
Epigenetics
Post-translational modification | H3R26me2 | Chemistry | 1,158 |
42,104,891 | https://en.wikipedia.org/wiki/Packet%20Sender | Packet Sender is an open source utility that allows sending and receiving TCP and UDP packets. It also supports TCP connections using SSL, intense traffic generation, HTTP(S) GET/POST requests, and panel generation. It is available for Windows, Mac, and Linux. It is licensed under the GNU General Public License v2 and is free software. Packet Sender's web site says "It's designed to be very easy to use while still providing enough features for power users to do what they need."
Uses
Typical applications of Packet Sender include:
Troubleshooting network devices that use network servers (send a packet and then analyze the response)
Troubleshooting network devices that use network clients (devices that "phone home" via UDP, TCP, or SSL—Packet Sender can capture these requests)
Testing and development of new network protocols (send a packet, see if device behaves appropriately)
Reverse-engineering network protocols for security analysis (such as malware)
Troubleshooting secure connections (using the SSL server and client).
Automation (via Packet Sender's command line interface or resend feature)
Stress-testing a device (using intense network generator tool)
Sharing/Saving/Collaboration using the Packet Sender Cloud service
Packet Sender comes with built-in TCP, UDP, and SSL servers on multiple user-specified ports. These servers keep listening for incoming packets while other packets are being sent.
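The basic send-and-listen workflow that Packet Sender automates can be sketched with Python's standard socket module. This is not Packet Sender's own code or command-line syntax; the target address, port, and payload below are placeholders chosen purely for illustration.

```python
import socket

TARGET = ("192.0.2.10", 5005)     # placeholder device address and port (TEST-NET range)
PAYLOAD = b"hello device\r\n"     # placeholder payload

# Send one UDP packet and wait briefly for a reply, similar to sending a saved
# packet from Packet Sender and watching its traffic log for the response.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2.0)
    sock.sendto(PAYLOAD, TARGET)
    try:
        data, addr = sock.recvfrom(4096)
        print(f"reply from {addr}: {data!r}")
    except socket.timeout:
        print("no reply within 2 seconds")
```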
Features
As of version v8.1.1 Packet Sender supports the following features:
Live traffic log (Time / From IP / From Port / To IP / Method / Error / ASCII / HEX)
Persistent TCP and SSL Connections
HTTP Requests with Auth headers
Portable Mode
IPv6 Client / Server
IPv4 Subnet Calculator
Saved packets (with sending directly from saved list)
Mixed ASCII packet notation (ASCII with embedded syntax to allow hex)
Multiple TCP servers
Multiple UDP servers
Multiple SSL servers
Multicast send and receive
Packet resending at intervals of n seconds
Multi-threaded TCP/SSL connections
Command-line interface
Packet responses
Smart Packet responses
Macros inside packet responses for TIME, DATE, UNIXTIME, RANDOM, UNIQUE
Packet search (for saved packets)
Packet export/import
Intense Traffic Generator (UDP Flooding) via GUI or CLI
Quick-send from traffic log
Save traffic log
Panel Generation for scripting buttons
Packet Sender Cloud
Platforms
Windows (64-bit)
OS X (Intel-based x86-64 or M1 Macs with Rosetta 2)
Linux (Source Distribution with Qt or x86-64 AppImage or Snap)
Packet Sender Mobile is available on iOS. It only has the core features of desktop Packet Sender (send, receive, TCP, UDP, and Cloud).
See also
Hping
Wireshark
Netcat
References
External links
Original website
Packet Sender source code on GitHub
Computer security software
Free network management software
Network analyzers
Unix network-related software
Windows network-related software
MacOS network-related software | Packet Sender | Engineering | 629 |
46,809,690 | https://en.wikipedia.org/wiki/Penicillium%20moldavicum | Penicillium moldavicum is an anamorph species of the genus Penicillium.
References
moldavicum
Fungi described in 1967
Fungus species | Penicillium moldavicum | Biology | 34 |
19,688,961 | https://en.wikipedia.org/wiki/Berm%C3%BAdez%20%28rum%29 | J. Armando Bermúdez & Co., S.A., known as Bermúdez, is the oldest distillery of rums in the Dominican Republic. It was founded in 1852. Together with Barceló and Brugal, the company dominates Dominican rum production. Bermúdez has its distillery in Santiago de los Caballeros.
References
Brands of the Dominican Republic
Distilleries
Dominican Republic companies established in 1852
Rum produced in the Dominican Republic
Rum brands | Bermúdez (rum) | Chemistry | 99 |
4,515,786 | https://en.wikipedia.org/wiki/Probe%20effect | Probe effect is an unintended alteration in system behavior caused by measuring that system. In code profiling and performance measurements, the delays introduced by insertion or removal of code instrumentation may result in a non-functioning application, or unpredictable behavior.
Examples
In electronics, by attaching a multimeter, oscilloscope, or other testing device via a test probe, small amounts of capacitance, resistance, or inductance may be introduced. Though good scopes have very slight effects, in sensitive circuitry these can lead to unexpected failures, or conversely, unexpected fixes to failures.
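As a rough worked example with assumed (not measured) values: a probe that adds about 10 pF of capacitance to a node driven through 1 kΩ of source resistance forms a low-pass filter with a corner frequency near 16 MHz, enough to slow the edges of fast digital signals.

```python
import math

# Assumed illustrative values, not measurements from any particular probe.
C_probe = 10e-12   # farads: capacitance added by the probe (10 pF)
R_source = 1e3     # ohms: source resistance driving the probed node (1 kOhm)

# Corner frequency of the RC low-pass filter formed by the probe loading.
f_corner = 1 / (2 * math.pi * R_source * C_probe)
print(f"corner frequency ~ {f_corner / 1e6:.1f} MHz")   # about 15.9 MHz
```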
In debugging of parallel computer programs, failures (such as deadlocks) are sometimes absent when the debugger's code (which was meant to help find the cause of deadlocks by visualising points of interest in the program) is attached to the program. This is because the additional code changes the timing of the execution of the parallel processes, and as a result the deadlocks are avoided. This type of bug is known colloquially as a Heisenbug, by analogy with the observer effect in quantum mechanics.
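A contrived sketch of this timing sensitivity, with arbitrary iteration counts chosen for illustration rather than taken from any real system: two threads increment a shared counter without a lock, so updates can be lost, and inserting instrumentation such as logging changes the thread interleaving, which can make the bug appear or disappear between runs.

```python
import threading

counter = 0
INSTRUMENT = False   # flipping this "probe" changes timing and can hide or expose the race

def worker(iterations):
    global counter
    for _ in range(iterations):
        value = counter                      # read
        if INSTRUMENT and value % 10_000 == 0:
            print("observed", value)         # instrumentation alters the interleaving
        counter = value + 1                  # write back (not atomic with the read)

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# When the race manifests, the final value is less than 200000; whether it
# manifests at all can change once the instrumentation is enabled.
print("expected 200000, got", counter)
```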
See also
Observer effect (physics)
Observer's paradox
Sources
Software testing
Debugging | Probe effect | Engineering | 248 |
34,255,844 | https://en.wikipedia.org/wiki/Cheshire%20eyepiece | A Cheshire eyepiece or Cheshire collimator is a simple tool that helps align the optical axes of the mirrors or lenses of a telescope, a process called collimation. It consists of a peephole to be inserted into the focuser in place of the eyepiece. Through a lateral opening, ambient light falls on the brightly painted oblique back of the peephole. Images of this bright surface are reflected by the mirrors or lenses of the telescope and can thus be seen by a person peering through the hole. A Cheshire eyepiece contains no lenses or other polished optical surfaces.
The tool was first described by F. J. Cheshire in 1921. It was repopularized in the 1980s and is now mass-produced. Amateur astronomers in particular use them to collimate reflecting or refracting telescopes.
Some modern models of Cheshire eyepieces in common use include extended sight tubes and are equipped with crosshairs. When inserted into a Newtonian telescope whose primary mirror is marked in its center, such aids allow the user to adjust the position and tilt of both the secondary and the primary mirror. It can also be used to verify focuser alignment.
References
Optical devices | Cheshire eyepiece | Materials_science,Engineering | 240 |
69,451,773 | https://en.wikipedia.org/wiki/Insects%20in%20Japanese%20culture | Within Japanese culture, insects have occupied an important role as aesthetic, allegorical, and symbolic objects. In addition, insects have had a historical importance within the context of the culture and art of Japan.
Kenta Takada, longhorn beetle collector and author, noted that the Japanese appreciation for insects is rooted in the Shinto religion. Shinto, a form of animism, emphasizes that every facet of the natural world is worthy of reverence, as all things are creations of the spiritual dimension. Takada additionally noted the importance of mono no aware, the Zen awareness of the transience of all things, as an important factor in the perception of insects in a Japanese context. Lafcadio Hearn remarked that "[the] belief in a mysterious relation between ghosts and insects, or rather between spirits and insects, is a very ancient belief in the East".
Historical context
Insects have occupied a place within Japanese culture for centuries. The Lady who Loved Insects is a classic 12th-century tale of a woman who collected caterpillars. The Tamamushi Shrine, a miniature temple from the 7th century, was formerly adorned with the beetlewing of the jewel beetle Chrysochroa fulgidissima.
Lafcadio Hearn, a European-American scholar who became a Japanese citizen in the 19th century, remarked: "In old Japanese literature, poems upon insects are to be found by thousands". Hearn's writing as a Japanese citizen, much of it analyzing Japanese literary works, repeatedly contrasted Western perceptions of insects with the Japanese ways of "[finding] delight, century after century, in watching the ways of insects". Eleven of the twelve books that Hearn had written included passages devoted to insects. In particular, Hearn wrote about Japanese tales regarding silkworms, cicadas, dragonflies, flies, kusa-hibari (a cricket known under the scientific name Svistella bifasciata), ants, fireflies, butterflies, and mosquitoes.
Entomophagy
Entomophagy has been a tradition within the regions of Gifu and Nagano, mountainous regions where there was a lack of fish and livestock for protein. In times of famine, such as the end of the Second World War, the consumption of insects like inago and hachinoko served to supplement the diets of those with little access to other forms of protein and vitamins. Consumption of insects waned as the Japanese people gradually gained access to higher-quality livestock products. However, the practice of entomophagy has been seeing a resurgence in recent years, with easily available packaged products appearing and with chefs looking for sustainable food sources. Access to edible insects has gradually become easier with the advent of online shopping, as well as vending machines and retailers providing a supply. Additionally, cultural festivals, matsuri, provide a venue for consumption and for highlighting local traditions of entomophagy. The Kushihara Hebo Matsuri of Ena, Gifu, is an example of such an event, at which beekeepers cultivate hachinoko and then compete to harvest the most.
Inago - Orthoptera
Grasshoppers, known as inago in Japanese, are considered a luxury food product.
Hachinoko - Hymenoptera
In the Chūbu region of central Japan, local people raise wasp or bee larvae for consumption. The larvae are referred to as hachinoko. Foraged wasps are consumed at all life stages, from larva to adult. The type of wasp harvested is known in places where insects are consumed, such as Gifu Prefecture, as hebo. Hebo is consumed only during the month of November and is a local delicacy in mountainous regions. The name refers to two species of black wasp (クロスズメバチ, kuro-suzumebachi, including Vespula flaviceps), which are easy to catch and non-aggressive. During the Kushihara Hebo Matsuri, foraged wasp nests are weighed in a competition, with a trophy awarded for the largest. The festival itself arose in 1993 from the efforts of older wasp foragers to protect local customs. The consumption of hebo is understood to be a supplementary food source rather than a primary means of nutrition, with individuals harvesting larvae when they come across nests by chance. The Kushihara region of Gifu was unique in that locals would actively seek out wasp nests and subsequently raise them at their own homes.
Hebo and hachinoko more broadly is consumed in a variety of ways. Hebo gohan is cooked rice mixed with the wasp larvae. Hebo gohei mochi, grilled mochi served with a sauce consisting of wasp larvae, miso, and peanuts.
In addition to hebo, the more aggressive Vespa mandarinia japonica is also consumed by the people of Kushihara.
Insect-related hobbies
Hobbies involving insects have long been popular in Japanese culture, ranging from competitive beetle wrestling, to the more casual raising of beetles as pets, to the childhood pastime of catching insects in nearby forests and parks.
Collecting beetles and crickets to keep as pets is a common hobby for children in Japan. Some are captured for the purpose of fighting one another; beetle wrestling, however, is a relatively new practice.
The live trade of exotic beetles is a common practice for collectors young and old. Live specimens have been known to exceed 10 million yen, or $94,000, in retail price. Locally sourced beetles can sell for 100 yen, while exotic varieties can go up to 1.2 million yen in price. The kabutomushi, or Japanese rhinoceros beetle, can sell at convenience stores for between 500 and 1000 yen. The largest market for the insect trade has been men in their 30s and 40s. The trade of insects, particularly beetles, has moved into everyday locations such as department stores and vending machines, in addition to specialty stores. There are also televised beetle wrestling competitions and petting zoos featuring beetles.
Beetle wrestling
Beetle wrestling, also referred to more broadly as bugfighting, is a form of competition in which two beetles are provoked so that each tries to flip over or toss its opponent, or haul it out of the ring with its mandibles. A beetle also loses the match if it walks out of the ring. Not all competitions involve provoking violence between beetles; in the National Rhinoceros Beetle Sumo Tournament, beetles are made to climb meter-tall tree branches. Beetles are deliberately trained for combat: Shin Yuasa, a previous victor, trained his beetle by prompting it to fight smaller beetles, thus "[getting] him into the habit of winning."
Beetle wrestling tends to inflict very little harm on the beetles themselves, as the beetles involved have little with which to injure their opponents, and instances of insect death during events are rare. Critics question the ethics of pitting insects against each other, with some viewers comparing the sport to cockfighting or dog fighting. Fights involving other varieties of insects or other arthropods with stingers, such as scorpions and centipedes, can also result in severe injury to the participating arthropod. Those who host wrestling matches online insist that the participants are kept "happy and comfortable" in their care.
Beetle wrestling tournaments have found online venues, such as YouTube, for livestreaming. Cash prizes are often awarded to winners. Betting on tournaments is common, particularly in Okinawa Prefecture, despite the practice being banned in Japan. Tessho Suzuki, eight years old, won a national tournament and was awarded beef and plums from Nakayama, Yamagata.
An adverse effect of the intense fascination with beetle wrestling is that it can fuel demand for wildlife smuggling. One of the leading causes of beetle population decline has been poaching for fighting exhibitions. Beetles such as Dynastes satanas, which are rare and protected under CITES, are not protected under Japanese law through the Invasive Alien Species Act. In addition, Japanese policy has eased restrictions on the import of rare beetles, due to the perception that exotic beetles are not a threat to the local ecosystem. Japanese markets demand exotic species, as local ones tend to be short-lived, whereas exotic species such as members of the genus Dynastes can live up to two years. As a result of intense demand, populations of rare beetles and other insects are poached by smugglers seeking Japanese markets, where demand for rare insects is high for beetle wrestling, the pet trade, and preservation. In 2007, Hosogushi Masatsugu was arrested at Mariscal Sucre International Airport for attempting to smuggle 423 rare beetles. Despite several arrests, beetle poaching remains prolific in nations such as Bolivia as a result of a lack of oversight on the Peruvian border.
Insect collecting
Insects as pets
Beetles require relatively little attention when kept as pets. Some exotic species can live up to five years, and rearing beetles has been described as "good for relieving stress."
Symbolic uses
The firefly occupies a place within the Japanese perception of summer. Poets including Matsuo Basho, Yosa Buson, and Issa Kobayashi employ the firefly as a kigo, a phrase associated with a particular season, within the body of their works; fireflies appear as the second most prevalent kigo in their works. In addition, classical literature has focused on the transient lives of the mayfly as well as the chirping calls of the cricket.
In music
Japanese people view the sounds that insects produce as "soothing" or "comfortable". Whereas Westerners have been reported to perceive the sounds of insects as "noise" processed in the right brain, Japanese people perceive insect sounds as a "voice" processed in the left brain. The capture and sale of "insect musicians", insects that produce audible calls, was a popular practice in the animal trade during eighteenth- and nineteenth-century Japan, alongside the trade of live birds.
Two sounds that feature heavily within the perception of insect sounds in Japan are the sounds of cicadas and bell crickets. Cicada emergence in Japan occurs during the summer, and is often associated with the season. The sound of cicada calls in unison is referred to as the "cicada drizzle", as the sound of harmonizing cicadas resembles the sounds of falling rain. The bell cricket's (Meloimorpha japonica) clear-sounding chirping cry during the autumn season has been described as giving a "refreshing feeling" to those listening. Capturing the bell cricket for the use of hearing their cries during the evening has been a cultural practice since ancient times.
Conservation
In 2003, Japan had 500 organizations dedicated to the preservation of satoyama, mixed-use landscapes of human settlement and natural areas in regions where mountains and flatlands meet. These landscapes support high biodiversity owing to the absence of a monoculture, as no single species can dominate the curated landscape. Japanese entomologists Minoru Ishii and Yasuhiro Nakamura consider the preservation of satoyama crucial to the recovery of declining insect populations.
In popular culture
The monster-collecting franchise Pokémon was inspired by the childhood hobby of its creator, Satoshi Tajiri, of collecting and capturing insects. Tajiri expressed his interest in sharing his experiences of capturing and collecting creatures with the younger generation. The games themselves feature bug-inspired Pokémon, such as Mega Heracross, inspired by the Hercules beetle.
The practice of beetle fighting is a core part of the 2001 Sega video game Mushiking. 20,000 competitions were officially set up amid the hype surrounding the franchise. The game's popularity was further boosted by coverage of the sport in popular media and by related merchandise.
The kabutomushi, Japanese rhinoceros beetle, is a ubiquitous design motif for pop culture mascots. In addition to Heracross, whose base form is inspired by the rhinoceros beetle, other characters based on, or inspired by, the rhinoceros beetle include Medabee (Medabots), Gravity Beetle (Mega Man X3), and Kabuterimon (Digimon).
Mothra, a gigantic moth monster, appears prominently in kaiju films and is second only to Godzilla in number of film appearances. Its prominence within kaiju media has been attributed to Japan's unique relationship with insects.
References
Ethnobiology
Biology and culture
Insects in culture
Japanese cuisine
Culture of Japan | Insects in Japanese culture | Biology,Environmental_science | 2,642 |
37,672,033 | https://en.wikipedia.org/wiki/Alexander%20Gettler | Alexander Oscar Gettler (August 13, 1883 – August 4, 1968) was a toxicologist with the Office of Chief Medical Examiner of the City of New York (OCME) between 1918 and 1959, and the first forensic chemist to be employed in this capacity by a U.S. city. His work at OCME with Charles Norris, the chief medical examiner, created the foundation for modern medicolegal investigation in the U.S. and Gettler has been described by peers as "the father of forensic toxicology in America."
The Alexander O. Gettler Award is a prize established in his name by the American Academy of Forensic Sciences.
Early life and education
Gettler was born to a Jewish family in 1883 in Galicia, then a part of the Empire of Austria-Hungary and now part of Poland. As Oscar Gettler, aged seven, he emigrated to the U.S. with his father, Joseph Gettler, and sister, Elise, on board the Red Star Line steamer Westernland, which arrived at the Port of New York on May 6, 1891; they settled in Brooklyn, where he was raised. He studied at the City College of New York and in 1912 received his PhD in Biochemistry from Columbia University. Prior to his employment with OCME he worked as a clinical chemist at Bellevue Hospital in Manhattan and taught Biochemistry at the New York University School of Medicine. He married Alice Gorman in 1912.
Toxicology work
Charles Norris established the Office of the Chief Medical Examiner (OCME) in 1918 and set up his first offices in the Pathology Building (the 'City Morgue') of Bellevue Hospital. While there he asked Gettler if he would be willing to conduct any chemical testing that might be required to which Gettler agreed. An OCME laboratory, where testing was carried out for the presence of the common poisons, was set up on the third floor of the City Morgue building on First Avenue and 29th Street.
Gettler often had to create new tests to isolate poisons. He regularly experimented by poisoning raw liver and attempting to isolate ever-smaller amounts of poison from it. These tests often involved mashing or liquifying tissue, followed by such tests as crystal formation, melting and boiling point analysis, color reactions, and titration. In 1935, Gettler was the first scientist to use a spectrograph in a criminal investigation in order to prove that the thallium that had poisoned the four children of Brooklyn bookkeeper Frederick Gross did not come from cocoa powder Gross had brought home from work. A previous chemical test had mistaken copper contamination from the box for thallium leading to Gross's arrest. The examiners eventually concluded that his wife had murdered the children before dying herself of encephalitis.
In addition to this, Gettler wrote numerous papers on isolating poisons such as benzene from human bodies. In 1933, Gettler was among the first to recognize a normal endogenous presence of carbon monoxide in the human body and was the first to suggest the human gut microbiome as a contributing source.
Gettler often had to work for low pay, due to severe budget cuts to the toxicology office.
Teaching
In the 1920s, Gettler took the post of professor of chemistry at Washington Square College of New York University. At the same time he held a post at the New York University Graduate School. Gettler established a toxicology course in 1935 at the City College of New York. He retired from teaching in 1948, when he reached the mandatory retirement age.
Later life and death
Gettler retired from the office of the medical examiner on January 1, 1959, when he was 75. He remained interested in toxicology until he died due to a terminal illness approximately ten years after retiring.
References
External links
1883 births
1968 deaths
New York University faculty
Columbia University alumni
American forensic scientists
American toxicologists
Clinical chemists
Jews from Austria-Hungary | Alexander Gettler | Chemistry | 788 |
25,705,749 | https://en.wikipedia.org/wiki/Canadian%20Science%20and%20Engineering%20Hall%20of%20Fame | The Canadian Science and Engineering Hall of Fame, located at the Canada Science and Technology Museum in Ottawa, Ontario, honoured Canadians who made outstanding contributions to society in science and engineering. It also promoted role models to encourage young Canadians to pursue careers in science, engineering and technology. The hall included a permanent exhibition, a traveling exhibition, a virtual gallery, and events and programming to celebrate inductees. In 2017, the hall of fame was closed down.
History
The Canadian Science and Engineering Hall of Fame was established in 1991 through a joint partnership of the Canada Science and Technology Museum, the National Research Council of Canada (NRC), Industry Canada and the Association of Partners in Education, to mark the NRC's 75th anniversary. The hall became a major feature of the Canada Science and Technology Museum and formed part of the museum's permanent Innovation Canada exhibition.
Induction Process
The museum used an open process for nomination of new members. A selection committee reviewed nominations annually. Nominees must have met the following criteria:
They must have contributed in an exceptional way to the advancement of science and engineering in Canada;
Their work must have brought great benefits to society and their communities as a whole;
They must possess leadership qualities that can serve as an inspiration to young Canadians to pursue careers in science, engineering or technology.
In April 2015, two members of the selection committee, Judy Illes and Dr. Catherine Anderson, resigned over concerns that, for the second year in a row, there were no female candidates in the list of finalists.
Members
The following people have been inducted into the Canadian Science and Engineering Hall of Fame (listed by date of birth):
William Edmond Logan (1798–1875)
John William Dawson (1820–1899)
Sandford Fleming (1827–1915)
Alexander Graham Bell (1847–1922)
Reginald Fessenden (1866–1932)
Charles Edward Saunders (1867–1937)
Maude Abbott (1869–1940)
Wallace Rupert Turnbull (1870–1954)
Ernest Rutherford (1871–1937)
Harriet Brooks Pitcher (1876–1933)
Frances Gertrude McGill (1882–1959)
Alice Evelyn Wilson (1881–1964)
Frère Marie-Victorin (1885–1944)
John A.D. McCurdy (1886–1961)
Andrew McNaughton (1887–1966)
Margaret Newton (1887–1971)
Chalmers Jack Mackenzie (1888–1984)
Henry Norman Bethune (1890–1939), inducted in 2010
Frederick Banting (1891–1941)
Wilder Penfield (1891–1976)
E.W.R. "Ned" Steacie (1900–1962)
George J. Klein (1904–1992), inducted in 1995
Gerhard Herzberg (1904–1999)
Elizabeth "Elsie" MacGill (1905–1980)
George C. Laurence (1905–1987), inducted in 2010
Helen Sawyer Hogg (1905–1993)
Joseph-Armand Bombardier (1907–1964)
Alphonse Ouimet (1908–1988)
John Tuzo Wilson (1908–1993)
Arthur Porter (1910-2010)
Pierre Dansereau (1911–2011), inducted in 2001
M. Vera Peters (1911-1993)
Hugh Le Caine (1914–1977)
Douglas Harold Copp (1915–1998)
Harold Elford Johns (1915–1998), inducted in 2000
James Hillier (1915–2007), inducted in 2002
Bertram Neville Brockhouse (1918–2003)
Brenda Milner (1918–)
John "Jack" A. Hopps (1919–1998)
Gerald Heffernan (1919–2007)
James Milton Ham (1920–1997)
Raymond Urgel Lemieux (1920–2000)
Lawrence Morley (1920–2013)
Louis Siminovitch (1920–)
Ursula Franklin (1921–2016)
Gerald Hatch (1922–2014)
Willard Boyle (1924–2011), inducted in 2005
Ernest McCulloch (1926–2011), inducted in 2010
Sylvia Fedoruk (1927–2012)
Sidney van den Bergh (1929-)
John Polanyi (1929–)
Richard E. Taylor (1929–), inducted in 2008
Vernon Burrows (1930–)
Charles Robert Scriver (1930–), inducted in 2001
James Till (1931–), inducted in 2010
Michael Smith (1932–2000)
Hubert Reeves (1932–)
Kelvin K. Ogilvie (1942-)
Arthur B. McDonald (1943–)
Ransom A. Myers (1952–2007)
References
External links
Canadian Science and Engineering Hall of Fame official webpage. Canada Science and Technology Museum official website
Science and technology halls of fame
Science and Engineering
Culture of Ottawa
Awards established in 1991
1991 establishments in Canada
2017 disestablishments in Canada | Canadian Science and Engineering Hall of Fame | Technology | 936 |
43,692,331 | https://en.wikipedia.org/wiki/North%20Pacific%20Marine%20Science%20Organization | The North Pacific Marine Science Organization, known as PICES (the nickname referring to the organization's status as a Pacific version of the International Council for the Exploration of the Sea, ICES), is an intergovernmental organization that promotes and coordinates marine scientific research in the North Pacific Ocean and provides a mechanism for information and data exchange among scientists in its member countries.
Legal framework
PICES is an international intergovernmental organization established under a Convention for a North Pacific Marine Science Organization. The Convention entered into force on 1992-03-24 with an initial membership that included the governments of Canada, Japan, and the United States of America. The Convention was ratified by the People's Republic of China on 1992-08-31 to increase membership to four countries. Although the Soviet Union had participated in the development of the Convention, it was not ratified there until 1994-12-16, by the Russian Federation. The Republic of Korea acceded to the Convention on 1995-07-30. The Republic of Mexico and the Democratic People's Republic of Korea are located within the Convention Area (generally north of 30°N) but are not members.
Oversight
A Governing Council consisting of up to two delegates appointed by each member country is the primary decision-making body. Day-to-day operations of the organization are managed by the staff of the PICES Secretariat, located in Canada at the Institute of Ocean Sciences (located on Patricia Bay in British Columbia).
History
The idea to create a Pacific version of ICES (International Council for the Exploration of the Sea) was first discussed by scientists from Canada, Japan, the Soviet Union, and the United States who were attending a conference, sponsored by the United Nations Food and Agriculture Organization, in Vancouver, British Columbia, Canada in February 1973. ICES had provided a forum since 1902 for scientists bordering the Atlantic Ocean and its marginal seas to exchange information, conduct joint research, publish their results, and provide scientific advice about fisheries, primarily in the North Atlantic. In recognition of its Atlantic heritage, the nickname PICES (Pacific ICES) was adopted for the North Pacific Marine Science Organization. Informal meetings of proponents occurred sporadically through the 1980s, eventually entraining government officials into the discussion in the mid-1980s. The final text of a convention for a new marine science organization was endorsed in Ottawa, Canada on 1990-12-12. A scientific planning meeting was held in Seattle, USA in December 1991 to prepare for decisions made at the first annual meeting in October 1992 in Victoria, British Columbia, Canada.
Mandate
The primary mandate of the organization is to promote and to coordinate marine scientific research in the North Pacific Ocean and to provide a mechanism for information and data exchange among scientists in its member countries. This has been achieved through various means, but primarily by establishing major integrative scientific programs such as the PICES/GLOBEC Scientific Program on Carrying Capacity and Climate Change. The alignment of PICES with a global research program (GLOBEC) in the mid-1990s was fundamental to building the reputation of the organization in the international scientific community. Research on climate and marine ecosystem variability, global warming, ocean acidification and related topics was about to explode in the 1990s, and PICES was strategically positioned to take a significant role in exploring how oceans, atmospheres, and their biota were affected by the changes.
A fundamental difference between PICES and ICES was a much lower priority on developing assessments of fisheries and fish stocks. International borders are much farther apart in the North Pacific compared to the northeastern Atlantic, so coastal fisheries that might compete are separated by large distances involving fewer countries. As a result, transboundary and straddling stock issues generally tend to be managed bilaterally, rather than multilaterally, in the North Pacific. Research on fish stocks in PICES has generally been directed toward environmental influences on species. This has produced a natural synergy with ICES as expertise on different topics of fisheries science has developed.
Committees
Since its creation, the primary scientific directions of the organization have stemmed from its scientific committees (biological oceanography, fishery science, physical oceanography and climate, marine environmental quality, and human dimensions) supported by two technical committees (data exchange and monitoring). Each committee has authority to create subsidiary expert groups, with the approval of the Governing Council, to undertake the scientific work of the committees. National membership of all committees and other expert groups is determined by the delegates of each member country. Committee chairmen serve on the PICES Science Board, which is responsible for authorizing and overseeing all scientific activities of the organization.
Scientific programs
To facilitate cooperative research on important topics by marine scientists in member countries, PICES established two major integrative scientific programs during its first two decades. From 1995 to 2006, the PICES/GLOBEC regional programme on Climate Change and Carrying Capacity sought to improve understanding of climate variation in the North Pacific, its effect on marine ecosystems, and the productive capacity of the ocean. An important outcome of the work was learning that decadal-scale variation is dominant in the North Pacific, to the point where long-term changes are difficult to detect. The second program, on Forecasting and Understanding Trends, Uncertainty and Responses of North Pacific Marine Ecosystems (FUTURE), began in October 2009. Its primary goals are to understand how marine ecosystems in the North Pacific respond to climate change and human activities, to provide forecasts of ecosystem status, and to communicate the results of the program broadly.
Special Projects
From time to time, member countries have asked PICES to undertake scientific studies on a particular topic of special interest to them. These differ from the regular work of the expert groups because incremental funding is typically provided to the organization to conduct the work. Recent examples, funding sources, and links to their products are listed here:
climate regime shifts (U.S.A.)
harmful algal blooms (Japan)
invasive marine species (Japan)
survival of Fraser River sockeye salmon (Canada)
Marine ecosystem health and human well-being (Japan)
Assessing the debris-related impacts from the tsunami (Japan)
Building local warning networks for the detection and human dimension of Ciguatera fish poisoning in Indonesian communities (Japan)
Sea turtle ecology in relation to environmental stressors in the North Pacific region (Korea)
Ecosystem status reporting
Recognizing the need to understand and to communicate information on variability in marine ecosystems among member countries, PICES initiated a pilot project in 2002 that would result in the publication of its first ecosystem status report. Thereafter, ocean climate and marine ecosystem reporting was seen as an important function of the organization so it continued with an extensive update. This version placed greater emphasis on basin-scale comparisons, the primary scale of interest of the organization, but the cost and effort that was required to create such a document led to simplifications. Future versions will feature greater use of automation and technology, with printed versions appearing less frequently.
Global collaboration
During its first decade, PICES became a major international forum for exchanging results and discussing climatic-oceanic-biotic research in the North Pacific. Awareness of the benefits of working cooperatively on scientific problems led to increasing collaboration with like-minded organizations in the North Pacific. The first occasion where PICES had a leadership role was the Beyond El Niño Conference (La Jolla, USA – 2000). The results of the conference appeared in the largest special issue of Progress in Oceanography ever published. In the years that followed, PICES partnered with ICES, IOC, SCOR, and others to build its scientific and organizational reputations.
Capacity building
Capacity building is a high-priority activity in PICES and in many other organizations. Initially, PICES focused on the need to develop capacity in those member countries with developing economies, or economies in transition. The primary goals are now focused on developing young scientific talent in all member countries. This is achieved through an intern program at the Secretariat, early career scientist conferences, summer schools, travel grants to allow participation in the PICES Annual Meeting, sponsoring speakers at international conferences, and awards and recognition of deserving early career scientists.
Officers
Chairman
Warren Wooster (USA, 1992-1996)
William Doubleday (Canada, 1996-1998)
Hyung Tack Huh (Korea, 1998-2002)
Vera Alexander (USA, 2002-2006)
Tokio Wada (Japan, 2006-2010)
Lev Bocharov (Russia, 2010-2012)
Laura Richards (Canada, 2012-2016)
Chul Park (Korea, 2016-2020)
Enrique Curchitser (USA, 2020-)
Executive Secretary
Douglas McKone (Canada, 1993-1998)
Alexander Bychkov (Russia, 1999-2014)
Robin Brown (Canada, 2015-2020)
Sonia Batten (Canada, 2020-)
Deputy Executive Secretary
Motoyasu Miyata (Japan, 1992-1996)
Alexander Bychkov (Russia, 1996-1999)
Stewart M. McKinnell (Canada, 1999-2014)
Harold P. Batchelder (USA, 2014-2021)
Sanae Chiba (Japan, 2021-)
Notes
References
Anonymous (1964). http://www.ices.dk/explore-us/who-we-are/Documents/ICES_Convention_1964.pdf
Anonymous (1991). Convention for a North Pacific Marine Science Organization (PICES). http://www.pices.int/about/convention.aspx
Anonymous (1993). PICES Annual Report for 1992. PICES Secretariat.
Batchelder, H. P., Kim, S. (2008). "Climate Variability and Ecosystem Impacts on the North Pacific: A Basin-Scale Synthesis" Progress in Oceanography 77(23).
Bindoff, N. L., Willebrand, J., Artale, V., Cazenave, A., Gregory, J., Gulev, S., Hanawa, K., Le Quéré, C., Levitus, S., Nojiri, Y., Shum, C. K., Talley, L. D., Unnikrishnan, A. (2007). "Observations: Oceanic Climate Change and Sea Level" In: Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K. B., Tignor, M., Miller, H. L. (eds.). Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
FUTURE (2009). http://www.pices.int/members/scientific_programs/FUTURE/FUTURE-main.aspx
Kestrup, Åsa M. (2014). Report of Working Group 21 on Non-indigenous Aquatic Species PICES Sci. Rep. 47
King, J. R., Editor (2005). Report of the Study Group on the Fisheries and Ecosystem Responses to Recent Regime Shifts. PICES Sci. Rep. No. 28.
McKinnell, S. M., Brodeur, R. D., Hanawa, K., Hollowed, A. B., Polovina, J. J., Zhang, Chang-Ik. (2001). "Pacific climate variability and marine ecosystem impacts" Progress in Oceanography Vol. 49, Nos. 1-4.
McKinnell, S. M., Curchitser, E., Groot, C., Kaeriyama, M., Myers, K. W. (2012). PICES Advisory Report on the Decline of Fraser River Sockeye Salmon Oncorhynchus nerka (Steller, 1743) in Relation to Marine Ecology. PICES Sci. Rep. 41.
McKinnell, S. M., Dagg, M. J. (2010). Marine Ecosystems of the North Pacific PICES Spec. Pub. 4.
PICES (2004). Marine Ecosystems of the North Pacific PICES Spec. Pub. 1.
Tjossem, Sara (2005). The Journey to PICES: Scientific Cooperation in the North Pacific, Alaska Sea Grant Press.
Fisheries and aquaculture research institutes
Biology organizations
Oceanographic organizations
Marine biology
Organizations established in 1992
Intergovernmental organizations
Intergovernmental organizations established by treaty | North Pacific Marine Science Organization | Biology | 2,526 |
35,176,060 | https://en.wikipedia.org/wiki/Kervaire%20semi-characteristic | In mathematics, the Kervaire semi-characteristic, introduced by Michel Kervaire, is an invariant of closed manifolds M of dimension 4k+1 taking values in \mathbf{Z}/2\mathbf{Z}, given by
k(M) = \sum_{i=0}^{2k} \dim_F H^{2i}(M;F) \pmod 2
where F is a field.
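As a standard worked example (not spelled out in the article): for the sphere S^{4k+1}, the only nonvanishing cohomology over a field F sits in degrees 0 and 4k+1, so only H^0 contributes to the even-degree sum.

```latex
% Kervaire semi-characteristic of the (4k+1)-sphere.
% dim_F H^0(S^{4k+1};F) = 1, and H^{2i}(S^{4k+1};F) = 0 for 0 < 2i \le 4k.
k\bigl(S^{4k+1}\bigr)
  = \sum_{i=0}^{2k} \dim_F H^{2i}\bigl(S^{4k+1};F\bigr) \bmod 2
  = 1 \bmod 2
  = 1.
```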
Atiyah and Singer showed that the Kervaire semi-characteristic of a differentiable manifold is given by the index of a skew-adjoint elliptic operator.
Assuming M is oriented, the Atiyah vanishing theorem states that if M has two linearly independent vector fields, then k(M) = 0.
The difference k_{\mathbf{Q}}(M) - k_{\mathbf{F}_2}(M) between the semi-characteristics taken with rational and with mod 2 coefficients is the de Rham invariant of M.
References
Notes
Differential topology | Kervaire semi-characteristic | Mathematics | 107 |
66,826,136 | https://en.wikipedia.org/wiki/Catherine%20McCammon | Catherine Ann McCammon is a Canadian geoscientist who is employed by the University of Bayreuth. Her research focuses on surface and mantle processes, as well as the physics and chemistry of minerals. She is a Fellow of the European Association of Geochemistry and American Geophysical Union. In 2013, she was awarded the European Geosciences Union Robert Wilhelm Bunsen medal. She is the editor of the journal Physics and Chemistry of Minerals.
Early life and education
McCammon attended the Massachusetts Institute of Technology (MIT) for her undergraduate studies, where she majored in physics. She joined the American Geophysical Union as a member in 1978. She earned her doctorate at the Australian National University, where she studied the behaviour of iron oxides and sulphides. McCammon was a postdoctoral fellow with Natural Sciences and Engineering Research Council (NSERC) at the University of Manitoba.
Research and career
In 1985, McCammon moved to the University of British Columbia, first as a postdoctoral scholar and then as an assistant professor. She left Canada for the University of Bayreuth in 1990, where she was appointed to a permanent position in 1996. McCammon studies Earth materials using advanced spectroscopic techniques. In particular, McCammon makes use of X-ray emission, X-ray absorption and Mössbauer spectroscopy. She developed the Mössbauer milliprobe, which allows measurements of the Mössbauer spectra of objects with diameters below 500 μm. The milliprobe allows the characterisation of minerals in high-pressure environments as well as at various interfaces.
McCammon has investigated the characteristics of iron in high-pressure materials, measuring their oxidation states and spin transitions. Better understanding of the oxidation states of iron in deep regions of Earth's interior not only allows for accurate modelling of the formation of the core, but also provides insight into the planet's geochemical evolution. High pressure experimentation has helped McCammon to understand the cycling of oxygen on planet Earth and the effect of iron on mantle discontinuities and the conductivity of minerals. She is the editor of the journal Physics and Chemistry of Minerals.
Awards and honors
2001 Mineralogical Society of America Distinguished Lecturer
2002 Elected Fellow of the Mineralogical Society of America
2007 Elected Fellow of the European Association of Geochemistry
2007 Elected Fellow of the American Geophysical Union
2013 European Geosciences Union Robert Wilhelm Bunsen Medal
2017 International Board on the Applications of the Mössbauer Effect (IBAME) Science Award
2023 Harry H. Hess Medal of the American Geophysical Union
Selected publications
References
American women geologists
Academic staff of the University of Bayreuth
Fellows of the American Geophysical Union
Massachusetts Institute of Technology alumni
American academic journal editors
Australian National University alumni
American geochemists
American women inventors
Year of birth missing (living people)
Living people
American women academics
21st-century American women
Fellows of the Mineralogical Society of America | Catherine McCammon | Chemistry | 594 |
31,687,977 | https://en.wikipedia.org/wiki/Thioenol | In organic chemistry, thioenols (also known as alkenethiols) are alkenes with a thiol group (-SH) affixed to one of the carbon atoms composing the double bond (i.e. C=C-SH). They are the sulfur analogs of enols (hence the thio- prefix). Alkenes with a thiol group on both atoms of the double bond are called enedithiols. Deprotonated anions of thioenols are called thioenolates.
These structures exhibit tautomerism to give thioketones or thioaldehydes, analogous to keto–enol tautomerism of carbonyl structures.
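A small illustration of such a tautomer pair, assuming the RDKit cheminformatics library is available: thioacetaldehyde and its thioenol, ethenethiol, are used here as a textbook example (they are not taken from this article); the two tautomers share the molecular formula C2H4S but differ in where the hydrogen and the double bond sit.

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# Thiocarbonyl form (thioacetaldehyde, CH3-CH=S) and its thioenol tautomer
# (ethenethiol, CH2=CH-SH): same molecular formula, different connectivity.
tautomers = {
    "thioaldehyde (CH3-CH=S)": "CC=S",
    "thioenol (CH2=CH-SH)": "C=CS",
}

for name, smiles in tautomers.items():
    mol = Chem.MolFromSmiles(smiles)
    formula = rdMolDescriptors.CalcMolFormula(mol)
    print(f"{name}: canonical SMILES {Chem.MolToSmiles(mol)}, formula {formula}")
```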
References
Thiols
Enols | Thioenol | Chemistry | 152 |
71,775,593 | https://en.wikipedia.org/wiki/Resilience%20engineering | Resilience engineering is a subfield of safety science research that focuses on understanding how complex adaptive systems cope when encountering a surprise. The term resilience in this context refers to the capabilities that a system must possess in order to deal effectively with unanticipated events. Resilience engineering examines how systems build, sustain, degrade, and lose these capabilities.
Resilience engineering researchers have studied multiple safety-critical domains, including aviation, anesthesia, fire safety, space mission control, military operations, power plants, air traffic control, rail engineering, health care, and emergency response to both natural and industrial disasters. Resilience engineering researchers have also studied the non-safety-critical domain of software operations.
Whereas other approaches to safety (e.g., behavior-based safety, probabilistic risk assessment) focus on designing controls to prevent or mitigate specific known hazards (e.g., hazard analysis), or on assuring that a particular system is safe (e.g., safety cases), resilience engineering looks at a more general capability of systems to deal with hazards that were not previously known before they were encountered.
In particular, resilience engineering researchers study how people are able to cope effectively with complexity to ensure safe system operation, especially when they are experiencing time pressure. Under the resilience engineering paradigm, accidents are not attributable to human error. Instead, the assumption is that humans working in a system are always faced with goal conflicts, and limited resources, requiring them to constantly make trade-offs while under time pressure. When failures happen, they are understood as being due to the system temporarily being unable to cope with complexity. Hence, resilience engineering is related to other perspectives in safety that have reassessed the nature of human error, such as the "new look", the "new view", "safety differently", and Safety-II.
Resilience engineering researchers ask questions such as:
What can organizations do in order to be better prepared to handle unforeseeable challenges?
How do organizations adapt their structure and behavior to cope effectively when faced with an unforeseen challenge?
Because incidents often involve unforeseen challenges, resilience engineering researchers often use incident analysis as a research method.
Resilience engineering symposia
The first symposium on resilience engineering was held in October 2004 in Soderkoping, Sweden. It brought together fourteen safety science researchers with an interest in complex systems.
A second symposium on resilience engineering was held in November 2006 in Sophia Antipolis, France. The symposium had eighty participants. The Resilience Engineering Association, an association of researchers and practitioners with an interest in resilience engineering, continues to hold bi-annual symposia.
These symposia led to a series of books being published (see Books section below).
Themes
This section discusses aspects of the resilience engineering perspective that are different from traditional approaches to safety.
Normal work leads to both success and failure
The resilience engineering perspective assumes that the nature of work which people do within a system that contributes to an accident is fundamentally the same as the work that people do that contributes to successful outcomes. As a consequence, if work practices are only examined after an accident and are only interpreted in the context of the accident, the result of this analysis is subject to selection bias.
Fundamental surprise
The resilience engineering perspective posits that a significant number of failure modes are literally inconceivable in advance of them happening, because the environments that systems operate in are very dynamic and the perspectives of the people within the system are always inherently limited. These sorts of events are sometimes referred to as fundamental surprise. Contrast this with the approach of probabilistic risk assessment, which focuses on evaluating conceivable risks.
Human performance variability as an asset
The resilience engineering perspective holds that human performance variability has positive effects as well as negative ones, and that safety is increased by amplifying the positive effects of human variability as well as by adding controls to mitigate the negative effects. For example, the ability of humans to adapt their behavior based on novel circumstances is a positive effect that creates safety. As a consequence, adding controls to mitigate the effects of human variability can reduce safety in certain circumstances.
The centrality of expertise and experience
Expert operators are an important source of resilience inside of systems. These operators become experts through previous experience at dealing with failures.
Risk is unavoidable
Under the resilience engineering perspective, the operators are always required to trade-off risks. As a consequence, in order to create safety, it is sometimes necessary for a system to take on some risk.
Bringing existing resilience to bear vs generating new resilience
The researcher Richard Cook distinguishes two separate kinds of work that tend to be conflated under the heading resilience engineering:
Bringing existing resilience to bear
The first type of resilience engineering work is determining how to best take advantage of the resilience that is already present in the system. Cook uses the example of setting a broken bone as this type of work: the resilience is already present in the physiology of bone, and setting the bone uses this resilience to achieving better healing outcomes.
Cook notes that this first type of resilience work does not require a deep understanding of the underlying mechanisms of resilience: humans have been setting bones long before the mechanism by which bone heals was understood.
Generating new resilience
The second type of resilience engineering work involves altering mechanisms in the system in order to increase the amount of resilience. Cook uses the example of new drugs such as abaloparatide and teriparatide, which mimic parathyroid hormone-related protein and are used to treat osteoporosis.
Cook notes that this second type of resilience work requires a much deeper understanding of the underlying existing resilience mechanisms in order to create interventions that can effectively increase resilience.
Hollnagel perspective
The safety researcher Erik Hollnagel views resilient performance as requiring four systemic potentials:
The potential to respond
The potential to monitor
The potential to learn
The potential to anticipate.
This approach has been described in a Eurocontrol white paper on Systemic Potentials Management (https://skybrary.aero/bookshelf/systemic-potentials-management-building-basis-resilient-performance).
Woods perspective
The safety researcher David Woods considers the following two concepts in his definition of resilience:
graceful extensibility: the ability of a system to develop new capabilities when faced with a surprise that cannot be dealt with effectively with a system's existing capabilities
sustained adaptability: the ability of a system to continue to keep adapting to surprises, over long periods of time
These two concepts are elaborated in Woods's theory of graceful extensibility.
Woods contrasts resilience with robustness, which is the ability of a system to deal effectively with potential challenges that were anticipated in advance.
The safety researcher Richard Cook argued that bone should serve as the archetype for understanding what resilience is in the Woods perspective. Cook notes that bone has both graceful extensibility (has a soft boundary at which it can extend function) and sustained adaptability (bone is constantly adapting through a dynamic balance between creation and destruction that is directed by mechanical strain).
In Woods's view, there are three common patterns to the failure of complex adaptive systems:
decompensation: exhaustion of capacity when encountering a disturbance
working at cross purposes: when individual agents in a system behave in a way that achieves local goals but goes against global goals
getting stuck in outdated behaviors: relying on strategies that were previously adaptive but are no longer so due to changes in the environment
Resilient Health Care
In 2012, growing interest in resilience engineering gave rise to the sub-field of Resilient Health Care. This led to a series of annual conferences on the topic that are still ongoing, a series of books on Resilient Health Care, and, in 2022, the establishment of the Resilient Health Care Society (registered in Sweden; https://rhcs.se/).
Books
Resilience Engineering: Concepts and Precepts by David Woods, Erik Hollnagel, and Nancy Leveson, 2006.
Resilience Engineering in Practice: A Guidebook by Jean Pariès, John Wreathall, and Erik Hollnagel, 2013.
Resilient Health Care, Volume 1: Erik Hollnagel, Jeffrey Braithwaite, and Robert L. Wears (eds), 2015.
Resilient Health Care, Volume 2: The Resilience of Everyday Clinical Work by Erik Hollnagel, Jeffrey Braithwaite, Robert Wears (eds), 2015.
Resilient Health Care, Volume 3: Reconciling Work-as-Imagined and Work-as-Done by Jeffrey Braithwaite, Robert Wears, and Erik Hollnagel (eds), 2016.
Resilience Engineering Perspectives, Volume 1: Remaining Sensitive to the Possibility of Failure by Erik Hollnagel, Christopher Nemeth, and Sidney Dekker (eds.), 2016.
Resilience Engineering Perspectives, Volume 2: Preparation and Restoration by Christopher Nemeth, Erik Hollnagel, and Sidney Dekker (eds.), 2016.
Governance and Control of Financial Systems: A Resilience Engineering Perspective by Gunilla Sundström and Erik Hollnagel, 2018.
References
Safety engineering
Hazard analysis | Resilience engineering | Engineering | 1,966 |
9,563,728 | https://en.wikipedia.org/wiki/Rudolf%20Luneburg | Rudolf Karl Lüneburg (30 March 1903, Volkersheim (Bockenem) - 19 August 1949, Great Falls, Montana; after his emigration at first Lueneburg, later Luneburg, sometimes misspelled Luneberg or Lunenberg) was a professor of mathematics and optics at the Dartmouth College Eye Institute. He was born in Germany, received his doctorate at Göttingen, and emigrated to the United States in 1935.
His work included an analysis of the geometry of visual space as expected from physiology and the assumption that the angle of vergence provides a constant measure of distance. From these premises he concluded that near field visual space is hyperbolic.
See also
Luneburg lens
Luneburg method
1903 births
1949 deaths
Emigrants from Nazi Germany to the United States
Geometers
Optical physicists
Dartmouth College faculty
20th-century German mathematicians
Academic staff of Leiden University
University of Göttingen alumni
New York University faculty
University of Southern California faculty
Brown University faculty | Rudolf Luneburg | Mathematics | 205 |
2,903,793 | https://en.wikipedia.org/wiki/Theta%20Cancri | Theta Cancri, Latinised from θ Cancri, is a multiple star system in the zodiac constellation of Cancer. It is visible to the naked eye as a dim point of light with an apparent visual magnitude of +5.32. The system is located at a distance of approximately 450 light-years away from the Sun, based on parallax, and is drifting further away with a radial velocity of +44 km/s. Since it is near the ecliptic, it can be occulted by the Moon and, very rarely, by planets.
The primary, designated component A, is a K-type giant star with a stellar classification of K5 III, having exhausted the supply of hydrogen at its core, then cooled and expanded. At present it has 40 times the girth of the Sun. It is radiating 353 times the luminosity of the Sun at an effective temperature of .
In Chinese astronomy, Ghost () refers to an asterism consisting of Theta Cancri, Eta Cancri, Gamma Cancri and Delta Cancri. Theta Cancri is the first star of Ghost (), as it is also the determinative star for that asterism.
References
K-type giants
Double stars
Spectroscopic binaries
Cancri, Theta
Cancer (constellation)
BD+18 1963
Cancri, 31
072094
041822
3357 | Theta Cancri | Astronomy | 288 |
45,517,747 | https://en.wikipedia.org/wiki/GR-196%2C429 | GR-196,429 is a melatonin receptor agonist with some selectivity for the MT1 subtype. It was one of the first synthetic melatonin agonists developed and continues to be used in scientific research, though it has never been developed for medical use. Studies in mice have shown GR-196,429 to produce both sleep-promoting effects and alterations of circadian rhythm, as well as stimulating melatonin release.
References
Acetamides
Melatonin receptor agonists | GR-196,429 | Chemistry | 105 |
14,458,673 | https://en.wikipedia.org/wiki/Rudiviridae | Rudiviridae is a family of viruses with linear double stranded DNA genomes that infect archaea. The viruses of this family are highly thermostable and can act as a template for site-selective and spatially controlled chemical modification. Furthermore, the two strands of the DNA are covalently linked at both ends of the genomes, which have long inverted terminal repeats. These inverted repeats are an adaptation to stabilize the genome in these extreme environments.
Taxonomy
The following genera are assigned to the family:
Azorudivirus
Hoswirudivirus
Icerudivirus
Itarudivirus
Japarudivirus
Mexirudivirus
Usarudivirus
References
Virus families | Rudiviridae | Biology | 138 |
24,958,527 | https://en.wikipedia.org/wiki/Group%20testing | In statistics and combinatorial mathematics, group testing is any procedure that breaks up the task of identifying certain objects into tests on groups of items, rather than on individual ones. First studied by Robert Dorfman in 1943, group testing is a relatively new field of applied mathematics that can be applied to a wide range of practical applications and is an active area of research today.
A familiar example of group testing involves a string of light bulbs connected in series, where exactly one of the bulbs is known to be broken. The objective is to find the broken bulb using the smallest number of tests (where a test is when some of the bulbs are connected to a power supply). A simple approach is to test each bulb individually. However, when there are a large number of bulbs it would be much more efficient to pool the bulbs into groups. For example, by connecting the first half of the bulbs at once, it can be determined which half the broken bulb is in, ruling out half of the bulbs in just one test.
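The halving idea can be written out directly. The following Python sketch (the function and variable names are illustrative, not taken from any reference implementation) locates a single broken bulb with adaptive group tests; here works(group) stands in for physically connecting a subset of bulbs to the power supply.

```python
def find_broken_bulb(bulbs, works):
    """Locate the single broken bulb by repeatedly testing halves.

    `bulbs` is a sequence of bulb identifiers and `works(group)` returns True
    if every bulb in `group` lights up (i.e. the group contains no defective).
    Uses about log2(len(bulbs)) tests instead of testing each bulb.
    """
    candidates = list(bulbs)
    tests = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        tests += 1
        if works(half):
            # The broken bulb must be in the other half.
            candidates = candidates[len(candidates) // 2 :]
        else:
            candidates = half
    return candidates[0], tests


# Example: bulb 6 is broken among 16 bulbs.
broken = 6
found, used = find_broken_bulb(range(16), lambda g: broken not in g)
print(found, used)   # 6, found in 4 tests (log2 of 16)
```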
Schemes for carrying out group testing can be simple or complex and the tests involved at each stage may be different. Schemes in which the tests for the next stage depend on the results of the previous stages are called adaptive procedures, while schemes designed so that all the tests are known beforehand are called non-adaptive procedures. The structure of the scheme of the tests involved in a non-adaptive procedure is known as a pooling design.
Group testing has many applications, including statistics, biology, computer science, medicine, engineering and cyber security. Modern interest in these testing schemes has been rekindled by the Human Genome Project.
Basic description and terms
Unlike many areas of mathematics, the origins of group testing can be traced back to a single report written by a single person: Robert Dorfman. The motivation arose during the Second World War when the United States Public Health Service and the Selective Service System embarked upon a large-scale project to weed out all syphilitic men called up for induction. Testing an individual for syphilis involves drawing a blood sample from them and then analysing the sample to determine the presence or absence of syphilis. At the time, performing this test was expensive, and testing every soldier individually would have been very expensive and inefficient.
Supposing there are soldiers, this method of testing leads to separate tests. If a large proportion of the people are infected then this method would be reasonable. However, in the more likely case that only a very small proportion of the men are infected, a much more efficient testing scheme can be achieved. The feasibility of a more effective testing scheme hinges on the following property: the soldiers can be pooled into groups, and in each group the blood samples can be combined. The combined sample can then be tested to check if at least one soldier in the group has syphilis. This is the central idea behind group testing. If one or more of the soldiers in this group has syphilis, then a test is wasted (more tests need to be performed to find which soldier(s) it was). On the other hand, if no one in the pool has syphilis then many tests are saved, since every soldier in that group can be eliminated with just one test.
The items that cause a group to test positive are generally called defective items (these are the broken lightbulbs, syphilitic men, etc.). Often, the total number of items is denoted as and represents the number of defectives if it is assumed to be known.
Classification of group-testing problems
There are two independent classifications for group-testing problems; every group-testing problem is either adaptive or non-adaptive, and either probabilistic or combinatorial.
In probabilistic models, the defective items are assumed to follow some probability distribution and the aim is to minimise the expected number of tests needed to identify the defectiveness of every item. On the other hand, with combinatorial group testing, the goal is to minimise the number of tests needed in a 'worst-case scenario' – that is, create a minmax algorithm – and no knowledge of the distribution of defectives is assumed.
The other classification, adaptivity, concerns what information can be used when choosing which items to group into a test. In general, the choice of which items to test can depend on the results of previous tests, as in the above lightbulb problem. An algorithm that proceeds by performing a test, and then using the result (and all past results) to decide which next test to perform, is called adaptive. Conversely, in non-adaptive algorithms, all tests are decided in advance. This idea can be generalised to multistage algorithms, where tests are divided into stages, and every test in the next stage must be decided in advance, with only the knowledge of the results of tests in previous stages.
Although adaptive algorithms offer much more freedom in design, it is known that adaptive group-testing algorithms do not improve upon non-adaptive ones by more than a constant factor in the number of tests required to identify the set of defective items. In addition to this, non-adaptive methods are often useful in practice because one can proceed with successive tests without first analysing the results of all previous tests, allowing for the effective distribution of the testing process.
Variations and extensions
There are many ways to extend the problem of group testing. One of the most important is called noisy group testing, and deals with a big assumption of the original problem: that testing is error-free. A group-testing problem is called noisy when there is some chance that the result of a group test is erroneous (e.g. comes out positive when the test contained no defectives). The Bernoulli noise model assumes this probability is some constant, , but in general it can depend on the true number of defectives in the test and the number of items tested. For example, the effect of dilution can be modelled by saying a positive result is more likely when there are more defectives (or more defectives as a fraction of the number tested), present in the test. A noisy algorithm will always have a non-zero probability of making an error (that is, mislabeling an item).
Group testing can be extended by considering scenarios in which there are more than two possible outcomes of a test. For example, a test may have the outcomes and , corresponding to there being no defectives, a single defective, or an unknown number of defectives larger than one. More generally, it is possible to consider the outcome-set of a test to be for some .
Another extension is to consider geometric restrictions on which sets can be tested. The above lightbulb problem is an example of this kind of restriction: only bulbs that appear consecutively can be tested. Similarly, the items may be arranged in a circle, or in general, a net, where the tests are available paths on the graph. Another kind of geometric restriction would be on the maximum number of items that can be tested in a group, or the group sizes might have to be even and so on. In a similar way, it may be useful to consider the restriction that any given item can only appear in a certain number of tests.
There are endless ways to continue remixing the basic formula of group testing. The following elaborations will give an idea of some of the more exotic variants. In the 'good–mediocre–bad' model, each item is one of 'good', 'mediocre' or 'bad', and the result of a test is the type of the 'worst' item in the group. In threshold group testing, the result of a test is positive if the number of defective items in the group is greater than some threshold value or proportion. Group testing with inhibitors is a variant with applications in molecular biology. Here, there is a third class of items called inhibitors, and the result of a test is positive if it contains at least one defective and no inhibitors.
History and development
Invention and initial progress
The concept of group testing was first introduced by Robert Dorfman in 1943 in a short report published in the Notes section of Annals of Mathematical Statistics. Dorfman's report – as with all the early work on group testing – focused on the probabilistic problem, and aimed to use the novel idea of group testing to reduce the expected number of tests needed to weed out all syphilitic men in a given pool of soldiers. The method was simple: put the soldiers into groups of a given size, and use individual testing (testing items in groups of size one) on the positive groups to find which were infected. Dorfman tabulated the optimum group sizes for this strategy against the prevalence rate of defectiveness in the population.
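Dorfman's tabulation can be reproduced numerically. For group size k and independent prevalence p, his two-stage scheme uses one pooled test per group plus k individual tests whenever the pool is positive, so the expected number of tests per person is 1/k + 1 - (1 - p)^k. The sketch below is a plain calculation of this quantity, not code from the original report.

```python
def expected_tests_per_person(k, p):
    """Expected tests per person for Dorfman's two-stage scheme:
    one pooled test per group of k, plus k individual tests whenever
    the pooled test is positive (probability 1 - (1 - p)**k)."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k


def optimal_group_size(p, max_k=100):
    """Group size (>= 2) minimising the expected tests per person."""
    return min(range(2, max_k + 1), key=lambda k: expected_tests_per_person(k, p))


for p in (0.01, 0.05, 0.10):
    k = optimal_group_size(p)
    print(p, k, round(expected_tests_per_person(k, p), 3))
# At 1% prevalence the optimum is around k = 11, using roughly 0.2 tests
# per person -- about a fivefold saving over individual testing.
```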
Stephen Samuels found a closed-form solution for the optimal group size as a function of the prevalence rate.
After 1943, group testing remained largely untouched for a number of years. Then in 1957, Sterrett produced an improvement on Dorfman's procedure. This newer process starts by again performing individual testing on the positive groups, but stopping as soon as a defective is identified. Then, the remaining items in the group are tested together, since it is very likely that none of them are defective.
The first thorough treatment of group testing was given by Sobel and Groll in their formative 1959 paper on the subject. They described five new procedures – in addition to generalisations for when the prevalence rate is unknown – and for the optimal one, they provided an explicit formula for the expected number of tests it would use. The paper also made the connection between group testing and information theory for the first time, as well as discussing several generalisations of the group-testing problem and providing some new applications of the theory.
The fundamental result by Peter Ungar in 1960 shows that if the prevalence rate , then individual testing is the optimal group testing procedure with respect to the expected number of tests, and if , then it is not optimal. However, it is important to note that despite 80 years' worth of research effort, the optimal procedure is yet unknown for and a general population size .
Combinatorial group testing
Group testing was first studied in the combinatorial context by Li in 1962, with the introduction of Li’s -stage algorithm. Li proposed an extension of Dorfman's '2-stage algorithm' to an arbitrary number of stages that required no more than tests to be guaranteed to find or fewer defectives among items.
The idea was to remove all the items in negative tests, and divide the remaining items into groups as was done with the initial pool. This was to be done times before performing individual testing.
Combinatorial group testing in general was later studied more fully by Katona in 1973. Katona introduced the matrix representation of non-adaptive group-testing and produced a procedure for finding the defective in the non-adaptive 1-defective case in no more than tests, which he also proved to be optimal.
In general, finding optimal algorithms for adaptive combinatorial group testing is difficult, and although the computational complexity of group testing has not been determined, it is suspected to be hard in some complexity class. However, an important breakthrough occurred in 1972, with the introduction of the generalised binary-splitting algorithm. The generalised binary-splitting algorithm works by performing a binary search on groups that test positive, and is a simple algorithm that finds a single defective in no more than the information-lower-bound number of tests.
In scenarios where there are two or more defectives, the generalised binary-splitting algorithm still produces near-optimal results, requiring at most tests above the information lower bound where is the number of defectives. Considerable improvements to this were made in 2013 by Allemann, getting the required number of tests to less than above the information lower bound when and . This was achieved by changing the binary search in the binary-splitting algorithm to a complex set of sub-algorithms with overlapping test groups. As such, the problem of adaptive combinatorial group testing – with a known number or upper bound on the number of defectives – has essentially been solved, with little room for further improvement.
There is an open question as to when individual testing is minmax. Hu, Hwang and Wang showed in 1981 that individual testing is minmax when , and that it is not minmax when . It is currently conjectured that this bound is sharp: that is, individual testing is minmax if and only if . Some progress was made in 2000 by Riccio and Colbourn, who showed that for large , individual testing is minmax when .
Non-adaptive and probabilistic testing
One of the key insights in non-adaptive group testing is that significant gains can be made by eliminating the requirement that the group-testing procedure be certain to succeed (the "combinatorial" problem), but rather permit it to have some low but non-zero probability of mis-labelling each item (the "probabilistic" problem). It is known that as the number of defective items approaches the total number of items, exact combinatorial solutions require significantly more tests than probabilistic solutions — even probabilistic solutions permitting only an asymptotically small probability of error.
In this vein, Chan et al. (2011) introduced COMP, a probabilistic algorithm that requires no more than tests to find up to defectives in items with a probability of error no more than . This is within a constant factor of the lower bound.
Chan et al. (2011) also provided a generalisation of COMP to a simple noisy model, and similarly produced an explicit performance bound, which was again only a constant (dependent on the likelihood of a failed test) above the corresponding lower bound. In general, the number of tests required in the Bernoulli noise case is a constant factor larger than in the noiseless case.
Aldridge, Baldassini and Johnson (2014) produced an extension of the COMP algorithm that added additional post-processing steps. They showed that the performance of this new algorithm, called DD, strictly exceeds that of COMP, and that DD is 'essentially optimal' in scenarios where , by comparing it to a hypothetical algorithm that defines a reasonable optimum. The performance of this hypothetical algorithm suggests that there is room for improvement when , as well as suggesting how much improvement this might be.
Formalisation of combinatorial group testing
This section formally defines the notions and terms relating to group testing.
The input vector, , is defined to be a binary vector of length (that is, ), with the j-th item being called defective if and only if . Further, any non-defective item is called a 'good' item.
is intended to describe the (unknown) set of defective items. The key property of is that it is an implicit input. That is to say, there is no direct knowledge of what the entries of are, other than that which can be inferred via some series of 'tests'. This leads on to the next definition.
Let be an input vector. A set, is called a test. When testing is noiseless, the result of a test is positive when there exists such that , and the result is negative otherwise.
Therefore, the goal of group testing is to come up with a method for choosing a 'short' series of tests that allow to be determined, either exactly or with a high degree of certainty.
A group-testing algorithm is said to make an error if it incorrectly labels an item (that is, labels any defective item as non-defective or vice versa). This is not the same thing as the result of a group test being incorrect. An algorithm is called zero-error if the probability that it makes an error is zero.
denotes the minimum number of tests required to always find defectives among items with zero probability of error by any group-testing algorithm. For the same quantity but with the restriction that the algorithm is non-adaptive, the notation is used.
General bounds
Since it is always possible to resort to individual testing by setting for each , it must be that . Also, since any non-adaptive testing procedure can be written as an adaptive algorithm by simply performing all the tests without regard to their outcome, . Finally, when , there is at least one item whose defectiveness must be determined (by at least one test), and so .
In summary (when assuming ), .
Information lower bound
A lower bound on the number of tests needed can be described using the notion of sample space, denoted , which is simply the set of possible placements of defectives. For any group testing problem with sample space and any group-testing algorithm, it can be shown that , where is the minimum number of tests required to identify all defectives with a zero probability of error. This is called the information lower bound. This bound is derived from the fact that after each test, is split into two disjoint subsets, each corresponding to one of the two possible outcomes of the test.
However, the information lower bound itself is usually unachievable, even for small problems. This is because the splitting of is not arbitrary, since it must be realisable by some test.
In fact, the information lower bound can be generalised to the case where there is a non-zero probability that the algorithm makes an error. In this form, the theorem gives us an upper bound on the probability of success based on the number of tests. For any group-testing algorithm that performs tests, the probability of success, , satisfies . This can be strengthened to: .
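As a concrete check, if exactly d defectives are hidden among n items then the sample space has size C(n, d), so any zero-error algorithm needs at least ceil(log2 C(n, d)) tests. A minimal sketch of this calculation (written here purely for illustration):

```python
from math import comb, ceil, log2

def information_lower_bound(n, d):
    """Minimum number of tests any zero-error algorithm needs when exactly
    d of the n items are defective: each test has two outcomes, so t tests
    can distinguish at most 2**t configurations."""
    configurations = comb(n, d)      # size of the sample space
    return ceil(log2(configurations))

print(information_lower_bound(100, 1))   # 7  (2**7 = 128 >= 100)
print(information_lower_bound(100, 2))   # 13 (C(100, 2) = 4950 and 2**13 = 8192)
```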
Representation of non-adaptive algorithms
Algorithms for non-adaptive group testing consist of two distinct phases. First, it is decided how many tests to perform and which items to include in each test. In the second phase, often called the decoding step, the results of each group test are analysed to determine which items are likely to be defective. The first phase is usually encoded in a matrix as follows.
Suppose a non-adaptive group testing procedure for items consists of the tests for some . The testing matrix for this scheme is the binary matrix, , where if and only if (and is zero otherwise).
Thus each column of represents an item and each row represents a test, with a in the entry indicating that the test included the item and a indicating otherwise.
As well as the vector (of length ) that describes the unknown defective set, it is common to introduce the result vector, which describes the results of each test.
Let be the number of tests performed by a non-adaptive algorithm. The result vector, , is a binary vector of length (that is, ) such that if and only if the result of the test was positive (i.e. contained at least one defective).
With these definitions, the non-adaptive problem can be reframed as follows: first a testing matrix is chosen, , after which the vector is returned. Then the problem is to analyse to find some estimate for .
In the simplest noisy case, where there is a constant probability, , that a group test will have an erroneous result, one considers a random binary vector, , where each entry has a probability of being , and is otherwise. The vector that is returned is then , with the usual addition on (equivalently this is the element-wise XOR operation). A noisy algorithm must estimate using (that is, without direct knowledge of ).
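The matrix formulation translates directly into code. In the sketch below, the choice of NumPy and of a random 0/1 testing matrix are illustrative: each row of the matrix selects a group, the noiseless result vector is the Boolean OR of the columns of the defective items, and Bernoulli noise flips each outcome independently.

```python
import numpy as np

rng = np.random.default_rng(0)

n, t = 12, 5                              # items and tests
X = rng.integers(0, 2, size=(t, n))       # testing matrix: X[i, j] = 1 if test i includes item j
d = np.zeros(n, dtype=int)
d[[2, 7]] = 1                             # items 2 and 7 are defective

# Noiseless outcomes: test i is positive iff it contains at least one defective.
y = (X @ d > 0).astype(int)

# Bernoulli noise: each test result is flipped independently with probability q.
q = 0.1
flips = rng.random(t) < q
y_noisy = y ^ flips.astype(int)           # element-wise XOR

print(y, y_noisy)
```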
Bounds for non-adaptive algorithms
The matrix representation makes it possible to prove some bounds on non-adaptive group testing. The approach mirrors that of many deterministic designs, where -separable matrices are considered, as defined below.
A binary matrix, , is called -separable if every Boolean sum (logical OR) of any of its columns is distinct. Additionally, the notation -separable indicates that every sum of any of up to of 's columns is distinct. (This is not the same as being -separable for every .)
When is a testing matrix, the property of being -separable (-separable) is equivalent to being able to distinguish between (up to) defectives. However, it does not guarantee that this will be straightforward. A stronger property, called disjunctness does.
A binary matrix, is called -disjunct if the Boolean sum of any columns does not contain any other column. (In this context, a column A is said to contain a column B if for every index where B has a 1, A also has a 1.)
A useful property of -disjunct testing matrices is that, with up to defectives, every non-defective item will appear in at least one test whose outcome is negative. This means there is a simple procedure for finding the defectives: just remove every item that appears in a negative test.
Using the properties of -separable and -disjunct matrices the following can be shown for the problem of identifying defectives among total items.
The number of tests needed for an asymptotically small average probability of error scales as .
The number of tests needed for an asymptotically small maximum probability of error scales as .
The number of tests needed for a zero probability of error scales as .
Generalised binary-splitting algorithm
The generalised binary-splitting algorithm is an essentially-optimal adaptive group-testing algorithm that finds or fewer defectives among items as follows:
If , test the items individually. Otherwise, set and .
Test a group of size . If the outcome is negative, every item in the group is declared to be non-defective; set and go to step 1. Otherwise, use a binary search to identify one defective and an unspecified number, called , of non-defective items; set and . Go to step 1.
The generalised binary-splitting algorithm requires no more than tests.
For large, it can be shown that , which compares favorably to the tests required for Li's -stage algorithm. In fact, the generalised binary-splitting algorithm is close to optimal in the following sense. When it can be shown that , where is the information lower bound.
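A sketch of the two-step procedure above is given below. The binary-search helper and the handling of small edge cases (for example, capping the group-size exponent at zero) are conservative illustrative choices rather than a faithful transcription of Hwang's original algorithm.

```python
from math import floor, log2

def binary_search_defective(group, has_defective):
    """Halve a group known to contain a defective until one defective is
    isolated.  Returns (defective, items_cleared_along_the_way, tests_used)."""
    cleared, tests = [], 0
    while len(group) > 1:
        half = group[: len(group) // 2]
        tests += 1
        if has_defective(half):
            group = half
        else:
            cleared.extend(half)               # a negative half is all good
            group = group[len(group) // 2 :]
    return group[0], cleared, tests

def generalised_binary_splitting(items, d, has_defective):
    """Find at most d defectives among `items`, where `has_defective(group)`
    is the group test.  Follows the two steps described above."""
    items, defectives, tests = list(items), [], 0
    while items:
        if d <= 0:
            break                              # upper bound exhausted: the rest are clean
        n = len(items)
        if n <= 2 * d - 2 or n <= d:
            for x in items:                    # step 1: test the remainder individually
                tests += 1
                if has_defective([x]):
                    defectives.append(x)
            break
        alpha = max(0, floor(log2((n - d) / d)))
        group = items[: 2 ** alpha]            # step 2: test a group of size 2**alpha
        tests += 1
        if not has_defective(group):
            items = items[2 ** alpha :]        # the whole group is cleared
        else:
            found, cleared, extra = binary_search_defective(group, has_defective)
            tests += extra
            defectives.append(found)
            d -= 1
            removed = set(cleared) | {found}
            items = [x for x in items if x not in removed]
    return sorted(defectives), tests

truth = {3, 17, 40}
found, used = generalised_binary_splitting(range(64), 3, lambda g: any(x in truth for x in g))
print(found, used)       # [3, 17, 40] in 16 tests for this example
```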
Non-adaptive algorithms
Non-adaptive group-testing algorithms tend to assume that the number of defectives, or at least a good upper bound on them, is known. This quantity is denoted in this section. If no bounds are known, there are non-adaptive algorithms with low query complexity that can help estimate .
Combinatorial orthogonal matching pursuit (COMP)
Combinatorial Orthogonal Matching Pursuit, or COMP, is a simple non-adaptive group-testing algorithm that forms the basis for the more complicated algorithms that follow in this section.
First, each entry of the testing matrix is chosen i.i.d. to be with probability and otherwise.
The decoding step proceeds column-wise (i.e. by item). If every test in which an item appears is positive, then the item is declared defective; otherwise the item is assumed to be non-defective. Or equivalently, if an item appears in any test whose outcome is negative, the item is declared non-defective; otherwise the item is assumed to be defective. An important property of this algorithm is that it never creates false negatives, though a false positive occurs when all locations with ones in the j-th column of (corresponding to a non-defective item j) are "hidden" by the ones of other columns corresponding to defective items.
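The COMP decoding rule is short enough to state as code: an item is cleared exactly when it appears in at least one negative test, and everything else is declared defective. The sketch below uses NumPy, and the Bernoulli design parameter of 1/d is a common but illustrative choice rather than a prescribed value.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, t = 200, 5, 60
p = 1.0 / d                                 # illustrative Bernoulli design parameter

X = (rng.random((t, n)) < p).astype(int)    # i.i.d. Bernoulli testing matrix
truth = np.zeros(n, dtype=int)
truth[rng.choice(n, size=d, replace=False)] = 1
y = (X @ truth > 0).astype(int)             # noiseless test outcomes

# COMP: clear every item that appears in at least one negative test;
# declare everything else defective, so there are never false negatives.
appears_in_negative = ((X == 1) & (y[:, None] == 0)).any(axis=0)
estimate = (~appears_in_negative).astype(int)

print("false negatives:", int(((truth == 1) & (estimate == 0)).sum()))   # always 0
print("false positives:", int(((truth == 0) & (estimate == 1)).sum()))
```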
The COMP algorithm requires no more than tests to have an error probability less than or equal to . This is within a constant factor of the lower bound for the average probability of error above.
In the noisy case, one relaxes the requirement in the original COMP algorithm that the set of locations of ones in any column of corresponding to a positive item be entirely contained in the set of locations of ones in the result vector. Instead, one allows for a certain number of “mismatches” – this number of mismatches depends on both the number of ones in each column, and also the noise parameter, . This noisy COMP algorithm requires no more than tests to achieve an error probability at most .
Definite defectives (DD)
The definite defectives method (DD) is an extension of the COMP algorithm that attempts to remove any false positives. Performance guarantees for DD have been shown to strictly exceed those of COMP.
The decoding step uses a useful property of the COMP algorithm: that every item that COMP declares non-defective is certainly non-defective (that is, there are no false negatives). It proceeds as follows.
First the COMP algorithm is run, and any non-defectives that it detects are removed. All remaining items are now "possibly defective".
Next the algorithm looks at all the positive tests. If an item appears as the only "possible defective" in a test, then it must be defective, so the algorithm declares it to be defective.
All other items are assumed to be non-defective. The justification for this last step comes from the assumption that the number of defectives is much smaller than the total number of items.
Note that steps 1 and 2 never make a mistake, so the algorithm can only make a mistake if it declares a defective item to be non-defective. Thus the DD algorithm can only create false negatives.
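Reusing the notation of the COMP sketch above, the DD refinement adds only a few lines (again purely illustrative): after discarding everything COMP clears, any positive test that contains exactly one remaining candidate pins that candidate down as defective.

```python
import numpy as np

def decode_dd(X, y):
    """Definite-defectives decoder: X is the (tests x items) 0/1 testing
    matrix, y the 0/1 vector of outcomes."""
    # Step 1 (COMP): items appearing in any negative test are non-defective.
    possible = ~(((X == 1) & (y[:, None] == 0)).any(axis=0))
    # Step 2: a positive test containing exactly one remaining candidate
    # identifies that candidate as definitely defective.
    defective = np.zeros(X.shape[1], dtype=bool)
    for i in np.flatnonzero(y == 1):
        members = np.flatnonzero((X[i] == 1) & possible)
        if len(members) == 1:
            defective[members[0]] = True
    # Step 3: everything else is assumed non-defective, so DD can only
    # produce false negatives.
    return defective.astype(int)

# estimate_dd = decode_dd(X, y)   # reusing X and y from the COMP sketch above
```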
Sequential COMP (SCOMP)
SCOMP (Sequential COMP) is an algorithm that makes use of the fact that DD makes no mistakes until the last step, where it is assumed that the remaining items are non-defective. Let the set of declared defectives be . A positive test is called explained by if it contains at least one item in . The key observation with SCOMP is that the set of defectives found by DD may not explain every positive test, and that every unexplained test must contain a hidden defective.
The algorithm proceeds as follows.
Carry out steps 1 and 2 of the DD algorithm to obtain , an initial estimate for the set of defectives.
If explains every positive test, terminate the algorithm: is the final estimate for the set of defectives.
If there are any unexplained tests, find the "possible defective" that appears in the largest number of unexplained tests, and declare it to be defective (that is, add it to the set ). Go to step 2.
In simulations, SCOMP has been shown to perform close to optimally.
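SCOMP is a small greedy loop on top of DD. The sketch below assumes the noiseless setting of the description above; the names and the NumPy-based representation are illustrative.

```python
import numpy as np

def decode_scomp(X, y):
    """Sequential COMP: start from the DD estimate, then greedily declare
    the candidate appearing in the most unexplained positive tests."""
    possible = ~(((X == 1) & (y[:, None] == 0)).any(axis=0))    # COMP survivors
    declared = np.zeros(X.shape[1], dtype=bool)
    for i in np.flatnonzero(y == 1):                            # DD seed
        members = np.flatnonzero((X[i] == 1) & possible)
        if len(members) == 1:
            declared[members[0]] = True
    while True:
        explained = X[:, declared].sum(axis=1) > 0              # tests hit by a declared defective
        unexplained = np.flatnonzero((y == 1) & ~explained)
        if len(unexplained) == 0:
            return declared.astype(int)
        candidates = np.flatnonzero(possible & ~declared)
        counts = X[np.ix_(unexplained, candidates)].sum(axis=0) # appearances in unexplained tests
        declared[candidates[np.argmax(counts)]] = True

# estimate_scomp = decode_scomp(X, y)   # reusing X and y from the COMP sketch above
```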
Polynomial Pools (PP)
A deterministic algorithm that is guaranteed to exactly identify up to positives is Polynomial Pools (PP). The algorithm constructs the pooling matrix , which can be straightforwardly used to decode the observations in . Similar to COMP, a sample is decoded according to the relation , where represents element-wise multiplication and is the th column of . Since the decoding step is not difficult, PP is specialized for generating .
Forming groups
A group/pool is generated using a polynomial relation that specifies the indices of the samples contained in each pool. A set of input parameters determines the algorithm. For a prime number and an integer any prime power is defined by . For a dimension parameter the total number of samples is and the number of samples per pool is .
Further, the finite field of order is denoted by (i.e., the integers defined by special arithmetic operations that ensure that addition and multiplication in remains in ).
The method arranges each sample in a grid and represents it by coordinates . The coordinates are computed according to a polynomial relation using the integers . The combination of looping through the values is represented by a set with elements of a sequence of integers, i.e., , where . Without loss of generality, the combination is such that cycles every times, cycles every times, until cycles only once.
Formulas that compute the sample indices, and thus the corresponding pools, for fixed and , are given by .
The computations in can be implemented with publicly available software libraries for finite fields, when is a prime power. When is a prime number the computations in simplify to modular arithmetic, i.e., . An example of how to generate one pool when is displayed in the table below, while the corresponding selection of samples is shown in the figure above.
This method uses tests to exactly identify up to positives among samples.
Because of this, PP is particularly effective for large sample sizes, since the number of tests grows only linearly with respect to while the number of samples grows exponentially with this parameter. However, PP can also be effective for small sample sizes.
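For a prime the field arithmetic reduces to working modulo q, so a small design of this kind can be generated with ordinary integer arithmetic. The sketch below builds a Kautz–Singleton-style polynomial design closely related to, but not necessarily identical with, the construction described above; the parameter names and indexing are illustrative. Each of the q**m samples is identified with the coefficient vector of a polynomial of degree below m, and the pool indexed by a pair (x, y) contains exactly the samples whose polynomial evaluates to y at x.

```python
from itertools import product

def polynomial_pools(q, m):
    """Pooling design over the integers mod q (q must be prime here).

    Samples: all q**m coefficient vectors (c_0, ..., c_{m-1}) of polynomials
    of degree < m.  Pools: one for every pair (x, y); sample c is placed in
    pool (x, y) iff  c_0 + c_1*x + ... + c_{m-1}*x**(m-1) == y (mod q).
    This gives q*q pools of q**(m-1) samples each.
    """
    samples = list(product(range(q), repeat=m))
    pools = {}
    for x in range(q):
        for y in range(q):
            members = [
                idx
                for idx, coeffs in enumerate(samples)
                if sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q == y
            ]
            pools[(x, y)] = members
    return pools

pools = polynomial_pools(q=5, m=2)        # 25 samples, 25 pools of 5 samples each
print(len(pools), len(pools[(0, 0)]))     # 25 5
```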
Example applications
The generality of the theory of group testing lends it to many diverse applications, including clone screening, locating electrical shorts, high-speed computer networks, medical examination, quantity searching, statistics, machine learning, DNA sequencing, cryptography, and data forensics. This section provides a brief overview of a small selection of these applications.
Multiaccess channels
A multiaccess channel is a communication channel that connects many users at once. Every user can listen and transmit on the channel, but if more than one user transmits at the same time, the signals collide, and are reduced to unintelligible noise. Multiaccess channels are important for various real-world applications, notably wireless computer networks and phone networks.
A prominent problem with multiaccess channels is how to assign transmission times to the users so that their messages do not collide. A simple method is to give each user their own time slot in which to transmit, requiring slots. (This is called time division multiplexing, or TDM.) However, this is very inefficient, since it will assign transmission slots to users that may not have a message, and it is usually assumed that only a few users will want to transmit at any given time – otherwise a multiaccess channel is not practical in the first place.
In the context of group testing, this problem is usually tackled by dividing time into 'epochs' in the following way. A user is called 'active' if they have a message at the start of an epoch. (If a message is generated during an epoch, the user only becomes active at the start of the next one.) An epoch ends when every active user has successfully transmitted their message. The problem is then to find all the active users in a given epoch, and schedule a time for them to transmit (if they have not already done so successfully). Here, a test on a set of users corresponds to those users attempting a transmission. The results of the test are the number of users that attempted to transmit, and , corresponding respectively to no active users, exactly one active user (message successful) or more than one active user (message collision). Therefore, using an adaptive group testing algorithm with outcomes , it can be determined which users wish to transmit in the epoch. Then, any user that has not yet made a successful transmission can now be assigned a slot to transmit, without wastefully assigning times to inactive users.
Machine learning and compressed sensing
Machine learning is a field of computer science that has many software applications such as DNA classification, fraud detection and targeted advertising. One of the main subfields of machine learning is the 'learning by examples' problem, where the task is to approximate some unknown function when given its value at a number of specific points. As outlined in this section, this function learning problem can be tackled with a group-testing approach.
In a simple version of the problem, there is some unknown function, where , and (using logical arithmetic: addition is logical OR and multiplication is logical AND). Here is ' sparse', which means that at most of its entries are . The aim is to construct an approximation to using point evaluations, where is as small as possible. (Exactly recovering corresponds to zero-error algorithms, whereas is approximated by algorithms that have a non-zero probability of error.)
In this problem, recovering is equivalent to finding . Moreover, if and only if there is some index, , where . Thus this problem is analogous to a group-testing problem with defectives and total items. The entries of are the items, which are defective if they are , specifies a test, and a test is positive if and only if .
In reality, one will often be interested in functions that are more complicated, such as , again where . Compressed sensing, which is closely related to group testing, can be used to solve this problem.
In compressed sensing, the goal is to reconstruct a signal, , by taking a number of measurements. These measurements are modelled as taking the dot product of with a chosen vector. The aim is to use a small number of measurements, though this is typically not possible unless something is assumed about the signal. One such assumption (which is common) is that only a small number of entries of are significant, meaning that they have a large magnitude. Since the measurements are dot products of , the equation holds, where is a matrix that describes the set of measurements that have been chosen and is the set of measurement results. This construction shows that compressed sensing is a kind of 'continuous' group testing.
The primary difficulty in compressed sensing is identifying which entries are significant. Once that is done, there are a variety of methods to estimate the actual values of the entries. This task of identification can be approached with a simple application of group testing. Here a group test produces a complex number: the sum of the entries that are tested. The outcome of a test is called positive if it produces a complex number with a large magnitude, which, given the assumption that the significant entries are sparse, indicates that at least one significant entry is contained in the test.
There are explicit deterministic constructions for this type of combinatorial search algorithm, requiring measurements. However, as with group-testing, these are sub-optimal, and random constructions (such as COMP) can often recover sub-linearly in .
Multiplex assay design for COVID19 testing
During a pandemic such as the COVID-19 outbreak in 2020, virus detection assays are sometimes run using nonadaptive group testing designs.
One example was provided by the Origami Assays project which released open source group testing designs to run on a laboratory standard 96 well plate.
In a laboratory setting, one challenge of group testing is the construction of the mixtures can be time-consuming and difficult to do accurately by hand. Origami assays provided a workaround for this construction problem by providing paper templates to guide the technician on how to allocate patient samples across the test wells.
Using the largest group testing designs (XL3) it was possible to test 1120 patient samples in 94 assay wells. If the true positive rate was low enough, then no additional testing was required.
Data forensics
Data forensics is a field dedicated to finding methods for compiling digital evidence of a crime. Such crimes typically involve an adversary modifying the data, documents or databases of a victim, with examples including the altering of tax records, a virus hiding its presence, or an identity thief modifying personal data.
A common tool in data forensics is the one-way cryptographic hash. This is a function that takes the data, and through a difficult-to-reverse procedure, produces a unique number called a hash. Hashes, which are often much shorter than the data, allow us to check if the data has been changed without having to wastefully store complete copies of the information: the hash for the current data can be compared with a past hash to determine if any changes have occurred. An unfortunate property of this method is that, although it is easy to tell if the data has been modified, there is no way of determining how: that is, it is impossible to recover which part of the data has changed.
One way to get around this limitation is to store more hashes – now of subsets of the data structure – to narrow down where the attack has occurred. However, to find the exact location of the attack with a naive approach, a hash would need to be stored for every datum in the structure, which would defeat the point of the hashes in the first place. (One may as well store a regular copy of the data.) Group testing can be used to dramatically reduce the number of hashes that need to be stored. A test becomes a comparison between the stored and current hashes, which is positive when there is a mismatch. This indicates that at least one edited datum (which is taken as defectiveness in this model) is contained in the group that generated the current hash.
In fact, the amount of hashes needed is so low that they, along with the testing matrix they refer to, can even be stored within the organisational structure of the data itself. This means that as far as memory is concerned the test can be performed 'for free'. (This is true with the exception of a master-key/password that is used to secretly determine the hashing function.)
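The idea can be illustrated with ordinary cryptographic hashes. In the sketch below, the records, the use of SHA-256, and the particular test groups are all illustrative choices: each 'test' stores one hash of a concatenated group of records, the positive tests after an edit are the groups whose stored and recomputed hashes disagree, and a COMP-style decoding narrows down which record changed.

```python
import hashlib

def group_hash(records, group):
    """Hash the concatenation of the records whose indices are in `group`."""
    h = hashlib.sha256()
    for i in sorted(group):
        h.update(records[i].encode())
    return h.hexdigest()

records = [f"row {i}: original value" for i in range(8)]
# A small non-adaptive design: six stored hashes instead of eight.
groups = [
    {0, 1, 2, 3}, {4, 5, 6, 7},          # which half changed?
    {0, 1, 4, 5}, {2, 3, 6, 7},          # which pair within that half?
    {0, 2, 4, 6}, {1, 3, 5, 7},          # which single record?
]
stored = [group_hash(records, g) for g in groups]

records[5] = "row 5: tampered value"      # an adversary edits one record

# A test is positive when the recomputed group hash differs from the stored one.
positive = [group_hash(records, g) != h for g, h in zip(groups, stored)]
# COMP-style decoding: a record stays suspect only if every group containing it is positive.
suspects = [i for i in range(len(records))
            if all(positive[j] for j, g in enumerate(groups) if i in g)]
print(suspects)    # [5]
```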
Notes
References
Citations
General references
Atri Rudra's course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Spring 2007), Lectures 7.
Atri Rudra's course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Spring 2010), Lectures 10, 11, 28, 29
See also
Balance puzzle
Combinatorics
Design of experiments | Group testing | Mathematics | 7,649 |
9,904,516 | https://en.wikipedia.org/wiki/Tin%20can%20wall | A tin can wall is a wall constructed from tin cans, which are not a common building source. The cans can be laid in concrete, stacked vertically on top of each other, and crushed or cut and flattened to be used as shingles. They can also be used for furniture.
Tin cans can form the actual fill-in structure (or walls) of a building, as is done with earthships.
Tin cans are a relatively recent material, and methods for building with them are newer still. The two main structural methods for building with tin cans are laying them horizontally in a concrete matrix and stacking them vertically.
History
Tin can building in New Mexico originated in the early 1980s as a response to the massive amounts of trash being discarded and the wasteful nature of common building practices. Tin can construction was an attempt to utilize a readily available resource that was normally sent to landfills or recycling centers. This led to various experiments in tin can building, including space-filler between wooden frames in traditional house styles and creating domes and archways using cans and cement. Within time, more simplified and practical methods were developed, such as the earthship tin can wall. The main person behind these efforts was Mike Reynolds, also creator of the earthship building method.
Construction
A “traditional” earthship tin can wall is made by horizontally stacking tin cans in a concrete matrix. The cans are laid side by side and in alternating rows, similar to bricks. This is done simply and efficiently, using batches of concrete between the cans. The consistency of the concrete must be relatively thick, so as to hold its form and the tin cans in place. A surprisingly large number of cans are required.
The method for stacking the cans involves creating a row of cans separated by hand-formed “lumps” of concrete. The layout of a row is can, concrete, can, and so on. This is then repeated with the alternating pattern reversed, so that every can is laid on top of a concrete “lump” in the row below. This continues until the wall is completed, or until the weight of the wall and the hardness of the cement make its solidity questionable. At that point it is wise to let the wall harden, though in practice the laying pace is slow enough that by the time a builder returns to a recently laid area it has usually had time to set. Whether to continue is a judgment call, but building can usually resume later the same day or by the next day.
Materials
The materials that go into a tin can wall are simple: mainly tin cans and concrete. Tin cans (now aluminum cans as real tin cans are not as readily available) can be acquired from any recycling center or a local bar. Brick mortar may also be used instead of cement.
Coating
Once the wall is completed, the cans and the concrete are covered with a layer of cement or adobe mud mixture. What is applied depends on the location of the wall; if it is located in an area where it will be exposed to water (such as in a bathroom or utility room) it will need to be coated with a concrete layer. If it is located in a living room or bedroom, it can be covered with adobe plaster. A tin can wall that has half of its structure outside (such as the wall of an entrance to a building) will be coated with cement on the outside and adobe on the inside. The shape of the cans (their pull-tabs, etc.) and the roughness of the cement will provide a lath-like surface for the cement or adobe to stick to.
This initial layer is “screeded” (scratched with a tool that creates a ridge-like pattern, making it easier to apply another coat) and a second layer is added. More layers may be added, depending on the builder’s judgment and the material being applied. In the case of adobe mud, once the initial layer is applied and allowed to harden it will crack and will need additional coats; with cement, fewer layers are needed. The basic rule is that more coats make a stronger and better-looking wall, though this can be overdone.
When a tin can wall has been sufficiently coated it will then be “finished” with a fine plaster (lime-based or other), stucco (if the wall is outside), or linseed oil (in the case of adobe). It can also be finished with a clay “slip”, or aliz, which is an earth-based coat that can have natural pigment and fine grains of mica mixed in to produce a beautiful shimmering and organic-looking surface.
Examples
An outside tin can insulating wall is a simple design. It is made out of two tin can walls with a layer of solid insulation in the middle. The insulation can vary in thickness, depending on climate and budget. It can be made out of various “green” or sustainable materials or average run-of-the-mill solid insulation. The exposed sides of the tin can walls (those not facing the insulation) are finished using methods aforementioned. The inside part of the wall can be coated with adobe while the outside is finished with concrete and stucco.
A door frame can be built into the can wall, or rather the can wall is built around the frame. The process involves initially having a door frame set in place (on the foundation) and stacking cans to either side of the frame until they reach the other walls of the building and the ceiling. The door frame is fastened to the tin can wall by hammering nails partially into the side of the frame that will touch the tin can wall and allowing the concrete to harden around the nails. Short strips of metal lath are also attached to the frame and folded out (perpendicular to the frame) and allowed to set in the can/concrete matrix.
The same method is applied to windows. The only difference is all sides of window are fastened to the tin can wall, while the door frame is fastened to the foundation on one side (bottom) and the can wall on three sides. Metal lath and nails are all that is needed, along with a bubble level or similar device. Once the desired height is reached to install a window frame, the wall is leveled. If any cans stick above the level plane they can be flattened to the desired height. Nails and lath sticking out from under the window frame holds the bottom of it in place, and the sides and top of the frame are fastened in the same fashion as a door frame.
To make a smooth transition from door (or window frame) to tin can wall with plaster, sheets of metal lath are attached to the rim of the frame and folded over the gap between the frame and the can wall. A double-layered wooden frame is therefore required, to give a surface for the metal lath to be nailed to while leaving the inside frame untouched. However, this is not strictly necessary.
Electrical wiring is simple, with the wires attached to the cans or fastened to the concrete before the initial coat. If a wire needs to go to the other side of a wall it can be punched directly through a can. Plumbing and pipework can use similar methods. The can wall can always be built around a pipe, or there can be a wooden frame made similar to a window or door to house the pipe.
Strength and use
Tin can walls are not considered load-bearing using this building method, although two-story circular dome structures have been built. The basic rule is that it can support considerable weight but should not be used to hold up much more than its own form and shape. It would not be wise to attach a heavy timber roof to a tin can wall without support beams or frames. The basic function for can walls is in-fill (filling in the space between support beams or the main structure) and the division of space. They work well to separate a living room from a bedroom, and are also used as insulating walls from the outside.
An earthship tin can wall is both an efficient and economical building method. They are mainly composed of aluminum and cement, and can withstand the test of time. They are made from few materials (the coating method can be more complex than building the wall itself). They use recycled materials and require little or no skill to build.
Alternative methods
The other tin can wall method that will be briefly described is a system developed by the German artist Michael Hönes. He has led community rebuilding efforts in Lesotho using tin cans to create housing and opportunities for AIDS orphans and foster mothers. Known as the TCV (Tin-Can Villages) project, Hönes has created buildings using tin cans, masonite, paint, and wire. The roof is made of corrugated metal shingles. In this method the cans are stacked vertically, one on top of the other, in rows that are placed side by side and secured with wire. They are left exposed and are arranged in a decorative manner. The structures require no foundation, and are said to be able to withstand the Lesotho storms.
A site for the first village in Maseru has been secured and the funding has been sourced; as of July 2004, what is lacking is building permits. The TCV organization, headed by Hönes, has been prefabricating tin can walls so that, once the permits are granted, about one building a week can be constructed.
So far the TCV organization’s efforts have been concentrated on storehouses, offices, a large weaving workshop for the women of the Elelloang Basali Weavers group in Teyateyaneng, and a solar-powered restaurant that cooks with solar ovens. Michael Hönes also focuses on tin can furniture and has created a stove out of tin cans that uses one-third less wood than what the poor people of the area commonly use, thereby diminishing the firewood crisis in Lesotho.
See also
Bottle wall
Earthship
References
External links
Michael Hönes' Alternative Methods
Ideas For Building/Decorating With Cans
Building engineering | Tin can wall | Engineering | 2,057 |
19,600,416 | https://en.wikipedia.org/wiki/Isotope | Isotopes are distinct nuclear species (or nuclides) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but different nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have similar chemical properties, they have different atomic masses and physical properties.
The term isotope is derived from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in a 1913 suggestion to the British chemist Frederick Soddy, who popularized the term.
The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number.
For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively.
Isotope vs. nuclide
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number greatly affects nuclear properties, but its effect on chemical properties is negligible for most elements. Even for the lightest elements, whose ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect although it matters in some circumstances (for hydrogen, the lightest element, the isotope effect is large enough to affect biology strongly). The term isotopes (originally also isotopic elements, now sometimes isotopic nuclides) is intended to imply comparison (like synonyms or isomers). For example, the nuclides , , are isotopes (nuclides with the same atomic number but different mass numbers), but , , are isobars (nuclides with the same mass number). However, isotope is the older term and so is better known than nuclide and is still sometimes used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine.
Notation
An isotope and/or nuclide is specified by the name of the particular element (this indicates the atomic number) followed by a hyphen and the mass number (e.g. helium-3, helium-4, carbon-12, carbon-14, uranium-235 and uranium-239). When a chemical symbol is used, e.g. "C" for carbon, standard notation (now known as "AZE notation" because A is the mass number, Z the atomic number, and E for element) is to indicate the mass number (number of nucleons) with a superscript at the upper left of the chemical symbol and to indicate the atomic number with a subscript at the lower left (e.g. ³₂He, ⁴₂He, ¹²₆C, ¹⁴₆C, ²³⁵₉₂U and ²³⁹₉₂U). Because the atomic number is given by the element symbol, it is common to state only the mass number in the superscript and leave out the atomic number subscript (e.g. ³He, ⁴He, ¹²C, ¹⁴C, ²³⁵U and ²³⁹U). The letter m (for metastable) is sometimes appended after the mass number to indicate a nuclear isomer, a metastable or energetically excited nuclear state (as opposed to the lowest-energy ground state), for example ¹⁸⁰ᵐTa (tantalum-180m).
The common pronunciation of the AZE notation is different from how it is written: ⁴₂He is commonly pronounced as helium-four instead of four-two-helium, and ²³⁵₉₂U as uranium two-thirty-five (American English) or uranium-two-three-five (British) instead of 235-92-uranium.
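For readers who want to generate these symbols programmatically, the following short Python sketch (illustrative only; the helper names and the tiny examples are ours, not part of any standard library) formats a nuclide in both the superscript and the hyphenated styles described above:

# Illustrative helpers for writing nuclide symbols in the two common styles.
SUPERSCRIPTS = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")

def nuclide_symbol(element_symbol: str, mass_number: int, isomer: bool = False) -> str:
    """Shorthand with the mass number as a leading superscript, e.g. '²³⁵U' or '¹⁸⁰ᵐTa'."""
    return str(mass_number).translate(SUPERSCRIPTS) + ("ᵐ" if isomer else "") + element_symbol

def nuclide_name(element_name: str, mass_number: int, isomer: bool = False) -> str:
    """Hyphenated form, e.g. 'uranium-235' or 'tantalum-180m'."""
    return f"{element_name}-{mass_number}" + ("m" if isomer else "")

print(nuclide_symbol("U", 235))                    # ²³⁵U
print(nuclide_symbol("Ta", 180, isomer=True))      # ¹⁸⁰ᵐTa
print(nuclide_name("tantalum", 180, isomer=True))  # tantalum-180m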
Radioactive, primordial, and stable isotopes
Some isotopes/nuclides are radioactive, and are therefore referred to as radioisotopes or radionuclides, whereas others have never been observed to decay radioactively and are referred to as stable isotopes or stable nuclides. For example, carbon-14 is a radioactive form of carbon, whereas carbon-12 and carbon-13 are stable isotopes. There are about 339 naturally occurring nuclides on Earth, of which 286 are primordial nuclides, meaning that they have existed since the Solar System's formation.
Primordial nuclides include 35 nuclides with very long half-lives (over 100 million years) and 251 that are formally considered as "stable nuclides", because they have not been observed to decay. In most cases, for obvious reasons, if an element has stable isotopes, those isotopes predominate in the elemental abundance found on Earth and in the Solar System. However, in the cases of three elements (tellurium, indium, and rhenium) the most abundant isotope found in nature is actually one (or two) extremely long-lived radioisotope(s) of the element, despite these elements having one or more stable isotopes.
Theory predicts that many apparently "stable" nuclides are radioactive, with extremely long half-lives (discounting the possibility of proton decay, which would make all nuclides ultimately unstable). Some stable nuclides are in theory energetically susceptible to other known forms of decay, such as alpha decay or double beta decay, but no decay products have yet been observed, and so these isotopes are said to be "observationally stable". The predicted half-lives for these nuclides often greatly exceed the estimated age of the universe, and in fact, there are also 31 known radionuclides (see primordial nuclide) with half-lives longer than the age of the universe.
Adding in the radioactive nuclides that have been created artificially, there are 3,339 currently known nuclides. These include 905 nuclides that are either stable or have half-lives longer than 60 minutes. See list of nuclides for details.
History
Radioactive isotopes
The existence of isotopes was first suggested in 1913 by the radiochemist Frederick Soddy, based on studies of radioactive decay chains that indicated about 40 different species referred to as radioelements (i.e. radioactive elements) between uranium and lead, although the periodic table only allowed for 11 elements between lead and uranium inclusive.
Several attempts to separate these new radioelements chemically had failed. For example, Soddy had shown in 1910 that mesothorium (later shown to be 228Ra), radium (226Ra, the longest-lived isotope), and thorium X (224Ra) are impossible to separate. Attempts to place the radioelements in the periodic table led Soddy and Kazimierz Fajans independently to propose their radioactive displacement law in 1913, to the effect that alpha decay produced an element two places to the left in the periodic table, whereas beta decay emission produced an element one place to the right. Soddy recognized that emission of an alpha particle followed by two beta particles led to the formation of an element chemically identical to the initial element but with a mass four units lighter and with different radioactive properties.
Soddy proposed that several types of atoms (differing in radioactive properties) could occupy the same place in the table. For example, the alpha-decay of uranium-235 forms thorium-231, whereas the beta decay of actinium-230 forms thorium-230. The term "isotope", Greek for "at the same place", was suggested to Soddy by Margaret Todd, a Scottish physician and family friend, during a conversation in which he explained his ideas to her. He received the 1921 Nobel Prize in Chemistry in part for his work on isotopes.
In 1914 T. W. Richards found variations between the atomic weight of lead from different mineral sources, attributable to variations in isotopic composition due to different radioactive origins.
Stable isotopes
The first evidence for multiple isotopes of a stable (non-radioactive) element was found by J. J. Thomson in 1912 as part of his exploration into the composition of canal rays (positive ions). Thomson channelled streams of neon ions through parallel magnetic and electric fields, measured their deflection by placing a photographic plate in their path, and computed their mass to charge ratio using a method that became known as Thomson's parabola method. Each stream created a glowing patch on the plate at the point it struck. Thomson observed two separate parabolic patches of light on the photographic plate, which suggested two species of nuclei with different mass-to-charge ratios. He wrote "There can, therefore, I think, be little doubt that what has been called neon is not a simple gas but a mixture of two gases, one of which has an atomic weight about 20 and the other about 22. The parabola due to the heavier gas is always much fainter than that due to the lighter, so that probably the heavier gas forms only a small percentage of the mixture."
F. W. Aston subsequently discovered multiple stable isotopes for numerous elements using a mass spectrograph. In 1919 Aston studied neon with sufficient resolution to show that the two isotopic masses are very close to the integers 20 and 22 and that neither is equal to the known molar mass (20.2) of neon gas. This is an example of Aston's whole number rule for isotopic masses, which states that large deviations of elemental molar masses from integers are primarily due to the fact that the element is a mixture of isotopes. Aston similarly showed in 1920 that the molar mass of chlorine (35.45) is a weighted average of the almost integral masses for the two isotopes 35Cl and 37Cl.
Neutrons
After the discovery of the neutron by James Chadwick in 1932, the ultimate root cause for the existence of isotopes was clarified, that is, the nuclei of different isotopes for a given element have different numbers of neutrons, albeit having the same number of protons.
Variation in properties between isotopes
Chemical and molecular properties
A neutral atom has the same number of electrons as protons. Thus different isotopes of a given element all have the same number of electrons and share a similar electronic structure. Because the chemical behaviour of an atom is largely determined by its electronic structure, different isotopes exhibit nearly identical chemical behaviour.
The main exception to this is the kinetic isotope effect: due to their larger masses, heavier isotopes tend to react somewhat more slowly than lighter isotopes of the same element. This is most pronounced by far for protium (¹H), deuterium (²H), and tritium (³H), because deuterium has twice the mass of protium and tritium has three times the mass of protium. These mass differences also affect the behavior of their respective chemical bonds, by changing the center of gravity (reduced mass) of the atomic systems. However, for heavier elements, the relative mass difference between isotopes is much less, so that the mass-difference effects on chemistry are usually negligible. (Heavy elements also have relatively more neutrons than lighter elements, so the ratio of the nuclear mass to the collective electronic mass is slightly greater.) There is also an equilibrium isotope effect.
Similarly, two molecules that differ only in the isotopes of their atoms (isotopologues) have identical electronic structures, and therefore almost indistinguishable physical and chemical properties (again with deuterium and tritium being the primary exceptions). The vibrational modes of a molecule are determined by its shape and by the masses of its constituent atoms; so different isotopologues have different sets of vibrational modes. Because vibrational modes allow a molecule to absorb photons of corresponding energies, isotopologues have different optical properties in the infrared range.
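A rough way to see why isotopologues absorb at different infrared frequencies (a standard harmonic-oscillator estimate, added here for illustration and not part of the original text): for a bond with force constant k between atoms of masses m1 and m2, the vibrational frequency is

\omega = \sqrt{k/\mu}, \qquad \mu = \frac{m_1 m_2}{m_1 + m_2},

so replacing protium by deuterium in a bond to a heavy atom roughly doubles the reduced mass μ and lowers the stretching frequency by a factor of about 1/√2 ≈ 0.71; C–H stretches near 2900 cm⁻¹, for example, appear near 2100 cm⁻¹ in the deuterated molecule.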
Nuclear properties and stability
Atomic nuclei consist of protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert an attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to bind into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus. For example, although the neutron:proton ratio of helium-3 is 1:2, the neutron:proton ratio of uranium-238 is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (Z = N). The nuclide calcium-40 is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons.
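To make the trend concrete, here is a small Python check of the neutron-to-proton ratios for the nuclides mentioned above (computed from Z and A only; an illustrative sketch, not part of the article):

# Neutron:proton ratio N/Z = (A - Z) / Z for a few representative nuclides.
nuclides = {"helium-3": (2, 3), "calcium-40": (20, 40), "uranium-238": (92, 238)}
for name, (Z, A) in nuclides.items():
    N = A - Z
    print(f"{name}: N = {N}, N/Z = {N / Z:.2f}")
# helium-3: N/Z = 0.50 (1:2); calcium-40: N/Z = 1.00 (1:1); uranium-238: N/Z = 1.59 (> 3:2)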
Numbers of isotopes per element
Of the 80 elements with a stable isotope, the largest number of stable isotopes observed for any element is ten (for the element tin). No element has nine or eight stable isotopes. Five elements have seven stable isotopes, eight have six stable isotopes, ten have five stable isotopes, nine have four stable isotopes, five have three stable isotopes, 16 have two stable isotopes (counting tantalum-180m as stable), and 26 elements have only a single stable isotope (of these, 19 are so-called mononuclidic elements, having a single primordial stable isotope that dominates and fixes the atomic weight of the natural element to high precision; 3 radioactive mononuclidic elements occur as well). In total, there are 251 nuclides that have not been observed to decay. For the 80 elements that have one or more stable isotopes, the average number of stable isotopes is 251/80 ≈ 3.14 isotopes per element.
Even and odd nucleon numbers
The proton:neutron ratio is not the only factor affecting nuclear stability. It depends also on evenness or oddness of its atomic number Z, neutron number N and, consequently, of their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron emission), electron capture, or other less common decay modes such as spontaneous fission and cluster decay.
Most stable nuclides are even-proton-even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton-even-neutron, and even-proton-odd-neutron nuclides. Stable odd-proton-odd-neutron nuclides are the least common.
Even atomic number
The 146 even-proton, even-neutron (EE) nuclides comprise ~58% of all stable nuclides and all have spin 0 because of pairing. There are also 24 primordial long-lived even-even nuclides. As a result, each of the 41 even-numbered elements from 2 to 82 has at least one stable isotope, and most of these elements have several primordial isotopes. Half of these even-numbered elements have six or more stable isotopes. The extreme stability of helium-4 due to a double pairing of 2 protons and 2 neutrons prevents any nuclides containing five (helium-5, lithium-5) or eight (beryllium-8) nucleons from existing long enough to serve as platforms for the buildup of heavier elements via nuclear fusion in stars (see triple alpha process).
Only five stable nuclides contain both an odd number of protons and an odd number of neutrons. The first four "odd-odd" nuclides occur in low mass nuclides, for which changing a proton to a neutron or vice versa would lead to a very lopsided proton-neutron ratio (²H, ⁶Li, ¹⁰B, and ¹⁴N; spins 1, 1, 3, 1). The only other entirely "stable" odd-odd nuclide, ¹⁸⁰ᵐTa (spin 9), is thought to be the rarest of the 251 stable nuclides, and is the only primordial nuclear isomer, which has not yet been observed to decay despite experimental attempts.
Many odd-odd radionuclides (such as the ground state of tantalum-180) with comparatively short half-lives are known. Usually, they beta-decay to their nearby even-even isobars that have paired protons and paired neutrons. Of the nine primordial odd-odd nuclides (five stable and four radioactive with long half-lives), only ¹⁴N is the most common isotope of a common element. This is the case because it is a part of the CNO cycle. The nuclides ⁶Li and ¹⁰B are minority isotopes of elements that are themselves rare compared to other light elements, whereas the other six isotopes make up only a tiny percentage of the natural abundance of their elements.
Odd atomic number
53 stable nuclides have an even number of protons and an odd number of neutrons. They are a minority in comparison to the even-even isotopes, which are about 3 times as numerous. Among the 41 even-Z elements that have a stable nuclide, only two elements (argon and cerium) have no even-odd stable nuclides. One element (tin) has three. There are 24 elements that have one even-odd nuclide and 13 that have two even-odd nuclides. Of 35 primordial radionuclides there exist four even-odd nuclides, including the fissile ²³⁵U. Because of their odd neutron numbers, the even-odd nuclides tend to have large neutron capture cross-sections, due to the energy that results from neutron-pairing effects. These stable even-proton odd-neutron nuclides tend to be uncommon by abundance in nature, generally because, to form and enter into primordial abundance, they must have escaped capturing neutrons to form yet other stable even-even isotopes, during both the s-process and r-process of neutron capture, during nucleosynthesis in stars. For this reason, only ⁹Be and ¹⁹⁵Pt are the most naturally abundant isotopes of their element.
48 stable odd-proton-even-neutron nuclides, stabilized by their paired neutrons, form most of the stable isotopes of the odd-numbered elements; the very few odd-proton-odd-neutron nuclides comprise the others. There are 41 odd-numbered elements with Z = 1 through 81, of which 39 have stable isotopes (technetium (Z = 43) and promethium (Z = 61) have no stable isotopes). Of these 39 odd-Z elements, 30 elements (including hydrogen-1, where 0 neutrons is even) have one stable odd-even isotope, and nine elements:
chlorine (³⁵Cl and ³⁷Cl),
potassium (³⁹K and ⁴¹K),
copper (⁶³Cu and ⁶⁵Cu),
gallium (⁶⁹Ga and ⁷¹Ga),
bromine (⁷⁹Br and ⁸¹Br),
silver (¹⁰⁷Ag and ¹⁰⁹Ag),
antimony (¹²¹Sb and ¹²³Sb),
iridium (¹⁹¹Ir and ¹⁹³Ir), and
thallium (²⁰³Tl and ²⁰⁵Tl), have two odd-even stable isotopes each. This makes a total of 48 stable odd-even isotopes.
There are also five primordial long-lived radioactive odd-even isotopes, ⁸⁷Rb, ¹¹⁵In, ¹⁸⁷Re, ¹⁵¹Eu, and ²⁰⁹Bi. The last two were only recently found to decay, with half-lives greater than 10¹⁸ years.
Odd neutron number
Actinides with odd neutron number are generally fissile (with thermal neutrons), whereas those with even neutron number are generally not, though they are fissionable with fast neutrons. All observationally stable odd-odd nuclides have nonzero integer spin. This is because the single unpaired neutron and unpaired proton have a larger nuclear force attraction to each other if their spins are aligned (producing a total spin of at least 1 unit), instead of anti-aligned. See deuterium for the simplest case of this nuclear behavior.
Only ¹⁴N, ⁹Be, and ¹⁹⁵Pt have odd neutron number and are the most naturally abundant isotope of their element.
Occurrence in nature
Elements are composed either of one nuclide (mononuclidic elements) or of more than one naturally occurring isotope. The unstable (radioactive) isotopes are either primordial or postprimordial. Primordial isotopes were a product of stellar nucleosynthesis or another type of nucleosynthesis such as cosmic ray spallation, and have persisted down to the present because their rate of decay is very slow (e.g. uranium-238 and potassium-40). Post-primordial isotopes were created by cosmic ray bombardment as cosmogenic nuclides (e.g., tritium, carbon-14), or by the decay of a radioactive primordial isotope to a radioactive radiogenic nuclide daughter (e.g. uranium to radium). A few isotopes are naturally synthesized as nucleogenic nuclides, by some other natural nuclear reaction, such as when neutrons from natural nuclear fission are absorbed by another atom.
As discussed above, only 80 elements have any stable isotopes, and 26 of these have only one stable isotope. Thus, about two-thirds of stable elements occur naturally on Earth in multiple stable isotopes, with the largest number of stable isotopes for an element being ten, for tin (Sn). There are about 94 elements found naturally on Earth (up to plutonium inclusive), though some are detected only in very tiny amounts, such as plutonium-244. Scientists estimate that the elements that occur naturally on Earth (some only as radioisotopes) occur as 339 isotopes (nuclides) in total. Only 251 of these naturally occurring nuclides are stable, in the sense of never having been observed to decay as of the present time. An additional 35 primordial nuclides (to a total of 286 primordial nuclides) are radioactive with known half-lives, but have half-lives longer than 100 million years, allowing them to exist from the beginning of the Solar System. See list of nuclides for details.
All the known stable nuclides occur naturally on Earth; the other naturally occurring nuclides are radioactive but occur on Earth due to their relatively long half-lives, or else due to other means of ongoing natural production. These include the afore-mentioned cosmogenic nuclides, the nucleogenic nuclides, and any radiogenic nuclides formed by ongoing decay of a primordial radioactive nuclide, such as radon and radium from uranium.
An additional ~3000 radioactive nuclides not found in nature have been created in nuclear reactors and in particle accelerators. Many short-lived nuclides not found naturally on Earth have also been observed by spectroscopic analysis, being naturally created in stars or supernovae. An example is aluminium-26, which is not naturally found on Earth but is found in abundance on an astronomical scale.
The tabulated atomic masses of elements are averages that account for the presence of multiple isotopes with different masses. Before the discovery of isotopes, empirically determined noninteger values of atomic mass confounded scientists. For example, a sample of chlorine contains 75.8% chlorine-35 and 24.2% chlorine-37, giving an average atomic mass of 35.5 atomic mass units.
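As a quick back-of-the-envelope check (using the integer mass numbers rather than exact isotope masses; this calculation is added for illustration): 0.758 × 35 + 0.242 × 37 ≈ 35.48, which rounds to the quoted 35.5 u.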
According to generally accepted cosmology theory, only isotopes of hydrogen and helium, traces of some isotopes of lithium and beryllium, and perhaps some boron, were created at the Big Bang, while all other nuclides were synthesized later, in stars and supernovae, and in interactions between energetic particles such as cosmic rays, and previously produced nuclides. (See nucleosynthesis for details of the various processes thought responsible for isotope production.) The respective abundances of isotopes on Earth result from the quantities formed by these processes, their spread through the galaxy, and the rates of decay for isotopes that are unstable. After the initial coalescence of the Solar System, isotopes were redistributed according to mass, and the isotopic composition of elements varies slightly from planet to planet. This sometimes makes it possible to trace the origin of meteorites.
Atomic mass of isotopes
The atomic mass (mr) of an isotope (nuclide) is determined mainly by its mass number (i.e. number of nucleons in its nucleus). Small corrections are due to the binding energy of the nucleus (see mass defect), the slight difference in mass between proton and neutron, and the mass of the electrons associated with the atom, the latter because the electron:nucleon ratio differs among isotopes.
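Schematically (a standard textbook decomposition, added here for clarity rather than taken from this article), the mass of a neutral atom with Z protons and N = A − Z neutrons can be written as

m(A,Z) \approx Z\,m_p + N\,m_n + Z\,m_e - \frac{B(A,Z)}{c^2},

where B(A,Z) is the nuclear binding energy; the binding-energy term and the proton–neutron mass difference are what keep the atomic mass close to, but not exactly equal to, the mass number.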
The mass number is a dimensionless quantity. The atomic mass, on the other hand, is measured using the atomic mass unit based on the mass of the carbon-12 atom. It is denoted with symbols "u" (for unified atomic mass unit) or "Da" (for dalton).
The atomic masses of naturally occurring isotopes of an element determine the standard atomic weight of the element. When the element contains N isotopes, the expression below is applied for the average atomic mass m̄:

m̄ = (m1·x1 + m2·x2 + ... + mN·xN) / (x1 + x2 + ... + xN)
where m1, m2, ..., mN are the atomic masses of each individual isotope, and x1, ..., xN are the relative abundances of these isotopes.
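A minimal numerical sketch of this weighted average in Python, using chlorine as in the example above (the isotope masses are rounded literature values, quoted here for illustration):

# Standard atomic weight as an abundance-weighted mean of isotope masses.
masses = [34.969, 36.966]        # atomic masses of 35Cl and 37Cl in daltons (rounded)
abundances = [0.758, 0.242]      # relative abundances x_i
m_avg = sum(m * x for m, x in zip(masses, abundances)) / sum(abundances)
print(round(m_avg, 2))           # 35.45, close to the tabulated standard atomic weight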
Applications of isotopes
Purification of isotopes
Several applications exist that capitalize on the properties of the various isotopes of a given element. Isotope separation is a significant technological challenge, particularly with heavy elements such as uranium or plutonium. Lighter elements such as lithium, carbon, nitrogen, and oxygen are commonly separated by gas diffusion of their compounds such as CO and NO. The separation of hydrogen and deuterium is unusual because it is based on chemical rather than physical properties, for example in the Girdler sulfide process. Uranium isotopes have been separated in bulk by gas diffusion, gas centrifugation, laser ionization separation, and (in the Manhattan Project) by a type of production mass spectrometry.
Use of chemical and biological properties
Isotope analysis is the determination of isotopic signature, the relative abundances of isotopes of a given element in a particular sample. Isotope analysis is frequently done by isotope ratio mass spectrometry. For biogenic substances in particular, significant variations of isotopes of C, N, and O can occur. Analysis of such variations has a wide range of applications, such as the detection of adulteration in food products or the geographic origins of products using isoscapes. The identification of certain meteorites as having originated on Mars is based in part upon the isotopic signature of trace gases contained in them.
Isotopic substitution can be used to determine the mechanism of a chemical reaction via the kinetic isotope effect.
Another common application is isotopic labeling, the use of unusual isotopes as tracers or markers in chemical reactions. Normally, atoms of a given element are indistinguishable from each other. However, by using isotopes of different masses, even different nonradioactive stable isotopes can be distinguished by mass spectrometry or infrared spectroscopy. For example, in 'stable isotope labeling with amino acids in cell culture (SILAC)' stable isotopes are used to quantify proteins. If radioactive isotopes are used, they can be detected by the radiation they emit (this is called radioisotopic labeling).
Isotopes are commonly used to determine the concentration of various elements or substances using the isotope dilution method, whereby known amounts of isotopically substituted compounds are mixed with the samples and the isotopic signatures of the resulting mixtures are determined with mass spectrometry.
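A deliberately idealized sketch of the isotope dilution idea (assuming an isotopically pure spike and an analyte that contains none of the spike isotope; real work corrects for the natural isotopic composition of both sample and spike):

# Idealized isotope dilution: infer the amount of analyte from a measured isotope ratio.
def analyte_amount(spike_amount_mol: float, measured_ratio: float) -> float:
    """measured_ratio = (analyte-isotope signal) / (spike-isotope signal) after complete mixing."""
    return measured_ratio * spike_amount_mol

print(analyte_amount(spike_amount_mol=1.0e-6, measured_ratio=4.2))  # 4.2e-06 mol of analyte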
Use of nuclear properties
A technique similar to radioisotopic labeling is radiometric dating: using the known half-life of an unstable element, one can calculate the amount of time that has elapsed since a known concentration of isotope existed. The most widely known example is radiocarbon dating used to determine the age of carbonaceous materials.
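A small worked sketch of the age calculation (radiocarbon's half-life of roughly 5,730 years is a standard literature value; the remaining fraction is invented for illustration):

import math

def age_from_fraction(remaining_fraction: float, half_life_years: float) -> float:
    """Elapsed time given N/N0 = remaining_fraction, assuming simple exponential decay."""
    return half_life_years / math.log(2) * math.log(1.0 / remaining_fraction)

# A sample retaining 25% of its original carbon-14 is about two half-lives old:
print(round(age_from_fraction(0.25, 5730)))  # ~11460 years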
Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes, both radioactive and stable. For example, nuclear magnetic resonance (NMR) spectroscopy can be used only for isotopes with a nonzero nuclear spin. The most common nuclides used with NMR spectroscopy are 1H, 2D, 15N, 13C, and 31P.
Mössbauer spectroscopy also relies on the nuclear transitions of specific isotopes, such as 57Fe.
Radionuclides also have important uses. Nuclear power and nuclear weapons development require relatively large quantities of specific isotopes. Nuclear medicine and radiation oncology utilize radioisotopes respectively for medical diagnosis and treatment.
See also
Abundance of the chemical elements
Bainbridge mass spectrometer
Geotraces
Isotope hydrology
Isotopomer
Nuclear isomer
List of nuclides
List of particles
Mass spectrometry
Reference materials for stable isotope analysis
Table of nuclides
References
External links
The Nuclear Science web portal Nucleonica
The Karlsruhe Nuclide Chart
National Nuclear Data Center Portal to large repository of free data and analysis programs from NNDC
National Isotope Development Center Coordination and management of the production, availability, and distribution of isotopes, and reference information for the isotope community
Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program for isotope production and production research and development
International Atomic Energy Agency Homepage of International Atomic Energy Agency (IAEA), an Agency of the United Nations (UN)
Atomic Weights and Isotopic Compositions for All Elements Static table, from NIST (National Institute of Standards and Technology)
Atomgewichte, Zerfallsenergien und Halbwertszeiten aller Isotope
Exploring the Table of the Isotopes at the LBNL
Current isotope research and information isotope.info
Emergency Preparedness and Response: Radioactive Isotopes by the CDC (Centers for Disease Control and Prevention)
Chart of Nuclides Interactive Chart of Nuclides (National Nuclear Data Center)
Interactive Chart of the nuclides, isotopes and Periodic Table
The LIVEChart of Nuclides – IAEA with isotope data.
Annotated bibliography for isotopes from the Alsos Digital Library for Nuclear Issues
The Valley of Stability (video) – a virtual "flight" through 3D representation of the nuclide chart, by CEA (France)
Nuclear physics | Isotope | Physics,Chemistry | 6,506 |
59,525,996 | https://en.wikipedia.org/wiki/4%2C4%E2%80%B2-%28Hexafluoroisopropylidene%29diphthalic%20anhydride | 4,4′-(Hexafluoroisopropylidene)diphthalic anhydride (6FDA) is an aromatic organofluorine compound and the dianhydride of 4,4′-(hexafluoroisopropylidene)bisphthalic acid (name derived from phthalic acid).
Synthesis
The raw materials for 6FDA are hexafluoroacetone and ortho-xylene. With hydrogen fluoride as a catalyst, the two react to form 4,4′-(hexafluoroisopropylidene)bis(o-xylene). This is oxidized with potassium permanganate to 4,4′-(hexafluoroisopropylidene)bisphthalic acid. Dehydration gives the dianhydride 6FDA.
Applications
6FDA is used as a monomer for the synthesis of fluorinated polyimides. These are prepared by the polymerisation of 6FDA with an aromatic diamine such as 3,5-diaminobenzoic acid or 4,4'-diaminodiphenyl sulfide. Such fluorinated polyimides are used in specialty applications, e.g. gas-permeable polymer membranes, and in microelectronics and optics, for example polymer optical lenses, OLEDs, or high-performance CMOS contact image sensors (CISs).
These polyimides are typically soluble in common organic solvents, facilitating their production and processing. They have very low water absorption, which makes them particularly suitable for special optical applications.
References
Monomers
Carboxylic anhydrides
Trifluoromethyl compounds | 4,4′-(Hexafluoroisopropylidene)diphthalic anhydride | Chemistry,Materials_science | 373 |
371,575 | https://en.wikipedia.org/wiki/Lebkuchen | Lebkuchen, or Pfefferkuchen, are honey-sweetened German cakes, moulded cookies or bar cookies that have become part of Germany's Christmas traditions. They are similar to gingerbread.
Etymology
The etymology of Leb- in the term is uncertain. Proposed derivations include: from the Latin libum (flat bread), from the Germanic word Laib (loaf), and from the Germanic word lebbe (very sweet). Another likely possibility is that it comes from an old term for the rather solid crystallized honey taken from the hive, which cannot be used for much besides baking. Folk etymology often associates the name with Leben (life), Leib (body), or Leibspeise (favorite food). Kuchen means 'cake'.
History
Bakers noticed that honey-sweetened dough would undergo a natural fermentation process when stored in a cool location for several weeks, creating air bubbles that would improve the quality of the bread. Lebkuchen dough was started in November and baked in December after undergoing this fermentation period.
Lebkuchen was invented by monks in Franconia, Germany, in the 13th century. Lebkuchen bakers were recorded as early as 1296 in Ulm, and 1395 in Nürnberg (Nuremberg). The latter is the most famous exporter today of the product known as Nürnberger Lebkuchen (Nuremberg Lebkuchen).
Local history in Nuremberg relates that emperor Friedrich III held a Reichstag there in 1487 and invited the children of the city to a special event where he presented Lebkuchen bearing his printed portrait to almost four thousand children. Historically, and due to differences in the ingredients, Lebkuchen is also known as "honey cake" (Honigkuchen) or "pepper cake" (Pfefferkuchen). Traditionally, the cookies are usually quite large and may be broad in diameter if round, and larger if rectangular. Unlike other cities where women could bake and sell the holiday cookies at will, in Nuremberg only members of the baker's guild were allowed to bake the cookies.
Since 1808, a variety of Nürnberg Lebkuchen made without flour has been called Elisenlebkuchen. It is uncertain whether Elise was the daughter of a gingerbread baker or the wife of a margrave. Her name is associated with some of the Lebkuchen produced by members of the guild. Since 1996, Nürnberger Lebkuchen has been a protected designation of origin, meaning that it must be produced within the boundaries of the city.
Types
Lebkuchen range in taste from spicy to sweet and come in a variety of shapes, with round being the most common. The ingredients usually include honey, spices such as aniseed, cardamom, coriander, cloves, ginger, and allspice, nuts including almonds, hazelnuts, and walnuts, or candied fruit.
In Germany, types of Lebkuchen are distinguished by the kind of nuts used and their proportions. Salt of hartshorn and potash are often used for raising the dough. Lebkuchen dough is usually placed on a thin wafer base called an Oblate. This was an idea of the monks, who used unleavened communion wafer ingredients to prevent the dough from sticking. Typically, they are glazed or covered with very dark chocolate or a thin sugar coating, but some are left uncoated.
Lebkuchen is usually soft, but a harder type is used to produce Lebkuchenherzen ("Lebkuchen hearts"), usually inscribed with icing, which are available at many German regional fairs and Christmas fairs. They are also sold as souvenirs at the Oktoberfest and are inscribed with affectionate, sarcastic or obscene messages.
Another form is the "witch's house" (Hexenhaus or Knusperhäuschen), made popular because of the fairy tales about Hansel and Gretel.
The closest German equivalent of the gingerbread man is the Honigkuchenpferd ("honey cake horse").
The Nuremberg type of Lebkuchen known as Elisenlebkuchen must contain no less than 25 percent nuts and less than 10 percent wheat flour. The finest artisan bakeries in Nuremberg boast close to 40% nut content. Lebkuchen is sometimes packaged in richly decorated tins, chests, and boxes, which have become nostalgic collector items.
Several Swiss regional varieties also exist and have been declared part of the Culinary Heritage of Switzerland, as is the case with Berner Honiglebkuchen.
Gallery
See also
Aachener Printen
Basler Läckerli
Berner Haselnusslebkuchen
Springerle
Speculaas
List of chocolate-covered foods
List of German desserts
Licitar
Pfeffernüsse
Pryanik
References
External links
Lebkuchen on the German Food Guide
Germans fall out of love with Lebkuchen at The Guardian
593 Lebkuchen recipes on Chefkoch.de as of 4 March 2013
Christmas food
Christmas in Germany
Culinary Heritage of Switzerland
Biscuits
Anise
Chocolate-covered foods
Nut dishes
Fermented foods
German breads
Cookies
German cakes
Ginger desserts
Honey dishes
Honey cakes | Lebkuchen | Biology | 999 |
44,535,475 | https://en.wikipedia.org/wiki/Archernis%20fulvalis%20Hampson%2C%201913 | Archernis fulvalis is a moth in the family Crambidae. It was described by George Hampson in 1913. It is found in French Polynesia, where it has been recorded from the Society Islands.
Taxonomy
The name Archernis fulvalis is preoccupied by Archernis fulvalis described by Hampson in 1899.
References
Spilomelinae
Moths described in 1913
Taxa named by George Hampson
Moths of Oceania
Fauna of French Polynesia
Controversial taxa | Archernis fulvalis Hampson, 1913 | Biology | 88 |
2,581,209 | https://en.wikipedia.org/wiki/Sensory%20analysis | Sensory analysis (or sensory evaluation) is a scientific discipline that applies principles of experimental design and statistical analysis to the use of human senses (sight, smell, taste, touch and hearing) for the purposes of evaluating consumer products. This method of testing products is generally used during the marketing and advertising phase. The discipline requires panels of human assessors, on whom the products are tested, and recording the responses made by them. By applying statistical techniques to the results it is possible to make inferences and insights about the products under test. Most large consumer goods companies have departments dedicated to sensory analysis.
Sensory analysis can mainly be broken down into three sub-sections:
Analytical testing (dealing with objective facts about products)
Affective testing (dealing with subjective facts such as preferences)
Perception (the biochemical and psychological aspects of sensation)
Analytical testing
This type of testing is concerned with obtaining objective facts about products. This could range from basic discrimination testing (e.g. Do two or more products differ from each other?) to descriptive analysis (e.g. What are the characteristics of two or more products?). The type of panel required for this type of testing would normally be a trained panel.
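As one concrete example of the statistics behind discrimination testing, consider the triangle test (a standard forced-choice design, used here only as an illustration, not a method singled out by this article): each assessor tries to pick the odd sample out of three, so the chance rate is 1/3, and the evidence for a perceptible difference is the one-sided binomial probability of obtaining at least the observed number of correct choices by guessing. A minimal Python sketch:

from math import comb

def triangle_test_p_value(n_assessors: int, n_correct: int, p_chance: float = 1/3) -> float:
    """One-sided binomial p-value: P(X >= n_correct) under pure guessing."""
    return sum(comb(n_assessors, k) * p_chance**k * (1 - p_chance)**(n_assessors - k)
               for k in range(n_correct, n_assessors + 1))

# e.g. 18 correct answers from 36 assessors; compare the p-value with a threshold such as 0.05
print(round(triangle_test_p_value(36, 18), 3))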
There are several types of sensory tests. The most classic is the sensory profile. In this test, each taster describes each product by means of a questionnaire. The questionnaire includes a list of descriptors (e.g., bitterness, acidity, etc.). The taster rates each descriptor for each product according to the intensity they perceive in the product (e.g., 0 = very weak to 10 = very strong). In the free choice profiling method, each taster builds their own questionnaire.
Another family of methods is known as holistic, as these methods consider the product as a whole rather than individual attributes. This is the case for categorization and napping (projective mapping).
Affective testing
Also known as consumer testing, this type of testing concerns obtaining subjective data, or how well products are likely to be accepted. Usually, large (50 or more) panels of untrained personnel are recruited for this type of testing, although smaller focus groups can be utilized to gain insights into products. The range of testing can vary from simple comparative testing (e.g. Which do you prefer, A or B?) to structured questioning regarding the magnitude of acceptance of individual characteristics (e.g. Please rate the "fruity aroma": dislike|neither|like).
Affective testing is generally used by larger companies distributing products on a large scale, such as cereal brands, clothing brands, and accessories used in daily life. For example, a small company on the verge of a breakthrough with a specific medicine would not use wide-scale affective testing to see whether the medicine works; such a company would instead use a specific panel of judges who need the medicine to test whether or not it works.
See also
European Sensory Network
Food Quality and Preference
Journal of Sensory Studies
Just-About-Right scale
Pangborn Sensory Science Symposium
Notes and references
Bibliography
ASTM MNL14 The Role of Sensory Analysis in Quality Control, 1992
ISO 16820 Sensory Analysis - Methodology - Sequential Analysis
ISO 5495 Sensory Analysis - Methodology - Paired Comparisons
ISO 13302 Sensory Analysis - Methods for assessing modifications to the flavour of foodstuffs due to packaging
Sensory Evaluation Techniques- Morten C. Meilgaard, Gail Vance Civille, B. Thomas Carr - 4th edition, 2007
External links
ISO 67.240 – Sensory analysis – A series of ISO standards
Sensory evaluation practice; Herbert Stone, Joel L. Sidel
Product testing
Psychophysics | Sensory analysis | Physics | 745 |
44,856,539 | https://en.wikipedia.org/wiki/Non%20linear%20piezoelectric%20effects%20in%20polar%20semiconductors | Non linear piezoelectric effects in polar semiconductors are the manifestation that the strain-induced piezoelectric polarization depends not just on the products of the first order piezoelectric coefficients with the strain tensor components, but also on the products of the second order (or higher) piezoelectric coefficients with products of the strain tensor components. The idea was put forward experimentally for zincblende CdTe heterostructures in 1992. It was confirmed in 1996 by the application of hydrostatic pressure to the same heterostructures, and found to agree with the results of an ab initio approach as well as with a simple calculation using what is currently known as Harrison's model. The idea was then extended to all commonly used wurtzite and zincblende semiconductors. Given the difficulty of finding direct experimental evidence for the existence of these effects, there are different schools of thought on how one can reliably calculate all the piezoelectric coefficients.
On the other hand, there is widespread agreement on the fact that non linear effects are rather large and comparable to the linear (first order) terms. Indirect experimental evidence of the existence of these effects has also been reported in the literature in relation to GaN and InN semiconductor optoelectronic devices.
History
Non linear piezoelectric effects in polar semiconductors were first reported in 1996 by R. André et al. in zincblende cadmium telluride, and later by G. Bester et al. in 2006 and by M. A. Migliorato et al. in relation to zincblende GaAs and InAs. Different methods were used, and while the influence of second (and third) order piezoelectric coefficients was generally recognized as being comparable to first order, fully ab initio approaches and simple approaches using Harrison's model appeared to predict slightly different results, particularly for the magnitude of the first order coefficients.
Formalism
While first order piezoelectric coefficients are of the form e_ij, the second and third order coefficients are higher rank tensors, expressed as e_ijk and e_ijkl. The piezoelectric polarization is then expressed in terms of products of the piezoelectric coefficients with single strain components, with products of two strain components, and with products of three strain components for the first, second, and third order approximations respectively.
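In compact (Voigt) notation the expansion just described can be written schematically as below; symmetrization prefactors differ between conventions, so the factors of 1/2 and 1/6 should be read as illustrative rather than as the exact definitions used in the cited works:

P_i = \sum_j e_{ij}\,\varepsilon_j + \frac{1}{2}\sum_{j,k} e_{ijk}\,\varepsilon_j \varepsilon_k + \frac{1}{6}\sum_{j,k,l} e_{ijkl}\,\varepsilon_j \varepsilon_k \varepsilon_l ,

where the ε_j are strain components and e_ij, e_ijk, e_ijkl are the first, second, and third order piezoelectric coefficients.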
Available Non Linear Piezoelectric Coefficients
Many more articles were published on the subject. Non linear piezoelectric coefficients are now available for many different semiconductor materials and crystal structures:
zincblende CdTe, experiments (under pseudomorphic strain and hydrostatic pressure), and theory (ab initio and using Harrison's Model)
zincblende GaAs and InAs, under pseudomorphic strain, using Harrison's Model
zincblende GaAs and InAs, for any combination of diagonal strain components, using Harrison's Model
All common III-V semiconductors in the zincblende structure using ab initio
GaN, AlN, InN in the Wurtzite crystal structure, using Harrison's Model
GaN, AlN, InN in the Wurtzite crystal structure, using ab initio
ZnO in the Wurtzite crystal structure, using Harrison's Model
Wurtzite crystal structure GaN, InN, AlN and ZnO, using ab initio
Wurtzite crystal structure GaAs, InAs, GaP and InP, using Harrison's Model
Non linear piezoelectricity in devices
Particularly for III-N semiconductors, the influence of non linear piezoelectricity was discussed in the context of light-emitting diodes:
Influence of external pressure
Increased efficiency
See also
Piezotronics
Piezoelectricity
Light-emitting diode
Wurtzite crystal structure
References
Nanoelectronics
Semiconductor devices
Semiconductors | Non linear piezoelectric effects in polar semiconductors | Physics,Chemistry,Materials_science,Engineering | 798 |