id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
13,704,501 | https://en.wikipedia.org/wiki/ISO%2016750 | ISO 16750, Road vehicles – Environmental conditions and testing for electrical and electronic equipment, is a series of ISO standards which provide guidance regarding environmental conditions commonly encountered by electrical and electronic systems installed in automobiles and specify requirements and tests.
ISO 16750 has five parts:
ISO 16750-1: General
ISO 16750-2: Electrical loads
ISO 16750-3: Mechanical loads
ISO 16750-4: Climatic loads
ISO 16750-5: Chemical loads
A similar series of ISO standards, ISO 19453 (now withdrawn; see https://www.iso.org/standard/64930.html), exists for electrical and electronic equipment for the drive system of electric vehicles.
References
16750
Automotive engineering | ISO 16750 | [
"Engineering"
] | 144 | [
"Automotive engineering",
"Mechanical engineering by discipline"
] |
13,705,033 | https://en.wikipedia.org/wiki/Edmonston%20Pumping%20Plant | Edmonston Pumping Plant is a pumping station near the south end of the California Aqueduct, which is the principal feature of the California State Water Project. It lifts water 1,926 feet (587 m) to cross the Tehachapi Mountains, where the aqueduct splits into the west and east branches serving Southern California. It is the most powerful water lifting system in the world, not considering pumped-storage hydroelectricity stations.
There are 14 four-stage, 80,000-horsepower centrifugal pumps that push the water up to the top of the mountain. Each motor-pump unit stands 65 feet high and weighs 420 tons. The pumps themselves extend downward six floors. Each unit discharges water into a manifold that connects to the main discharge lines. The two main discharge lines stairstep up the mountain in an 8,400-foot-long tunnel. They are 12.5 feet in diameter for the first half and 14 feet in diameter for the last half. Each contains 8.5 million gallons of water at all times. At full capacity, the pumps can lift nearly 2 million gallons per minute up over the Tehachapis. A 68-foot-high, 50-foot-diameter surge tank is located at the top of the mountain. It prevents tunnel damage when the valves to the pumps are suddenly opened or closed. Near the top of the lift there are valves which can close the discharge lines to prevent backflow into the pumping plant below in the event of a rupture. The station consumes up to 787 MW of electricity, delivered through a dedicated 230 kV transmission line from the nearby Southern California Edison Pastoria substation.
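As a rough consistency check (not from the source article), the hydraulic power implied by the figures above can be compared with the stated 787 MW electrical draw. The sketch below assumes standard values for water density and gravitational acceleration.

#include <cstdio>

int main() {
    // Assumed constants (not from the article): water density and gravity.
    const double rho = 1000.0;          // kg/m^3
    const double g = 9.81;              // m/s^2
    // Figures quoted above, converted to SI.
    const double head_m = 1926.0 * 0.3048;             // 1,926 ft lift
    const double flow_m3s = 2.0e6 * 3.785e-3 / 60.0;   // ~2 million US gal/min
    // Hydraulic (water) power = rho * g * Q * H
    const double p_hydraulic_MW = rho * g * flow_m3s * head_m / 1e6;
    const double p_electric_MW = 787.0;                 // stated station draw
    printf("hydraulic power ~ %.0f MW\n", p_hydraulic_MW);
    printf("implied wire-to-water efficiency ~ %.0f%%\n",
           100.0 * p_hydraulic_MW / p_electric_MW);
    return 0;
}

The result, on the order of 730 MW of water power, sits plausibly below both the 787 MW electrical draw and the combined motor rating of 14 × 80,000 hp (about 835 MW).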
Characteristics
Number of units: 14 (two galleries of 7)
Normal static head: 1,926 feet (587 m)
Motor rating: 80,000 horsepower each
Total motor rating: 1,120,000 horsepower (14 × 80,000 hp)
Flow per motor at design head: 315 ft3/s (9 m3/s)
Total flow at design head: 4410 ft3/s (125 m3/s)
References
External links
DWR Edmonston Pumping Station
A.D. Edmonston Pumping Plant
The Big Lift: A photo tour of the State Water Project’s Edmonston Pumping Plant
California State Water Project
Buildings and structures in Kern County, California
Water supply pumping stations in the United States
Water supply infrastructure in California
Interbasin transfer
San Joaquin Valley
Tehachapi Mountains | Edmonston Pumping Plant | [
"Environmental_science"
] | 474 | [
"Hydrology",
"Interbasin transfer"
] |
13,705,176 | https://en.wikipedia.org/wiki/Evolution%20of%20morality | The concept of the evolution of morality refers to the emergence of human moral behavior over the course of human evolution. Morality can be defined as a system of ideas about right and wrong conduct. In everyday life, morality is typically associated with human behavior rather than animal behavior. The emerging fields of evolutionary biology, and in particular evolutionary psychology, have argued that, despite the complexity of human social behaviors, the precursors of human morality can be traced to the behaviors of many other social animals. Sociobiological explanations of human behavior remain controversial. Social scientists have traditionally viewed morality as a construct, and thus as culturally relative, although others such as Sam Harris argue that there is an objective science of morality.
Animal sociality
Though other animals may not possess what humans may perceive as moral behavior, all social animals have had to modify or restrain their behaviors for group living to be worthwhile. Typical examples of behavioral modification can be found in the societies of ants, bees and termites. Ant colonies may possess millions of individuals. E. O. Wilson argues that the single most important factor that leads to the success of ant colonies is the existence of a sterile worker caste. This caste of females is subservient to the needs of their mother, the queen, and in so doing has given up its own reproduction in order to raise brothers and sisters. The existence of sterile castes among these social insects significantly restricts the competition for mating and in the process fosters cooperation within a colony. Cooperation among ants is vital, because a solitary ant has little chance of long-term survival and reproduction. However, as part of a group, colonies can thrive for decades. As a consequence, ants are one of the most successful families of species on the planet, accounting for a biomass that rivals that of the human species.
The basic reason that social animals live in groups is that opportunities for survival and reproduction are much better in groups than living alone. The social behaviors of mammals are more familiar to humans. Highly social mammals such as primates and elephants have been known to exhibit traits that were once thought to be uniquely human, like empathy and altruism.
Primate sociality
Humanity's closest living relatives are common chimpanzees and bonobos. These primates share a common ancestor with humans who lived four to six million years ago. It is for this reason that chimpanzees and bonobos are viewed as the best available surrogate for this common ancestor. Barbara King argues that while primates may not possess morality in the human sense, they do exhibit some traits that would have been necessary for the evolution of morality. These traits include high intelligence, a capacity for symbolic communication, a sense of social norms, realization of "self", and a concept of continuity.
Frans de Waal and Barbara King both view human morality as having grown out of primate sociality.
Many social animals such as primates, dolphins, and whales have been shown to exhibit what Michael Shermer refers to as premoral sentiments. According to Shermer, the following characteristics are shared by humans and other social animals, particularly the great apes:
attachment and bonding, cooperation and mutual aid, sympathy and empathy, direct and indirect reciprocity, altruism and reciprocal altruism, conflict resolution and peacemaking, deception and deception detection, community concern and caring about what others think about you, and awareness of and response to the social rules of the group.
Shermer argues that these premoral sentiments evolved in primate societies as a method of restraining individual selfishness and building more cooperative groups. For any social species, the benefits of being part of an altruistic group should outweigh the benefits of individualism. For example, lack of group cohesion could make individuals more vulnerable to attack from outsiders. Being part of a group may also improve the chances of finding food. This is evident among animals that hunt in packs to take down large or dangerous prey.
All social animals have societies in which each member knows its own place. Social order is maintained by certain rules of expected behavior and dominant group members enforce order through punishment. However, higher order primates also have a sense of reciprocity. Chimpanzees remember who did them favors and who did them wrong. For example, chimpanzees are more likely to share food with individuals who have previously groomed them. Vampire bats also demonstrate a sense of reciprocity and altruism. They share blood by regurgitation, but do not share randomly. They are most likely to share with other bats who have shared with them in the past or who are in dire need of feeding.
Animals such as capuchin monkeys and dogs also display an understanding of fairness, refusing to cooperate when presented with unequal rewards for the same behaviors.
Chimpanzees live in fission-fusion groups that average 50 individuals. It is likely that early ancestors of humans lived in groups of similar size. Based on the size of extant hunter gatherer societies, recent paleolithic hominids lived in bands of a few hundred individuals. As community size increased over the course of human evolution, greater enforcement to achieve group cohesion would have been required. Morality may have evolved in these bands of 100 to 200 people as a means of social control, conflict resolution and group solidarity. This numerical limit is theorized to be hard coded in our genes since even modern humans have difficulty maintaining stable social relationships with more than 100–200 people. According to Dr. de Waal, human morality has two extra levels of sophistication that are not found in other primate societies. Humans enforce their society's moral codes much more rigorously with rewards, punishments and reputation building. People also apply a degree of judgment and reason not seen in the animal kingdom.
Adaptive valley of disgust at cruel individual altruism
Some evolutionary biologists and game theorists argue that, because gradual evolutionary models of morality require altruism to evolve incrementally in populations where egoism and cruelty initially reigned, any sentiment that regarded occasional altruism from otherwise egoistic and cruel individuals as worse than consistent cruelty would have made the evolution of morality impossible: such sentiments would cause individuals with a little morality to be treated worse than those with none, selecting against the early stages of moral evolution. A low degree of morality would then become an adaptive valley precluding the first steps away from the no-morality condition, an early necessary condition for the later evolution of higher degrees of morality. These scientists argue that while this rules out evolutionary explanations for the specific type of morality that reacts with disgust to occasional empathy from rarely empathic individuals (assuming it to be psychopathic manipulation), it does not rule out the evolution of other types of morality that accept a little altruism as better than no altruism at all.
Punishment problems
While groups may benefit from avoiding certain behaviors, those harmful behaviors have the same effect regardless of whether the offending individuals are aware of them or not. Since individuals can often increase their reproductive success by engaging in such behaviors, any characteristic that allows them to do so with impunity is positively selected by evolution. Punishing specifically those individuals who are aware of their breach of the rules would select against the ability to be aware of it, precluding, within a single species, the coevolution of conscious choice and of a sense that conscious choice is the basis for moral and penal liability.
Human social intelligence
The social brain hypothesis, detailed by R. I. M. Dunbar in the article The Social Brain Hypothesis and Its Implications for Social Evolution, holds that the brain originally evolved to process factual information. The brain allows an individual to recognize patterns, perceive speech, and develop strategies to circumvent ecologically based problems such as foraging for food, and it also permits the phenomenon of color vision. In humans and primates, the neocortex is held to be responsible for reasoning and consciousness.
Furthermore, having a large brain is a reflection of the large cognitive demands of complex social systems. Therefore, in social animals, the neocortex came under intense selection to increase in size to improve social cognitive abilities. Social animals, such as humans, are capable of two important concepts, coalition formation, or group living, and tactical deception, which is a tactic of presenting false information to others. The fundamental importance of animal social skills lies within the ability to manage relationships and in turn, the ability to not just commit information to memory, but manipulate it as well.
An adaptive response to the challenges of social interaction and living is theory of mind. Theory of mind, as defined by Martin Brüne, is the ability to infer another individual's mental states or emotions. Having a strong theory of mind is tied closely to possessing advanced social intelligence. Collectively, group living requires cooperation and generates conflict. Social living puts strong evolutionary selection pressures on acquiring social intelligence because living in groups has advantages. Such advantages include protection from predators and the fact that groups generally outperform the sum of their members' individual performances. But group living also has disadvantages, such as competition within the group for resources and mates. This sets the stage for something of an evolutionary arms race within the species.
Within populations of social animals, altruism, or acts of behavior that are disadvantageous to one individual while benefiting other group members, has evolved. This notion seems contradictory to evolutionary thought, since an organism's fitness and success are defined by its ability to pass genes on to the next generation. According to E. Fehr, in the article The Nature of Human Altruism, the evolution of altruism can be accounted for when kin selection and inclusive fitness are taken into account; meaning reproductive success is not just dependent on the number of offspring an individual produces, but also on the number of offspring that related individuals produce. Outside of familial relationships altruism is also seen, but in a different manner, typically framed by the prisoner's dilemma, developed in game theory by Merrill Flood and Melvin Dresher and formalized by Albert Tucker. The prisoner's dilemma serves to define cooperation and defection with and against individuals driven by incentive, or in the classic formulation, years in jail. In evolutionary terms, a robust strategy for the iterated prisoner's dilemma is tit-for-tat, in which an individual cooperates as long as others are cooperating, and does not defect until another individual defects against them. At their core, complex social interactions are driven by the need to distinguish sincere cooperation from defection.
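As a concrete illustration of the tit-for-tat idea just described (a sketch, not from the source article), the snippet below simulates a short iterated prisoner's dilemma between a tit-for-tat player and an always-defect player, using the commonly assumed payoff values: 5 for defecting against a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for cooperating against a defector.

#include <cstdio>

// Assumed standard payoffs (not from the article): T=5, R=3, P=1, S=0.
int payoff(bool myCoop, bool otherCoop) {
    if (myCoop && otherCoop) return 3;   // mutual cooperation
    if (myCoop && !otherCoop) return 0;  // exploited cooperator
    if (!myCoop && otherCoop) return 5;  // successful defector
    return 1;                            // mutual defection
}

int main() {
    const int rounds = 10;
    bool tftNext = true;                 // tit-for-tat opens with cooperation
    int tftScore = 0, defScore = 0;
    for (int i = 0; i < rounds; ++i) {
        bool tftMove = tftNext;          // copy the opponent's previous move
        bool defMove = false;            // always-defect never cooperates
        tftScore += payoff(tftMove, defMove);
        defScore += payoff(defMove, tftMove);
        tftNext = defMove;               // remember what the opponent just did
    }
    printf("tit-for-tat: %d, always-defect: %d over %d rounds\n",
           tftScore, defScore, rounds);
    // Two tit-for-tat players would each score 3 * rounds by cooperating throughout.
    return 0;
}

Against an unconditional defector, tit-for-tat concedes only the opening round; a pair of reciprocating cooperators each earn 3 per round, which over enough rounds outstrips what mutual defectors collect — the point made above about cooperation paying off for social species.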
Brüne details that theory of mind has been traced back to primates, but it is not observed to the extent that it is in the modern human. The emergence of this unique trait is perhaps where the divergence of the modern human begins, along with our acquisition of language. Humans use metaphors and imply much of what we say. Phrases such as "You know what I mean?" are not uncommon and are direct results of the sophistication of the human theory of mind. Failure to understand another's intentions and emotions can yield inappropriate social responses and is often associated with human mental conditions such as autism, schizophrenia, bipolar disorder, some forms of dementia, and psychopathy. This is especially true for autism spectrum disorders, where social disconnect is evident, but non-social intelligence can be preserved or even in some cases augmented, such as in the case of a savant. The need for social intelligence surrounding theory of mind is a possible answer to the question as to why morality has evolved as a part of human behavior.
Evolution of religion
Psychologist Matt J. Rossano muses that religion emerged after morality and built upon morality by expanding the social scrutiny of individual behavior to include supernatural third-party agents. By including ever watchful ancestors, spirits and gods in the social realm, humans discovered an effective strategy for restraining selfishness and building more cooperative groups. The adaptive value of religion would have enhanced group survival.
Wason selection task
In an experiment where subjects must demonstrate abstract, complex reasoning, researchers have found that humans (as has been seen in other animals) have a strong innate ability to reason about social exchanges. This ability is believed to be intuitive, since the logical rules do not seem to be accessible to the individuals for use in situations without moral overtones.
Emotion
Disgust, one of the basic emotions, may have an important role in certain forms of morality. Disgust is argued to be a specific response to certain things or behaviors that are dangerous or undesirable from an evolutionary perspective. One example is things that increase the risk of an infectious disease such as spoiled foods, dead bodies, other forms of microbiological decomposition, a physical appearance suggesting sickness or poor hygiene, and various body fluids such as feces, vomit, phlegm, and blood. Another example is disgust against evolutionary disadvantageous mating such as incest (the incest taboo) or unwanted sexual advances. Still another example are behaviors that may threaten group cohesion or cooperation such as cheating, lying, and stealing. MRI studies have found that such situations activate areas in the brain associated with disgust.
See also
Animal faith
Evolutionary ethics
The Origins of Virtue
Moral foundations theory
Moral progress
Moral realism
Science of morality
Triune ethics theory
Veneer theory
References
Further reading
External links
Evolution of Morality on PhilPapers
Richard Dawkins video clip on morality
Marc Hauser, Evolution of a Universal Moral Grammar, Part 1, Part 2, Part 3
Is morality innate? Brief video clip that examines whether infants have a sense of morality (video no longer available; the associated YouTube account has been terminated).
Sam Harris: Can Science Help Determine what is Moral? Part 1, Part 2
Jonathan Haidt on the Five foundations of morality
Peter Swirski. "You'll Never Make a Monkey Out of Me or Altruism, Proverbial Wisdom, and Bernard Malamud's God's Grace." American Utopia and Social Engineering in Literature, Social Thought, and Political History. New York, Routledge, 2011.
Evolutionary biology
Evolutionary psychology
Sociobiology
Morality
Moral psychology | Evolution of morality | [
"Biology"
] | 2,847 | [
"Evolutionary biology",
"Behavior",
"Behavioural sciences",
"Sociobiology"
] |
13,705,381 | https://en.wikipedia.org/wiki/HD%20197036 | HD 197036 is a single star in the northern constellation Cygnus. It has an absolute magnitude of −1.15 and an apparent magnitude of 6.61, just below the limit of naked-eye visibility. Located 1,310 light years away, it is approaching Earth with a heliocentric radial velocity of .
HD 197036 is a bluish white subgiant star of the spectral type B5IV, and has an angular diameter of . This yields a radius of at its estimated distance. At present it has 4.21 times the mass of the Sun and shines at 379 times the luminosity of the Sun from its photosphere at an effective temperature of 13,399 K, giving it a bluish white hue. Like many hot stars, it spins rapidly, with a projected rotational velocity of 135 km/s, and has near-solar metallicity.
References
197036
Cygnus (constellation)
B-type subgiants
7912
101934
Durchmusterung objects | HD 197036 | [
"Astronomy"
] | 205 | [
"Cygnus (constellation)",
"Constellations"
] |
1,630,395 | https://en.wikipedia.org/wiki/Nokia%206680 | The Nokia 6680 is a high-end 3G mobile phone running Symbian operating system, with Series 60 2nd Edition user interface. It was announced on 14 February 2005, and was released the next month. The 6680 was Nokia's first device with a front camera, and was specifically marketed for video calling. It was also Nokia's first with a camera flash. It was the forerunner of the Nseries, which was released in April 2005; its successor being the N70.
Features
The device features Bluetooth, a 1.3-megapixel fixed-focus camera, front VGA (0.3-megapixel) video call camera, hot swappable Dual Voltage Reduced Size MMC (DV-RS-MMC) memory expansion card support, stereo audio playback and a 2.1", 176x208, 18-bit (262,144) color display with automatic brightness control based on the environment.
The 6680 is marketed as a high-end 3G device. It is a smartphone offering office and personal management facilities, including Microsoft Office compatible software. The phone initially offered an innovative active standby mode, but this was removed by some network operators (for example, Orange) under their own adapted firmware. The phone, however, has been marred by a more-than-normal number of bugs, which have included crashes and security issues. The phone was also criticised in some reviews for the relatively limited amount of RAM, with Steve Litchfield of All About Symbian unfavourably comparing the 6680 to the otherwise similarly-equipped N70, which had significantly more RAM available for applications and games.
In addition to the standard RS-MMC card, the 6680 can also use Dual Voltage Reduced Size MMC (DV-RS-MMC) cards which are also marketed as MMCmobile. While these cards have the same form factor as RS-MMC, the DV-RS-MMC have a 2nd row of connectors on the bottom.
The phone operates on GSM 900/1800/1900, and UMTS 2100 on 3G networks.
During its development, the 6680 was codenamed Milla.
This handset was similar to its predecessor, the Nokia 6630. Key changes were the new "active standby" feature, the facility for face-to-face video calls, a camera flash, better screen and improved styling.
The hardware application platform of this device is OMAP 1710.
Variants
The Nokia 6681 and Nokia 6682 are GSM handsets by Nokia, running the Series 60 user interface on the Symbian operating system. The phones are GSM-only versions of the Nokia 6680.
The only difference between the 6681 and the 6682 is the fact that the 6681 is targeted at the European market, being a GSM 900/1800/1900 tri-band handset, while the 6682 is sold for North American networks, supporting 850/1800/1900 frequencies.
In turn, both handsets' specifications are almost identical to those of the 6680, except for the lack of support for 3G networks, which means no UMTS support or video calling, and thus the absence of the front video-call camera.
Related handsets
Nokia N70
References
External links
Product pages
Nokia 6680 Official product page
Nokia 6681
Nokia 6682
Rui Carmo's 6680 first impressions
OCW's 6680 review
6680 review and specifications roundup
Texas Instruments OMAP 1710
Forum Nokia specifications
Nokia 6681
Nokia 6682
Nokia 6680
Nokia smartphones
Mobile phones introduced in 2005
Discontinued flagship smartphones | Nokia 6680 | [
"Technology"
] | 762 | [
"Discontinued flagship smartphones",
"Flagship smartphones"
] |
1,630,483 | https://en.wikipedia.org/wiki/Prime%20power | In mathematics, a prime power is a positive integer which is a positive integer power of a single prime number.
For example: 7 = 7^1, 9 = 3^2 and 64 = 2^6 are prime powers, while 6 = 2 × 3, 12 = 2^2 × 3 and 36 = 6^2 = 2^2 × 3^2 are not.
The sequence of prime powers begins:
2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, 157, 163, 167, 169, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 243, 251, … .
The prime powers are those positive integers that are divisible by exactly one prime number; in particular, the number 1 is not a prime power. Prime powers are also called primary numbers, as in the primary decomposition.
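A minimal sketch (not part of the original article) of this characterization: the helper below decides whether a positive integer is divisible by exactly one prime, returning that prime when it is, and is enough to regenerate the opening terms of the sequence above.

#include <cstdio>

// Returns the prime p if m = p^k for some k >= 1, or 0 otherwise.
// Trial division is enough for small m; 1 is correctly rejected.
long primePowerBase(long m) {
    if (m < 2) return 0;
    for (long p = 2; p * p <= m; ++p) {
        if (m % p == 0) {
            while (m % p == 0) m /= p;   // strip every factor p
            return (m == 1) ? p : 0;     // any leftover means a second prime divides m
        }
    }
    return m;                            // m itself is prime
}

int main() {
    for (long m = 2; m <= 32; ++m)
        if (primePowerBase(m)) printf("%ld ", m);  // prints 2 3 4 5 7 8 9 11 13 16 ... 31 32
    printf("\n");
    return 0;
}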
Properties
Algebraic properties
Prime powers are powers of prime numbers. Every prime power (except powers of 2 greater than 4) has a primitive root; thus the multiplicative group of integers modulo p^n (that is, the group of units of the ring Z/p^nZ) is cyclic.
The number of elements of a finite field is always a prime power and conversely, every prime power occurs as the number of elements in some finite field (which is unique up to isomorphism).
Combinatorial properties
A property of prime powers used frequently in analytic number theory is that the set of prime powers which are not prime is a small set in the sense that the infinite sum of their reciprocals converges, although the primes are a large set.
Divisibility properties
The totient function (φ) and sigma functions (σ0) and (σ1) of a prime power p^n are calculated by the formulas φ(p^n) = p^(n−1)(p − 1), σ0(p^n) = n + 1, and σ1(p^n) = (p^(n+1) − 1)/(p − 1).
All prime powers are deficient numbers. A prime power p^n is an n-almost prime. It is not known whether a prime power p^n can be a member of an amicable pair. If there is such a number, then p^n must be greater than 10^1500 and n must be greater than 1400.
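As a small numeric check of these formulas and of the deficiency claim (an illustration, not from the article), the snippet below evaluates them for p = 3, n = 4; the sum of proper divisors of p^n is (p^n − 1)/(p − 1), which is always smaller than p^n.

#include <cstdio>

int main() {
    const long p = 3, n = 4, q = 81;          // q = p^n = 3^4
    long phi = (q / p) * (p - 1);             // phi(p^n) = p^(n-1)(p-1) = 54
    long sigma0 = n + 1;                      // divisors are 1, p, ..., p^n
    long sigma1 = (q * p - 1) / (p - 1);      // (p^(n+1)-1)/(p-1) = 121
    printf("phi=%ld sigma0=%ld sigma1=%ld\n", phi, sigma0, sigma1);
    // Deficient: sum of proper divisors sigma1 - q = 40 is less than q = 81.
    printf("deficient: %s\n", (sigma1 - q < q) ? "yes" : "no");
    return 0;
}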
See also
Almost prime
Fermi–Dirac prime
Perfect power
Semiprime
References
Further reading
Jones, Gareth A. and Jones, J. Mary (1998). Elementary Number Theory. Springer-Verlag, London.
Prime numbers
Exponentials
Number theory
Integer sequences | Prime power | [
"Mathematics"
] | 525 | [
"Sequences and series",
"Discrete mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Prime numbers",
"Mathematical objects",
"Combinatorics",
"E (mathematical constant)",
"Exponentials",
"Numbers",
"Number theory"
] |
1,630,512 | https://en.wikipedia.org/wiki/Rear-view%20mirror | A rear-view mirror (or rearview mirror) is a mirror, usually flat, in automobiles and other vehicles, designed to allow the driver to see rearward through the vehicle's rear window (rear windshield).
In cars, the rear-view mirror is usually affixed to the top of the windshield on a double-swivel mount allowing it to be adjusted to suit the height and viewing angle of any driver and to swing harmlessly out of the way if impacted by a vehicle occupant in a collision.
The rear-view mirror is augmented by one or more side-view mirrors, which serve as the only rear-vision mirrors on trucks, motorcycles and bicycles.
History
Fixed mirrors for rearward vision were described as early as 1906, with a trade magazine noting that mirrors for showing what is coming behind were by then popular on closed-bodied automobiles and were likely to be widely adopted in a short time. The same year, a Mr. Bilal Ghanty from France patented a "Warning mirror for automobiles". The Argus Dash Mirror, adjustable to any position to see the road behind, appeared in 1908. The earliest known rear-view mirror mounted on a racing vehicle appeared on Ray Harroun's Marmon race car at the inaugural Indianapolis 500 race in 1911. Harroun himself claimed he got the idea from seeing a mirror used for a similar purpose on a horse-drawn vehicle in 1904. Harroun also claimed that the mirror vibrated constantly due to the rough brick surface, and that it was rendered largely useless.
Elmer Berger is usually credited with inventing the rear-view mirror, though in fact he was the first to patent it (1921) and develop it for incorporation into production street going automobiles by his Berger and Company.
Augmentations and alternatives
Recently, rear-view video cameras have been built into many new-model cars. This was partly in response to the rear-view mirror's inability to show the road directly behind the car, because the rear deck or trunk obscures as much as 3–5 meters (10–15 feet) of road behind the car. As many as 50 small children are killed by SUVs every year in the USA because the driver cannot see them in their rear-view mirrors. Camera systems are usually mounted to the rear bumper or lower parts of the car, allowing for better rear visibility.
Aftermarket secondary rear-view mirrors are available. They attach to the main rear-view mirror and are independently adjustable to view the back seat. This is useful to enable adults to monitor children in the back seat.
Anti-glare
A prismatic rear-view mirror—sometimes called a "day/night mirror"—can be tilted to reduce the brightness and glare of lights, mostly for high-beam headlights of vehicles behind which would otherwise be reflected directly into the driver's eyes at night. This type of mirror is made of a piece of glass that is wedge-shaped in cross-section—its front and rear surfaces are not parallel.
On manual tilt versions, a tab is used to adjust the mirror between "day" and "night" positions. In the day view position, the front surface is tilted and the reflective back side gives a strong reflection. When the mirror is moved to the night view position, its reflecting rear surface is tilted out of line with the driver's view. This view is actually a reflection of the low-reflection front surface; only a much-reduced amount of light is reflected in the driver's eyes.
"Manual tilt" day/night mirrors first began appearing in the 1930s and became standard equipment on most passenger cars and trucks by the early 1970s.
Automatic dimming
In the 1940s, American inventor Jacob Rabinow developed a light-sensitive automatic mechanism for the wedge-type day/night mirror. Several Chrysler Corporation cars offered these automatic mirrors as optional equipment as early as 1959, but few customers ordered them for their cars and the item was soon withdrawn from the option lists. Several automakers began offering rear-view mirrors with automatic dimming again in 1983, and it was in the late 1980s that they began to catch on in popularity.
Current systems usually use photosensors mounted in the rear-view mirror to detect light and dim the mirror by means of electrochromism. This electrochromic feature has also been incorporated into side-view mirrors allowing them to dim and reduce glare as well.
Suspending objects
Objects are sometimes hung from the rear-view mirror, including cross necklaces, prayer beads, good luck charms, decorations like fuzzy dice, and air fresheners like Little Trees. In some jurisdictions such hanging is illegal on the basis that it impairs the driver's forward view and so compromises safety. Black Lives Matter protesters have cited this as an example of the minor violations used as grounds for traffic stops disproportionately targeting black drivers.
Trucks and buses
On trucks and buses, the load often blocks rearward vision out the backlight. In the U.S. virtually all trucks and buses have a side view mirror on each side, often mounted on the doors and viewed out the side windows, which are used for rear vision. These mirrors leave a large unviewable ("blind") area behind the vehicle, which tapers down as the distance increases. This is a safety issue which the driver must compensate for, often with a person guiding the truck back in congested areas, or by backing in a curve. "Spot mirrors", a convex mirror which provides a distorted image of the entire side of the vehicle, are commonly mounted on at least the right side of a vehicle. In the U.S. mirrors are considered "safety equipment", and are not included in width restrictions.
Motorcycles
Depending on the type of motorcycle, the motorcycle may or may not have rear-view mirrors. Street-legal motorcycles are generally required to have rear-view mirrors. Motorcycles for off-road use only normally do not have rear-view mirrors. Rear-view mirrors come in various shapes and designs and have various methods of mounting the mirrors to the motorcycle, most commonly to the handlebars. Rear-view mirrors can also be attached to the rider's motorcycle helmet. The Reevu MSX1 helmet uses an internal periscope that allows the user rear vision.
Bicycles
Some bicycles are equipped with a rear-view mirror mounted on a handlebar. Rear-view mirrors may also be fitted to the bicycle frame, on a helmet, on the arm or the frame of a pair of eyeglasses. This allows what is behind to be checked continuously without turning round. Rear-view mirrors almost never come with a new bicycle and require an additional purchase.
Aircraft
By 1956, the Civil Aeronautics Administration had approved a rear-view mirror for light aircraft. They also predicted periscopes in larger aircraft. Fighter aircraft usually have one or more rear-view mirrors mounted on the front canopy frame to watch out for chasing aircraft.
See also
Automatic parking
Backup collision
Backup camera
Blind spot monitor
Blind spot (vehicle)
Dashcam
Intelligent Parking Assist System
Experimental Safety Vehicle (ESV)
Intelligent car
Lane departure warning system
List of auto parts
Precrash system
Wing mirror
References
1911 introductions
Mirrors
Vehicle parts | Rear-view mirror | [
"Technology"
] | 1,461 | [
"Vehicle parts",
"Components"
] |
1,630,600 | https://en.wikipedia.org/wiki/Ruin%20value | Ruin value (German: Ruinenwert) is the concept that a building be designed in such a way that if it eventually collapsed, it would leave behind aesthetically pleasing ruins that would last far longer without any maintenance at all. The idea was pioneered by German architect Albert Speer while planning for the 1936 Summer Olympics and published as "The Theory of Ruin Value" (Die Ruinenwerttheorie), although he was not its original inventor. The intention did not stretch only to the eventual collapse of the buildings, but rather assumed such buildings were inherently better designed and more imposing during their period of use.
The idea was supported by Adolf Hitler, who planned for such ruins to be a symbol of the greatness of the Third Reich, just as Ancient Greek and Roman ruins were symbolic of those civilisations.
Albert Speer
In his memoirs, Albert Speer claimed to have invented the idea, which he referred to as the theory of Ruin Value (German: Ruinenwerttheorie). It was supposedly an extension of Gottfried Semper's views about using "natural" materials and the avoidance of iron girders. In reality it was a much older concept, even becoming a Europe-wide Romantic fascination at one point. Predecessors include a "new ruined castle" built by the Landgraf of Hesse-Kassel in the 18th century, and the designs for the Bank of England built in the 19th century produced by Sir John Soane. When he presented the bank's governors with three oil sketches of the planned building, one depicted the building new, another weathered, and a third showed what its ruins would look like a thousand years on.
Speer's memoirs reveal Hitler's thoughts about Nazi state architecture in relation to Roman imperial architecture.
Hitler accordingly approved Speer's recommendation that, in order to provide a "bridge to tradition" to future generations, modern "anonymous" materials such as steel girders and ferroconcrete should be avoided in the construction of monumental party buildings wherever possible, since such materials would not produce aesthetically acceptable ruins. Thus, the most politically significant buildings of the Reich were intended, to some extent, even after falling into ruins after thousands of years, to resemble their Roman models.
Speer expressed his views on the matter in the Four Year Plan of 1937 in his contribution Stone Not Iron, in which he published a photograph of the Parthenon with the caption: "The stone buildings of antiquity demonstrate in their condition today the permanence of natural building materials." Later, after saying modern buildings rarely last more than fifty years, he continues: "The ages-old stone buildings of the Egyptians and the Romans still stand today as powerful architectural proofs of the past of great nations, buildings which are often ruins only because man's lust for destruction has made them such." Hitler approved Speer's "Law of Ruin Value" (German: Ruinengesetz) after Speer had shown him a sketch of the Haupttribüne as an ivy-covered ruin. The drawing pleased Hitler but scandalised his entourage.
However, due to the onset of the Second World War, Nazi German architecture made extensive use of concrete.
Modern planned ruins
A more modern example of intended ruins were the planned warning signs for the proposed nuclear waste repository at Yucca Mountain (see Human Interference Task Force), which were intended to endure for 10,000 years, and yet still convey an enduring (if negative) impression on future generations: "Keep out. Don't dig here."
Architect Charles Jencks mentions "Ruins in the Garden", a section of the Neue Staatsgalerie, as a postmodern subversion of ruin value.
See also
Fascist architecture
Mausoleum
Memorial
Nazi architecture
Time capsule
Folly
References
Building engineering
Nazi architecture
Ruins
Albert Speer | Ruin value | [
"Engineering"
] | 778 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
1,630,673 | https://en.wikipedia.org/wiki/Decay%20heat | Decay heat is the heat released as a result of radioactive decay. This heat is produced as an effect of radiation on materials: the energy of the alpha, beta or gamma radiation is converted into the thermal movement of atoms.
Decay heat occurs naturally from decay of long-lived radioisotopes that are primordially present from the Earth's formation.
In nuclear reactor engineering, decay heat continues to be generated after the reactor has been shut down (see SCRAM and nuclear chain reactions) and power generation has been suspended. The decay of the short-lived radioisotopes such as iodine-131 created in fission continues at high power for a time after shut down. The major source of heat production in a newly shut down reactor is due to the beta decay of new radioactive elements recently produced from fission fragments in the fission process.
Quantitatively, at the moment of reactor shutdown, decay heat from these radioactive sources is still 6.5% of the previous core power if the reactor has had a long and steady power history. About 1 hour after shutdown, the decay heat will be about 1.5% of the previous core power. After a day, the decay heat falls to 0.4%, and after a week, it will be only 0.2%. Because radioisotopes of all half-life lengths are present in nuclear waste, enough decay heat continues to be produced in spent fuel rods to require them to spend a minimum of one year, and more typically 10 to 20 years, in a spent fuel pool of water before being further processed. However, the heat produced during this time is still only a small fraction (less than 10%) of the heat produced in the first week after shutdown.
If no cooling system is working to remove the decay heat from a crippled and newly shut down reactor, the decay heat may cause the core of the reactor to reach unsafe temperatures within a few hours or days, depending upon the type of core. These extreme temperatures can lead to minor fuel damage (e.g. a few fuel particle failures (0.1 to 0.5%) in a graphite-moderated, gas-cooled design) or even major core structural damage (meltdown) in a light water reactor or liquid metal fast reactor. Chemical species released from the damaged core material may lead to further explosive reactions (steam or hydrogen) which may further damage the reactor.
Natural occurrence
Naturally occurring decay heat is a significant input to Earth's internal heat budget. Radioactive isotopes of uranium, thorium and potassium are the primary contributors to this decay heat, and this radioactive decay is the primary source of heat from which geothermal energy derives.
Decay heat has significant importance in astrophysical phenomena. For example, the light curves of Type Ia supernovae are widely thought to be powered by the heating provided by radioactive products from the decay of nickel and cobalt into iron (Type Ia light curve).
Power reactors in shutdown
In a typical nuclear fission reaction, 187 MeV of energy are released instantaneously in the form of kinetic energy from the fission products, kinetic energy from the fission neutrons, instantaneous gamma rays, or gamma rays from the capture of neutrons. An additional 23 MeV of energy are released at some time after fission from the beta decay of fission products. About 10 MeV of the energy released from the beta decay of fission products is in the form of neutrinos, and since neutrinos are very weakly interacting, this 10 MeV of energy will not be deposited in the reactor core. This results in 13 MeV (6.5% of the total fission energy) being deposited in the reactor core from delayed beta decay of fission products, at some time after any given fission reaction has occurred. In a steady state, this heat from delayed fission product beta decay contributes 6.5% of the normal reactor heat output.
When a nuclear reactor has been shut down, and nuclear fission is not occurring at a large scale, the major source of heat production will be due to the delayed beta decay of these fission products (which originated as fission fragments). For this reason, at the moment of reactor shutdown, decay heat will be about 6.5% of the previous core power if the reactor has had a long and steady power history. About 1 hour after shutdown, the decay heat will be about 1.5% of the previous core power. After a day, the decay heat falls to 0.4%, and after a week it will be only 0.2%. The decay heat production rate will continue to slowly decrease over time; the decay curve depends upon the proportions of the various fission products in the core and upon their respective half-lives.
An approximation for the decay heat curve valid from 10 seconds to 100 days after shutdown is

P(τ)/P0 = 0.066 [ (τ − τs)^(−0.2) − τ^(−0.2) ]

where τ is the time since reactor startup, P(τ) is the decay power at time τ, P0 is the reactor power before shutdown, and τs is the time of reactor shutdown measured from the time of startup (in seconds), so that τ − τs is the elapsed time since shutdown.
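The sketch below (an illustration, not from the article) evaluates this approximation for a reactor assumed to have run steadily for one year before shutdown; the resulting fractions come out broadly in line with the percentages quoted earlier.

#include <cstdio>
#include <cmath>

int main() {
    // Assumed operating history: one year of steady power before shutdown.
    const double tau_s = 365.0 * 24 * 3600;              // operating time, seconds
    const double times[] = {10, 3600, 86400, 604800};    // 10 s, 1 h, 1 d, 1 week after shutdown
    for (double t : times) {
        double tau = tau_s + t;                           // time since startup
        // t = tau - tau_s is the elapsed time since shutdown.
        double frac = 0.066 * (std::pow(t, -0.2) - std::pow(tau, -0.2));
        printf("%8.0f s after shutdown: decay heat ~ %.2f%% of full power\n",
               t, 100.0 * frac);
    }
    return 0;
}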
For an approach with a more direct physical basis, some models use the fundamental concept of radioactive decay. Used nuclear fuel contains a large number of different isotopes that contribute to decay heat, which are all subject to the radioactive decay law, so some models consider decay heat to be a sum of exponential functions with different decay constants and initial contribution to the heat rate. A more accurate model would consider the effects of precursors, since many isotopes follow several steps in their radioactive decay chain, and the decay of daughter products will have a greater effect longer after shutdown.
The removal of the decay heat is a significant reactor safety concern, especially shortly after normal shutdown or following a loss-of-coolant accident. Failure to remove decay heat may cause the reactor core temperature to rise to dangerous levels and has caused nuclear accidents, including the nuclear accidents at Three Mile Island and Fukushima I. The heat removal is usually achieved through several redundant and diverse systems, from which heat is removed via heat exchangers. Water is passed through the secondary side of the heat exchanger via the essential service water system which dissipates the heat into the 'ultimate heat sink', often a sea, river or large lake. In locations without a suitable body of water, the heat is dissipated into the air by recirculating the water via a cooling tower. The failure of ESWS circulating pumps was one of the factors that endangered safety during the 1999 Blayais Nuclear Power Plant flood.
Spent fuel
After one year, typical spent nuclear fuel generates about 10 kW of decay heat per tonne, decreasing to about 1 kW/t after ten years. Hence effective active or passive cooling for spent nuclear fuel is required for a number of years.
See also
Decay energy
Spent fuel pool
Dry cask storage
Radioisotope thermoelectric generator
References
External links
DOE fundamentals handbook - Decay heat, Nuclear physics and reactor theory - volume 2 of 2, module 4, page 61
Decay Heat Estimates for MNR, page 2.
Spent Nuclear Fuel Explorer Java applet showing activity and decay heat as a function of time
Nuclear technology
Heat transfer
Nuclear reactor safety | Decay heat | [
"Physics",
"Chemistry"
] | 1,462 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Nuclear technology",
"Thermodynamics",
"Nuclear physics"
] |
1,630,817 | https://en.wikipedia.org/wiki/Pollard%27s%20rho%20algorithm%20for%20logarithms | Pollard's rho algorithm for logarithms is an algorithm introduced by John Pollard in 1978 to solve the discrete logarithm problem, analogous to Pollard's rho algorithm to solve the integer factorization problem.
The goal is to compute γ such that α^γ = β, where β belongs to a cyclic group G generated by α. The algorithm computes integers a, b, A, and B such that α^a β^b = α^A β^B. If the underlying group is cyclic of order n, by substituting β as α^γ and noting that two powers are equal if and only if the exponents are equivalent modulo the order of the base, in this case modulo n, we get that γ is one of the solutions of the equation (B − b)γ = (a − A) (mod n). Solutions to this equation are easily obtained using the extended Euclidean algorithm.
To find the needed a, b, A, and B the algorithm uses Floyd's cycle-finding algorithm to find a cycle in the sequence xi = α^ai β^bi, where the function f: xi → xi+1 is assumed to be random-looking and thus is likely to enter into a loop after roughly √n steps. One way to define such a function is to use the following rules: partition G into three disjoint subsets S0, S1, and S2 of approximately equal size using a hash function. If xi is in S0 then square xi and double both a and b; if xi is in S1 then multiply xi by α and increment a; if xi is in S2 then multiply xi by β and increment b.
Algorithm
Let G be a cyclic group of order n, and given α, β ∈ G and a partition G = S0 ∪ S1 ∪ S2 into three disjoint subsets of roughly equal size, let f: G → G be the map
f(x) = x^2 if x ∈ S0,  f(x) = αx if x ∈ S1,  f(x) = βx if x ∈ S2,
and define maps g and h on the exponents by
g(x, k) = 2k mod n if x ∈ S0,  g(x, k) = k + 1 mod n if x ∈ S1,  g(x, k) = k if x ∈ S2,
h(x, k) = 2k mod n if x ∈ S0,  h(x, k) = k if x ∈ S1,  h(x, k) = k + 1 mod n if x ∈ S2.
input: a: a generator of G
b: an element of G
output: An integer x such that a^x = b, or failure
Initialise i ← 0, a0 ← 0, b0 ← 0, x0 ← 1 ∈ G
loop
i ← i + 1
xi ← f(xi−1),
ai ← g(xi−1, ai−1),
bi ← h(xi−1, bi−1)
x2i−1 ← f(x2i−2),
a2i−1 ← g(x2i−2, a2i−2),
b2i−1 ← h(x2i−2, b2i−2)
x2i ← f(x2i−1),
a2i ← g(x2i−1, a2i−1),
b2i ← h(x2i−1, b2i−1)
while xi ≠ x2i
r ← bi − b2i
if r = 0 return failure
return r^(−1) (a2i − ai) mod n
Example
Consider, for example, the group generated by 2 modulo N = 1019 (the order of the group is n = 1018; 2 generates the group of units modulo 1019). The algorithm is implemented by the following C++ program:
#include <stdio.h>
const int n = 1018, N = n + 1; /* N = 1019 -- prime */
const int alpha = 2; /* generator */
const int beta = 5; /* 2^{10} = 1024 = 5 (N) */
/* One step x <- f(x): maintain x == alpha^a * beta^b (mod N) while updating a and b. */
void new_xab(int& x, int& a, int& b) {
switch (x % 3) {
case 0: x = x * x % N; a = a*2 % n; b = b*2 % n; break;
case 1: x = x * alpha % N; a = (a+1) % n; break;
case 2: x = x * beta % N; b = (b+1) % n; break;
}
}
int main(void) {
int x = 1, a = 0, b = 0;
int X = x, A = a, B = b;
for (int i = 1; i < n; ++i) {
new_xab(x, a, b);
new_xab(X, A, B);
new_xab(X, A, B);
printf("%3d %4d %3d %3d %4d %3d %3d\n", i, x, a, b, X, A, B);
if (x == X) break;
}
return 0;
}
The results are as follows (edited):
i x a b X A B
------------------------------
1 2 1 0 10 1 1
2 10 1 1 100 2 2
3 20 2 1 1000 3 3
4 100 2 2 425 8 6
5 200 3 2 436 16 14
6 1000 3 3 284 17 15
7 981 4 3 986 17 17
8 425 8 6 194 17 19
..............................
48 224 680 376 86 299 412
49 101 680 377 860 300 413
50 505 680 378 101 300 415
51 1010 681 378 1010 301 416
That is 2^681 5^378 = 2^301 5^416 (mod 1019), and so (416 − 378)γ ≡ (681 − 301) (mod 1018), i.e. 38γ ≡ 380 (mod 1018), for which γ1 = 10 is a solution, as expected. As n = 1018 is not prime, there is another solution of this congruence, γ2 = 519, for which 2^519 = 1014 = −5 (mod 1019) holds.
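Recovering these two solutions is the congruence-solving step mentioned in the introduction. Below is a minimal sketch (an illustration, not part of the original article) that uses the extended Euclidean algorithm; it returns every solution modulo n when gcd(B − b, n) divides a − A.

#include <cstdio>
#include <vector>

// Extended Euclid: returns g = gcd(a, b) and sets x, y with a*x + b*y = g.
long egcd(long a, long b, long &x, long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long x1, y1;
    long g = egcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Solve r * gamma = s (mod n); there are gcd(r, n) solutions when gcd(r, n) divides s.
std::vector<long> solve(long r, long s, long n) {
    long x, y;
    long g = egcd(((r % n) + n) % n, n, x, y);
    std::vector<long> sols;
    s = ((s % n) + n) % n;
    if (s % g != 0) return sols;                              // no solution
    long m = n / g;
    long gamma0 = ((x * (s / g)) % m + m) % m;                // one solution
    for (long k = 0; k < g; ++k) sols.push_back(gamma0 + k * m);
    return sols;
}

int main() {
    // Values from the worked example above: 38 * gamma = 380 (mod 1018).
    for (long gamma : solve(416 - 378, 681 - 301, 1018))
        printf("gamma = %ld\n", gamma);                        // prints 10 and 519
    return 0;
}

Here gcd(38, 1018) = 2, so the congruence has the two solutions 10 and 519 found above.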
Complexity
The running time is approximately √n group operations. If used together with the Pohlig–Hellman algorithm, the running time of the combined algorithm is O(√p), where p is the largest prime factor of the group order n.
References
Logarithms
Number theoretic algorithms | Pollard's rho algorithm for logarithms | [
"Mathematics"
] | 1,086 | [
"E (mathematical constant)",
"Logarithms"
] |
1,630,953 | https://en.wikipedia.org/wiki/Buying%20center | A buying center, also called a decision-making unit (DMU), brings together "all those members of an organization who become involved in the buying process for a particular product or service".
The concept of a DMU was developed in 1967 by Robinson, Farris and Wind (1967). A DMU consists of all the people of an organization who are involved in the buying decision. The decision to purchase involves those with purchasing and financial expertise and those with technical expertise, and (in some cases) an organization's top management. McDonald, Rogers and Woodburn (2000) state that identifying and influencing all the people involved in the buying decision is a prerequisite in the process of selling to an organization.
Modelling buying centers
The concept of a buying center (as a focus of business-to-business marketing, and as a core factor in creating customer value and influence in organisational efficiency and effectiveness) formulates the understanding of purchasing decision-making in complex environments.
Some of the key factors influencing a buying center or DMU's activities include:
Buy class or situation. The "Buygrid" model developed by Robinson et al. in 1967 classified "buy classes" as "straight rebuy", "modified rebuy" or "new task", also referred to as "new task buying". Michelle Bunn extended this range to six basic buying situations in a 1993 article:
Casual purchasing involving no search or analysis
Routine low priority purchasing or rebuying
Simple modified rebuys where selection options are limited
Judgemental new purchasing tasks, e.g. for a special type of equipment
Complex modified rebuys requiring more structured processes for establishing and evaluating options, such as through a competitive tendering process
Strategic new tasks establishing long-term business partnerships and purchasing plans.
Product type (e.g. materials, components, plant and equipment, or maintenance, repair and operations (MRO)
Importance of the purchase.
In some cases the buying center is an informal ad hoc group, but in other cases, it is a formally sanctioned group with a specific mandate. American research undertaken by McWilliams in 1992 found that the mean size of a buying center was about four people, with a range in that research of three to five people. The type of purchase to be made and the stage of the buying process influence the size. More recent research found that the structure, including the size, of buying centers depends on the organizational structure, with centralization and formalization driving the development of large buying centers.
Decision-making process
When the DMU wants to purchase a certain product or service the following steps are taken inside the buying center:
Need or problem recognition: the recognition can start for two reasons. The first reason can be to solve a specific problem of the company. The other reason can be to improve a company's current operations/performance or to pursue new market opportunities.
Determining product specification: the specification includes the characteristics and functionality which the product/service that is going to be purchased must contain.
Supplier and product search: this process contains the search for suppliers that can meet a company's product or service needs. First a supplier that matches with the specifications of the company has to be found. The second condition is that the supplier can satisfy the organization's financial and supply requirements.
Evaluation of proposals and selection of suppliers: the different possible suppliers will be evaluated by the different departments of the company.
Selection of order routine: this stage starts after the selection of the supplier. It mainly consists of negotiating and agreeing with the supplier about certain details.
Performance feedback and evaluation: performance and quality of the purchased goods will be evaluated.
In this process of making decisions different roles can be given to certain members of the center or the unit depending on the importance of the part of the organization.
Robinson et al.'s "Buygrid Framework" saw new task activities, dealing with a problem which has not arisen before, as more complex than the other buy classes, and closer to achieving a general solution applicable in future rebuy activities. McQuiston in 1989 noted mixed empirical findings regarding the framework: "some studies have shown that participation and influence do vary according to the buygrid framework ... but other studies have shown that they do not". Co-author Yoram Wind, looking back at the Buygrid Model 25 years after its publication, held that the model had provided "a very useful framework" whose "underlying dimensions [were] valid", but "its generalizability under a variety of market situations [was] not yet completely understood".
Issues in buying center research
There are several conceptual and methodological issues concerning buying centers which in 1986 were thought to need additional research. These issues can be divided into:
Buying center boundaries and buying center domain
Distinguishing internal buying center processes from the influence of external environmental factors, also defining and delimiting the activities of a particular buying center. Webster and Wind (1972) list a number of environmental factors including physical, economic, legal and cultural aspects of the external environment, and identify physical, technological, economic and cultural aspects with "the [internal] organisational climate". Johnston and Bonoma used interaction theory in a 1981 paper to help analyse the distinction between internal and external factors.
Buying center structure
Understanding how organizational structures may differ from or may shape the structure of the buying center, and examining how a particular buying strategy may serve to mediate the effects of environmental uncertainty on the structure of the buying center.
Process considerations in buying center
Power and conflict issues within the buying center.
Decision making
One stream of research focuses on the number of decision phases and their timing and the other emphasizes the type of decision-making model (or choice routine) utilized.
Communications flow
The informal interactions that emerge during the buying process.
Application to small and medium-sized businesses
Andrews and Rogers noted in 2005 that very little academic discussion had taken place regarding buyer behaviour within small and medium-sized enterprises (SMEs). Thompson and Panayiotopoulos suggest that some purchasing decisions in SMEs, especially in a rebuy context, are made by one person and therefore not really a "group" activity, although in a new-buy situation, "the influence of other people may be greater".
See also
Procurement - formalised organizational procedures for purchasing
References
Business-to-business
Organizational behavior
Procurement | Buying center | [
"Biology"
] | 1,291 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
1,630,997 | https://en.wikipedia.org/wiki/Footedness | In human biology, footedness is the natural preference of one's left or right foot for various purposes. It is the foot equivalent of handedness. While purposes vary, such as applying the greatest force in a certain foot to complete the action of kick as opposed to stomping, footedness is most commonly associated with the preference of a particular foot in the leading position while engaging in foot- or kicking-related sports, such as association football and kickboxing. A person may thus be left-footed, right-footed or ambipedal (able to use both feet equally well).
Ball games
In association football, the ball is predominantly struck by the foot. Footedness may refer to the foot a player uses to kick with the greatest force and skill. Most people are right-footed, kicking with the right leg. Capable left-footed footballers are rare and therefore quite sought after. Similarly rare are "two-footed" players, who are equally capable with both feet. Such players make up only one sixth of players in the top professional leagues in Europe. Two-footedness can be learnt, a notable case being England international Tom Finney, but can only be properly developed in the early years. In Australian rules football, several players are equally adept at using both feet to kick the ball, such as Sam Mitchell and Charles Bushnell.
In basketball, a sport composed almost solely of right-handed players, it is common for most athletes to have a dominant left leg which they would use when jumping to complete a right-hand layup. Hence, left-handed basketball players tend to use their right leg more as they finish a left handed layup (although both right- and left-handed players are usually able to use both hands when finishing near the basket).
In the National Football League (NFL) placekickers and punters who kick with their left leg are a relative rarity. As of the 2023 NFL season, only four of the league's 32 punters were left-footed. The apparent advantage to punting with the left foot is that, because it is not as common, return specialists are not as experienced handling the ball spinning in the opposite direction. Left-footed placekickers are similarly uncommon.
Boardsports
In boardsports (e.g., surfing, skateboarding and snowboarding), one stands erect on a single, lightweight board that slides along the ground or on water. The need for balance causes one to position the body perpendicular to the direction of motion, with one foot leading the other. As with handedness, when this task is repetitively performed, one tends to naturally choose a particular foot for the leading position.
Goofy stance vs. regular stance
Boardsport riders are "footed" in one of two stances, generally called "regular" and "goofy". Riders will generally quickly choose a preferred stance that becomes permanently preferred. A "regular" stance indicates the left foot leading on the board with the right foot pushing, while a "goofy" stance leads with the right foot on the board, pushing with the left. Professionals seem to be evenly distributed between the stances. Practice can yield a high level of ambidexterity between the two stances, such that even seasoned participants of a boardsport have difficulty discerning the natural footedness of an unfamiliar rider in action.
To increase the difficulty, variety, and aesthetic value of tricks, riders can ride "switch stance" (abbreviated to "switch"). For example, a goofy-footed skateboarder normally performs an ollie with the right foot forward, but a "switch ollie" would have the rider standing with the left foot at the front of the board. In sports where switch riding is common and expected, like street skateboarding, riders have the goal of appearing natural at, and performing the same tricks in, both regular and goofy stances. Some sports like kitesurfing and windsurfing generally require the rider to be able to switch stance depending on the wind or travel direction rather than rider preference. Each time direction is changed, the stance changes. Snowboarders who ride switch may adopt a "duck stance", where the feet are mounted turned out, or pointed away from the mid-line of the body, typically at a roughly 15-degree angle. In this position, the rider will have the leading foot facing forward in either regular or switch stance.
Switch, fakie and nollie
When a rider rolls backwards, this is called "riding fakie". A "fakie" trick is performed while riding backwards but taking off on the front foot. Although it is the same foot that jumps in one's traditional stance, it is normally the back foot. A rider can also land in the fakie position.
While there are some parallels between switch stance and fakie, riding switch implies opening the shoulders more to face the direction headed, though not as much as in traditional stance, while fakie stance implies a slightly more backwards-facing, closed-shoulder posture.
Nollie (nose ollie) is when the front foot takes off when one is riding in their normal stance, the same foot that jumps when doing tricks switch. In nollie position, the body and shoulders are facing forward as much as when riding in normal stance. Generally fakie and normal are done off the tail, whereas nollie and switch are done off the nose.
In skateboarding, most tricks that are performed riding backwards — with respect to the rider's preferred stance — are exclusively categorized as "switch" (in a switch stance) or as fakie, with the general rule that tricks off the tail are almost always described as fakie, and those off the nose are nollie. For example, a jump using the tail rolling backwards is a "fakie ollie" (not a "switch nollie"), and a jump off the nose is a "nollie" (not a "fakie nollie").
Mongo foot
Mongo foot refers to the use of the rider's front foot for pushing. Normally, a skateboarder will feel more comfortable using their back foot to push, while their front foot remains on the board. In the minority case of mongo-footed skateboarders, the opposite is true. Some skateboarders who do not push mongo in their regular stance may still push mongo when riding in switch stance, rather than push with their weaker back foot. Some well-known skaters who change between mongo and normal when pushing switch include Jacob Vance, Stevie Williams, and Eric Koston.
Although its origins remain uncertain, it is widely believed that the term derives from the pejorative use of "mongoloid".
BMX
In BMX, there is a de facto relationship between footedness and preferences of grinding position and of mid-air turning direction. The terms "regular" and "goofy" do not indicate a foot preference as in boardsports, but rather whether the rider's footedness has the usual relationship with their grinding and mid-air turning preferences. For example, consider the following classes of riders:
right-footed riders who prefer turning counter-clockwise in the air, and grinding on their right
left-footed riders who prefer turning clockwise in the air, and grinding on their left.
Both classes are of equal size and would be considered "regular". "Goofy" would describe riders whose trick preferences do not match their footedness: a rider who prefers to grind on the opposite side as do most is considered a "goofy grinder"; one who prefers to turn the opposite direction in mid-air as do most is considered a "goofy spinner". Few riders have either goofy trait, but some riders may have both.
See also
Handedness
Laterality
Orthodox stance
Southpaw stance
Surefootedness
References
Boardsports
Ball games
Chirality
Motor skills | Footedness | [
"Physics",
"Chemistry",
"Biology"
] | 1,601 | [
"Pharmacology",
"Behavior",
"Origin of life",
"Motor skills",
"Motor control",
"Stereochemistry",
"Chirality",
"Asymmetry",
"Biochemistry",
"Symmetry",
"Biological hypotheses"
] |
1,630,999 | https://en.wikipedia.org/wiki/Registry%20of%20Toxic%20Effects%20of%20Chemical%20Substances | Registry of Toxic Effects of Chemical Substances (RTECS) is a database of toxicity information compiled from the open scientific literature without reference to the validity or usefulness of the studies reported. Until 2001 it was maintained by US National Institute for Occupational Safety and Health (NIOSH) as a freely available publication. It is now maintained by the private company BIOVIA or from several value-added resellers and is available only for a fee or by subscription.
Contents
Six types of toxicity data are included in the file:
Primary irritation
Mutagenic effects
Reproductive effects
Tumorigenic effects
Acute toxicity
Other multiple dose toxicity
Specific numeric toxicity values such as LD50, LC50, TDLo, and TCLo are noted, as well as the species studied and the route of administration used. For all data the bibliographic source is listed. The studies are not evaluated in any way.
History
RTECS was an activity mandated by the US Congress, established by Section 20(a)(6) of the Occupational Safety and Health Act of 1970 (PL 91-596). The original edition, known as the Toxic Substances List was published on June 28, 1971, and included toxicological data for approximately 5,000 chemicals. The name changed later to its current name Registry of Toxic Effects of Chemical Substances. In January 2001 the database contained 152,970 chemicals. In December 2001 RTECS was transferred from NIOSH to the private company Elsevier MDL. Symyx acquired MDL from Elsevier in 2007 and the Toxicity database was included in the acquisition. The Toxicity database is only accessible for charge on an annual subscription base.
RTECS is available in English, French and Spanish language versions, offered by the Canadian Centre for Occupational Health and Safety. The database subscription is offered on the Web, on CD-ROM and as an Intranet format. The database is also available online from NISC (National Information Services Corporation), RightAnswer.com, and ToxPlanet (Timberlake Ventures, Inc).
References
External links
RTECS overview
Accelrys website
RightAnswer Website
ToxPlanet Website
Biochemistry databases
Chemical safety
Chemical databases
Health sciences publications
Toxic effects of substances chiefly nonmedicinal as to source | Registry of Toxic Effects of Chemical Substances | [
"Chemistry",
"Biology",
"Environmental_science"
] | 444 | [
"Chemical accident",
"Toxicology",
"Biochemistry databases",
"Toxic effects of substances chiefly nonmedicinal as to source",
"Chemical databases",
"nan",
"Biochemistry",
"Chemical safety"
] |
1,631,010 | https://en.wikipedia.org/wiki/Risk-adjusted%20return%20on%20capital | Risk-adjusted return on capital (RAROC) is a risk-based profitability measurement framework for analysing risk-adjusted financial performance and providing a consistent view of profitability across businesses. The concept was developed by Bankers Trust and principal designer Dan Borge in the late 1970s. Note, however, that increasingly return on risk-adjusted capital (RORAC) is used as a measure, whereby the risk adjustment of Capital is based on the capital adequacy guidelines as outlined by the Basel Committee.
Basic formula
The formula is given by: RAROC = (risk-adjusted return) / (economic capital).
Broadly speaking, in business enterprises, risk is traded off against benefit. RAROC is defined as the ratio of risk-adjusted return to economic capital. Economic capital is the amount of money needed to secure survival in a worst-case scenario; it is a buffer against unexpected shocks in market values. Economic capital is a function of market risk, credit risk, and operational risk, and is often calculated by VaR. This use of capital based on risk improves the capital allocation across different functional areas of banks, insurance companies, or any business in which capital is placed at risk for an expected return above the risk-free rate.
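A minimal sketch of the ratio above (the figures, and the breakdown of the numerator into revenues, costs, expected losses and return on capital, are illustrative assumptions rather than values taken from this article):

```python
def raroc(revenues, costs, expected_losses, return_on_capital, economic_capital):
    """RAROC = risk-adjusted return / economic capital.

    The numerator uses one common decomposition (revenues - costs
    - expected losses + return earned on the capital held); treat it
    as an illustrative choice, not the only possible definition.
    """
    risk_adjusted_return = revenues - costs - expected_losses + return_on_capital
    return risk_adjusted_return / economic_capital

# Hypothetical business unit, figures in millions:
# (120 - 60 - 15 + 5) / 250 = 50 / 250 = 0.20, i.e. a 20% RAROC.
print(raroc(revenues=120.0, costs=60.0, expected_losses=15.0,
            return_on_capital=5.0, economic_capital=250.0))
```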
RAROC system allocates capital for two basic reasons:
Risk management
Performance evaluation
For risk management purposes, the main goal of allocating capital to individual business units is to determine the bank's optimal capital structure—that is economic capital allocation is closely correlated with individual business risk. As a performance evaluation tool, it allows banks to assign capital to business units based on the economic value added of each unit.
Decision measures based on regulatory and economic capital
With the financial crisis of 2007 and the introduction of the Dodd–Frank Act and Basel III, minimum regulatory capital requirements have become onerous. These stringent requirements spurred debate on the validity of economic capital for managing an organization's portfolio composition, with some arguing that, where regulatory constraints bind, organizations should focus entirely on the return on regulatory capital when measuring profitability and guiding portfolio composition. The counterargument is that concentration and diversification effects should play a prominent role in portfolio selection – dynamics recognized in economic capital, but not in regulatory capital.
It did not take long for the industry to recognize the relevance and importance of both regulatory and economic measures, and it eschewed focusing exclusively on one or the other. Relatively simple rules were devised to have both regulatory and economic capital enter into the process. In 2012, researchers at Moody's Analytics designed a formal extension to the RAROC model that accounts for regulatory capital requirements as well as economic risks. In the framework, capital allocation can be represented as a composite capital measure (CCM) that is a weighted combination of economic and regulatory capital – with the weight on regulatory capital determined by the degree to which an organization is capital constrained.
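A sketch of that composite measure, assuming the simplest possible linear blend (the weight and the capital figures are hypothetical, and the actual Moody's Analytics formulation may differ in detail):

```python
def composite_capital(economic_capital, regulatory_capital, w_reg):
    """Composite capital measure: a weighted blend of regulatory and
    economic capital, with w_reg growing as the organization becomes
    more constrained by regulatory capital (0 <= w_reg <= 1)."""
    return w_reg * regulatory_capital + (1.0 - w_reg) * economic_capital

# Hypothetical position: economic capital 40, regulatory capital 60,
# organization judged 70% regulatory-capital constrained.
ccm = composite_capital(economic_capital=40.0, regulatory_capital=60.0, w_reg=0.7)
print(ccm)          # 0.7*60 + 0.3*40 = 54
print(10.0 / ccm)   # RAROC computed on the composite base: ~0.185
```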
See also
Enterprise risk management
Omega ratio
Risk return ratio
Risk-return spectrum
Sharpe ratio
Sortino ratio
Notes
References
External links
RAROC & Economic Capital
Between RAROC and a hard place
Actuarial science
Capital requirement
Financial ratios
Financial risk | Risk-adjusted return on capital | [
"Mathematics"
] | 622 | [
"Metrics",
"Applied mathematics",
"Quantity",
"Financial ratios",
"Actuarial science"
] |
1,631,015 | https://en.wikipedia.org/wiki/Aneutronic%20fusion | Aneutronic fusion is any form of fusion power in which very little of the energy released is carried by neutrons. While the lowest-threshold nuclear fusion reactions release up to 80% of their energy in the form of neutrons, aneutronic reactions release energy in the form of charged particles, typically protons or alpha particles. Successful aneutronic fusion would greatly reduce problems associated with neutron radiation such as damaging ionizing radiation, neutron activation, reactor maintenance, and requirements for biological shielding, remote handling and safety.
Since it is simpler to convert the energy of charged particles into electrical power than it is to convert energy from uncharged particles, an aneutronic reaction would be attractive for power systems. Some proponents see a potential for dramatic cost reductions by converting energy directly to electricity, as well as in eliminating the radiation from neutrons, which are difficult to shield against. However, the conditions required to harness aneutronic fusion are much more extreme than those required for deuterium–tritium (D–T) fusion such as at ITER.
History
The first experiments in the field started in 1939, and serious efforts have been continual since the early 1950s.
An early supporter was Richard F. Post at Lawrence Livermore. He proposed to capture the kinetic energy of charged particles as they were exhausted from a fusion reactor and convert this into voltage to drive current. Post helped develop the theoretical underpinnings of direct conversion, later demonstrated by Barr and Moir. They demonstrated a 48 percent energy capture efficiency on the Tandem Mirror Experiment in 1981.
Polywell fusion was pioneered by the late Robert W. Bussard in 1995 and funded by the US Navy. Polywell uses inertial electrostatic confinement. He founded EMC2 to continue polywell research.
A picosecond pulse of a 10-terawatt laser produced hydrogen–boron aneutronic fusions for a Russian team in 2005. However, the number of resulting α particles (around 10³ per laser pulse) was low.
In 2006, the Z-machine at Sandia National Laboratory, a z-pinch device, reached 2 billion kelvins and 300 keV.
In 2011, Lawrenceville Plasma Physics published initial results and outlined a theory and experimental program for aneutronic fusion with the dense plasma focus (DPF). The effort was initially funded by NASA's Jet Propulsion Laboratory. Support for other DPF aneutronic fusion investigations came from the Air Force Research Laboratory.
A French research team fused protons and boron-11 nuclei using a laser-accelerated proton beam and high-intensity laser pulse. In October 2013 they reported an estimated 80 million fusion reactions during a 1.5 nanosecond laser pulse.
In 2016, a team at the Shanghai Chinese Academy of Sciences produced a laser pulse of 5.3 petawatts with the Superintense Ultrafast Laser Facility (SULF) and expected to reach 10 petawatts with the same equipment.
In 2021, TAE Technologies field-reversed configuration announced that its Norman device was regularly producing a stable plasma at temperatures over 50 million degrees.
In 2021, a Russian team reported experimental results in a miniature device with electrodynamic (oscillatory) plasma confinement. It used a ~1–2 J nanosecond vacuum discharge with a virtual cathode. Its field accelerates boron ions and protons to ~ 100–300 keV under oscillating ions' collisions. α-particles of about /4π (~ 10 α-particles/ns) were obtained during the 4 μs of applied voltage.
Australian spin-off company HB11 Energy was created in September 2019. In 2022, they claimed to be the first commercial company to demonstrate fusion.
Definition
Fusion reactions can be categorized according to their neutronicity: the fraction of the fusion energy released as energetic neutrons. The State of New Jersey defined an aneutronic reaction as one in which neutrons carry no more than 1% of the total released energy, although many papers on the subject include reactions that do not meet this criterion.
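As a worked illustration of that definition (the reaction energies quoted here are standard nuclear-data values, not taken from this article): in D–T fusion the neutron carries about 14.1 MeV of the 17.6 MeV released, so roughly 80% of the energy leaves as neutrons, far above the 1% threshold.

```python
def neutronicity(neutron_energy_mev, total_energy_mev):
    """Fraction of the released fusion energy carried by neutrons."""
    return neutron_energy_mev / total_energy_mev

dt = neutronicity(14.1, 17.6)   # D-T: ~0.80
print(dt, dt <= 0.01)           # 0.801..., False -> D-T is not aneutronic
```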
Coulomb barrier
The Coulomb barrier is the minimum energy required for the nuclei in a fusion reaction to overcome their mutual electrostatic repulsion. The repulsive force between a particle with charge Z1 and one with charge Z2 is proportional to Z1Z2/r², where r is the distance between them. The Coulomb barrier facing a pair of reacting, charged particles depends both on total charge and on how equally those charges are distributed; the barrier is lowest when a low-Z particle reacts with a high-Z one and highest when the reactants are of roughly equal charge. Barrier energy is thus minimized for those ions with the fewest protons.
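A rough comparison of how the barrier scales for the fuels discussed later in this article (a sketch only: it uses the Z1Z2 proportionality stated above at a fixed separation, and ignores nuclear radii and tunnelling, which a real barrier calculation would include):

```python
# Charge products Z1*Z2; at a fixed separation the Coulomb barrier
# scales with this product, so larger values mean harder ignition.
fuels = {
    "D-T":     (1, 1),  # deuterium (Z=1) on tritium (Z=1)
    "D-3He":   (1, 2),  # deuterium on helium-3 (Z=2)
    "3He-3He": (2, 2),
    "p-11B":   (1, 5),  # proton on boron-11 (Z=5)
}
reference = 1 * 1  # D-T charge product as the baseline
for name, (z1, z2) in fuels.items():
    ratio = (z1 * z2) / reference
    print(f"{name:8s} Z1*Z2 = {z1 * z2}  (~{ratio:.0f}x the D-T barrier at equal separation)")
```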
Once the nuclear potential wells of the two reacting particles are within two proton radii of each other, the two can begin attracting one another via the nuclear force. Because this interaction is much stronger than the electromagnetic interaction, the particles are drawn together despite the ongoing electrical repulsion, releasing nuclear energy. The nuclear force is a very short-range force, though, so it is an oversimplification to say that its effect simply increases with the number of nucleons. The statement is true when describing the volume energy or surface energy of a nucleus, less true when addressing the Coulomb energy, and does not speak to the proton/neutron balance at all. Once the reactants have passed the Coulomb barrier, they are in a regime dominated by a force that does not behave like electromagnetism.
In most fusion concepts, the energy needed to overcome the Coulomb barrier is provided by collisions with other fuel ions. In a thermalized fluid like a plasma, the temperature corresponds to an energy spectrum according to the Maxwell–Boltzmann distribution. Gases in this state have some particles with high energy even if the average energy is much lower. Fusion devices rely on this distribution; even at bulk temperatures far below the Coulomb barrier energy, the energy released by the reactions is great enough that capturing some of that can supply sufficient high-energy ions to keep the reaction going.
Thus, steady operation of the reactor is based on a balance between the rate that energy is added to the fuel by fusion reactions and the rate energy is lost to the surroundings. This concept is best expressed as the fusion triple product, the product of the temperature, density and "confinement time", the amount of time energy remains in the fuel before escaping to the environment. The product of temperature and density gives the reaction rate for any given fuel. The rate of reaction is proportional to the nuclear cross section (σ).
Any given device can sustain some maximum plasma pressure. An efficient device would continuously operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is such that σv/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to σv/T². A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.
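A sketch of that selection rule: at fixed pressure, pick the temperature that maximizes σv/T². The reactivity table below is placeholder data chosen only to show the shape of the calculation, not measured values; a real calculation would use tabulated σv(T) for the chosen fuel.

```python
# Placeholder reactivity <sigma*v> values (arbitrary units) versus
# temperature T in keV -- illustrative numbers, not nuclear data.
reactivity = {5: 0.1, 10: 1.5, 20: 4.0, 50: 9.0, 100: 10.0}

def best_operating_temperature(table):
    """At fixed plasma pressure (n*T fixed), fusion power density scales
    as <sigma v>/T**2, so the optimum is the T maximizing that ratio."""
    return max(table, key=lambda T: table[T] / T**2)

print(best_operating_temperature(reactivity))  # -> 10 for this placeholder table
```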
Because the Coulomb barrier is proportional to the product of proton counts (Z1 × Z2) of the two reactants, varieties of heavy hydrogen, deuterium and tritium (D–T), give the fuel with the lowest total Coulomb barrier. All other potential fuels have higher Coulomb barriers, and thus require higher operational temperatures. Additionally, D–T fuels have the highest nuclear cross-sections, which means the reaction rates are higher than for any other fuel. This makes D–T fusion the easiest to achieve.
Comparing the potential of other fuels to the D–T reaction: The table below shows the ignition temperature and cross-section for three of the candidate aneutronic reactions, compared to D–T:
The easiest to ignite of the aneutronic reactions, D–3He, has an ignition temperature over four times as high as that of the D–T reaction, and correspondingly lower cross-sections, while the p–11B reaction is nearly ten times more difficult to ignite.
Candidate reactions
Several fusion reactions produce no neutrons on any of their branches. Those with the largest cross sections are:
Candidate fuels
3He
The 3He–D reaction has been studied as an alternative fusion plasma because it has the lowest energy threshold.
The p–6Li, 3He–6Li, and 3He–3He reaction rates are not particularly high in a thermal plasma. When treated as a chain, however, they offer the possibility of enhanced reactivity due to a non-thermal distribution. The product 3He from the p–6Li reaction could participate in the second reaction before thermalizing, and the product p from 3He–6Li could participate in the former before thermalizing. Detailed analyses, however, do not show sufficient reactivity enhancement to overcome the inherently low cross section.
The 3He reaction suffers from a 3He availability problem. 3He occurs in only minuscule amounts on Earth, so it would either have to be bred from neutron reactions (counteracting the potential advantage of aneutronic fusion) or mined from extraterrestrial sources.
The amount of 3He needed for large-scale applications can also be described in terms of total consumption: according to the US Energy Information Administration, "Electricity consumption by 107 million U.S. households in 2001 totalled 1,140 billion kW·h". Assuming 100% conversion efficiency, 6.7 tonnes per year of 3He would be required for that segment of the energy demand of the United States, or 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency. Extracting that amount of pure 3He would entail processing 2 billion tonnes of lunar material per year, even assuming a recovery rate of 100%.
In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle".
Deuterium
Although the deuterium reactions (deuterium + 3He and deuterium + 6Li) do not in themselves release neutrons, in a fusion reactor the plasma would also produce D–D side reactions that result in a reaction product of 3He plus a neutron. Although neutron production can be minimized by running a plasma reaction hot and deuterium-lean, the fraction of energy released as neutrons is probably several percent, so that these fuel cycles, although neutron-poor, do not meet the 1% threshold. See 3He. The D–3He reaction also suffers from the 3He fuel availability problem, as discussed above.
Lithium
Fusion reactions involving lithium are well studied due to the use of lithium for breeding tritium in thermonuclear weapons. They are intermediate in ignition difficulty between the reactions involving lower atomic-number species, H and He, and the 11B reaction.
The p–7Li reaction, although highly energetic, releases neutrons because of the high cross section for the alternate neutron-producing reaction 1p + 7Li → 7Be + n
Boron
Many studies of aneutronic fusion concentrate on the p–11B reaction, which uses easily available fuel. The fusion of the boron nucleus with a proton produces energetic alpha particles (helium nuclei).
Since igniting the p–11B reaction is much more difficult than D–T, alternatives to the usual tokamak fusion reactors are usually proposed, such as inertial confinement fusion. One proposed method uses one laser to create a boron-11 plasma and another to create a stream of protons that smash into the plasma. The proton beam produces a tenfold increase of fusion because protons and boron nuclei collide directly. Earlier methods used a solid boron target, "protected" by its electrons, which reduced the fusion rate. Experiments suggest that a petawatt-scale laser pulse could launch an 'avalanche' fusion reaction, although this remains controversial. The plasma lasts about one nanosecond, requiring the picosecond pulse of protons to be precisely synchronized. Unlike conventional methods, this approach does not require a magnetically confined plasma. The proton beam is preceded by an electron beam, generated by the same laser, that strips electrons in the boron plasma, increasing the protons' chance to collide with the boron nuclei and fuse.
Residual radiation
Calculations show that at least 0.1% of the reactions in a thermal p–11B plasma produce neutrons, although their energy accounts for less than 0.2% of the total energy released.
These neutrons come primarily from the reaction:
11B + α → 14N + n + 157 keV
The reaction itself produces only 157 keV, but the neutron carries a large fraction of the alpha energy, close to Efusion/3 ≈ 2.9 MeV.
11B + p → 11C + n − 2.8 MeV.
These neutrons are less energetic, with an energy comparable to the fuel temperature. In addition, 11C itself is radioactive, but quickly decays to 11B with a half life of only 20 minutes.
Since these reactions involve the reactants and products of the primary reaction, it is difficult to lower the neutron production by a significant fraction. A clever magnetic confinement scheme could in principle suppress the first reaction by extracting the alphas as they are created, but then their energy would not be available to keep the plasma hot. The second reaction could in principle be suppressed relative to the desired fusion by removing the high energy tail of the ion distribution, but this would probably be prohibited by the power required to prevent the distribution from thermalizing.
In addition to neutrons, large quantities of hard X-rays are produced by bremsstrahlung, and 4, 12, and 16 MeV gamma rays are produced by the fusion reaction
11B + p → 12C + γ + 16.0 MeV
with a branching probability relative to the primary fusion reaction of about 10−4.
The hydrogen must be isotopically pure and the influx of impurities into the plasma must be controlled to prevent neutron-producing side reactions such as:
11B + d → 12C + n + 13.7 MeV
d + d → 3He + n + 3.27 MeV
The shielding design reduces the occupational dose of both neutron and gamma radiation to a negligible level. The primary components are water (to moderate the fast neutrons), boron (to absorb the moderated neutrons) and metal (to absorb X-rays). The total thickness is estimated to be about one meter, mostly water.
Approaches
HB11 Energy
HB11 Energy uses thousands of merged diode-pumped lasers. This allows mass-produced and less expensive kilojoule lasers to deliver megajoules to a target. The resulting nanosecond and picosecond two-pulse laser system provides next-generation input. The approach uses pulsed power (shots). Fuel pellets burn at a rate of about 1 per second. The energy released drives a conventional steam cycle generator.
Laser technology
Laser power has been increasing at about 10³×/decade amid falling costs. Advancements include:
Diode-Pumped Solid-State Lasers (DPSSLs) convert more electrical input into light, reducing waste heat.
Optical Parametric Chirped Pulse Amplification (OPCPA): These systems use nonlinear optical processes to reduce thermal load and increase efficiency.
Plasma-Based Pulse Compression: Plasma can be used to compress laser pulses, achieving high peak power with minimal energy loss.
Coherent beam combining (CBC) merges multiple beams into a single, more powerful one, spreading thermal load from across multiple beams while coherently combining their energy.
Efficient gas-cooled or cryogenic cooling systems are essential for operating high-power lasers.
Gain media such as ytterbium-doped crystals or ceramics, offer better thermal properties and higher energy storage capabilities.
Researchers prototyped a single-chip titanium:sapphire laser that is 10⁴× smaller and 10³× less expensive than earlier models.
Energy capture
Aneutronic fusion produces energy in the form of charged particles instead of neutrons. This means that energy from aneutronic fusion could be captured directly, instead of relying on neutrons to heat a working fluid. Direct conversion can be either inductive, based on changes in magnetic fields, electrostatic, based on pitting charged particles against an electric field, or photoelectric, in which light energy is captured in a pulsed mode.
Electrostatic conversion uses the motion of charged particles to create a voltage that drives a current, producing electrical power. It is the reverse of phenomena that use a voltage to put a particle in motion. It has been described as a linear accelerator running backwards.
Aneutronic fusion loses much of its energy as light. This energy results from the acceleration and deceleration of charged particles. These speed changes can be caused by bremsstrahlung radiation, cyclotron radiation, synchrotron radiation, or electric field interactions. The radiation can be estimated using the Larmor formula and comes in the X-ray, UV, visible, and IR spectra. Some of the energy radiated as X-rays may be converted directly to electricity. Because of the photoelectric effect, X-rays passing through an array of conducting foils transfer some of their energy to electrons, which can then be captured electrostatically. Since X-rays can go through far greater material thickness than electrons, hundreds or thousands of layers are needed to absorb them.
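A sketch of the non-relativistic Larmor estimate mentioned above (the acceleration value in the example is arbitrary and purely illustrative; a real loss estimate would derive accelerations from the plasma conditions):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 2.99792458e8          # speed of light, m/s
Q_E = 1.602176634e-19     # elementary charge, C

def larmor_power(charge_c, acceleration_m_s2):
    """Radiated power (W) of a non-relativistic accelerated point charge:
    P = q^2 a^2 / (6 * pi * eps0 * c^3)."""
    return charge_c**2 * acceleration_m_s2**2 / (6 * math.pi * EPS0 * C**3)

# Example: an electron at an illustrative acceleration of 1e20 m/s^2.
print(larmor_power(Q_E, 1e20))
```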
Technical challenges
Many challenges confront the commercialization of aneutronic fusion.
Temperature
The large majority of fusion research has gone toward D–T fusion, which is the easiest to achieve. Fusion experiments typically use deuterium–deuterium fusion (D–D) because deuterium is cheap and easy to handle, being non-radioactive. Experimenting with D–T fusion is more difficult because tritium is expensive and radioactive, requiring additional environmental protection and safety measures.
The combination of lower cross-section and higher loss rates in D–3He fusion is offset to a degree because the reaction products are mainly charged particles that deposit their energy in the plasma. This combination of offsetting features demands an operating temperature about four times that of a D–T system. However, due to the high loss rates and consequent rapid cycling of energy, the confinement time of a working reactor needs to be about fifty times higher than D–T, and the energy density about 80 times higher. This requires significant advances in plasma physics.
Proton–boron fusion requires ion energies, and thus plasma temperatures, some nine times higher than those for D–T fusion. For any given density of the reacting nuclei, the reaction rate for proton-boron achieves its peak rate at around 600 keV (6.6 billion degrees Celsius or 6.6 gigakelvins) while D–T has a peak at around 66 keV (765 million degrees Celsius, or 0.765 gigakelvin). For pressure-limited confinement concepts, optimum operating temperatures are about 5 times lower, but the ratio is still roughly ten-to-one.
Power balance
The peak reaction rate of p–11B is only one third that for D–T, requiring better plasma confinement. Confinement is usually characterized by the time τ the energy is retained, so that the power released exceeds that required to heat the plasma. Various requirements can be derived, most commonly the Lawson criterion (the product of the density and confinement time, nτ) and the triple product with the pressure, nTτ. The nτ required for p–11B is 45 times higher than that for D–T. The nTτ required is 500 times higher. Since the confinement properties of conventional fusion approaches, such as the tokamak and laser pellet fusion, are marginal, most aneutronic proposals use radically different confinement concepts.
In most fusion plasmas, bremsstrahlung radiation is a major energy loss channel. (See also bremsstrahlung losses in quasineutral, isotropic plasmas.) For the p–11B reaction, some calculations indicate that the bremsstrahlung power will be at least 1.74 times larger than the fusion power. The corresponding ratio for the 3He–3He reaction is only slightly more favorable at 1.39. This is not applicable to non-neutral plasmas, and different in anisotropic plasmas.
In conventional reactor designs, whether based on magnetic or inertial confinement, the bremsstrahlung can easily escape the plasma and is considered a pure energy loss term. The outlook would be more favorable if the plasma could reabsorb the radiation. Absorption occurs primarily via Thomson scattering on the electrons, which has a total cross section of σT = 6.65×10⁻²⁹ m². In a 50–50 D–T mixture this corresponds to a range of 6.3 g/cm². This is considerably higher than the Lawson criterion of ρR > 1 g/cm², which is already difficult to attain, but might be achievable in inertial confinement systems.
In megatesla magnetic fields a quantum mechanical effect might suppress energy transfer from the ions to the electrons. According to one calculation, bremsstrahlung losses could be reduced to half the fusion power or less. In a strong magnetic field cyclotron radiation is even larger than the bremsstrahlung. In a megatesla field, an electron would lose its energy to cyclotron radiation in a few picoseconds if the radiation could escape. However, in a sufficiently dense plasma (ne > , a density greater than that of a solid), the cyclotron frequency is less than twice the plasma frequency. In this well-known case, the cyclotron radiation is trapped inside the plasmoid and cannot escape, except from a very thin surface layer.
While megatesla fields have not yet been achieved, fields of 0.3 megatesla have been produced with high intensity lasers, and fields of 0.02–0.04 megatesla have been observed with the dense plasma focus device.
At much higher densities (ne > ), the electrons will be Fermi degenerate, which suppresses bremsstrahlung losses, both directly and by reducing energy transfer from the ions to the electrons. If necessary conditions can be attained, net energy production from p–11B or D–3He fuel may be possible. The probability of a feasible reactor based solely on this effect remains low, however, because the gain is predicted to be less than 20, while more than 200 is usually considered to be necessary.
Power density
In every published fusion power plant design, the part of the plant that produces the fusion reactions is much more expensive than the part that converts the nuclear power to electricity. In that case, as indeed in most power systems, power density is an important characteristic. Doubling power density at least halves the cost of electricity. In addition, the confinement time required depends on the power density.
It is, however, not trivial to compare the power density produced by different fusion fuel cycles. The case most favorable to p–11B relative to D–T fuel is a (hypothetical) confinement device that only works well at ion temperatures above about 400 keV, in which the reaction rate parameter σv is equal for the two fuels, and that runs with low electron temperature. p–11B does not require as long a confinement time because the energy of its charged products is two and a half times higher than that for D–T. However, relaxing these assumptions, for example by considering hot electrons, by allowing the D–T reaction to run at a lower temperature or by including the energy of the neutrons in the calculation shifts the power density advantage to D–T.
The most common assumption is to compare power densities at the same pressure, choosing the ion temperature for each reaction to maximize power density, and with the electron temperature equal to the ion temperature. Although confinement schemes can be and sometimes are limited by other factors, most well-investigated schemes have some kind of pressure limit. Under these assumptions, the power density for p–11B is about times smaller than that for D–T. Using cold electrons lowers the ratio to about 700. These numbers are another indication that aneutronic fusion power is not possible with mainline confinement concepts.
See also
CNO cycle
Cold fusion
History of nuclear fusion
Notes
References
External links
Focus Fusion Society
Proton-boron Fusion Prototype
Aneutronic fusion in a degenerate plasma
Lasers trigger cleaner fusion (news@nature.com, 26 August 2005)
Observation of neutronless fusion reactions in picosecond laser plasmas (Physical Review E 72, 2005)
New Opportunities for Fusion in the 21st Century – Advanced Fuels , G.L. Kulcinski and J.F.Santarius, 14th Topical Meeting on the Technology of Fusion Energy, Oct 15–19, 2000.
Fusion power
Nuclear fusion reactions | Aneutronic fusion | [
"Physics",
"Chemistry"
] | 5,161 | [
"Nuclear fusion",
"Fusion power",
"Nuclear fusion reactions",
"Plasma physics"
] |
1,631,102 | https://en.wikipedia.org/wiki/Salt%20cellar | A salt cellar (also called a salt, salt-box) is an article of tableware for holding and dispensing salt. In British English, the term can be used for what in North American English are called salt shakers. Salt cellars can be either lidded or open, and are found in a wide range of sizes, from large shared vessels to small individual dishes. Styles range from simple to ornate or whimsical, using materials including glass and ceramic, metals, ivory and wood, and plastic.
Use of salt cellars is documented as early as ancient Rome. They continued to be used through the first half of the 20th century; however, usage began to decline with the introduction of free-flowing salt in 1911, and they have been almost entirely replaced by salt shakers.
Salt cellars were an early collectible as pieces of silver, pewter, glass, etc. Soon after their role at the table was replaced by the shaker, salt cellars became a popular collectible in their own right.
Etymology
The word salt cellar is attested in English from the 15th century. It combines the English word salt with the Anglo-Norman word (from Latin ), which already by itself meant "salt container".
Salt cellars are known, in various forms, by assorted names including open salt, salt dip, standing salt, master salt, and salt dish. A master salt is the large receptacle from which the smaller, distributed, salt dishes are filled; according to fashion or custom it was lidded, or open, or covered with a cloth. A standing salt is a master salt, so-named because it remained in place as opposed to being passed. A trencher salt is a small salt cellar located next to the trencher (i.e., place setting). Open salt and salt dip refer to salt dishes that are uncovered.
The term salt cellar is also used generally to describe any container for table salt, thus encompassing salt shakers and salt pigs.
History
Greek artifacts from the classical period in the shape of small bowls are often called salt cellars. Their function remains uncertain, though they may have been used for condiments including salt. The Romans had the salinum, a receptacle typically of silver and regarded as essential in every household. The salinum had ceremonial importance as the container of the (salt) offering made during the meal, but it was also used to dispense salt to diners.
During the Middle Ages, elaborate master salt cellars evolved. Placed at the head table, this large receptacle was a sign of status and prosperity, prominently displayed. It was usually made of silver and often decorated in motifs of the sea. In addition to the master salt, smaller, simpler salt cellars were distributed for diners to share; these could take forms as simple as slices of stale bread. The social status of guests could be measured by their positions relative to the master's large salt cellar: high-ranking guests sat above the salt while those of lesser importance sat below the salt.
Large, ornate master salts continued to be made through the Renaissance and Baroque periods, becoming more ceremonial. In England, the ornamental master salt came to be called a standing salt, because it was not passed but remained in place. By 1588, reference is documented in England to the "trencher salt"; by the early 18th century, these had mostly supplanted large salts. Tiny salt spoons appear in the 17th century, and in increasing numbers as the use of trencher salts increased.
The advent of the Industrial Revolution in the late 18th to early 19th centuries rendered both salt and salt cellars commonplace. From about 1825 pressed glass manufacture became an industry and thrived; because they were easy to mold, salt cellars were among the earliest items mass-produced by this method. Similarly, the development of Sheffield plate (18th century), then electroplating (19th century), led to mass production of affordable silver-plated wares, including salt cellars.
Salt shakers began to appear in the Victorian era, and patents show attempts to deal with the problem of salt clumping, but they remained the exception rather than the norm. It was not until after 1911, when anti-caking agents began to be added to table salt, that salt shakers gained favor and open salts began to fall into disuse.
Collectibility
Silver, glass, china, pewter, stoneware, and other media used in the creation of tableware are collectible and have most likely been collected for centuries. By extension, salt cellars first became collectible as pieces of silver, glass, etc. Whether because of their commonness (and hence affordability), or the wide variety of them, or because of their slide into anachronism and quaintness, salt cellars themselves became collectible at latest by the 1930s.
Although antique salt cellars are not difficult to find and can be very affordable, modern manufacturers and artisans continue to make salt cellars. Reproductions are common, as are new designs that reflect current tastes.
The Cracow Saltworks Museum in Wieliczka, Poland, has a large collection of salt cellars. It contains over 1000 objects made of: porcelain, gold, silver, glass, wood, bone, quartz and mother-of-pearl. Those artifacts are on display in the Saltworks Castle ( in Polish).
Salt pig
A salt pig is a container used to hold salt, particularly in a kitchen, to make it easy to pinch or spoon-measure salt into dishes. They are available in many materials, but are generally ceramic, porcelain, earthenware or clay. The earthenware construction of a salt pig can help keep the salt from clumping in humid kitchens. According to Mundane Essays, a blog in which writer Muness Alrubaiehis researched the origin of the term "salt pig", the use of "pig" reflects a Scots and northern English dialect word for an earthenware vessel.
See also
Nef
Salt spoon
Salt cellar (origami)
References
External links
Medieval and Renaissance Saltcellars
Open Salt Collectors website
Cracow Saltworks Museum
Serving vessels
Edible salt | Salt cellar | [
"Chemistry"
] | 1,269 | [
"Edible salt",
"Salts"
] |
1,631,144 | https://en.wikipedia.org/wiki/Biolex | Biolex Therapeutics was a biotechnology firm in the Research Triangle of North Carolina which was founded in 1997 and raised $190 million from investors. It filed for Chapter 7 bankruptcy on July 5, 2012.
The company focused on expression of difficult-to-synthesize recombinant proteins in its LEX platform, which used Lemna, a duckweed. The duckweeds are a family of small aquatic plants that can be grown in sterile culture. Biolex developed recombinant DNA technology for efficiently producing pharmaceutical proteins in Lemna. Therapeutic glycosylated proteins, including monoclonal antibodies and interferon (IFN-alpha2b) have been produced using the LEX platform.
Biolex acquired Epicyte Pharmaceutical Inc. on May 6, 2004, and acquired LemnaGene SA of Lyon, France, in 2005. Biolex was a privately held company, originally backed by Quaker BioVentures, The Trelys Funds, and Polaris Venture Partners. The term "plantibody" is trademarked by Biolex. In May 2012 Biolex announced that it sold the LEX System to Synthon, a Netherlands-based specialty pharmaceutical company. The sale included two preclinical biologics made with the LEX System: BLX-301, a humanized and glyco-optimized anti-CD20 antibody for non-Hodgkin's B-cell lymphoma and other B-cell malignancies, and BLX-155, a direct-acting thrombolytic. The financial terms of the sale were not disclosed.
References
External links
Official site
Background information on production of therapeutic proteins in 'Lemna'
Biotechnology companies of the United States
Defunct pharmaceutical companies of the United States
Life sciences industry
Biotechnology companies established in 1997
Biotechnology companies disestablished in 2012
Pharmaceutical companies disestablished in 2012
1997 establishments in North Carolina
2012 disestablishments in North Carolina
American companies established in 1997
American companies disestablished in 2012 | Biolex | [
"Biology"
] | 417 | [
"Life sciences industry"
] |
1,631,288 | https://en.wikipedia.org/wiki/Direct-drive%20mechanism | A direct-drive mechanism is a mechanism design where the force or torque from a prime mover is transmitted directly to the effector device (such as the drive wheels of a vehicle) without involving any intermediate couplings such as a gear train or a belt.
History
In the late 19th century and early 20th century, some of the earliest locomotives and cars used direct drive transmissions at higher speeds. Direct-drive mechanisms for industrial arms began to be possible in the 1980s, with the use of rare-earth magnetic materials. The first direct-drive arm was built in 1981 at Carnegie Mellon University.
Today the most commonly used magnets are neodymium magnets.
Design
Direct-drive systems are characterized by smooth torque transmission, and nearly-zero backlash.
The main benefits of a direct-drive system are increased efficiency (due to reduced power losses from the drivetrain components) and being a simpler design with fewer moving parts. Major benefits also include the ability to deliver high torque over a wide range of speeds, fast response, precise positioning, and low inertia.
The main drawback is that a special type of electric motor is often needed to provide high torque outputs at low rpm. Compared with a multi-speed transmission, the motor is usually operating in its optimal power band for a smaller range of output speeds for the system (e.g., road speeds in the case of a motor vehicle).
Direct-drive mechanisms also need a more precise control mechanism. High-speed motors with speed reduction have relatively high inertia, which helps smooth the output motion. Most motors exhibit positional torque ripple known as cogging torque. In high-speed motors, this effect is usually negligible, as the frequency at which it occurs is too high to significantly affect system performance; direct-drive units will suffer more from this phenomenon unless additional inertia is added (e.g. by a flywheel) or the system uses feedback to actively counter the effect.
Applications
Direct-drive mechanisms are used in applications ranging from low speed operation (such as phonographs, telescope mounts, video game racing wheels and gearless wind turbines) to high speeds (such as fans, computer hard drives, VCR heads, sewing machines, CNC machines and washing machines.)
Some electric railway locomotives have used direct-drive mechanisms, such as the 1919 Milwaukee Road class EP-2 and the 2007 East Japan Railway Company E331. Several cars from the late 19th century used direct-drive wheel hub motors, as did some concept cars in the early 2000s; however, most modern electric cars use inboard motor(s), where drive is transferred to the wheels, via the axles.
Some automobile manufacturers have managed to create their own unique direct-drive transmissions, such as the one Christian von Koenigsegg invented for the Koenigsegg Regera.
See also
Belt-drive
Chain-drive
Direct-drive sim racing wheel
Drive shaft
Hubless wheel
Linear motor
Individual wheel drive
References
Mechanisms (engineering)
Gearless electric drive | Direct-drive mechanism | [
"Engineering"
] | 613 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
1,631,391 | https://en.wikipedia.org/wiki/Common%20hepatic%20duct | The common hepatic duct is the first part of the biliary tract. It joins the cystic duct coming from the gallbladder to form the common bile duct.
Structure
The common hepatic duct is the first part of the biliary tract. It is formed by the union of the right hepatic duct (which drains bile from the right functional lobe of the liver) and the left hepatic duct (which drains bile from the left functional lobe of the liver).
The duct is about 3 cm long. The common hepatic duct is about 6 mm in diameter in adults, with some variation.
Termination
The common hepatic duct typically unites with the cystic duct some 1–2 cm superior to the duodenum and anterior to the right hepatic artery, with the cystic duct approaching the common hepatic duct from the right.
Relations
The right branch of the hepatic artery proper usually passes posterior to the duct, but may rarely pass anterior to it instead.
Histology
The inner surface is covered in a simple columnar epithelium.
Variation
Accessory hepatic ducts
Around 1.7% of people have additional accessory hepatic ducts that open into the common hepatic duct. Accessory hepatic ducts may also instead open into the cystic duct or gallbladder.
Termination
Occasionally, the cystic duct may first run along the right side of the common bile duct for some distance before joining it, or may pass posteriorly around to the common hepatic duct to unite with it from the left side.
Rarely, the common hepatic duct and gallbladder join directly (with the cystic duct being absent), leading to illness.
Function
The hepatic duct is part of the biliary tract that transports secretions from the liver into the intestines.
Clinical significance
Cholecystectomy
The common hepatic duct carries a higher volume of bile in people who have had their gallbladder removed.
The common hepatic duct is an important anatomic landmark during surgeries such as cholecystectomy. It forms one edge of Calot's triangle, along with the cystic duct and the cystic artery. All constituents of this triangle must be identified to avoid cutting or clipping the wrong structure.
Cholestasis
A diameter of more than 8 mm is regarded as abnormal dilatation, and is a sign of cholestasis.
Mirizzi's syndrome
Mirizzi's syndrome occurs when the common hepatic duct is blocked by gallstones.
Additional images
References
External links
- "Stomach, Spleen and Liver: Contents of the Hepatoduodenal Ligament"
Illustration
Digestive system
Hepatology | Common hepatic duct | [
"Biology"
] | 566 | [
"Digestive system",
"Organ systems"
] |
1,631,410 | https://en.wikipedia.org/wiki/Salicylaldoxime | Salicylaldoxime is an organic compound described by the formula C6H4CH=NOH-2-OH. It is the oxime of salicylaldehyde. This crystalline, colorless solid is a chelator and sometimes used in the analysis of samples containing transition metal ions, with which it often forms brightly coloured coordination complexes.
Reactions
Salicylaldoxime is the conjugate acid of a bidentate ligand:
2 C6H4CH=NOH-2-OH + Cu2+ → Cu(C6H4CH=NOH-2-O)2 + 2 H+
In highly acidic media, the ligand protonates, and the metal aquo complex and aldoxime are liberated. In this way the ligand is used as a recyclable extractant.
It typically forms charge-neutral complexes with divalent metal ions.
Analytical chemistry
In the era when metals were analysed by spectrophotometry, many chelating ligands were developed that selectively formed brightly coloured complexes with particular metal ions. This methodology has been eclipsed with the introduction of inductively coupled plasma methodology. Salicylaldoxime can be used to selectively precipitate metal ions for gravimetric determination. It forms a greenish-yellow precipitate with copper at a pH of 2.6 in the presence of acetic acid. Under these conditions, this is the only metal that precipitates; at pH 3.3, nickel also precipitates. Iron (III) will interfere.
It has been used as an ionophore in ion selective electrodes, with good response to Pb2+ and Ni2+.
Extraction of metals
Saloximes are used in the extraction and separation of metals from their ores. In one application of hydrometallurgy, Cu2+ is extracted into organic solvents as its saloxime complex.
References
External links
Chemical data at NIST Chemistry WebBook
2-Hydroxyphenyl compounds
Analytical chemistry
Aldoximes | Salicylaldoxime | [
"Chemistry"
] | 426 | [
"nan"
] |
1,631,459 | https://en.wikipedia.org/wiki/Brown%20Mountain%20lights | The Brown Mountain lights are purported ghost lights near Brown Mountain in North Carolina. The earliest published references to strange lights there are from around 1910, at about the same time electric lighting was becoming widespread in the area. In 1922, a USGS scientist, George R. Mansfield, used a map and an alidade telescope to prove that the lights that were being seen were trains, car headlights, and brush fires, which ended widespread public concern.
With the original sightings of the early 20th century having been explained, storytellers have been creating imaginary pre-electrification histories of the lights ever since, and the nature of claimed encounters with the lights appears to have changed over the years to suit changing cultural expectations.
History
Origin and explanation of the lights
The earliest published mentions of the lights begin in 1912, on the heels of the first publication of Jules Verne's 1906 novel Master of the World in English in 1911. An important plot point in the novel consists of a mad scientist constructing an airship inside his secret lair in Table Rock, near Morganton, North Carolina, activities which cause strange lights to appear on the summit of the mountain. The rapidly expanding electrification of the Linville Gorge area from the 1890s through the 1910s, seems to be the origin of the Brown Mountain lights legend, possibly helped by Verne's novel. A number of travelogues, including accounts of mysterious happenings and ghost stories, were published about the region prior to 1900; but there is no mention of unexplained lights in any of these historical sources. Mansfield's investigation found many locals were unaware of any strange lights until 1910 or later. Joseph Loven, who lived next to Loven's Hotel, said he had first noticed the lights in 1897, but took no interest in them, and didn't hear anyone else talking about them, until his neighbor, C. E. Gregory, began trying to draw public attention to them around 1910. Also, Southern Railway had begun upgrading their locomotive headlamps to 600,000 candlepower systems in 1909, rendering their trains' light output greater than that of some lighthouses that were in operation at the time.
One early account of the lights dates from September 24, 1913, as reported in the Charlotte Daily Observer. It described “mysterious lights seen just above the horizon every night,” red in color, appearing “punctually” at 7:30 PM and again at 10 PM; attributing the information to Anderson Loven, “an old and reliable resident”.
As in Verne's novel, locals asked their Congressmen for a government investigation; in 1913 United States Geological Survey employee, D.B. Sterrett, was dispatched to the area and quickly found that the headlights of westbound Southern Railway locomotives would have been visible from Loven's Hotel, and the train schedules he consulted left him no doubt that these were the cause of the lights that were being reported. In July 1916, a flood caused train activity around Brown Mountain to cease for several weeks, which provided an opportunity for some to doubt Sterrett's conclusions. George Anderson Loven, whose hotel was doing a good business from all the visitors keen to see the light, told the Lenoir News that September that it was still being seen nightly, although it isn't clear whether it was one specific light that he referred to, or many different lights, or possibly even every nighttime light visible from his hotel that he considered mysterious. It was never required that train headlights be the only mystery light source, as car headlights were another likely contributor, but this argument is often repeated today.
With Sterrett's investigation being considered inadequate by locals, the USGS sent Mansfield to investigate in 1922. As part of his investigation, Mansfield set up an alidade telescope near Loven's Hotel, at the former home of C. E. Gregory. Accompanied by members of the Loven family, he recorded a number of nighttime lights, one of which appeared to move and flare in brightness, making Joseph Loven call it a true Brown Mountain light, but which through the telescope proved to be stationary throughout the entire evening, despite repeated azimuth readings being taken. Another series of lights that were seen were found to plot on a map to a curve in a Southern Railway track, and the time of appearance of that light corresponded exactly to that of a scheduled train. At the end of the observing session, Robert Loven said that he didn't believe that the lights they had seen were bright enough to be genuine Brown Mountain lights, but Joseph Loven said that he considered the lights they saw to have been an average display of the phenomenon. After Mansfield's investigation found the lights to be distant car and train headlights, and brush fires, Joseph Loven, who had inherited the hotel from his father and had been one of the main commentators and local experts on the Brown Mountain lights in the newspapers, seemed to disappear from further commentary on the lights in print for the rest of his life. Ed Speer interprets this as a sign that Mansfield's investigation might have solved the mystery for Loven, as it did for most people.
Ever since Mansfield demonstrated that the Brown Mountain lights that were being seen in 1922 were nothing stranger than distant electric lights, local writers have tried to protect the mystery by inventing new tales of the lights' origins; one apparent goal of this storytelling is to try to insinuate the idea that the lights had been seen before trains and electrification. The first publication of a claim that the lights were in any way referenced by a Native American culture was an article in the Asheville Citizen in 1938; it was merely asserted as fact with no sources being provided. Experts on historical Native American traditions state that this is a myth that was invented by white people to justify their own beliefs in the lights. A new variant of an older ghost story about a woman and baby murdered in the Jonas Ridge community became the first published ghost story to incorporate the lights in 1936. Other ghost stories in similar vein were devised through the rest of the 20th century to the present. Newer ghost stories about the lights include one about a Revolutionary War soldier that was first published in 1982; while a story linking the lights to Civil War ghosts was first seen in 2012, on the internet. The UFO movement began to influence the Brown Mountain lore in the mid-20th century. Ralph Lael, who used to display what he claimed was a mummified alien in his rock shop, self published a book detailing his claimed extraterrestrial encounters on Brown Mountain, and his trip with the aliens to their home planet of Venus, in 1965. At some point, a transition took place between the distant lights that were the foundation of the original legend, and the much more recent stories that feature 'close encounters' with glowing spheres that float in the air, stories that are notably absent from the early 20th century reports, even though thousands of loggers were working in the Brown Mountain area then. Currently, the lights draw people interested in the popular 'ghost hunting' hobby; providing them a supposedly haunted location to investigate. The lights, as a cultural phenomenon, evolved over time to suit the desires and changing expectations of the people who participate in that culture, including borrowings from outside.
Fate Wiseman's light
Josiah Lafayette "Fate" Wiseman (1842–1932) was the great uncle of Scotty Wiseman, whose song, The Legend of the Brown Mountain Lights (1961), greatly popularized the Brown Mountain lights, making them the most popular ghost story in North Carolina. His is also the oldest report of a strange light near Brown Mountain, though it wasn't well known at the time, and played no role in founding the legend. According to a tradition that was passed down through the Wiseman family, but that wasn't published until 1971, some time "around 1854" young Fate Wiseman was camping at Wiseman's View with his father when he first noticed a flash of light in the distance. Wiseman found that the same momentary flash would appear at the same place on the horizon at about the same time (varying by at most half an hour) each and every night. He would often return to the place and, at the expected time, he would stare into the distance until he saw a glimpse of the light.
It's unknown how reliable or precise the date of "around 1854" is, because the source, Scotty Wiseman, was remembering things that 79-year-old Fate had told him when he was 13 years old, 50 years prior. However, the description of the light is consistent with a distant train headlight turning a corner, the first train in western North Carolina began nightly service between Morganton and Salisbury in 1858, and trains are known for following consistent and regular schedules, as the light did.
John William Gerard de Brahm
In 1771, military engineer, cartographer, mystic, and "eccentric genius" John William Gerard de Brahm presented a report to King George III, Report of the General Survey in the Southern District of North America, primarily describing the geography of East Florida, with sections on Georgia and South Carolina. An inaccurate claim that de Brahm made a reference to Brown Mountain lights in this report is often repeated. The text in question is quoted here:
Although these Mountains transpire through their Tops sulphurueaous and arsenical Sublimations, yet they are too light, as precipitate so their Sublimitories, but are carried away by the Winds to distant Regions. In a heavy Atmosphere, the nitrous Vapours are swallowed up through the Spiraculs of the Mountains, and thus the Country is cleared from their Corrosion; when the Atmosphere is light, these nitrous Vapours rise up to the arsenical and sulphureous (subliming through the Expiraculs of the Mountains), and when they meet with each other in Contact, the Niter inflames, vulgurates, and detonates, whence the frequent Thunders, in which a most votalized Spirit of Niter ascends to purify inspire the upper Air, and a phlogiston Regeneratum (the metallic Seed) descends to impregnate the Bowels of the Earth; and as all these Mountains form so many warm Athanors which draw and absorb, especially in foggy Seasons, all corrosive Effluvia along with the heavy Air through the Registers (Spiracles) and thus cease not from the Perpetual Circulation of the Air, corroding Vapours are no sooner raised, than that they are immediately disposed of, consequently the Air in the Appalachian Mountains is extreamely pure and healthy.
Clearly, de Brahm is using mystical or alchemical language to speculate about the causes of thunderstorms and clean mountain air, and this has nothing to do with Brown Mountain lights. It is also not known whether de Brahm (who lived in Florida, Georgia, and South Carolina at various times) ever set foot in North Carolina; de Brahm never describes anything in North Carolina in his report, and this passage is found in a chapter about South Carolina. The first attempt to link de Brahm to the Brown Mountain lights was a 1927 article in the Gastonia Daily Gazette, a newspaper which is no longer in print and should not be confused with modern-day newspapers of similar names.
Research
Appalachian State University installed two low light cameras on rooftops that overlook Brown Mountain and Linville Gorge; by 2014 these cameras had produced 6,300 viewing hours worth of data without any unexplainable lights being recorded.
Viewing locations
There are roadside locations for observing the purported lights on the Blue Ridge Parkway at mile posts 310 (Brown Mountain overlook) and 301 (Green Mountain overlook) and from the Brown Mountain Overlook along North Carolina Highway 181 (NC 181), near Jonas Ridge, North Carolina. Additionally, lights have been reported from the top of Table Rock and Wiseman's View, both located in the Linville Gorge Wilderness.
In popular culture
The lights are the inspiration for the bluegrass song “Brown Mountain Lights,” by Scotty Wiseman, later performed by The Hillmen (Vern Gosdin – vocals) and also The Kingston Trio and the Country Gentlemen. In this version, the light is being carried by "a faithful old slave/come back from the grave" who is searching for his lost master. The song was also recorded by the progressive bluegrass band Acoustic Syndicate and performed by Yonder Mountain String Band. This song was also performed and recorded by Sonny James, Tommy Faile, and Tony Rice.
The 1999 episode "Field Trip" of the paranormal drama show The X-Files centered around a mysterious case of missing hikers that were found dead in the vicinity of the Brown Mountains of North Carolina; the show mentions the Brown Mountain lights (the show's main character Fox Mulder believed it was due to UFOs).
It was featured in episodes of Weird or What?, Ancient Aliens, and Mystery Hunters.
It is described as the basis for the 2014 feature film Alien Abduction.
The mountains and the lights are featured in Speaking in Bones (2015) by Kathy Reichs.
See also
Chir Batti
Hessdalen lights
Invented tradition
Longdendale lights
Maco light
Marfa lights
Min Min light
Naga fireball
Paulding Light
References
Sources
Jerome Clark, Unexplained! 347 Strange Sightings, Incredible Occurrences, and Puzzling Physical Phenomena, Visible Ink Press, 1993.
External links
Website by faculty and students at Appalachian State
Weather lore
North Carolina culture
Reportedly haunted locations in North Carolina
Tourist attractions in Burke County, North Carolina | Brown Mountain lights | [
"Physics"
] | 2,778 | [
"Weather",
"Physical phenomena",
"Weather lore"
] |
1,631,617 | https://en.wikipedia.org/wiki/Zamiaceae | The Zamiaceae are a family of cycads that are superficially palm or fern-like. They are divided into two subfamilies with eight genera and about 150 species in the tropical and subtropical regions of Africa, Australia and North and South America.
The Zamiaceae, sometimes known as zamiads, are perennial, evergreen, and dioecious. They have subterranean to tall and erect, usually unbranched, cylindrical stems, and stems clad with persistent leaf bases (in Australian genera).
Their leaves are simply pinnate, spirally arranged, and interspersed with cataphylls. The leaflets are sometimes dichotomously divided. The leaflets occur with several sub-parallel, dichotomously branching longitudinal veins; they lack a midrib. Stomata occur either on both surfaces or on the undersurface only.
Their roots have small secondary roots. The coralloid roots develop at the base of the stem at or below the soil surface.
Male and female sporophylls are spirally aggregated into determinate cones that grow along the axis. Female sporophylls are simple, appearing peltate, with a barren stipe and an expanded and thickened lamina with 2 (rarely 3 or more) sessile ovules inserted on the inner (axis facing) surface and directed inward. The seeds are angular, with the inner coat hardened and the outer coat fleshy. They are often brightly colored, with 2 cotyledons.
One subfamily, the Encephalartoideae, is characterized by spirally arranged sporophylls (rather than spirally orthostichous), non-articulate leaflets and persistent leaf bases. It is represented in Australia, with two genera and 40 species.
As with all cycads, members of the Zamiaceae are poisonous, producing poisonous glycosides known as cycasins.
The former family Stangeriaceae (which contained Bowenia and Stangeria) has been shown to be nested within Zamiaceae by phylogenetic analysis.
The family first began to diversify during the Cretaceous period.
Genera
Dioon (14 species)
Macrozamia (42 species)
Lepidozamia (2 species)
Encephalartos (66 species)
Bowenia Hook. ex Hook.f. (2 extant species)
Ceratozamia (27 species)
Stangeria T.Moore (1 species)
Zamia (76 species)
Microcycas (1 species)
†Eostangeria (3 species, Cenozoic, Europe, North America)
†Eobowenia (1 species, Early Cretaceous, Argentina)
†Wintucycas (2 species, Late Cretaceous-Paleocene, Argentina)
†Restrepophyllum (1 species, Early Cretaceous, Argentina)
†Skyttegaardia (2 species Early Cretaceous, Denmark, Late Cretaceous, United States)
Gallery
References
The Cycad Pages: Zamiaceae
Flora of North America
New York Botanical Garden: Vascular Plant Type Catalog, some Zamiaceae genera and species.
Cycads
Plant families | Zamiaceae | [
"Biology"
] | 627 | [
"Plant families",
"Plants"
] |
1,631,654 | https://en.wikipedia.org/wiki/List%20of%20mathematical%20identities | This article lists mathematical identities, that is, identically true relations holding in mathematics.
Bézout's identity (despite its usual name, it is not, properly speaking, an identity)
Binet–Cauchy identity
Binomial inverse theorem
Binomial identity
Brahmagupta–Fibonacci two-square identity
Candido's identity
Cassini and Catalan identities
Degen's eight-square identity
Difference of two squares
Euler's four-square identity
Euler's identity
Fibonacci's identity see Brahmagupta–Fibonacci identity or Cassini and Catalan identities
Heine's identity
Hermite's identity
Lagrange's identity
Lagrange's trigonometric identities
List of logarithmic identities
MacWilliams identity
Matrix determinant lemma
Newton's identity
Parseval's identity
Pfister's sixteen-square identity
Sherman–Morrison formula
Sophie Germain identity
Sun's curious identity
Sylvester's determinant identity
Vandermonde's identity
Woodbury matrix identity
Identities for classes of functions
Exterior calculus identities
Fibonacci identities: Combinatorial Fibonacci identities and Other Fibonacci identities
Hypergeometric function identities
List of integrals of logarithmic functions
List of topics related to
List of trigonometric identities
Inverse trigonometric functions
Logarithmic identities
Summation identities
Vector calculus identities
See also
External links
A Collection of Algebraic Identities
Matrix Identities
Identities | List of mathematical identities | [
"Mathematics"
] | 304 | [
"Mathematical theorems",
"Mathematical identities",
"Mathematical problems",
"Algebra"
] |
1,631,732 | https://en.wikipedia.org/wiki/Ian%20Wilmut | Sir Ian Wilmut (7 July 1944 – 10 September 2023) was a British embryologist and the chair of the Scottish Centre for Regenerative Medicine at the University of Edinburgh. He is best known as the leader of the research group that in 1996 first cloned a mammal from an adult somatic cell, a Finnish Dorset lamb named Dolly.
Wilmut was appointed OBE in 1999 for services to embryo development and knighted in the 2008 New Year Honours. He, Keith Campbell and Shinya Yamanaka jointly received the 2008 Shaw Prize for Medicine and Life Sciences for their work on cell differentiation in mammals.
Early life and education
Wilmut was born in Hampton Lucy, Warwickshire, England, on 7 July 1944. Wilmut's father, Leonard Wilmut, was a mathematics teacher who suffered from diabetes for fifty years, which eventually caused him to become blind. The younger Wilmut attended the Boys' High School in Scarborough, where his father taught. His early desire was to embark on a naval career, but he was unable to do so due to his colour blindness. As a schoolboy, Wilmut worked as a farm hand on weekends, which inspired him to study Agriculture at the University of Nottingham.
In 1966, Wilmut spent eight weeks working in the laboratory of Christopher Polge, who is credited with developing the technique of cryopreservation in 1949. The following year Wilmut joined Polge's laboratory to undertake a Doctor of Philosophy degree at the University of Cambridge, from where he graduated in 1971 with a thesis on semen cryopreservation. During this time he was a postgraduate student at Darwin College.
Career and research
After completing his PhD, he was involved in research focusing on gametes and embryogenesis, including working at the Roslin Institute.
Wilmut was the leader of the research group that in 1996 first cloned a mammal, a lamb named Dolly. She died of a respiratory disease in 2003. In 2008 Wilmut announced that he would abandon the technique of somatic cell nuclear transfer by which Dolly was created in favour of an alternative technique developed by Shinya Yamanaka. This method has been used in mice to derive pluripotent stem cells from differentiated adult skin cells, thus circumventing the need to generate embryonic stem cells. Wilmut believed that this method holds greater potential for the treatment of degenerative conditions such as Parkinson's disease and to treat stroke and heart attack patients.
Wilmut led the team that created Dolly, but in 2006 admitted his colleague Keith Campbell deserved "66 per cent" of the invention that made Dolly's birth possible, and that the statement "I did not create Dolly" was accurate. His supervisory role is consistent with the post of principal investigator held by Wilmut at the time of Dolly's creation.
Wilmut was an Emeritus Professor at the Scottish Centre for Regenerative Medicine at the University of Edinburgh and in 2008 was knighted in the New Year Honours for services to science.
Wilmut and Campbell, in conjunction with Colin Tudge, published The Second Creation in 2000.
In 2006 Wilmut's book After Dolly: The Uses and Misuses of Human Cloning was published, co-authored with Roger Highfield.
Death
Wilmut died from complications of Parkinson's disease on 10 September 2023, aged 79.
Awards and honours
In 1998 he received the Lord Lloyd of Kilgerran Award and the Golden Plate Award of the American Academy of Achievement.
Wilmut was appointed Officer of the Order of the British Empire (OBE) in the 1999 Birthday Honours "for services to Embryo Development" and a Fellow of the Royal Society (FRS) in 2002. He was also an elected Fellow of the Academy of Medical Sciences in 1999 and Fellow of the Royal Society of Edinburgh in 2000. He was elected an EMBO Member in 2003.
In 1997 Wilmut was Time magazine man of the year runner up. He was knighted in the 2008 New Year Honours for services to science.
Publications
References
External links
1944 births
2023 deaths
People from Warwickshire
Alumni of the University of Nottingham
Cloning
Members of the European Molecular Biology Organization
English atheists
English inventors
English geneticists
Academics of the University of Edinburgh
Alumni of Darwin College, Cambridge
Fellows of the Royal Society
Fellows of the Academy of Medical Sciences (United Kingdom)
Fellows of the Royal Society of Edinburgh
Knights Bachelor
Officers of the Order of the British Empire
Foreign associates of the National Academy of Sciences
British embryologists
People educated at Scarborough High School for Boys
Deaths from Parkinson's disease | Ian Wilmut | [
"Engineering",
"Biology"
] | 906 | [
"Cloning",
"Genetic engineering"
] |
1,631,772 | https://en.wikipedia.org/wiki/Topological%20graph%20theory | In mathematics, topological graph theory is a branch of graph theory. It studies the embedding of graphs in surfaces, spatial embeddings of graphs, and graphs as topological spaces. It also studies immersions of graphs.
Embedding a graph in a surface means that we want to draw the graph on a surface, a sphere for example, without two edges intersecting. A basic embedding problem often presented as a mathematical puzzle is the three utilities problem. Other applications can be found in printing electronic circuits where the aim is to print (embed) a circuit (the graph) on a circuit board (the surface) without two connections crossing each other and resulting in a short circuit.
Graphs as topological spaces
To an undirected graph we may associate an abstract simplicial complex C with a single-element set per vertex and a two-element set per edge. The geometric realization |C| of the complex consists of a copy of the unit interval [0,1] per edge, with the endpoints of these intervals glued together at vertices. In this view, embeddings of graphs into a surface or as subdivisions of other graphs are both instances of topological embedding, homeomorphism of graphs is just the specialization of topological homeomorphism, the notion of a connected graph coincides with topological connectedness, and a connected graph is a tree if and only if its fundamental group is trivial.
Other simplicial complexes associated with graphs include the Whitney complex or clique complex, with a set per clique of the graph, and the matching complex, with a set per matching of the graph (equivalently, the clique complex of the complement of the line graph). The matching complex of a complete bipartite graph is called a chessboard complex, as it can be also described as the complex of sets of nonattacking rooks on a chessboard.
Example studies
John Hopcroft and Robert Tarjan derived a means of testing the planarity of a graph in time linear to the number of edges. Their algorithm does this by constructing a graph embedding which they term a "palm tree". Efficient planarity testing is fundamental to graph drawing.
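For readers who want to experiment, planarity testing of this kind is available in common graph libraries. The following minimal Python sketch uses the networkx package (an illustrative choice, not one mentioned in this article); networkx implements a linear-time left–right planarity test rather than Hopcroft and Tarjan's original algorithm, but the asymptotic behaviour is the same.

import networkx as nx

# K4 is planar; K5 is the smallest non-planar complete graph.
k4 = nx.complete_graph(4)
k5 = nx.complete_graph(5)

is_planar, embedding = nx.check_planarity(k4)
print(is_planar)   # True; 'embedding' is a combinatorial (rotation-system) embedding

is_planar, _ = nx.check_planarity(k5)
print(is_planar)   # False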
Fan Chung et al. studied the problem of embedding a graph into a book with the graph's vertices in a line along the spine of the book. Its edges are drawn on separate pages in such a way that edges residing on the same page do not cross. This problem abstracts layout problems arising in the routing of multilayer printed circuit boards.
Graph embeddings are also used to prove structural results about graphs, via graph minor theory and the graph structure theorem.
See also
Crossing number (graph theory)
Genus
Planar graph
Real tree
Toroidal graph
Topological combinatorics
Voltage graph
Notes | Topological graph theory | [
"Mathematics"
] | 570 | [
"Topology",
"Topological graph theory",
"Mathematical relations",
"Graph theory"
] |
1,631,878 | https://en.wikipedia.org/wiki/Write%20%28Unix%29 | In Unix and Unix-like operating systems, is a utility used to send messages to another user by writing a message directly to another user's TTY.
History
The write command was included in the First Edition of the Research Unix operating system. A similar command appeared in Compatible Time-Sharing System.
Sample usage
The syntax for the write command is:
$ write user [tty]
message
The write session is terminated by sending EOF, which can be done by pressing Ctrl+D. The tty argument is only necessary when a user is logged into more than one terminal.
A conversation initiated between two users on the same machine:
$ write root pts/7
test
Will show up to the user on that console as:
Message from root@wiki on pts/8 at 11:19 ...
test
See also
List of Unix commands
talk (Unix)
wall (Unix)
References
Unix user management and support-related utilities
Standard Unix programs
Unix SUS2008 utilities | Write (Unix) | [
"Technology"
] | 198 | [
"Computing commands",
"Standard Unix programs"
] |
1,631,889 | https://en.wikipedia.org/wiki/Equivalent%20weight | In chemistry, equivalent weight (also known as gram equivalent or equivalent mass) is the mass of one equivalent, that is the mass of a given substance which will combine with or displace a fixed quantity of another substance. The equivalent weight of an element is the mass which combines with or displaces 1.008 gram of hydrogen or 8.0 grams of oxygen or 35.5 grams of chlorine.
The equivalent weight of an element is the mass of a mole of the element divided by the element's valence. That is, in grams, the atomic weight of the element divided by the usual valence. For example, the equivalent weight of oxygen is 16.0/2 = 8.0 grams.
For acid–base reactions, the equivalent weight of an acid or base is the mass which supplies or reacts with one mole of hydrogen cations (). For redox reactions, the equivalent weight of each reactant supplies or reacts with one mole of electrons (e−) in a redox reaction.
Equivalent weight has the units of mass, unlike atomic weight, which is now used as a synonym for relative atomic mass and is dimensionless. Equivalent weights were originally determined by experiment, but (insofar as they are still used) are now derived from molar masses. The equivalent weight of a compound can also be calculated by dividing the molecular mass by the number of positive or negative electrical charges that result from the dissolution of the compound.
In history
The first equivalent weights were published for acids and bases by Carl Friedrich Wenzel in 1777. A larger set of tables was prepared, possibly independently, by Jeremias Benjamin Richter, starting in 1792. However, neither Wenzel nor Richter had a single reference point for their tables, and so had to publish separate tables for each pair of acid and base.
John Dalton's first table of atomic weights (1808) suggested a reference point, at least for the elements: taking the equivalent weight of hydrogen to be one unit of mass. However, Dalton's atomic theory was far from universally accepted in the early 19th century. One of the greatest problems was the reaction of hydrogen with oxygen to produce water. One gram of hydrogen reacts with eight grams of oxygen to produce nine grams of water, so the equivalent weight of oxygen was defined as eight grams. Since Dalton supposed (incorrectly) that a water molecule consisted of one hydrogen and one oxygen atom, this would imply an atomic weight of oxygen equal to eight. However, expressing the reaction in terms of gas volumes following Gay-Lussac's law of combining gas volumes, two volumes of hydrogen react with one volume of oxygen to produce two volumes of water, suggesting (correctly) that the atomic weight of oxygen is sixteen. The work of Charles Frédéric Gerhardt (1816–56), Henri Victor Regnault (1810–78) and Stanislao Cannizzaro (1826–1910) helped to rationalise this and many similar paradoxes, but the problem was still the subject of debate at the Karlsruhe Congress (1860).
Nevertheless, many chemists found equivalent weights to be a useful tool even if they did not subscribe to atomic theory. Equivalent weights were a useful generalisation of Joseph Proust's law of definite proportions (1794) which enabled chemistry to become a quantitative science. French chemist Jean-Baptiste Dumas (1800–84) became one of the more influential opponents of atomic theory, after having embraced it earlier in his career, but was a staunch supporter of equivalent weights.
Equivalent weights were not without problems of their own. For a start, the scale based on hydrogen was not particularly practical, as most elements do not react directly with hydrogen to form simple compounds. However, one gram of hydrogen reacts with 8 grams of oxygen to give water or with 35.5 grams of chlorine to give hydrogen chloride: hence 8 grams of oxygen and 35.5 grams of chlorine can be taken to be equivalent to one gram of hydrogen for the measurement of equivalent weights. This system can be extended further through different acids and bases.
Much more serious was the problem of elements which form more than one oxide or series of salts, which have (in today's terminology) different oxidation states. Copper will react with oxygen to form either brick red cuprous oxide (copper(I) oxide, with 63.5 g of copper for 8 g of oxygen) or black cupric oxide (copper(II) oxide, with 32.7 g of copper for 8 g of oxygen), and so has two equivalent weights. Supporters of atomic weights could turn to the Dulong–Petit law (1819), which relates the atomic weight of a solid element to its specific heat capacity, to arrive at a unique and unambiguous set of atomic weights. Most supporters of equivalent weights - which included the great majority of chemists prior to 1860 — simply ignored the inconvenient fact that most elements exhibited multiple equivalent weights. Instead, these chemists had settled on a list of what were universally called "equivalents" (H = 1, O = 8, C = 6, S = 16, Cl = 35.5, Na = 23, Ca = 20, and so on). However, these nineteenth-century "equivalents" were not equivalents in the original or modern sense of the term. Since they represented dimensionless numbers that for any given element were unique and unchanging, they were in fact simply an alternative set of atomic weights, in which the elements of even valence have atomic weights one-half of the modern values. This fact was not recognized until much later.
The final death blow for the use of equivalent weights for the elements was Dmitri Mendeleev's presentation of his periodic table in 1869, in which he related the chemical properties of the elements to the approximate order of their atomic weights. However, equivalent weights continued to be used for many compounds for another hundred years, particularly in analytical chemistry. Equivalent weights of common reagents could be tabulated, simplifying analytical calculations in the days before the widespread availability of electronic calculators: such tables were commonplace in textbooks of analytical chemistry.
Use in general chemistry
The use of equivalent weights in general chemistry has largely been superseded by the use of molar masses. Equivalent weights may be calculated from molar masses if the chemistry of the substance is well known:
sulfuric acid has a molar mass of 98.078(5) g/mol, and supplies two moles of hydrogen ions per mole of sulfuric acid, so its equivalent weight is 98.078(5)/2 = 49.039(3) g.
potassium permanganate has a molar mass of 158.034(1) g/mol, and reacts with five moles of electrons per mole of potassium permanganate, so its equivalent weight is 158.034(1)/5 = 31.6068(3) g.
Historically, the equivalent weights of the elements were often determined by studying their reactions with oxygen. For example, 50 g of zinc will react with oxygen to produce 62.24 g of zinc oxide, implying that the zinc has reacted with 12.24 g of oxygen (from the Law of conservation of mass): the equivalent weight of zinc is the mass which will react with eight grams of oxygen, hence 50 g × 8 g/12.24 g = 32.7 g.
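The arithmetic in the examples above is simple enough to script. The following Python sketch (illustrative only; the function name is not standard terminology) reproduces the equivalent weights quoted in this section.

def equivalent_weight(molar_mass, equivalents_per_mole):
    # equivalent weight = molar mass divided by the number of equivalents
    # (moles of hydrogen ions, electrons, etc.) supplied per mole of substance
    return molar_mass / equivalents_per_mole

print(equivalent_weight(98.078, 2))    # sulfuric acid: about 49.04 g
print(equivalent_weight(158.034, 5))   # potassium permanganate: about 31.61 g

# Historical determination from reaction with oxygen: the mass of an element
# that combines with 8 g of oxygen.
mass_zinc = 50.0                         # g of zinc reacted
mass_oxide = 62.24                       # g of zinc oxide produced
mass_oxygen = mass_oxide - mass_zinc     # 12.24 g of oxygen consumed
print(mass_zinc * 8.0 / mass_oxygen)     # about 32.7 g, the equivalent weight of zinc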
Some contemporary general chemistry textbooks make no mention of equivalent weights. Others explain the topic, but point out that it is merely an alternate method of doing calculations using moles.
Use in volumetric analysis
When choosing primary standards in analytical chemistry, compounds with higher equivalent weights are generally more desirable because weighing errors are reduced. An example is the volumetric standardisation of a solution of sodium hydroxide which has been prepared to approximately 0.1 mol/L. It is necessary to calculate the mass of a solid acid which will react with about 20 cm3 of this solution (for a titration using a 25 cm3 burette): suitable solid acids include oxalic acid dihydrate, potassium hydrogen phthalate and potassium hydrogen iodate. The equivalent weights of the three acids are 63.04 g, 204.23 g and 389.92 g respectively, and the masses required for the standardisation are 126.1 mg, 408.5 mg and 779.8 mg respectively. Given that the measurement uncertainty in the mass measured on a standard analytical balance is ±0.1 mg, the relative uncertainty in the mass of oxalic acid dihydrate would be about one part in a thousand, similar to the measurement uncertainty in the volume measurement in the titration. However the measurement uncertainty in the mass of potassium hydrogen iodate would be five times lower, because its equivalent weight is five times higher: such an uncertainty in the measured mass is negligible in comparison to the uncertainty in the volume measured during the titration (see example below).
As an example, assume that 22.45±0.03 cm3 of the sodium hydroxide solution reacts with 781.4±0.1 mg of potassium hydrogen iodate. As the equivalent weight of potassium hydrogen iodate is 389.92 g, the measured mass is 2.004 milliequivalents. The concentration of the sodium hydroxide solution is therefore 2.004 meq/0.02245 L = 89.3 meq/L. In analytical chemistry, a solution of any substance which contains one equivalent per litre is known as a normal solution (abbreviated N), so the example sodium hydroxide solution would be 0.0893 N. The relative uncertainty (ur) in the measured concentration can be estimated by assuming a Gaussian distribution of the measurement uncertainties: ur(c) = √[(0.1/781.4)² + (0.03/22.45)²] ≈ 1.3 × 10⁻³, or about 0.13%.
This sodium hydroxide solution can be used to measure the equivalent weight of an unknown acid. For example, if it takes 13.20±0.03 cm3 of the sodium hydroxide solution to neutralise 61.3±0.1 mg of an unknown acid, the equivalent weight of the acid is: 61.3 mg / (0.0893 meq/cm3 × 13.20 cm3) = 52.0±0.1 g.
Because each mole of acid can only release an integer number of moles of hydrogen ions, the molar mass of the unknown acid must be an integer multiple of 52.0±0.1 g.
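The standardisation and the unknown-acid calculation above can be reproduced with a few lines of Python; the snippet below is a sketch using the values quoted in this section, with the relative uncertainties combined in quadrature (the usual assumption for independent Gaussian errors). Variable names are illustrative.

import math

# Standardisation of the sodium hydroxide solution against potassium hydrogen iodate
m_std, u_m = 781.4, 0.1          # mass of KH(IO3)2 and its uncertainty, in mg
eq_wt_std = 389.92               # equivalent weight of KH(IO3)2, in g (i.e. mg per meq)
v_titr, u_v = 22.45, 0.03        # titration volume and its uncertainty, in cm3

meq = m_std / eq_wt_std                          # about 2.004 milliequivalents
conc = meq / (v_titr / 1000.0)                   # about 89.3 meq/L, i.e. 0.0893 N
u_rel = math.hypot(u_m / m_std, u_v / v_titr)    # relative uncertainty, about 0.13%
print(conc, conc * u_rel)

# Equivalent weight of the unknown acid titrated with the standardised solution
m_acid, v_used = 61.3, 13.20                     # mg of acid, cm3 of NaOH solution used
print(m_acid / (conc / 1000.0 * v_used))         # about 52.0 g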
Use in gravimetric analysis
The term “equivalent weight” had a distinct meaning in gravimetric analysis: it meant the mass of precipitate produced from one gram of analyte (the species of interest). The different definitions came from the practice of quoting gravimetric results as mass fractions of the analyte, often expressed as a percentage. A related term was the equivalence factor, one gram divided by equivalent weight, which was the numerical factor by which the mass of precipitate had to be multiplied to obtain the mass of analyte.
For example, in the gravimetric determination of nickel, the molar mass of the precipitate bis(dimethylglyoximate)nickel [Ni(dmgH)2] is 288.915(7) g/mol, while the molar mass of nickel is 58.6934(2) g/mol: hence 288.915(7)/58.6934(2) = 4.9224(1) grams of [Ni(dmgH)2] precipitate is equivalent to one gram of nickel and the equivalence factor is 0.203151(5). For example, 215.3±0.1 mg of [Ni(dmgH)2] precipitate is equivalent to (215.3±0.1 mg) × 0.203151(5) = 43.74±0.02 mg of nickel: if the original sample size was 5.346±0.001 g, the nickel content in the original sample would be 0.8182±0.0004%.
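A short Python sketch of the same bookkeeping, using the figures quoted above (variable names are illustrative):

M_precipitate = 288.915      # molar mass of Ni(dmgH)2 in g/mol
M_analyte = 58.6934          # molar mass of nickel in g/mol

equivalence_factor = M_analyte / M_precipitate   # about 0.203151
m_precipitate = 215.3                            # mg of precipitate collected
m_nickel = m_precipitate * equivalence_factor    # about 43.74 mg of nickel

sample_mass_mg = 5.346 * 1000.0                  # original sample, converted to mg
print(100.0 * m_nickel / sample_mass_mg)         # about 0.818 % nickel by mass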
Gravimetric analysis is one of the most precise of the common methods of chemical analysis, but it is time-consuming and labour-intensive. It has been largely superseded by other techniques such as atomic absorption spectroscopy, in which the mass of analyte is read off from a calibration curve.
Use in polymer chemistry
In polymer chemistry, the equivalent weight of a reactive polymer is the mass of polymer which has one equivalent of reactivity (often, the mass of polymer which corresponds to one mole of reactive side-chain groups). It is widely used to indicate the reactivity of polyol, isocyanate, or epoxy thermoset resins which would undergo crosslinking reactions through those functional groups.
It is particularly important for ion-exchange polymers (also called ion-exchange resins): one equivalent of an ion-exchange polymer will exchange one mole of singly charged ions, but only half a mole of doubly charged ions.
Nevertheless, given the decline in use of the term "equivalent weight" in the rest of chemistry, it has become more usual to express the reactivity of a polymer as the inverse of the equivalent weight, that is in units of mmol/g or meq/g.
References
Stoichiometry
Amount of substance
Polymer chemistry
Equivalent units
Chemistry
es:Peso equivalente | Equivalent weight | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,702 | [
"Scalar physical quantities",
"Chemical reaction engineering",
"Stoichiometry",
"Equivalent quantities",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Materials science",
"Amount of substance",
"nan",
"Polymer chemistry",
"Wikipedia categories named after physical quantities"
] |
1,631,920 | https://en.wikipedia.org/wiki/Surface%20bundle | In mathematics, a surface bundle is a bundle in which the fiber is a surface. When the base space is a circle the total space is three-dimensional and is often called a surface bundle over the circle.
See also
Mapping torus
Geometric topology | Surface bundle | [
"Mathematics"
] | 51 | [
"Topology stubs",
"Topology",
"Geometric topology"
] |
1,631,931 | https://en.wikipedia.org/wiki/Seifert%20fiber%20space | A Seifert fiber space is a 3-manifold together with a decomposition as a disjoint union of circles. In other words, it is a -bundle (circle bundle) over a 2-dimensional orbifold. Many 3-manifolds are Seifert fiber spaces, and they account for all compact oriented manifolds in 6 of the 8 Thurston geometries of the geometrization conjecture.
Definition
A Seifert manifold is a closed 3-manifold together with a decomposition into a disjoint union of circles (called fibers) such that each fiber has a tubular neighborhood that forms a standard fibered torus.
A standard fibered torus corresponding to a pair of coprime integers (a, b) with a > 0 is the surface bundle of the automorphism of a disk given by rotation by an angle of 2πb/a (with the natural fibering by circles). If a = 1 the middle fiber is called ordinary, while if a > 1 the middle fiber is called exceptional. A compact Seifert fiber space has only a finite number of exceptional fibers.
The set of fibers forms a 2-dimensional orbifold, denoted by B and called the base —also called the orbit surface— of the fibration.
It has an underlying 2-dimensional surface, but may have some special orbifold points corresponding to the exceptional fibers.
The definition of Seifert fibration can be generalized in several ways.
The Seifert manifold is often allowed to have a boundary (also fibered by circles, so it is a union of tori). When studying non-orientable manifolds, it is sometimes useful to allow fibers to have neighborhoods that look like the surface bundle of a reflection (rather than a rotation) of a disk, so that some fibers have neighborhoods looking like fibered Klein bottles, in which case there may be one-parameter families of exceptional curves. In both of these cases, the base B of the fibration usually has a non-empty boundary.
Classification
Herbert Seifert classified all closed Seifert fibrations in terms of the following invariants. Seifert manifolds are denoted by symbols
{b; (ε, g); (a1, b1), ..., (ar, br)}
where:
ε is one of the 6 symbols: o1, o2, n1, n2, n3, n4 (or Oo, No, NnI, On, NnII, NnIII in Seifert's original notation) meaning:
o1 if B is orientable and M is orientable.
o2 if B is orientable and M is not orientable.
n1 if B is not orientable and M is not orientable and all generators of π1(B) preserve orientation of the fiber.
n2 if B is not orientable and M is orientable, so all generators of π1(B) reverse orientation of the fiber.
n3 if B is not orientable and M is not orientable and g ≥ 2 and exactly one generator of π1(B) preserves orientation of the fiber.
n4 if B is not orientable and M is not orientable and g ≥ 3 and exactly two generators of π1(B) preserve orientation of the fiber.
Here
g is the genus of the underlying 2-manifold of the orbit surface.
b is an integer, normalized to be 0 or 1 if M is not orientable and normalized to be 0 if in addition some ai is 2.
(a1, b1), ..., (ar, br) are the pairs of numbers determining the type of each of the r exceptional orbits. They are normalized so that 0 < bi < ai when M is orientable, and 0 < bi ≤ ai/2 when M is not orientable.
The Seifert fibration of the symbol
can be constructed from that of symbol
by using surgery to add fibers of types b and .
If we drop the normalization conditions then the symbol can be changed as follows:
Changing the sign of both ai and bi has no effect.
Adding 1 to b and subtracting ai from bi has no effect. (In other words, we can add integers to each of the rational numbers bi/ai provided that their sum b + Σbi/ai remains constant.)
If the manifold is not orientable, changing the sign of bi has no effect.
Adding a fiber of type (1,0) has no effect. Every symbol is equivalent under these operations to a unique normalized symbol. When working with unnormalized symbols, the integer b can be set to zero by adding a fiber of type (1, b).
Two closed Seifert oriented or non-orientable fibrations are isomorphic as oriented or non-orientable fibrations if and only if they have the same normalized symbol. However, it is sometimes possible for two Seifert manifolds to be homeomorphic even if they have different normalized symbols, because a few manifolds (such as lens spaces) can have more than one sort of Seifert fibration. Also an oriented fibration under a change of orientation becomes the Seifert fibration whose symbol has the sign of all the bs changed, which after normalization
gives it the symbol
and it is homeomorphic to this as an unoriented manifold.
The sum b + Σbi/ai is an invariant of oriented fibrations, which is zero if and only if the fibration becomes trivial after taking a finite cover of B.
The orbifold Euler characteristic of the orbifold B is given by
χorb(B) = χ(B0) − Σi (1 − 1/ai),
where χ(B0) is the usual Euler characteristic of the underlying topological surface B0 of the orbifold B. The behavior of M depends largely on the sign of the orbifold Euler characteristic of B.
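As a rough illustration, both numerical invariants mentioned above (the orbifold Euler characteristic of B and the sum b + Σbi/ai) can be computed directly from a normalized symbol. The Python sketch below uses an ad hoc representation of the symbol as a leading integer b, the Euler characteristic of the underlying surface, and a list of (ai, bi) pairs; the function names are illustrative, not standard notation.

from fractions import Fraction

def orbifold_euler_characteristic(chi_surface, exceptional):
    # chi_surface: Euler characteristic of the underlying surface of the base orbifold B
    # exceptional: list of (ai, bi) pairs for the exceptional fibers
    return chi_surface - sum(Fraction(1) - Fraction(1, a) for a, _ in exceptional)

def euler_number(b, exceptional):
    # the invariant b + sum(bi/ai); it vanishes exactly when the fibration
    # becomes trivial after passing to a finite cover of B
    return b + sum(Fraction(bi, ai) for ai, bi in exceptional)

# Example: the Poincaré homology sphere {-1; (o1, 0); (2, 1), (3, 1), (5, 1)}
fibers = [(2, 1), (3, 1), (5, 1)]
print(orbifold_euler_characteristic(2, fibers))   # 1/30 > 0, consistent with spherical geometry
print(euler_number(-1, fibers))                   # 1/30, non-zero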
Fundamental group
The fundamental group of M fits into the exact sequence
π1(S1) → π1(M) → π1orb(B) → 1
where π1orb(B) is the orbifold fundamental group of B (which is not the same as the fundamental group of the underlying topological manifold). The image of the group π1(S1) is cyclic, normal, and generated by the element h represented by any regular fiber, but the map from π1(S1) to π1(M) is not always injective.
The fundamental group of M has the following presentation by generators and relations:
B orientable:
where ε is 1 for type o1, and is −1 for type o2.
B non-orientable:
where εi is 1 or −1 depending on whether the corresponding generator vi preserves or reverses orientation of the fiber.
(So εi are all 1 for type n1, all −1
for type n2, just the first one is one
for type n3,
and
just the first two are one for type n4.)
Positive orbifold Euler characteristic
The normalized symbols of Seifert fibrations with positive orbifold Euler characteristic are given in the list below. These Seifert manifolds often have many different Seifert fibrations. They have a spherical Thurston geometry if the fundamental group is finite, and an S2×R Thurston geometry if the fundamental group is infinite. Equivalently, the geometry is S2×R if the manifold is non-orientable or if b + Σbi/ai= 0, and spherical geometry otherwise.
{b; (o1, 0);} (b integral)
is S2×S1 for b=0, otherwise a lens space L(b,1). In particular, {1; (o1, 0);} =L(1,1) is the 3-sphere.
{b; (o1, 0);(a1, b1)} (b integral) is the lens space L(ba1+b1,a1).
{b; (o1, 0);(a1, b1), (a2, b2)} (b integral)
is S2×S1 if ba1a2+a1b2+a2b1 = 0, otherwise the lens space L(ba1a2+a1b2+a2b1, ma2+nb2) where ma1 − n(ba1 +b1) = 1.
{b; (o1, 0);(2, 1), (2, 1), (a3, b3)} (b integral)
This is the prism manifold with fundamental group of order 4a3|(b+1)a3+b3|
and first homology group of order 4|(b+1)a3+b3|.
{b; (o1, 0);(2, 1), (3, b2), (3, b3)} (b integral)
The fundamental group is a central extension of the tetrahedral group of order 12 by a cyclic group.
{b; (o1, 0);(2, 1), (3, b2), (4, b3)} (b integral)
The fundamental group is the product of a cyclic group of order |12b+6+4b2 + 3b3| and a double cover of order 48 of the octahedral group of order 24.
{b; (o1, 0);(2, 1), (3, b2), (5, b3)} (b integral)
The fundamental group is the product of a cyclic group of order m=|30b+15+10b2 +6b3| and the order 120 perfect double cover of the icosahedral group. The manifolds are
quotients of the Poincaré homology sphere by cyclic groups of order m. In particular, {−1; (o1, 0);(2, 1), (3, 1), (5, 1)} is the Poincaré sphere.
{b; (n1, 1);} (b is 0 or 1.)
These are the non-orientable 3-manifolds with S2×R geometry.
If b is even this is homeomorphic to
the projective plane times the circle, otherwise it is homeomorphic to a surface bundle associated to an orientation reversing automorphism of the 2-sphere.
{b; (n1, 1);(a1, b1)} (b is 0 or 1.)
These are the non-orientable 3-manifolds with S2×R geometry.
If ba1+b1 is even this is homeomorphic to
the projective plane times the circle, otherwise it is homeomorphic to a surface bundle associated to an orientation reversing automorphism of the 2-sphere.
{b; (n2, 1);} (b integral.)
This is the prism manifold with fundamental group of order 4|b| and first homology group of order 4, except for b=0 when it is a sum of two copies of real projective space, and |b|=1 when it is the lens space with fundamental group of order 4.
{b; (n2, 1);(a1, b1)} (b integral.)
This is the (unique) prism manifold with fundamental group of order
4a1|ba1 + b1| and first homology group of order 4a1.
Zero orbifold Euler characteristic
The normalized symbols of Seifert fibrations with zero orbifold Euler characteristic are given in the list below. The manifolds have Euclidean Thurston geometry if they are non-orientable or if b + Σbi/ai= 0, and nil geometry otherwise. Equivalently, the manifold has Euclidean geometry if and only if its fundamental group has an abelian group of finite index. There are 10 Euclidean manifolds, but four of them have two different Seifert fibrations. All surface bundles associated to automorphisms of the 2-torus of trace 2, 1, 0, −1, or −2 are Seifert fibrations with zero orbifold Euler characteristic (the ones for other (Anosov) automorphisms are not Seifert fiber spaces, but have sol geometry). The manifolds with nil geometry all have a unique Seifert fibration, and are characterized by their fundamental groups. The total spaces are all acyclic.
{b; (o1, 0); (3, b1), (3, b2), (3, b3)} (b integral, bi is 1 or 2)
For b + Σbi/ai= 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 3 (trace −1) rotation of the 2-torus.
{b; (o1, 0); (2,1), (4, b2), (4, b3)} (b integral, bi is 1 or 3)
For b + Σbi/ai= 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 4 (trace 0) rotation of the 2-torus.
{b; (o1, 0); (2, 1), (3, b2), (6, b3)} (b integral, b2 is 1 or 2, b3 is 1 or 5)
For b + Σbi/ai= 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 6 (trace 1) rotation of the 2-torus.
{b; (o1, 0); (2, 1), (2, 1), (2, 1), (2, 1)} (b integral)
These are oriented 2-torus bundles for trace −2 automorphisms of the 2-torus. For b=−2 this is an oriented Euclidean 2-torus bundle over the circle (the surface bundle associated to an order 2 rotation of the 2-torus) and is homeomorphic to {0; (n2, 2);}.
{b; (o1, 1); } (b integral)
This is an oriented 2-torus bundle over the circle, given as the surface bundle associated to a trace 2 automorphism of the 2-torus. For b=0 this is Euclidean, and is the 3-torus (the surface bundle associated to the identity map of the 2-torus).
{b; (o2, 1); } (b is 0 or 1)
Two non-orientable Euclidean Klein bottle bundles over the circle. The first homology is Z+Z+Z/2Z if b=0, and Z+Z if b=1.
The first is the Klein bottle times S1 and other is the surface bundle associated to a Dehn twist of the Klein bottle.
They are homeomorphic to the torus bundles {b; (n1, 2);}.
{0; (n1, 1); (2, 1), (2, 1)}
Homeomorphic to the non-orientable Euclidean Klein bottle bundle {1; (n3, 2);}, with first homology Z + Z/4Z.
{b; (n1, 2); } (b is 0 or 1)
These are the non-orientable Euclidean surface bundles associated with orientation reversing order 2 automorphisms of a 2-torus with no fixed points.
The first homology is Z+Z+Z/2Z if b=0, and Z+Z if b=1.
They are homeomorphic to the Klein bottle bundles {b; (o2, 1);}.
{b; (n2, 1); (2, 1), (2, 1)} (b integral)
For b=−1 this is oriented Euclidean.
{b; (n2, 2); } (b integral)
For b=0 this is an oriented Euclidean manifold, homeomorphic to the 2-torus bundle {−2; (o1, 0); (2, 1), (2, 1), (2, 1), (2, 1)} over the circle associated to an order 2 rotation of the 2-torus.
{b; (n3, 2); } (b is 0 or 1)
The other two non-orientable Euclidean Klein bottle bundles. The one with b = 1 is homeomorphic to {0; (n1, 1); (2, 1), (2, 1)}. The first homology is Z+Z/2Z+Z/2Z if b=0, and Z+Z/4Z if b=1. These two Klein bottle bundle are surface bundles associated to the y-homeomorphism and the product of this and the twist.
Negative orbifold Euler characteristic
This is the general case. All such Seifert fibrations are determined up to isomorphism by their fundamental group. The total spaces are aspherical (in other words all higher homotopy groups vanish). They have Thurston geometries of type the universal cover of SL2(R), unless some finite cover splits as a product, in which case they have Thurston geometries of type H2×R.
This happens if the manifold is non-orientable or b + Σbi/ai= 0.
References
Herbert Seifert, Topologie dreidimensionaler gefaserter Räume, Acta Mathematica 60 (1933) 147–238 (There is a translation by W. Heil, published by Florida State University in 1976 and found in: Herbert Seifert, William Threlfall, Seifert and Threllfall: a textbook of topology, Pure and Applied Mathematics, Academic Press Inc (1980), vol. 89.)
Peter Orlik, Seifert manifolds, Lecture Notes in Mathematics 291, Springer (1972).
Frank Raymond, Classification of the actions of the circle on 3-manifolds, Transactions of the American Mathematical Society 31, (1968) 51–87.
William H. Jaco, Lectures on 3-manifold topology
William H. Jaco, Peter B. Shalen, Seifert Fibered Spaces in Three Manifolds: Memoirs Series No. 220 (Memoirs of the American Mathematical Society; v. 21, no. 220)
John Hempel, 3-manifolds, American Mathematical Society,
Peter Scott, The geometries of 3-manifolds. (errata), Bull. London Math. Soc. 15 (1983), no. 5, 401–487.
Fiber bundles
3-manifolds
Geometric topology | Seifert fiber space | [
"Mathematics"
] | 3,750 | [
"Topology",
"Geometric topology"
] |
1,632,023 | https://en.wikipedia.org/wiki/Live%20export | Live export is the commercial transport of livestock across national borders. The trade involves a number of countries with the Australian live export industry being one of the largest exporters in the global trade. According to the Australian Bureau of Statistics, exports of live sheep rose 21.4% and live calves increased 9.7% between March 2017 and March 2018. During 2017 alone, Australia exported 2.85 million living animals in shipping containers and airplanes. The expansion of the trade has been supported by the introduction of purpose-built ships which carry large numbers of animals. The amount of livestock exported from the European Union grew to nearly 586m kilograms between 2014 and 2017, a 62.5% increase during the time period.
The rising global demand for meat has resulted in the quadrupling of the export of live farm animals in the last half century, with two billion being exported in 2017, up from one billion in 2007. Roughly five million animals are in transit every day.
There has been strong criticism of the industry on animal rights grounds by animal rights organizations and the media. New Zealand has effectively phased out live exports for slaughter purposes since 2007 due to concerns about animals.
Australia
Market and legislation
Australia is one of the world's largest exporters of sheep and cattle. According to Meat and Livestock Australia, 2.44 million sheep were exported to markets in Asia and the Middle East in 2012, reduced from 4.2 million in 2008. The total number of cattle exported in 2012 was 617,301, down 11% from the previous year. Indonesia accounted for 45% of total live cattle exports from Australia in 2012. Total cattle exports to Indonesia reduced by 33% from 2011.
The reduction in cattle exports to Indonesia in 2012 was partly due to the newly imposed ESCAS (Exporter Supply Chain Assurance Scheme) from 2011, and partly due to Indonesia's move to become self-sufficient in beef production. Most of the livestock are for human consumption but there is also an active trade in breeding stock, including dairy cattle.
The Department of Agriculture and Water Resources controls the Australian Standards for the Export of Livestock. The standards were amended in April 2011 (version 2.3). The Department also introduced ESCAS (Exporter Supply Chain Assurance Scheme), in 2011 — a system requiring exporters to provide evidence of compliance with internationally agreed animal welfare standards, and to demonstrate traceability and control through the supply chain. According to the Department, ESCAS was developed in response to evidence of cruelty to Australian cattle in Indonesia, and then extended to all livestock exports for the purpose of slaughter. See Animal Welfare section.
AQIS (Australian Quarantine and Inspection Service) manages quarantine controls to minimise the risk of exotic pests and diseases entering the country. AQIS also provides import and export inspection and certification to help retain Australia's highly favourable animal, plant and human health status and wide access to overseas export markets.
Other key markets include Israel, Malaysia, Japan, Mexico and China. The major markets for Australian sheep are Kuwait and Jordan. Other key markets are Bahrain, the UAE, Oman and Qatar. Australia's main market competitors are from China, South America and North Africa.
Campaigns
Australia's live export industry has experienced significant scrutiny by animal welfare groups since 2003. The RSPCA is opposed to live export. Over 550,000 animals are reported to have died en route during live export journeys between 2000 and 2012. A 2006 Freedom of Information report revealed sheep died en route due to several factors including heat stress, septicaemia and acute pneumonia. Dr Lynn Simpson, a former on-board vet for the live export industry, made a submission to the Department of Agriculture in March 2013 condemning animal welfare conditions on live export ships. A group of former live export vets, Veterinarians Against Live Export (VALE), has formed to oppose the trade. Prominent former live export veterinarians who have spoken out against the conditions on ships include Dr Lloyd Reeve Johnson, who expressed his concern about the conflict of interest involved in live export companies paying for animal welfare advice, Dr Tony Hill, who was allegedly pressured to report 105 mortalities when in fact 2000 sheep had died, and Dr Kerkenezov, who has urged an end to an industry he calls "cruel."
In March 2011, Animals Australia investigators collected footage which showed Australian cattle being slaughtered at 11 abattoirs in Indonesia with practices that infringed upon OIE standards for animal welfare. Animals Australia and RSPCA Australia jointly complained to the federal Department of Agriculture, Fisheries and Forestry, calling for a ban on live exports. In response to the footage, live exports to Indonesia were suspended by the Australian Minister for Agriculture from June 7, 2011 until July 6, 2011. The suspension was lifted with the new Exporter Supply Chain Assurance System in place, outlining mandatory compliance for all importing markets of Australian live animals for slaughter, with international standards for animal welfare. The ESCAS regulatory framework is applicable across all importing markets as of December 31, 2012.
The footage was the subject of a separate investigation conducted by ABC program, Four Corners, shown on 30 May 2011. The report entitled "A Bloody Business" was the winner of a Logie Award for "Most Outstanding Public Affairs Report" as well as the 2011 Gold Walkley Award.
In November 2012, another widely reported investigation by Animals Australia brought attention to the slaughter of 22,000 Australian sheep in an ESCAS-approved feedlot in Pakistan, after weeks at sea upon the initial consignment being rejected by Bahrain due to claimed fears of disease. The video footage of the cull, obtained by Animals Australia "shows absolute chaos with animals being dragged, beaten, having their throats sawn at with blunt knives and thrown into mass graves -- some of them still alive hours later." Animals Australia's Campaign Director, Lyn White, stated of the incident:
On 6 May 2013 a report aired on Australia's ABC 7.30, revealing footage of cruelty to Australian cattle in at least one Egyptian abattoir. The footage, provided to the Australian Department of Agriculture by animal protection group Animals Australia, led to a suspension of live trade to Egypt. Australian Agriculture Minister Joe Ludwig described the footage as "sickening", and the Australian Live Exporters' Council chief executive, Alison Penfold, said she was "distraught and disgusted".
In May 2013, evidence was provided to the Department of Agriculture showing what is alleged to be Australian goats being roughly handled and sold outside of approved facilities in Malaysia. The footage also allegedly showed breaches of required animal welfare standards during slaughter of Australian cattle. The Department confirmed it had reviewed the footage and launched an investigation.
Animals Australia report to have conducted a total of 30 separate investigations into the live export industry between May 2003 and April 2014.
Research into Australian live export
In 2009, the World Animal Protection live export campaign commissioned economic think tank ACIL Tasman to undertake economic research into the live export trade. This research found that there are potential value adding opportunities being lost in Australia due to trade distortions in the live export trade. The report analysed the economics and policy settings of the live sheep export trade from Western Australia and demonstrated that a sheep processed domestically is worth 20% more to the Australian economy than one exported live.
In October 2012, World Animal Protection published a further piece of research into the live export trade. This research found that if a cattle processing facility was built in the Northern Territory or North-Western Australia, in conjunction with live export, there would potentially be an increase of 245% or more in gross earnings for Australian cattle producers, more than 1,300 jobs for unemployed Australians and gross regional product growth of $204 million per annum.
In August 2011, two bills were presented to Australian Parliament calling for an end to live exports on animal welfare grounds, by Independent Senator Nick Xenophon and the Australian Greens Party. Both bills were rejected by the House of Representatives.
European Union
The EU introduced new legislation in 2004, which was planned to come into force in 2007. However, the agriculture ministers of the individual member states that make up the Council deferred decisions on a package of reforms, including journey times, until 2011. They did adopt some reforms, offering more training and certification for drivers by 2009.
The legislation was also written with the aim of ensuring better loading and unloading facilities.
In September 2020, Dutch Agriculture Minister Carola Schouten requested the EU Agriculture and Fisheries Council to adjust animal welfare regulations and limit the transport of livestock for slaughter; a special EU committee on animal transport commenced hearings in October.
Germany
In late 2020, a regional court in Germany prohibited the live exportation of 132 breeding heifers because the conditions under which they would be slaughtered in Morocco would be "inhumane".
New Zealand
In 2005, New Zealand exported NZ$217 million worth of live animals, mainly for breeding purposes. Exports included cattle, sheep, horses, deer, goats and day-old chicks. Because New Zealand is free from most exotic diseases, most livestock shipments are for breeding or finishing purposes. Cattle are not exported for slaughter, and the last export of sheep for slaughter was in 2003.
In November 2007, the New Zealand Government introduced the new Customs Exports Prohibition (Livestock for Slaughter) Order. Although not a blanket ban, the new legal requirement restricts live animal exports for slaughter unless the risks to animals and New Zealand's trade reputation can be adequately managed. There has been no export of livestock for slaughter purposes since that date. New Zealand does still export live finfish and shellfish.
In September 2020, the New Zealand Government suspended live cattle exports after the Gulf Livestock 1 transport ship capsized with 43 crew members and nearly 6,000 cattle on board. The ship was carrying cattle for breeding from the country to China. On 14 April 2021, the Government of New Zealand announced that, in order to raise animal welfare standards, it had decided to phase out the export of livestock by sea by 2023 after a transition period of up to two years. It was the first country in history to do so; activists called on Australia and other states to follow suit.
New Zealand has not exported live animals for slaughter since 2003, after 4,000 sheep died on a ship bound for Saudi Arabia. In 2012, the President of the Federated Farmers of New Zealand was quoted as saying:
Animal welfare groups in New Zealand continue to call for a ban on live export of animals for breeding purposes. SAFE has stated that live exports pose "potential for serious suffering."
United Kingdom
Between 15 July 2002 and January 2004, around 200,000 lambs and sheep were exported for slaughter or further fattening abroad, mainly to France and Italy.
The Animal and Plant Health Agency (APHA) was responsible for conducting inspections of animals at the point of loading and at ports. Trading Standards also had powers to inspect animals during transport, and was responsible for carrying out any prosecutions under the regulations.
The Port of Ramsgate temporarily halted live transport after an incident in 2012, when 40 sheep were euthanised after being badly injured during transit. The decision was overturned by the High Court, which ruled that the port could not ban live animal exports because of EU freedom-of-movement rules and UK legislation.
After his appointment as Secretary of State for the Environment, Farming and Rural Affairs, Michael Gove indicated in July 2017 that Brexit would offer the opportunity to ban live animal export for slaughter.
UK Environment Secretary George Eustice unveiled plans to ban the export of live animals for slaughter and fattening from England and Wales on 3 December 2020. The plans still had to be finalised, would exclude poultry and not affect Northern Ireland (under EU law), but Scotland would probably follow the example of England and Wales.
On 4 December 2023 the UK Government announced new legislation set to ban live animal exports. On Monday 20 May 2024, the Bill to ban the export of live animals from the United Kingdom to countries overseas received royal assent and officially became law.
Campaigns
On 1 February 1995, English animal rights activist Jill Phipps was crushed to death under a lorry during a protest to stop the air export of live calves for veal near Coventry Airport.
The Battle of Brightlingsea refers to a series of protests by animal rights supporters held in Brightlingsea, England, between 16 January and 30 October 1995, to prevent the export of livestock through the town.
The trade was halted in 1996 due to an outbreak of mad cow disease. In 2006 this ban was lifted, after which most UK live animal exports departed from the ports of Ramsgate and Dover. In June 2011 a Ramsgate Town Councillor, Ian Driver, spoke out in opposition to live exports.
Following a suspension of the trade, a consignment of sheep left Dover for Calais during the night of 21–22 December 2010. On 12 September 2012, 46 sheep were euthanised after being injured due to transportation faults at the Port of Ramsgate. Thanet Council called a temporary suspension of live exports from the port.
See also
Animal–industrial complex
Commodity status of animals
References
Further reading
Australian Department of Agriculture Live animal export trade
November 2012 Independent Poll of Australian public attitude to live export by Essential Media
Hassall & Associates Australia: Live Export Industry: Value, Outlook and Contribution to the Economy
Meat & Livestock Australia
Live export investigation by Mercy for Animals
The biggest animal welfare crisis you've never heard of. Vox. January 16, 2023
Animal welfare
Intensive farming
Livestock
Meat industry
Export
Animal trade | Live export | [
"Chemistry"
] | 2,736 | [
"Eutrophication",
"Intensive farming"
] |
1,632,111 | https://en.wikipedia.org/wiki/Hydrobromide | In chemistry, a hydrobromide is an acid salt resulting, or regarded as resulting, from the reaction of hydrobromic acid with an organic base (e.g. an amine). The compounds are similar to hydrochlorides.
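For illustration, taking R to denote an arbitrary organic group (the generic scheme below is not drawn from a specific example in the text), hydrobromic acid protonates an amine to give the corresponding ammonium bromide salt:

$\mathrm{R_3N + HBr \longrightarrow [R_3NH]^+\,Br^-}$

The protonated amine cation and the bromide anion together constitute the hydrobromide.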
Some drugs are formulated as hydrobromides, e.g. eletriptan hydrobromide.
See also
Bromide, inorganic salts of hydrobromic acid
Bromine, the element Br
Free base (chemistry)
Acid salts
Salts
Bromides | Hydrobromide | [
"Chemistry"
] | 103 | [
"Bromides",
"Acid salts",
"Salts"
] |
1,632,375 | https://en.wikipedia.org/wiki/Policy-based%20routing | In computer networking, policy-based routing (PBR) is a technique used to make routing decisions based on policies set by the network administrator.
When a router receives a packet it normally decides where to forward it based on the destination address in the packet, which is then used to look up an entry in a routing table. However, in some cases, there may be a need to forward the packet based on other criteria. For example, a network administrator might want to forward a packet based on the source address, not the destination address. This permits routing of packets originating from different sources to different networks even when the destinations are the same and can be useful when interconnecting several private networks.
Policy-based routing may also be based on the size of the packet, the protocol of the payload, or other information available in a packet header or payload.
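As a rough, hypothetical sketch of the idea in Python (the prefixes, table names and next-hop addresses below are invented for illustration and do not correspond to any real device's configuration syntax), a policy lookup first selects a routing table from the packet's source address, and only then is the usual longest-prefix destination lookup performed within that table:

import ipaddress

# Hypothetical routing tables: each maps destination prefixes to a next hop.
TABLES = {
    "main":     {"0.0.0.0/0": "203.0.113.1"},   # default uplink
    "via_isp2": {"0.0.0.0/0": "198.51.100.1"},  # alternate uplink
}

# Policy rules, evaluated in order: (source prefix, routing table to use).
POLICY_RULES = [
    ("192.168.10.0/24", "via_isp2"),  # this subnet is sent out the second uplink
    ("0.0.0.0/0",       "main"),      # everything else uses the main table
]

def select_table(src_ip):
    # Pick a routing table based on the packet's source address.
    src = ipaddress.ip_address(src_ip)
    for prefix, table in POLICY_RULES:
        if src in ipaddress.ip_network(prefix):
            return table
    return "main"

def next_hop(src_ip, dst_ip):
    # Policy lookup first, then ordinary longest-prefix match on the destination.
    table = TABLES[select_table(src_ip)]
    dst = ipaddress.ip_address(dst_ip)
    matches = [p for p in table if dst in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return table[best]

# Two hosts sending to the same destination take different paths, because the
# routing decision also considers the source address, not just the destination.
print(next_hop("192.168.10.5", "93.184.216.34"))  # 198.51.100.1
print(next_hop("192.168.20.5", "93.184.216.34"))  # 203.0.113.1

On a real system the same effect is achieved by the router's own mechanisms, such as the multiple kernel routing tables and rules mentioned below for Linux.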
In Cisco IOS, PBR is implemented using route maps. Linux has supported multiple routing tables since kernel version 2.2. FreeBSD supports PBR using either IPFW, IPFilter or OpenBSD's PF.
Examples
PBR can be used to redirect traffic to a proxy server by using a PBR-aware L3 switch (router). In such a deployment, traffic matching specific criteria (e.g. HTTP or FTP) can be redirected to a cache engine. This is known as virtual inline deployment.
Notes
References
External links
Policy routing Cisco Press article
Policy based routing with Linux (ONLINE edition)
Network overview by Rami Rosen
Routing | Policy-based routing | [
"Technology"
] | 314 | [
"Computing stubs",
"Computer network stubs"
] |
1,632,445 | https://en.wikipedia.org/wiki/List%20of%20BBS%20software | This is a list of notable bulletin board system (BBS) software packages.
Multi-platform
Citadel – originally written for the CP/M operating system, had many forks for different systems under different names.
CONFER – CONFER II on the MTS, CONFER U on Unix and CONFER V on VAX/VMS, written by Robert Parnes starting in 1975.
Mystic BBS – written by James Coyle with versions for Windows/Linux/ARM Linux/OSX. Past versions: MS-DOS and OS/2.
Synchronet – Windows/Linux/BSD, past versions: MS-DOS and OS/2.
WWIV – WWIV v5.x is supported on both Windows 7+ (32-bit) and Linux (32-bit and 64-bit). Written by Wayne Bell, included WWIVNet. Past versions: MS-DOS and OS/2.
Altos 68000
PicoSpan
Amiga
Ami-Express – aka "/X", very popular in the crackers/warez software scene.
C-Net – aka "Cnet"
Apple II
Diversi-Dial (DDial) – Chat-room atmosphere supporting up to 7 incoming lines allowing links to other DDial boards.
GBBS – Applesoft and assembler-based BBS program by Greg Schaeffer.
GBBS Pro – based on the ACOS or MACOS (modified ACOS) language.
Net-Works II – by Nick Naimo.
SBBS – Sonic BBS by Patrick Sonnek.
Atari 8-bit computers
Atari Message Information System – and derivatives
Commodore 64
Blue Board – by Martin Sikes.
C*Base – by Gunther Birznieks, Jerome P. Yoner, and David Weinehall.
C-Net DS2 – by Jim Selleck.
Color64 – by Greg Pfountz.
McBBS – by Derek E. McDonald.
CP/M
CBBS – The first ever BBS software, written by Ward Christensen.
Citadel
RBBS
TBBS
Macintosh
Citadel – including Macadel, MacCitadel.
FirstClass (SoftArc)
Hermes
Second Sight
TeleFinder
Microsoft Windows
Excalibur BBS
Maximus
Mystic BBS
MS-DOS and compatible
Citadel – including DragCit, Cit86, TurboCit, Citadel+
FBB (F6FBB) – packet radio BBS system, still in use.
GBBS (Graphics BBS) – used in the Melbourne area.
GT-Power
L.S.D. BBS – written by The Slavelord of The Humble Guys (THG).
The Major BBS
Maximus
McBBS – by Derek E. McDonald.
Opus-CBCS – first written by Wynn Wagner III.
PCBoard
PegaSys
ProBoard BBS – written by Philippe Leybaert (Belgium)
QuickBBS – written by Adam Hudson, with assistance by Phil Becker.
RBBS-PC
RemoteAccess – written by Andrew Milner.
Renegade – written by Cott Lang until 1997. Currently maintained by T.J. McMillen since 2003.
RoboBOARD/FX – written by Seth Hamilton.
Searchlight BBS (SLBBS)
Spitfire
SuperBBS – by Aki Antman and Risto Virkkala.
TBBS
TCL
Telegard
TriBBS
TAG
Virtual Advanced – also known as VBBS.
Waffle – written by Tom Dell, and supported UUCP (and Fidonet through extensions).
Wildcat! – originally by Mustang Software.
OS/2
Maximus
PCBoard
Virtual Advanced – also known as VBBS.
TRS-80
Forum 80
TBBS - by Phil Becker, for the Model III/4
Unix and compatible
Citadel – including Citadel/UX, Dave's Own Citadel.
Falken – Linux versions by Chris Whitacre, past MS-DOS versions written by Herb Rose.
Firebird BBS – Linux-based.
LysKOM
Major BBS
Maple BBS
Maximus
PCBoard v16 – formerly by CDC, now by MP Solutions, LLC.
PicoSpan
Waffle (BBS software)
References
BBS
Bulletin board system | List of BBS software | [
"Technology"
] | 839 | [
"Computing-related lists",
"Mobile content",
"Lists of software",
"Social software"
] |
1,632,568 | https://en.wikipedia.org/wiki/Darning | Darning is a sewing technique for repairing holes or worn areas in fabric or knitting using needle and thread alone. It is often done by hand, but using a sewing machine is also possible. Hand darning employs the darning stitch, a simple running stitch in which the thread is "woven" in rows along the grain of the fabric, with the stitcher reversing direction at the end of each row, and then filling in the framework thus created, as if weaving. Darning is a traditional method for repairing fabric damage or holes that do not run along a seam, and where patching is impractical or would create discomfort for the wearer, such as on the heel of a sock.
Darning also refers to any of several needlework techniques that are worked using darning stitches:
Pattern darning is a type of embroidery that uses parallel rows of straight stitches of different lengths to create a geometric design.
Net darning, also called filet lace, is a 19th-century technique using stitching on a mesh foundation fabric to imitate lace.
Needle weaving is a drawn thread work embroidery technique that involves darning patterns into barelaid warp or weft thread.
Darning cloth
In its simplest form, darning consists of anchoring the thread in the fabric on the edge of the hole and carrying it across the gap. It is then anchored on the other side, usually with a running stitch or two. If enough threads are criss-crossed over the hole, the hole will eventually be covered with a mass of thread.
Fine darning, sometimes known as Belgian darning, attempts to make the repair as invisible and neat as possible. Often the hole is cut into a square so that the darn blends into the fabric.
There are many varieties of fine darning. Simple over-and-under weaving of threads can be replaced by various fancy weaves, such as twills, chevrons, etc., achieved by skipping threads in regular patterns.
Invisible darning is the epitome of this attempt at restoring the fabric to its original integrity. Threads from the original weaving are unraveled from a hem or seam and used to effect the repair. Invisible darning is appropriate for extremely expensive fabrics and items of apparel.
In machine darning, lines of machine running stitch are run back and forth across the hole, then the fabric is rotated and more lines run at right angles.
Tools
There are special tools for darning socks or stockings:
A darning egg is an egg-shaped tool, made of stone, porcelain, wood, or similar hard material, which is inserted into the toe or heel of the sock to hold it in the proper shape and provide a foundation for repairs. A shell of the tiger cowry Cypraea tigris, a popular ornament in Europe and elsewhere, was also sometimes used as a ready-made darning egg.
A darning mushroom is a mushroom-shaped tool usually made of wood. The sock is stretched over the curved top of the mushroom, and gathered tightly around the stalk to hold it in place for darning.
A darning gourd is a hollow dried gourd with a pronounced neck. The sock can be stretched over the full end of the gourd and held in place around the neck for darning.
Specialty tools aside, anything that is round that can stretch and secure the fabric is also effective. Other examples include lacrosse balls, light bulbs, and lemons.
A darning needle is typically as blunt-tipped as possible, to avoid splitting the threads as it is passed through the item being repaired. This is especially true of larger needles for darning coarse knitted cloth.
A darning loom is a very small hand-held loom for weaving patches into the original cloth. They have an egg portion which goes inside the cloth and is grooved; the rest of the loom goes on the outside, and the two parts are held together by an elastic band. The loom is warped and woven upon with a needle, which also serves as a beater batten. Darning looms typically have heddles made of flip-flopping rotating hooks, which raise and lower the warp, creating sheds to make weaving the patch easier. The hooks, when vertical, have the weft threads looped around them horizontally. If the hooks are flopped over one side or the other, the loop of weft twists, raising one or the other side of the loop, which creates the shed and countershed. The spacing of the hooks generally doesn't match the threadcount of the cloth. Other devices sold as darning looms are just a darning egg and a separate comb-like piece with teeth to hook the warp over; these are used for repairing knitted garments and are like a linear knitting spool. Darning looms were sold during World War Two clothing rationing in the United Kingdom and in Canada, and some are homemade.
Pattern darning
Pattern darning is a simple and ancient embroidery technique in which contrasting thread is woven in and out of the ground fabric using rows of running stitches which reverse direction at the end of each row. The length of the stitches may be varied to produce geometric designs. Traditional embroidery using pattern darning is found in Africa, Japan, Northern and Eastern Europe, the Middle East, Mexico and Peru.
Pattern darning is also used as a filling stitch in blackwork embroidery.
Around the world
Iran
Rofoogari is a traditional Iranian technique used to repair historic textiles, woven materials and fabrics. Iran has a long history of weaving and textile making, and the culture of rofoo, or "vasleh-pineh", arose among poor communities: patches were used to cover the damaged parts and the gap was worked over with running stitches, sometimes decorative ones. In some communities, a lack of resources meant the process was repeated as needed, which is why very colourful vasleh-pineh with many different patterns are found in galleries and museums.
India
Rafoogari is the name for the art of darning in India and neighbouring countries of the subcontinent, where this art of healing the cloth is used for practical and traditional reasons. Though wearing restored clothes is associated with poverty and thus seen as shameful, this technique has been used by highly skilled "rafoogars" to restore some priceless clothes such as Pashmina shawls, silks, woolen clothes, and even fine cotton. Kashmiris are considered the best rafoogars, who have imparted their knowledge to artists all over India. Rafoogars still exist across India.
The Foundation of Indian Contemporary Art has been trying to preserve this art, and some artists in India still practice it as a hereditary art form, passed down for over sixteen generations.
See also
Mending
Conservation and restoration of textiles
Boro
Invisible mending
References
Further reading
Reader's Digest Oxford Dictionary p. 1001.CS.
Sewing stitches
Embroidery stitches
Maintenance | Darning | [
"Engineering"
] | 1,423 | [
"Maintenance",
"Mechanical engineering"
] |
1,632,662 | https://en.wikipedia.org/wiki/List%20of%20FM%20Towns%20games | The FM Towns is a fourth generation home computer developed and manufactured by Fujitsu, first released only in Japan on 28 February 1989. It was the fourth computer to be released under the Fujitsu brand, succeeding the FM-7 series. The following list contains all of the known games released commercially for the FM Towns platform.
Featuring an operating system based on MS-DOS called Towns OS, the FM Towns operates with both 3.5" floppy disks and CD-ROMs. Many add-ons were released, including networking, SCSI, memory upgrades and CPU enhancements, among others that increased the performance of the system. A fifth-generation home video game console based on the FM Towns computer, the FM Towns Marty, was released exclusively for the Japanese market on 20 February 1993, featuring backward compatibility with older FM Towns titles. Multiple revisions were later released that included several changes compared to the original model, with the last model released in 1995 before the line was officially discontinued in the summer of 1997. A total of 500,000 FM Towns units were reportedly sold during its commercial life span, while 45,000 FM Towns Marty consoles had been sold as of 31 December 1993.
Games
There are currently games on this list.
See also
Lists of video games
Notes
References
External links
List of FM Towns games at MobyGames
FM Towns games
FM Towns
FM Towns games | List of FM Towns games | [
"Technology"
] | 271 | [
"Computing-related lists",
"Fujitsu lists"
] |
1,632,713 | https://en.wikipedia.org/wiki/CFB%20Gagetown | 5th Canadian Division Support Base (5 CDSB) Gagetown, formerly known as and commonly referred to as CFB Gagetown, is a large Canadian Forces Base covering an area over , located in southwestern New Brunswick. It is the biggest facility in Eastern Canada, and Canada's second-largest facility.
Construction of the base
At the beginning of the Cold War, Canadian defence planners recognized the need for providing the Canadian Army with a suitable training facility where brigade and division-sized armoured, infantry, and artillery units could exercise in preparation for their role in defending western Europe under Canada's obligations to the North Atlantic Treaty. The facility would need to be located relatively close to an all-season Atlantic port and have suitable railway connections.
Existing training facilities dating from the First and Second World Wars in eastern Canada were relatively small (Camp Debert, Camp Aldershot, Sussex Military Camp, Camp Valcartier, Camp Petawawa, Camp Utopia), thus a new facility was considered. At the same time, regional economic development planners saw an opportunity for a military base to benefit the economy of southwestern New Brunswick.
The area under consideration was an expansive plateau west of the Saint John River between the cities of Saint John and Fredericton, measuring approximately in length and in width; more accurately it runs between Oromocto in the north to Welsford in the south, and between the Saint John River in the east and the south branch of the Oromocto River in the west.
Over 900 families inhabited the area primarily engaged in agriculture and forestry industries. The terrain was variable, providing mixed Acadian forest, swamp and marshland, as well as open farming areas similar to the North European Plain. The influence of the St. Croix Highlands, part of the Appalachian Mountain range, creates hilly terrain and valleys in the southern and western part of the region close to the Nerepis and Oromocto rivers.
The expropriation of lands began in the early 1950s, much to the surprise of local residents who had been kept in the dark about the expropriation until the last minute. In total, between 2,000 and 3,000 residents were forced to move. An additional 44 cemeteries were within the expropriated land. The base was surveyed so as to not affect some of the historic communities along the western bank of the Saint John River such as Arcadia, Hampstead, and Browns Flat; the expropriation began several kilometres west of the river and eliminated the communities of Petersville, Hibernia, New Jerusalem, North Clones and others. This remains the largest single land expropriation in the history of New Brunswick.
The base headquarters were chosen for the northern part of the base adjacent to the (then) small village of Oromocto. In preparation for the influx of service personnel, Oromocto was redesigned as a "planned" town, with buried electrical utilities and residential and commercial clustering typical of larger planned towns such as Richmond Hill, Ontario.
Construction of the base facilities in Oromocto benefited from convenient railway connections provided by Canadian National and Canadian Pacific railways. A new alignment of the Trans-Canada Highway was built on the eastern bank of the Saint John River, opposite from Oromocto in the early 1960s (see Route 2) and a new highway bridge across the Saint John River connected the Trans-Canada Highway to the village of Burton, just south of Oromocto and near the east gate for the base.
The Gagetown Military Camp (or Camp Gagetown) opened in 1956 and was named after the village of Gagetown, although the base was located west of this historic village and was headquartered to its north in Oromocto. The base's territory measured and included numerous live-fire ranges for infantry, armoured, and artillery units, as well as aerial weapons ranges.
From its opening in 1956 until the opening of the Shoalwater Bay Military Training Area in Australia in 1965, Camp Gagetown was the largest military training facility in the Commonwealth of Nations. By comparison, CFB Suffield has with usable by the military, and designated as the Suffield National Wildlife Area.
The training area has been heavily "landscaped" over the years by military foresters and many woodlines have been sculpted to form shapes recognizable from the air, including:
Scotty Dog Woods
Square Woods
Flag Woods
The "CTC" cutting
The "Maple Leaf" cutting
Operations
Initially, Camp Gagetown was the home base for many army regiments, including The Black Watch and The Royal Canadian Regiment; however, defence cutbacks in the 1960s saw a gradual reduction, and the demise of their parent formation, 3 Brigade Group. On February 1, 1968, the Canadian Army, the Royal Canadian Air Force, and the Royal Canadian Navy were merged to form the unified Canadian Forces. Following this unification, Camp Gagetown was renamed Canadian Forces Base Gagetown (CFB Gagetown).
In the post-unification armed forces, CFB Gagetown functioned as the primary combat training centre for Force Mobile Command (renamed Land Force Command in the 1990s). In the early 1970s Combat Training Centre Gagetown (CTC Gagetown) was established as a unit at CFB Gagetown comprising armour, artillery, and infantry training schools. In the early 1970s 422 and 403 helicopter squadrons were relocated to CFB Gagetown. Their helipad is located at the end of Champlain Road. In the 1990s the Canadian Forces School of Military Engineering was relocated to CFB Gagetown from CFB Chilliwack. The base is still widely referred to as Camp Gagetown.
Increased defence spending in the 1980s saw numerous new training facilities built and ranges modernized, and this continued into the 1990s as the Canadian Forces closed smaller bases in response to further defence budget cuts. A large training building housing much of CTC was opened in late 1992. CFB Gagetown continues to function as the army's primary training facility, although due to risk of forest fires in recent years, live-fire training has been pushed primarily to the fall-winter-spring seasons.
Units and formations
Principal units and formations of the CFB Gagetown are:
5th Canadian Division
Combat Training Centre (Royal Canadian Armoured Corps School, Royal Canadian Artillery School, Canadian Forces School of Military Engineering, Infantry School and Tactics School)
5th Canadian Division Support Group
42 Health Services
1 Dental Unit Detachment Gagetown and the Joint Personnel Support Unit
2nd Battalion, The Royal Canadian Regiment
4th Artillery Regiment (General Support), RCA
4 Engineer Support Regiment
403 Helicopter Operational Training Squadron
C Squadron, The Royal Canadian Dragoons
Joint Meteorological Centre
Canadian Army Trials and Evaluation Unit
5th Canadian Division Training Centre
3 Military Police Regiment Detachment Gagetown
Argonaut Army Cadet Summer Training Centre
Defoliant testing
Portions of the training area were subject to testing of defoliants during the 1960s. The use of Agent Orange and Agent Purple has led to an inquiry as to its long-term effects upon the soldiers and civilian base personnel who were exposed to it. The affected areas had soil tests that measured dioxin levels at 143 times the Canadian Council of Ministers of the Environment guidelines for maximum exposure.
St. Mary's Chapel
CFB Gagetown has a chapel that is administered by the Military Ordinariate of Canada. Services at the chapel are available for all military persons and the civilian personnel of the base. During the week, the chapel organizes Mass in French and English.
Economic facts
The base and its lodger units provide full-time employment to approximately 6,500 military members and 1,000 civilians.
The base contributes over to the local economy annually.
The base contributes more than to the provincial economy annually.
References
Further reading
Parr, Joy (2010). Sensing Changes: Technologies, Environments, and the Everyday, 1953-2003, UBC Press.
External links
5th Canadian Division Support Base Gagetown
Places of Our Hearts, a Community Memories, Virtual Museum of Canada Exhibition
Military museum of the CFB Gagetown
Buildings and structures in Queens County, New Brunswick
Buildings and structures in Sunbury County, New Brunswick
Canadian Forces bases in New Brunswick
Heliports in Canada
Military airbases in New Brunswick
Royal Canadian Regiment
Defoliants | CFB Gagetown | [
"Chemistry"
] | 1,654 | [
"Defoliants",
"Chemical weapons"
] |
1,632,806 | https://en.wikipedia.org/wiki/Acoustical%20Society%20of%20America | The Acoustical Society of America (ASA) is an international scientific society founded in 1929 dedicated to generating, disseminating and promoting the knowledge of acoustics and its practical applications. The Society is primarily a voluntary organization of about 7500 members and attracts the interest, commitment, and service of many professionals.
History
In the summer of 1928, Floyd R. Watson and Wallace Waterfall (1900–1974), a former doctoral student of Watson, were invited by UCLA's Vern Oliver Knudsen to an evening dinner at Knudsen's beach club in Santa Monica. The three physicists decided to form a society of acoustical engineers interested in architectural acoustics. In the early part of December 1928, Wallace Waterfall sent letters to sixteen people inquiring about the possibility of organizing such a society. Harvey Fletcher offered the use of the Bell Telephone Laboratories at 463 West Street in Manhattan as a meeting place for an organizational, initial meeting to be held on December 27, 1928. The meeting was attended by forty scientists and engineers who started the Acoustical Society of America (ASA). Temporary officers were elected: Harvey Fletcher as president, V. O. Knudsen as vice-president, Wallace Waterfall as secretary, and Charles Fuller Stoddard (1876–1958) as treasurer. A constitution and by-laws were drafted. The first issue of the Journal of the Acoustical Society of America was published in October 1929.
Technical committees
The Society has 13 technical committees that represent specialized interests in the field of acoustics. The committees organize technical sessions at conferences and are responsible for the representation of their sub-field in ASA publications. The committees include:
Acoustical oceanography
Animal bioacoustics
Architectural acoustics
Biomedical acoustics
Computational acoustics (Technical Specialty Group)
Acoustical engineering
Musical acoustics
Noise
Physical acoustics
Psychoacoustics
Signal processing in acoustics
Speech communication
Structural acoustics and vibration
Underwater acoustics
Founding members
The first meeting was attended by forty scientists and engineers who started the Acoustical Society of America (ASA). Some of those members include:
Edward Joseph Schroeter
Harvey Fletcher
Floyd K. Richtmyer
Dayton Miller
Harold D. Arnold
Frederick Albert Saunders
Floyd R. Watson
Irving Wolff
Publications
The Acoustical Society of America publishes a wide variety of material related to the knowledge and practical application of acoustics in physics, engineering, architecture, noise, oceanography, biology, speech and hearing, psychology and music.
The Journal of the Acoustical Society of America (JASA) - founded in 1929, this is a peer-reviewed academic journal operating on the traditional subscription model.
JASA Express Letters (2021–present) online archive- this is a peer-reviewed academic journal operating on the open access model.
Proceedings of Meetings on Acoustics (POMA) (2007–present) online archive - repository for conference proceedings.
Acoustics Today (2005–present) online archive a general interest magazine on acoustics.
In 2021, the ASA Publications' Office began producing Across Acoustics, a podcast to highlight authors' research from these four publications.
Discontinued publications
Echoes (1991-2013) online archive - Quarterly newsletter.
Acoustics Research Letters Online (2000-2005) online archive - Launched as an open access journal. It became a section of the Journal of the Acoustical Society of America from 2006 to 2020, then in 2021 became the current journal JASA Express Letters.
Noise Control (1955-1961) online archive
Sound: Its Uses and Control (1962-1963) online archive - A continuation of Noise Control, with broadened scope.
Awards
The ASA presents awards and prizes to individuals for contributions to the field of Acoustics. These include:
Gold Medal
Silver Medal
Interdisciplinary Silver Medal – Helmholtz-Rayleigh Interdisciplinary Silver Medal
R. Bruce Lindsay Award
Wallace Clement Sabine Medal
Pioneers of Underwater Acoustics Medal
A. B. Wood Medal and Prize of the Institute of Acoustics
Trent-Crede Medal
von Békésy Medal
Honorary Fellows
Distinguished Service Citation
Science Communication Award
Rossing Prize in Acoustics Education
David T. Blackstock Mentor Award
Medwin Prize in Acoustical Oceanography
William and Christine Hartmann Prize in Auditory Neuroscience
Most technical committees also sponsor awards for best student or early career presenter at each conference.
Student activity
The ASA offers membership and conference attendance to students at a substantially reduced rate. Conference attendance is further promoted by travel subsidies and formal and informal student meetings and social activities. The ASA also expanded services to students in 2004 by introducing regional student chapters.
References
External links
ASA Home Page
ASA Standards
ASA Publications
ASA students
ASA Press Room
Archival collections
Acoustical Society of America miscellaneous publications, 1934-2016, Niels Bohr Library & Archives
ASA Office of the President Edward Christopher Wente records, 1929-1946, Niels Bohr Library & Archives
Professional associations based in the United States
Acoustics
Learned societies of the United States | Acoustical Society of America | [
"Physics"
] | 975 | [
"Classical mechanics",
"Acoustics"
] |
1,632,880 | https://en.wikipedia.org/wiki/Legal%20guardian | A legal guardian is a person who has been appointed by a court or otherwise has the legal authority (and the corresponding duty) to make decisions relevant to the personal and property interests of another person who is deemed incompetent, called a ward. For example, a legal guardian might be granted the authority to make decisions regarding a ward's housing or medical care or manage the ward's finances. Guardianship is most appropriate when an alleged ward is functionally incapacitated, meaning they have a lagging skill critical to performing certain tasks, such as making important life decisions. Guardianship intends to serve as a safeguard to protect the ward.
Anyone can petition for a guardianship hearing if they believe another individual cannot make rational decisions on their own behalf. In a guardianship hearing, a judge ultimately decides whether guardianship is appropriate and, if so, will appoint a guardian. Guardians are typically used in four situations: guardianship for an incapacitated elderly person (due to old age or infirmity), guardianship for a minor, guardianship for a developmentally disabled adult, and guardianship for an adult found to be incompetent. A family member is most commonly appointed guardian, though a professional guardian or public trustee may be appointed if a suitable family member is not available.
Guardianship for incapacitated elderly
Guardianship for an incapacitated elderly person typically arises when someone determines that an elderly person has become unable to care for their own person and/or property. In fact, most alleged wards are elderly (with mean ages of 76–82 years across studies), many of whom resided in a care facility and had been diagnosed with a neurological impairment such as dementia. Typically, a precipitating incident prompts a professional, family member, health care worker, or clergyman to initiate guardianship proceedings. While guardianship is intended to protect and support incapacitated elderly people who are unable to care for themselves or engage in the activities of daily living without assistance, it sometimes results in financial exploitation of wards.
The process will generally start with a determination whether the alleged incapacitated person is actually incapacitated. There will often be an evidentiary hearing. A systematic review of guardianship studies from the United States, Sweden, and Australia found that the most commonly used evidence in guardianship hearings was the alleged ward's medical condition; perhaps surprisingly, descriptions of the alleged ward's cognitive abilities, functional abilities and psychiatric symptoms are much less common.
If the court determines an individual is incapacitated, the court then determines whether a guardian is necessary, the extent of the guardian's legal authority, (e.g. a guardian may be needed for the person's finances but not for the person) and, if so, who the guardian should be. The determination of whether a guardianship is necessary may consider a number of factors, including whether there is a lesser restrictive alternative, such as the use of an already existing power of attorney and health care proxy. In some cases, a guardianship dispute can become quite contentious and can result in litigation between a parent and adult children or between different siblings against each other in what is essentially a pre-probate dispute over a parent's wealth.
Abuses
A report published in 2010 by the U.S. Government Accountability Office looked at 20 selected closed cases in which guardians stole or otherwise improperly obtained assets from clients. In 6 of these 20 cases, the courts failed to adequately screen guardians ahead of time and appointed individuals with criminal convictions or significant financial problems, and in 12 of 20 cases, the courts failed to oversee guardians once they had been appointed.
In October 2017, The New Yorker published an article looking at the situation in Nevada in which professional guardians sometimes have a number of clients, and argued toward the conclusion that in a number of cases the courts did not properly oversee these arrangements. In 2018, the investigative documentary "The Guardians" was published, alleging "legal kidnapping of elderly people" in Nevada by private guardianship businesses with no familial or other preexisting relations to their wards, seeking to economically profit from seniors' savings.
Guardianship for minors
Natural guardian
A minor child's parents are the child's natural guardians.
Legal guardian
Most jurisdictions recognise that the parents of a child are the natural guardians of the child, and that the parents may designate who shall become the child's legal guardian in the event of death, typically subject to the approval of the court. The court may appoint a guardian for a minor if their parents are disabled or deceased or if the minor's parents cannot properly manage their child's safety and well-being. If a non-parent is appointed as guardian, the court will determine how the parents' parental rights are impacted by the appointment (e.g., establishing visitation schedules).
Guardianship for disabled adults
Legal guardians may be appointed in guardianship cases for adults (see also conservatorship). For example, because parents are not automatically appointed to serve as the guardian of their mentally or physical disabled child who reaches adulthood, parents may start a guardianship action to become the legal guardians when the child reaches the age of majority.
A famous example of such an arrangement is the situation involving Britney Spears, who was placed into a conservatorship under the supervision of her father, Jamie Spears, and attorney Andrew Wallet in 2008, following a series of highly publicized personal struggles and issues with mental health.
Rules applicable to all guardians
Courts generally have the power to appoint a guardian for an individual in need of special protection. A guardian with responsibility for both the personal well-being and the financial interests of the ward is a general guardian. A person may also be appointed as a special guardian, having limited powers over the interests of the ward. A special guardian may, for example, be given the legal right to determine the disposition of the ward's property without being given any authority over the ward's person.
Depending on the jurisdiction, a legal guardian may be called a "conservator", "tutor", "custodian", or curator. Many jurisdictions and the Uniform Probate Code distinguish between a "guardian" or "guardian of the person" who is an individual with authority over and fiduciary responsibilities for the physical person of the ward, and a "conservator" or "guardian of the property" of a ward who has authority over and fiduciary responsibilities for significant property (often an inheritance or personal injury settlement) belonging to the ward. Some jurisdictions provide for public guardianship programs serving incapacitated adults or children.
A guardian is a fiduciary and is held to a very high standard of care in exercising their powers. If the ward owns substantial property, then the guardian may be required to give a surety bond to protect the ward in case dishonesty or incompetence on their part causes financial loss to the ward.
Guardian ad litem
The Latin legal term ad litem means "for the lawsuit" or "for the legal proceeding". A guardian ad litem is thus someone appointed to represent in court the interests of a person too vulnerable to represent themselves, typically due to youth or mental incapacity.
Guardianship is not federally regulated in the United States; therefore, states vary widely in how they address and manage guardianship cases.
Family law and dependency courts
Guardians ad litem (GsAL) are persons appointed by the court to represent "the best interests of the child" in court proceedings. They are not the same as "legal guardians" and are often appointed in under-age-children cases, many times to represent the interests of the minor children. Guardians ad litem may be called, in some U.S. states, Court Appointed Special Advocates (CASA). In New York State, they are known as attorneys-for-the-child (AFCs). They are the voice of the child and may represent the child in court, with many judges adhering to any recommendation given by a GAL. GALs may assist where a child is removed from a hostile environment and custody given to the relevant state or county family services agency, and in those cases assists in the protection of the minor child.
Qualifications vary by state: guardians ad litem range from volunteers with no required experience or qualifications to social workers, attorneys, and others. The GAL's only job is to represent the minor children's best interest and advise the court. A guardian ad litem is an officer of the court, does not represent the parties in the suit, and often enjoys quasi-judicial immunity from any action from the parties involved in a particular case. Qualifications for becoming recognized as a GAL differ between states. In North Carolina, for instance, an applicant (volunteer) must go through a background check and complete 30 hours of training. In Minnesota, the minimum qualifications to become a GAL are a bachelor's degree in psychology, social work, education, nursing, criminal justice, law or a child-related discipline, and some experience working with families and children, or an equivalent combination of education and relevant experience. In addition, experience as a guardian ad litem, with completion of the guardian ad litem pre-service orientation requirements, is requested.
Although a guardian ad litem working through a CASA program volunteers their services, some guardians ad litem are paid for their services. They must submit detailed time and expense reports to the court for approval. Their fees are taxed as costs in the case. Courts may order all parties to share in the cost, or the court may order a particular party to pay the fees. Volunteer guardians ad litem and those that volunteer though a CASA program need to make sure that they do not engage in the unauthorized practice of law. Therefore, when they appear in court (even if they are an attorney) as a volunteer GAL, it is best practice to be represented by an attorney and have attorneys file motions on their behalf.
Guardians ad litem are also appointed in cases where there has been an allegation of child abuse, child neglect, PINS, juvenile delinquency, or dependency. In these situations, the guardian ad litem is charged to represent the best interests of the minor child, which can differ from the position of the state or government agency as well as the interest of the parent or guardian. These guardians ad litem vary by jurisdiction and can be volunteer advocates or attorneys. For example, in North Carolina, trained GAL volunteers are paired with attorney advocates to advocate for the best interest of abused and neglected children. The program defines a child's best interest as a safe, permanent home.
Mental health and probate courts
Guardians ad litem can be appointed by the court to represent the interests of mentally ill or disabled persons. For example, the Code of Virginia requires that the court appoint a "discreet and competent attorney-at-law" or "some other discreet and proper person" to serve as guardian ad litem to protect the interests of a person under a disability.
Estates and financial decision making
Guardians ad litem are sometimes appointed in probate matters to represent the interests of unknown or unlocated heirs to an estate.
Settlement guardians ad litem
When a settlement is reached in personal injury or medical malpractice cases involving claims brought on behalf of a minor or an incapacitated plaintiff, courts normally appoint a guardian ad litem to review the terms of the settlement, and to ensure it is fair and in the best interests of the claimant. The settlement guardian ad litem thoroughly investigates the case, to determine whether the settlement amount is fair and reasonable.
Alternatives to guardianship
Because guardianship limits a ward's autonomy and ability to make certain life decisions, guardianship has the potential to damage a ward's health and well-being. As a result, individuals considering guardianship to support a loved one with functional incapacities might consider whether there are less restrictive alternatives that can achieve the same objectives. Three examples of alternatives include establishing advance directives, relying on supported decision-making, or taking advantage of community-related services that support individuals with functional limitations.
Advance directives allow a competent individual to provide their input as to what actions should be taken should they become incompetent. For example, in a healthcare setting, an advance directive would allow a patient to voice what treatment options they prefer and who they would like to make decisions on their behalf should they become incompetent. The establishment of advance directives is a common practice among seniors in the United States.
Further, some individuals with limited functional capacities might maintain their autonomy by relying on family or friends who can help that individual informally or formally navigate important life decisions without formal guardianship, called "supported decision-making". For example, these support individuals can provide suggestions on where their loved one should live or recommend certain treatment options in medical settings. This support system can also help the individual modify their environment to promote their success. For example, if a family member is concerned that their loved one with reduced functional capacity might engage in an unsafe behavior (e.g., leaving the gas stove on), this family member can reduce the opportunity for this behavior (e.g., removing the gas stove) without court involvement. This technique allows individuals to support and empower loved ones who are cognitively impaired.
Finally, employing community services that will alleviate stressors of daily living may allow an alleged ward to maintain their autonomy. For example, certain volunteer organizations provide services such as telephone check-ins and home visits, and many medical or mental health professionals offer in-home services.
In summary, while guardianship sometimes offers the best solution to supporting an individual who demonstrates functional incapacity, one might consider exploring alternative solutions before seeking legal guardianship.
Guardianship by country
Republic of Korea
Types of Guardians under Korean Guardianship Law
Adult guardian (성년후견인): If an adult chronically lacks the mental competence to manage their own matters due to illness, disability, old age, or other conditions, a Korean court may appoint an adult guardian. This type of guardianship in Korea gives near total power over the ward to the Adult Guardian.
Limited guardian (한정후견인): A person may also be designated as a "special guardian", entrusted with restricted authority over the ward's interests. For example, a special guardian may be granted the legal authority in Korea to decide how to handle the ward's assets without being granted any control over the ward's person.
Specified guardian (특정후견인): A specified guardian is a person appointed to represent a person's interests in relation to a particular court proceeding or process.
The process of appointing a guardian through Korean courts
The Korean Family Court typically has the authority to appoint a guardian in Korea. A general adult guardian is one who is in charge of both the ward's financial interests and personal welfare. The family court, or one of its branches, with jurisdiction over the ward's place of residence hears the guardianship case. Where no family court is present for the ward's place of residence, a district court or a branch court typically has jurisdiction over the matter.
Typically, the court proceedings begin after an evaluation of the ward's health by a doctor. The court will often question the ward and hear his or her testimony regarding the guardianship, so that the ward can make the most of his or her remaining capacity and have a say in the choice of a suitable guardian. The court has the power to decide the beginning of guardianship, the choice of a guardian, the change of a guardian, the cessation of guardianship, the extent of the legal representative's authority, and so on.
England and Wales
Guardians ad litem are employed by Children and Family Court Advisory and Support Service (CAFCASS), a non-departmental public body, to represent the interests of children in cases where the child's wishes differ from those of either parent, known as a Section 16.4 case. The posts are filled by senior social workers with experience in family law proceedings.
In 2006, a legal status of "special guardianship" was introduced (using powers delegated by the Adoption and Children Act 2002) to allow for a child to be cared for by a person with rights similar to a traditional legal guardian, but without absolute legal separation from the child's birth parents. These are not to be confused with court-appointed special guardians in other jurisdictions.
Prisoners
See section 13 of the Prison Act 1952.
In section 4 of the Official Secrets Act 1989, the expression "legal custody" includes detention in pursuance of any enactment or any instrument made under an enactment.
Children
See section 86 of the Children Act 1975.
Mental patients
Any person required or authorized by or by virtue of the Mental Health Act 1983 to be conveyed to any place or to be kept in custody or detained in a place of safety or at any place to which he is taken under section 42(6) of that Act is, while being so conveyed, detained or kept, as the case may be, deemed to be in legal custody. In England and Wales, only an Approved Mental Health Professional has the power to detain a person under the Act. For this purpose "convey" includes any other expression denoting removal from one place to another.
Germany
The German guardianship law with regard to adults was completely changed in 1990. Guardianship (Vormundschaft) of an adult was renamed 'curatorship' (Betreuung), although it remains Vormundschaft for minors. When a person of full age who, as a result of mental disease or physical, mental or psychological handicap or otherwise is incapable of managing his own affairs, a guardian (Betreuer) can be appointed (section 1,896, German Civil Code). An adult guardian is responsible for personal and estate matters, as well as for medical treatment. However, the ward has normally full capacity with all human rights such as those to marry, vote or make a will. The ward's legal capacity can be lost as a result of a court judgment or order (section 1903, German Civ. C.; Einwilligungsvorbehalt). Every guardian has to report annually to the guardianship court (Betreuungsgericht). Professional guardians (Berufsbetreuer) normally hold university degrees in law or social work.
Israel
In Israel, over 50,000 adults have had legal guardians appointed for them; 85% of them have family members as their guardians, and 15% have professional guardians. Until 2014, guardians (the term there is "Apotropos") were supervised by the Office of the Administrator General at the Ministry of Justice in matters of property only. However, changes in Israel and other countries along with public pressure, appeals to the courts by social organizations, academic studies and the State Comptroller's 2004 report led to the decision to broaden the scope of supervision to include personal matters as well, to ensure that the guardians take care of all areas of life, including medical care, personal care, suitable housing, work and employment, social and recreational activities, etc., taking account of the person's wishes and acting accordingly. The Office of the Administrator General (public guardian) at the Ministry of Justice is now implementing a system to supervise guardians in regard to personal matters in order to help identify situations in which guardians are not performing their duties adequately.
Republic of Ireland
The court-appointed guardian system in the Republic of Ireland was brought into law on the proposal of the noted gay activist and member of Seanad Éireann (the Irish Senate), David Norris. The Children Acts Advisory Board, which was set up to advise government ministers on policy development under the Child Care Act 1991, was abolished in September 2011. Judges are responsible for appointing child guardians and can choose guardians from Barnardo's, a children's charity, or from among self-employed guardians, who are mostly former social workers who have gone into private business since the legislation.
Saudi Arabia
Saudi Arabia has amended the law, and women in Saudi Arabia are no longer required to have a male guardian (Wali) give permission for various government and economic transactions, as well as for some personal life and health decisions.
Sweden
Swedish parental law (the Parental Code) regulates legal guardianship for both children and disabled adults. Legal guardianship for unaccompanied minors is regulated by a law of its own. Except in the case of normal parenthood, guardianship is assigned by the district court and supervised by the Chief Guardian, a municipal authority that is mandatory in every Swedish municipality. What is included in the scope of the guardianship is decided by the district court. Responsibility for health care and nursing is never included in guardianship for adults, but always is for minors. Guardianship for adults can take two legal forms, "conservator" or "administrator". The main difference between the two is that only an "administrator" has the authority to take legal actions within the scope of the guardianship. A guardianship can have different legal forms for different parts of the guardianship. Basic human rights are never denied the ward under this law, but some of them can be denied under other laws. A conservator is normally assigned with the approval of the ward, but if the ward's physical condition does not permit him to give such approval, a conservator can be assigned anyway. Everything a conservator does for his ward has to be approved by him, or must reasonably be assumed to be approved by him. For more complex matters, such as taking out loans or selling a house, the conservator needs approval from the local authorities. Once a year, a legally assigned guardian has to send his accounts to the Chief Guardian for review.
Since 2017, the ward can, while she still has her mental abilities, write a special future power of attorney (Framtidsfullmakt) which can later be used when she loses those abilities. How such a document should be written is described in detail in the parental law, and it normally follows the principles of a will. This provision was created because, in Sweden, it is unclear whether an ordinary power of attorney remains valid after the ward has lost her abilities.
See also
Conservatorship
Custodial account
Foster care
Receivership
Wali (Islamic legal guardian)
Apotropos - the term in Jewish law and Israeli law
References
External links
National Guardianship Association (USA)
Mental Capacity Act 2005 (England and Wales)
National Association to Stop Guardian Abuse (NASGA) United States
German guardianship law (English translation)
Guardian
Child custody
Environmental personhood
Human rights abuses | Legal guardian | [
"Environmental_science"
] | 4,614 | [
"Environmental personhood",
"Environmental ethics"
] |
1,632,972 | https://en.wikipedia.org/wiki/Nuclear%20transfer | Nuclear transfer is a form of cloning. The step involves removing the DNA from an oocyte (unfertilised egg), and injecting the nucleus which contains the DNA to be cloned. In rare instances, the newly constructed cell will divide normally, replicating the new DNA while remaining in a pluripotent state. If the cloned cells are placed in the uterus of a female mammal, a cloned organism develops to term in rare instances. This is how Dolly the Sheep and many other species were cloned. Cows are commonly cloned to select those that have the best milk production. On 24 January 2018, two monkey clones were reported to have been created with the technique for the first time.
Despite this, the low efficiency of the technique has prompted some researchers, notably Ian Wilmut, creator of Dolly the cloned sheep, to abandon it.
Tools and reagents
Nuclear transfer is a delicate process that is a major hurdle in the development of cloning technology. Materials used in this procedure are a microscope, a holding pipette (small vacuum) to keep the oocyte in place, and a micropipette (hair-thin needle) capable of extracting the nucleus of a cell using a vacuum. For some species, such as mouse, a drill is used to pierce the outer layers of the oocyte.
Various chemical reagents are used to increase cloning efficiency. Microtubule inhibitors, such as nocodazole, are used to arrest the oocyte in M phase, during which its nuclear membrane is dissolved. Chemicals are also used to stimulate oocyte activation; when these are applied, the membrane is completely dissolved.
Somatic cell nuclear transfer
Somatic Cell Nuclear Transfer (SCNT) is the process by which the nucleus of an oocyte (egg cell) is removed and is replaced with the nucleus of a somatic (body) cell (examples include skin, heart, or nerve cell). The two entities fuse to become one and factors in the oocyte cause the somatic nucleus to reprogram to a pluripotent state. The cell contains genetic information identical to the donated somatic cell. After stimulating this cell to begin dividing, in the proper conditions an embryo will develop. Stem cells can be extracted 5–6 days later and used for research.
Reprogramming
Genomic reprogramming is the key biological process behind nuclear transfer. Currently unidentified reprogramming factors present in oocytes are capable of initiating a cascade of events that can reset the mature, specialized cell back to an undifferentiated, embryonic state. These factors are thought to be mainly proteins of the nucleus.
See also
Induced stem cells
Renucleation
three-parent baby#Ethics
References
Cloning
Developmental biology
Cell biology
Stem cells
Biotechnology
Induced stem cells
es:Transferencia nuclear celular | Nuclear transfer | [
"Engineering",
"Biology"
] | 585 | [
"Behavior",
"Cell biology",
"Developmental biology",
"Stem cell research",
"Reproduction",
"Cloning",
"Genetic engineering",
"Biotechnology",
"nan",
"Induced stem cells"
] |
1,632,974 | https://en.wikipedia.org/wiki/Anoxic%20event | An anoxic event describes a period wherein large expanses of Earth's oceans were depleted of dissolved oxygen (O2), creating toxic, euxinic (anoxic and sulfidic) waters. Although anoxic events have not happened for millions of years, the geologic record shows that they happened many times in the past. Anoxic events coincided with several mass extinctions and may have contributed to them. These mass extinctions include some that geobiologists use as time markers in biostratigraphic dating. On the other hand, there are widespread, various black-shale beds from the mid-Cretaceous which indicate anoxic events but are not associated with mass extinctions. Many geologists believe oceanic anoxic events are strongly linked to the slowing of ocean circulation, climatic warming, and elevated levels of greenhouse gases. Researchers have proposed enhanced volcanism (the release of CO2) as the "central external trigger for euxinia."
Human activities in the Holocene epoch, such as the release of nutrients from farms and sewage, cause relatively small-scale dead zones around the world. British oceanologist and atmospheric scientist Andrew Watson says full-scale ocean anoxia would take "thousands of years to develop." The idea that modern climate change could lead to such an event is also referred to as Kump's hypothesis.
Background
The concept of the oceanic anoxic event (OAE) was first proposed in 1976 by Seymour Schlanger (1927–1990) and geologist Hugh Jenkyns and arose from discoveries made by the Deep Sea Drilling Project (DSDP) in the Pacific Ocean. The finding of black, carbon-rich shales in Cretaceous sediments that had accumulated on submarine volcanic plateaus (e.g. Shatsky Rise, Manihiki Plateau), coupled with their identical age to similar, cored deposits from the Atlantic Ocean and known outcrops in Europe—particularly in the geological record of the otherwise limestone-dominated Apennines chain in Italy—led to the observation that these widespread, similarly distinct strata recorded very unusual, oxygen-depleted conditions in the world's oceans spanning several discrete periods of geological time.
Modern sedimentological investigations of these organic-rich sediments typically reveal the presence of fine laminations undisturbed by bottom-dwelling fauna, indicating anoxic conditions on the seafloor believed to coincide with a low-lying poisonous layer of hydrogen sulfide, H2S. Furthermore, detailed organic geochemical studies have recently revealed the presence of molecules (so-called biomarkers) that derive from both purple sulfur bacteria and green sulfur bacteria—organisms that required both light and free hydrogen sulfide (H2S), illustrating that anoxic conditions extended high into the photic upper-water column.
This is a recent understanding, the puzzle having been pieced slowly together in the last three decades. The handful of known and suspected anoxic events have been tied geologically to large-scale production of the world's oil reserves in worldwide bands of black shale in the geologic record.
Euxinia
Anoxic events with euxinic (anoxic, sulfidic) conditions have been linked to extreme episodes of volcanic outgassing. Volcanism contributed to the buildup of CO2 in the atmosphere and increased global temperatures, causing an accelerated hydrological cycle that introduced nutrients into the oceans (stimulating planktonic productivity). These processes potentially acted as a trigger for euxinia in restricted basins where water-column stratification could develop. Under anoxic to euxinic conditions, oceanic phosphate is not retained in sediment and could hence be released and recycled, aiding perpetual high productivity.
Mechanism
Temperatures throughout the Jurassic and Cretaceous are generally thought to have been relatively warm, and consequently dissolved oxygen levels in the ocean were lower than today—making anoxia easier to achieve. However, more specific conditions are required to explain the short-period (less than a million years) oceanic anoxic events. Two hypotheses, and variations upon them, have proved most durable.
One hypothesis suggests that the anomalous accumulation of organic matter relates to its enhanced preservation under restricted and poorly oxygenated conditions, which themselves were a function of the particular geometry of the ocean basin: such a hypothesis, although readily applicable to the young and relatively narrow Cretaceous Atlantic (which could be likened to a large-scale Black Sea, only poorly connected to the World Ocean), fails to explain the occurrence of coeval black shales on open-ocean Pacific plateaus and shelf seas around the world. There are suggestions, again from the Atlantic, that a shift in oceanic circulation was responsible, where warm, salty waters at low latitudes became hypersaline and sank to form a warm, saline intermediate layer at depth.
The second hypothesis suggests that oceanic anoxic events record a major change in the fertility of the oceans that resulted in an increase in organic-walled plankton (including bacteria) at the expense of calcareous plankton such as coccoliths and foraminifera. Such an accelerated flux of organic matter would have expanded and intensified the oxygen minimum zone, further enhancing the amount of organic carbon entering the sedimentary record. Essentially this mechanism assumes a major increase in the availability of dissolved nutrients such as nitrate, phosphate and possibly iron to the phytoplankton population living in the illuminated layers of the oceans.
For such an increase to occur would have required an accelerated influx of land-derived nutrients coupled with vigorous upwelling, requiring major climate change on a global scale. Geochemical data from oxygen-isotope ratios in carbonate sediments and fossils, and magnesium/calcium ratios in fossils, indicate that all major oceanic anoxic events were associated with thermal maxima, making it likely that global weathering rates, and nutrient flux to the oceans, were increased during these intervals. Indeed, the reduced solubility of oxygen would lead to phosphate release, further nourishing the ocean and fuelling high productivity, hence a high oxygen demand—sustaining the event through a positive feedback.
Another way to explain anoxic events is that the Earth releases a huge volume of carbon dioxide during an interval of intense volcanism; global temperatures rise due to the greenhouse effect; global weathering rates and fluvial nutrient flux increase; organic productivity in the oceans increases; organic-carbon burial in the oceans increases (OAE begins); carbon dioxide is drawn down due to both burial of organic matter and weathering of silicate rocks (inverse greenhouse effect); global temperatures fall, and the ocean–atmosphere system returns to equilibrium (OAE ends).
In this way, an oceanic anoxic event can be viewed as the Earth's response to the injection of excess carbon dioxide into the atmosphere and hydrosphere. One test of this notion is to look at the age of large igneous provinces (LIPs), the extrusion of which would presumably have been accompanied by rapid effusion of vast quantities of volcanogenic gases such as carbon dioxide. The age of three LIPs (Karoo-Ferrar flood basalt, Caribbean large igneous province, Ontong Java Plateau) correlates well with that of the major Jurassic (early Toarcian) and Cretaceous (early Aptian and Cenomanian–Turonian) oceanic anoxic events, indicating that a causal link is feasible.
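To make the negative feedback described above concrete, here is a minimal, purely conceptual box model of atmospheric CO2 responding to a volcanic pulse; the rate constants, the pulse timing, and the single lumped sink (standing in for weathering plus organic-carbon burial) are illustrative assumptions, not calibrated values for any real oceanic anoxic event.

```python
# Minimal conceptual box model of the CO2 / burial feedback sketched above:
# a volcanic pulse raises CO2, and a sink proportional to CO2 (standing in
# for weathering plus organic-carbon burial) draws it back down. All values
# are illustrative assumptions, not calibrated to any real OAE.

import numpy as np

def run_oae_model(t_end_yr=2.0e6, dt_yr=100.0):
    n = int(t_end_yr / dt_yr)
    co2 = np.empty(n)                 # atmospheric CO2, arbitrary units
    co2[0] = 1.0                      # pre-event baseline
    k_sink = 1.0e-6                   # lumped weathering + burial rate (1/yr)
    for i in range(1, n):
        t = i * dt_yr
        source = 1.0e-6               # steady background outgassing
        if 2.0e5 < t < 4.0e5:         # transient large igneous province pulse
            source += 5.0e-6
        co2[i] = co2[i - 1] + dt_yr * (source - k_sink * co2[i - 1])
    return co2

co2 = run_oae_model()
print(f"peak CO2 = {co2.max():.2f}, CO2 after 2 Myr = {co2[-1]:.2f}")  # rises, then relaxes back toward baseline
```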
Occurrence
Oceanic anoxic events most commonly occurred during periods of very warm climate characterized by high levels of carbon dioxide (CO2) and unusually high mean surface temperatures. The Quaternary CO2 levels, those of the current period, are low in comparison. Such rises in carbon dioxide may have been in response to a great outgassing of the highly flammable natural gas (methane) that some call an "oceanic burp". Vast quantities of methane are normally locked into the Earth's crust on the continental plateaus in one of the many deposits consisting of compounds of methane hydrate, a solid precipitated combination of methane and water much like ice. Because methane hydrates are unstable except at cool temperatures and high (deep) pressures, scientists have observed smaller outgassing events due to tectonic activity. Studies suggest the huge release of natural gas could be a major climatological trigger, methane itself being a greenhouse gas many times more powerful than carbon dioxide. However, anoxia was also rife during the Hirnantian (late Ordovician) ice age.
Oceanic anoxic events have been recognized primarily from the already warm Cretaceous and Jurassic Periods, when numerous examples have been documented, but earlier examples have been suggested to have occurred in the late Triassic, Permian, Devonian (Kellwasser event), Ordovician and Cambrian.
The Paleocene–Eocene Thermal Maximum (PETM), which was characterized by a global rise in temperature and deposition of organic-rich shales in some shelf seas, shows many similarities to oceanic anoxic events.
Typically, oceanic anoxic events lasted for less than a million years, before a full recovery.
Consequences
Oceanic anoxic events have had many important consequences. It is believed that they have been responsible for mass extinctions of marine organisms both in the Paleozoic and Mesozoic. The early Toarcian and Cenomanian-Turonian anoxic events correlate with the Toarcian and Cenomanian-Turonian extinction events of mostly marine life forms. Apart from possible atmospheric effects, many deeper-dwelling marine organisms could not adapt to an ocean where oxygen penetrated only the surface layers.
An economically significant consequence of oceanic anoxic events is the fact that the prevailing conditions in so many Mesozoic oceans has helped produce most of the world's petroleum and natural gas reserves. During an oceanic anoxic event, the accumulation and preservation of organic matter was much greater than normal, allowing the generation of potential petroleum source rocks in many environments across the globe. Consequently, some 70 percent of oil source rocks are Mesozoic in age, and another 15 percent date from the warm Paleogene: only rarely in colder periods were conditions favorable for the production of source rocks on anything other than a local scale.
Atmospheric effects
A model put forward by Lee Kump, Alexander Pavlov and Michael Arthur in 2005 suggests that oceanic anoxic events may have been characterized by upwelling of water rich in highly toxic hydrogen sulfide gas, which was then released into the atmosphere. This phenomenon would probably have poisoned plants and animals and caused mass extinctions. Furthermore, it has been proposed that the hydrogen sulfide rose to the upper atmosphere and attacked the ozone layer, which normally blocks the deadly ultraviolet radiation of the Sun. The increased UV radiation caused by this ozone depletion would have amplified the destruction of plant and animal life. Fossil spores from strata recording the Permian–Triassic extinction event show deformities consistent with UV radiation. This evidence, combined with fossil biomarkers of green sulfur bacteria, indicates that this process could have played a role in that mass extinction event, and possibly other extinction events. The trigger for these mass extinctions appears to be a warming of the ocean caused by a rise of carbon dioxide levels to about 1000 parts per million.
Ocean chemistry effects
Reduced oxygen levels are expected to lead to increased seawater concentrations of redox-sensitive metals. The reductive dissolution of iron–manganese oxyhydroxides in seafloor sediments under low-oxygen conditions would release those metals and associated trace metals. Sulfate reduction in such sediments could release other metals such as barium. When heavy-metal-rich anoxic deep water entered continental shelves and encountered increased O2 levels, precipitation of some of the metals, as well as poisoning of the local biota, would have occurred. In the late Silurian mid-Pridoli event, increases are seen in the Fe, Cu, As, Al, Pb, Ba, Mo and Mn levels in shallow-water sediment and microplankton; this is associated with a marked increase in the malformation rate in chitinozoans and other microplankton types, likely due to metal toxicity. Similar metal enrichment has been reported in sediments from the mid-Silurian Ireviken event.
Anoxic events in Earth's history
Cretaceous
Sulfidic (or euxinic) conditions, which exist today in many water bodies from ponds to various land-surrounded mediterranean seas (seas surrounded or nearly surrounded by land) such as the Black Sea, were particularly prevalent in the Cretaceous Atlantic but also characterised other parts of the world ocean. In the ice-free seas of these supposed super-greenhouse worlds, oceanic waters stood considerably higher in some eras. During the timespans in question, the continental plates are believed to have been well separated, and the mountains as they are known today were (mostly) future tectonic events—meaning the overall landscapes were generally much lower—and even the half super-greenhouse climates would have been eras of highly expedited water erosion carrying massive amounts of nutrients into the world oceans, fuelling an overall explosive population of microorganisms and their predator species in the oxygenated upper layers.
Detailed stratigraphic studies of Cretaceous black shales from many parts of the world have indicated that two oceanic anoxic events (OAEs) were particularly significant in terms of their impact on the chemistry of the oceans, one in the early Aptian (~120 Ma), sometimes called the Selli Event (or OAE 1a) after the Italian geologist Raimondo Selli (1916–1983), and another at the Cenomanian–Turonian boundary (~93 Ma), also called the Bonarelli Event (or OAE2) after the Italian geologist Guido Bonarelli (1871–1951). OAE1a lasted for ~1.0 to 1.3 Myr. The duration of OAE2 is estimated to be ~820 kyr based on a high-resolution study of the significantly expanded OAE2 interval in southern Tibet, China.
Insofar as the Cretaceous OAEs can be represented by type localities, it is the striking outcrops of laminated black shales within the vari-coloured claystones and pink and white limestones near the town of Gubbio in the Italian Apennines that are the best candidates.
The 1-metre thick black shale at the Cenomanian–Turonian boundary that crops out near Gubbio is termed the 'Livello Bonarelli' after the scientist who first described it in 1891.
More minor oceanic anoxic events have been proposed for other intervals in the Cretaceous (in the Valanginian, Hauterivian, Albian and Coniacian–Santonian stages), but their sedimentary record, as represented by organic-rich black shales, appears more parochial, being dominantly represented in the Atlantic and neighbouring areas, and some researchers relate them to particular local conditions rather than being forced by global change.
Jurassic
The only oceanic anoxic event documented from the Jurassic took place during the early Toarcian (~183 Ma). Since no DSDP (Deep Sea Drilling Project) or ODP (Ocean Drilling Program) cores have recovered black shales of this age—there being little or no Toarcian ocean crust remaining—the samples of black shale primarily come from outcrops on land. These outcrops, together with material from some commercial oil wells, are found on all major continents and this event seems similar in kind to the two major Cretaceous examples.
Paleozoic
The Permian–Triassic extinction event, triggered by runaway CO2 release from the Siberian Traps, was marked by ocean deoxygenation.
The boundary between the Ordovician and Silurian periods is marked by repetitive periods of anoxia, interspersed with normal, oxic conditions. In addition, anoxic periods are found during the Silurian. These anoxic periods occurred at a time of low global temperatures (although CO2 levels were high), in the midst of a glaciation.
Jeppsson (1990) proposes a mechanism whereby the temperature of polar waters determines the site of formation of downwelling water. If the high latitude waters are below a critical temperature, they will be dense enough to sink; as they are cool, oxygen is highly soluble in their waters, and the deep ocean will be oxygenated. If high latitude waters are warmer than this threshold, their density is too low for them to sink below the cooler deep waters. Therefore, thermohaline circulation can only be driven by salt-increased density, which tends to form in warm waters where evaporation is high. This warm water can dissolve less oxygen, and is produced in smaller quantities, producing a sluggish circulation with little deep water oxygen. The effect of this warm water propagates through the ocean, and reduces the amount of CO2 that the oceans can hold in solution, which makes the oceans release large quantities of CO2 into the atmosphere in a geologically short time (tens or thousands of years). The warm waters also initiate the release of clathrates, which further increases atmospheric temperature and basin anoxia. Similar positive feedbacks operate during cold-pole episodes, amplifying their cooling effects.
The periods with cold poles are termed "P-episodes" (short for primo), and are characterised by bioturbated deep oceans, a humid equator and higher weathering rates, and terminated by extinction events—for example, the Ireviken and Lau events. The inverse is true for the warmer, oxic "S-episodes" (secundo), where deep ocean sediments are typically graptolitic black shales.
A typical cycle of secundo-primo episodes and ensuing event typically lasts around 3 Ma.
The duration of events is so long compared to their onset because the positive feedbacks must be overwhelmed. Carbon content in the ocean-atmosphere system is affected by changes in weathering rates, which in turn are dominantly controlled by rainfall. Because this is inversely related to temperature in Silurian times, carbon is gradually drawn down during warm (high CO2) S-episodes, while the reverse is true during P-episodes. On top of this gradual trend is overprinted the signal of Milankovitch cycles, which ultimately trigger the switch between P- and S-episodes.
These events become longer during the Devonian; the enlarging land plant biota probably acted as a large buffer to carbon dioxide concentrations.
The end-Ordovician Hirnantian event may alternatively be a result of algal blooms, caused by sudden supply of nutrients through wind-driven upwelling or an influx of nutrient-rich meltwater from melting glaciers, which by virtue of its fresh nature would also slow down oceanic circulation.
Archean and Proterozoic
It has been thought that through most of Earth's history, oceans were largely oxygen-deficient. During the Archean, euxinia was largely absent because of low availability of sulfate in the oceans, but during the Proterozoic, it would become more common.
Several anoxic events are known from the late Neoproterozoic, including one from the early Nama assemblage possibly coinciding with the first pulse of the end-Ediacaran extinction.
See also
Anoxic waters
Canfield ocean
Carbon Dioxide
Hydrogen Sulfide
Hypoxia (environmental) – for links to other articles dealing with environmental hypoxia or anoxia.
Long-term effects of global warming
Meromictic
Ocean acidification
Ocean deoxygenation
Shutdown of thermohaline circulation
References
Further reading
Demaison G.J. and Moore G.T., (1980), "Anoxic environments and oil source bed genesis". American Association of Petroleum Geologists (AAPG) Bulletin, Vol.54, 1179–1209.
External links
Hot and stinky: The oceans without oxygen
Cretaceous climate-ocean dynamics
Hugh Jenkyns talking about the Bonarelli Level and OAEs - YouTube
Original article (Geologie en Mijnbouw, 55, 179–184, 1976) on oceanic anoxic events authored by Seymour Schlanger and Hugh Jenkyns Cretaceous Oceanic Anoxic Events: Causes and Consequences
Aquatic ecology
Bioindicators
Chemical oceanography
Doomsday scenarios
Ecotoxicology
Environmental chemistry
Environmental science
Oceanography
Oxygen
Water quality indicators
Anoxic events | Anoxic event | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 4,228 | [
"Hydrology",
"Bioindicators",
"Applied and interdisciplinary physics",
"Anoxic events",
"Oceanography",
"Environmental chemistry",
"Water pollution",
"Chemical oceanography",
"Water quality indicators",
"Ecosystems",
"nan",
"Aquatic ecology"
] |
1,633,057 | https://en.wikipedia.org/wiki/Toroidal%20graph | In the mathematical field of graph theory, a toroidal graph is a graph that can be embedded on a torus. In other words, the graph's vertices and edges can be placed on a torus such that no edges intersect except at a vertex that belongs to both.
Examples
Any graph that can be embedded in a plane can also be embedded in a torus, so every planar graph is also a toroidal graph. A toroidal graph that cannot be embedded in a plane is said to have genus 1.
The Heawood graph, the complete graph K7 (and hence K5 and K6), the Petersen graph (and hence the complete bipartite graph K3,3, since the Petersen graph contains a subdivision of it), one of the Blanuša snarks, and all Möbius ladders are toroidal. More generally, any graph with crossing number 1 is toroidal. Some graphs with greater crossing numbers are also toroidal: the Möbius–Kantor graph, for example, has crossing number 4 and is toroidal.
Properties
Any toroidal graph has chromatic number at most 7. The complete graph K7 provides an example of a toroidal graph with chromatic number 7.
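For illustration, the bound of seven colours is the genus-1 case of the Heawood number; the short, standard calculation below is included only as a worked check.

```latex
% Heawood bound for a surface of genus g, evaluated on the torus (g = 1):
\[
  H(g) = \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor,
  \qquad
  H(1) = \left\lfloor \frac{7 + \sqrt{49}}{2} \right\rfloor = 7 .
\]
```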
Any triangle-free toroidal graph has chromatic number at most 4.
By a result analogous to Fáry's theorem, any toroidal graph may be drawn with straight edges in a rectangle with periodic boundary conditions. Furthermore, the analogue of Tutte's spring theorem applies in this case.
Toroidal graphs also have book embeddings with at most 7 pages.
Obstructions
By the Robertson–Seymour theorem, there exists a finite set H of minimal non-toroidal graphs, such that a graph is toroidal if and only if it has no graph minor in H.
That is, H forms the set of forbidden minors for the toroidal graphs.
The complete set H is not known, but it has at least 17,523 graphs. Alternatively, there are at least 250,815 non-toroidal graphs that are minimal in the topological minor ordering.
A graph is toroidal if and only if it has none of these graphs as a topological minor.
See also
Planar graph
Topological graph theory
Császár polyhedron
Notes
References
Graph families
Topological graph theory | Toroidal graph | [
"Mathematics"
] | 480 | [
"Mathematical relations",
"Topological graph theory",
"Topology",
"Graph theory"
] |
1,633,075 | https://en.wikipedia.org/wiki/Electron%20microprobe | An electron microprobe (EMP), also known as an electron probe microanalyzer (EPMA) or electron micro probe analyzer (EMPA), is an analytical tool used to non-destructively determine the chemical composition of small volumes of solid materials. It works similarly to a scanning electron microscope: the sample is bombarded with an electron beam, emitting x-rays at wavelengths characteristic to the elements being analyzed. This enables the abundances of elements present within small sample volumes (typically 10-30 cubic micrometers or less) to be determined, when a conventional accelerating voltage of 15-20 kV is used. The concentrations of elements from lithium to plutonium may be measured at levels as low as 100 parts per million (ppm), material dependent, although with care, levels below 10 ppm are possible. The ability to quantify lithium by EPMA became a reality in 2008.
History
The electron microprobe (electron probe microanalyzer) developed from two technologies: electron microscopy — using a focused high energy electron beam to impact a target material, and X-ray spectroscopy — identification of the photons scattered from the electron beam impact, with the energy/wavelength of the photons characteristic of the atoms excited by the incident electrons. Ernst Ruska and Max Knoll are associated with the prototype electron microscope in 1931. Henry Moseley was involved in the discovery of the direct relationship between the wavelength of X-rays and the identity of the atom from which it originated.
There have been several historical threads to electron beam microanalysis. One was developed by James Hillier and Richard Baker at RCA. In the early 1940s, they built an electron microprobe, combining an electron microscope and an energy loss spectrometer. A patent application was filed in 1944. Electron energy loss spectroscopy is very good for light element analysis, and they obtained spectra of C-Kα, N-Kα and O-Kα radiation. In 1947, Hillier patented the concept of using an electron beam to produce analytical X-rays, but never constructed a working model. His design proposed using Bragg diffraction from a flat crystal to select specific X-ray wavelengths and a photographic plate as a detector. However, RCA had no interest in commercializing this invention.
A second thread developed in France in the late 1940s. In 1948–1950, Raimond Castaing, supervised by André Guinier, built the first electron “microsonde électronique” (electron microprobe) at ONERA. This microprobe produced an electron beam diameter of 1-3 μm with a beam current of ~10 nanoamperes (nA) and used a Geiger counter to detect the X-rays produced from the sample. However, the Geiger counter could not distinguish X-rays produced from specific elements and in 1950, Castaing added a quartz crystal between the sample and the detector to permit wavelength discrimination. He also added an optical microscope to view the point of beam impact. The resulting microprobe was described in Castaing's 1951 PhD Thesis, translated into English by Pol Duwez and David Wittry, in which he laid the foundations of the theory and application of quantitative analysis by electron microprobe, establishing the theoretical framework for the matrix corrections of absorption and fluorescence effects. Castaing (1921-1999) is considered the father of electron microprobe analysis.
The 1950s was a decade of great interest in electron beam X-ray microanalysis, following Castaing's presentations at the First European Microscopy Conference in Delft in 1949 and then at the National Bureau of Standards conference on Electron Physics in Washington, DC, in 1951, as well as at other conferences in the early to mid-1950s. Many researchers, mainly material scientists, developed their own experimental electron microprobes, sometimes starting from scratch, but many times using surplus electron microscopes.
One of the organizers of the Delft 1949 Electron Microscopy conference was Vernon Ellis Cosslett at the Cavendish Laboratory at Cambridge University, a center of research on electron microscopy, as well as scanning electron microscopy with Charles Oatley and X-ray microscopy with Bill Nixon. Peter Duncumb combined all three technologies and developed a scanning electron X-ray microanalyzer for his PhD thesis (1957), which was commercialized as the Cambridge MicroScan.
Pol Duwez, a Belgian material scientist who fled the Nazis and settled at the California Institute of Technology and collaborated with Jesse DuMond, encountered André Guinier on a train in Europe in 1952, where he learned of Castaing's new instrument and the suggestion that Caltech build a similar instrument. David Wittry was hired to build such an instrument as his PhD thesis, which he completed in 1957. It became the prototype for the ARL EMX electron microprobe.
During the late 1950s and early 1960s there were over a dozen other laboratories in North America, the United Kingdom, Europe, Japan and the USSR developing electron beam X-ray microanalyzers.
The first commercial electron microprobe, the "MS85", was produced by CAMECA (France) in 1956. It was soon followed in the early-mid 1960s by microprobes from other companies; however, all companies except CAMECA, JEOL and Shimadzu Corporation went out of business. In addition, many researchers built electron microprobes in their labs. Significant subsequent improvements and modifications to microprobes included scanning the electron beam to make X-ray maps (1960), the addition of solid state EDS detectors (1968) and the development of synthetic multilayer diffracting crystals for analysis of light elements (1984). Later, CAMECA pioneered manufacturing a shielded electron microprobe for nuclear applications. Several advances in CAMECA instruments in recent decades expanded the range of applications in metallurgy, electronics, geology, mineralogy, nuclear plants, trace elements, and dentistry.
Operation
A beam of electrons is fired at a sample. The beam causes each element in the sample to emit X-rays at a characteristic frequency; the X-rays can then be detected by the electron microprobe. The size and current density of the electron beam determines the trade-off between resolution and scan time and/or analysis time.
Detailed description
Low-energy electrons are produced from a tungsten filament, a lanthanum hexaboride crystal cathode or a field emission electron source and accelerated by a positively biased anode plate to 3 to 30 thousand electron volts (keV). The anode plate has central aperture and electrons that pass through it are collimated and focused by a series of magnetic lenses and apertures. The resulting electron beam (approximately 5 nm to 10 μm diameter) may be rastered across the sample or used in spot mode to produce excitation of various effects in the sample. Among these effects are: phonon excitation (heat), cathodoluminescence (visible light fluorescence), continuum X-ray radiation (bremsstrahlung), characteristic X-ray radiation, secondary electrons (plasmon production), backscattered electron production, and Auger electron production.
When the beam electrons (and scattered electrons from the sample) interact with bound electrons in the innermost electron shells of the atoms of the various elements in the sample, they can scatter the bound electrons from the electron shell, producing a vacancy in that shell (ionization of the atom). This vacancy is unstable and must be filled by an electron from either a higher-energy bound shell in the atom (producing another vacancy which is in turn filled by electrons from yet higher-energy bound shells) or by unbound electrons of low energy. The difference in binding energy between the electron shell in which the vacancy was produced and the shell from which the electron comes to fill the vacancy is emitted as a photon. The energy of the photon is in the X-ray region of the electromagnetic spectrum. As the electron structure of each element is unique, the series of X-ray line energies produced by vacancies in the innermost shells is characteristic of that element, although lines from different elements may overlap. As the innermost shells are involved, the X-ray line energies are generally not affected by chemical effects produced by bonding between elements in compounds, except in low atomic number (Z) elements (B, C, N, O and F for Kα and Al to Cl for Kβ), where line energies may be shifted as a result of the involvement of the electron shell from which vacancies are filled in chemical bonding.
The characteristic X-rays are used for chemical analysis. Specific X-ray wavelengths or energies are selected and counted, either by wavelength dispersive X-ray spectroscopy (WDS) or energy dispersive X-ray spectroscopy (EDS). WDS utilizes Bragg diffraction from crystals to select X-ray wavelengths of interest and direct them to gas-flow or sealed proportional detectors. In contrast, EDS uses a solid state semiconductor detector to accumulate X-rays of all wavelengths produced from the sample. While EDS yields more information and typically requires a much shorter counting time, WDS is generally more precise with lower limits of detection due to its superior X-ray peak resolution and greater peak to background ratio.
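As a rough, hedged illustration of how a characteristic line maps onto a WDS spectrometer setting, the sketch below estimates a Kα energy with Moseley's approximation and converts it to a first-order Bragg angle on a LiF (200) crystal; the 2d spacing, the choice of iron, and the approximation itself are illustrative, and real instruments rely on tabulated line energies rather than this two-line estimate.

```python
# Rough sketch: estimate a K-alpha line energy with a Moseley-type
# approximation, convert it to a wavelength, and find the first-order Bragg
# angle on a LiF (200) crystal (2d ~ 4.03 angstrom). Approximate values only;
# microprobe software uses tabulated line energies, not this estimate.

import math

def kalpha_energy_eV(Z):
    # Moseley-type approximation: E ~ 13.6 eV * (3/4) * (Z - 1)^2
    return 13.6 * 0.75 * (Z - 1) ** 2

def bragg_angle_deg(energy_eV, two_d_angstrom=4.027, order=1):
    wavelength = 12398.4 / energy_eV              # E [eV] -> lambda [angstrom]
    return math.degrees(math.asin(order * wavelength / two_d_angstrom))

E_fe = kalpha_energy_eV(26)                       # iron, Z = 26
print(f"Fe K-alpha ~ {E_fe / 1000:.2f} keV, Bragg angle on LiF ~ {bragg_angle_deg(E_fe):.1f} deg")
```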
Chemical composition is determined by comparing the intensities of characteristic X-rays from the sample with intensities from standards of known composition. Counts from the sample must be corrected for matrix effects (depth of production of the X-rays, absorption and secondary fluorescence) to yield quantitative chemical compositions. The resulting chemical data is gathered in textural context. Variations in chemical composition within a material (zoning), such as a mineral grain or metal, can be readily determined.
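To illustrate the comparison-with-standards step in schematic form, the sketch below computes a first-approximation concentration from a k-ratio; the single multiplicative "matrix correction" factor stands in for the full ZAF or φ(ρz) corrections applied in practice, and all counts and factors are invented for illustration.

```python
# Schematic quantification step: the measured intensity is ratioed against a
# standard of known composition (the "k-ratio") and multiplied by a lumped
# matrix correction. The single factor stands in for the full ZAF / phi-rho-z
# treatment used in practice; all numbers below are invented for illustration.

def first_approximation(sample_counts, standard_counts, standard_conc, matrix_correction=1.0):
    k_ratio = sample_counts / standard_counts
    return k_ratio * standard_conc * matrix_correction

# e.g. Fe K-alpha: 41,200 counts on the unknown vs 96,500 on a pure-Fe standard
conc_fe = first_approximation(41_200, 96_500, standard_conc=1.0, matrix_correction=1.08)
print(f"Fe ~ {100 * conc_fe:.1f} wt%")
```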
The volume from which chemical information is gathered (the volume in which X-rays are generated) is 0.3–3 cubic micrometers.
Limitations
WDS cannot determine elements below number 5 (Boron). This restricts WDS when analyzing geologically important elements such as H, Li, and Be.
Despite the improved spectral resolution of elemental peaks, some peaks exhibit significant overlap that causes analytical challenges (e.g., VKα and TiKβ). WDS analyses are unable to distinguish the valence states of elements (e.g. Fe2+ vs. Fe3+) which must be obtained by other techniques such as Mössbauer spectroscopy or electron energy loss spectroscopy.
Element isotopes cannot be determined by WDS, but are most commonly obtained with a mass spectrometer.
Applications
Materials science and engineering
The technique is commonly used for analyzing the chemical composition of metals, alloys, ceramics, and glasses. It is particularly useful for assessing the composition of individual particles or grains and chemical changes on the scale of a few micrometres to millimeters. The electron microprobe is widely used for research, quality control, and failure analysis.
Mineralogy and petrology
This technique is most commonly used by mineralogists and petrologists. Most rocks are aggregates of small mineral grains. These grains may preserve chemical information acquired during their formation and subsequent alteration. This information may illuminate geologic processes such as crystallization, lithification, volcanism, metamorphism, orogenic events (mountain building), and plate tectonics. This technique is also used for the study of extraterrestrial rocks (meteorites), and provides chemical data which is vital to understanding the evolution of the planets, asteroids, and comets.
The change in elemental composition from the center (also known as core) to the edge (or rim) of a mineral can yield information about the history of the crystal's formation, including the temperature, pressure, and chemistry of the surrounding medium. Quartz crystals, for example, incorporate a small, but measurable amount of titanium into their structure as a function of temperature, pressure, and the amount of titanium available in their environment. Changes in these parameters are recorded by titanium as the crystal grows.
Paleontology
In exceptionally preserved fossils, such as those of the Burgess shale, soft parts of organisms may be preserved. Since these fossils are often compressed into a planar film, it can be difficult to distinguish the features: a famous example is the triangular extensions in Opabinia, which were interpreted as either legs or extensions of the gut. Elemental mapping showed that their composition was similar to the gut, favoring that interpretation. Because of the thinness of carbon films, only low voltages (5-15 kV) can be used on them.
Meteorite analysis
The chemical composition of meteorites can be analyzed quite accurately using EPMA. This can reveal much about the conditions that existed in the early Solar System.
Online tutorials
Jim Wittke's class notes at Northern Arizona University
John Fournelle's class notes at the University of Wisconsin–Madison
John Donovan's class notes at the University of Oregon
See also
Electron microscope
Electron spectroscopy
Thin section
References
Further reading
External links
Electron Probe Laboratory, Hebrew University of Jerusalem - web page of a lab describing their modern EPMA
X-rays
Microscopes
Analytical chemistry
Scientific techniques | Electron microprobe | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,671 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Measuring instruments",
"Microscopes",
"nan",
"Microscopy"
] |
1,633,173 | https://en.wikipedia.org/wiki/Ecological%20engineering | Ecological engineering uses ecology and engineering to predict, design, construct or restore, and manage ecosystems that integrate "human society with its natural environment for the benefit of both".
Origins, key concepts, definitions, and applications
Ecological engineering emerged as a new idea in the early 1960s, but its definition has taken several decades to refine. Its implementation is still undergoing adjustment, and its broader recognition as a new paradigm is relatively recent. Ecological engineering was introduced by Howard Odum and others as utilizing natural energy sources as the predominant input to manipulate and control environmental systems. The origins of ecological engineering are in Odum's work with ecological modeling and ecosystem simulation to capture holistic macro-patterns of energy and material flows affecting the efficient use of resources.
Mitsch and Jorgensen summarized five basic concepts that differentiate ecological engineering from other approaches to addressing problems to benefit society and nature: 1) it is based on the self-designing capacity of ecosystems; 2) it can be the field (or acid) test of ecological theories; 3) it relies on system approaches; 4) it conserves non-renewable energy sources; and 5) it supports ecosystem and biological conservation.
Mitsch and Jorgensen were the first to define ecological engineering as designing societal services such that they benefit society and nature, and later noted the design should be systems based, sustainable, and integrate society with its natural environment.
Bergen et al. defined ecological engineering as: 1) utilizing ecological science and theory; 2) applying to all types of ecosystems; 3) adapting engineering design methods; and 4) acknowledging a guiding value system.
Barrett (1999) offers a more literal definition of the term: "the design, construction, operation and management (that is, engineering) of landscape/aquatic structures and associated plant and animal communities (that is, ecosystems) to benefit humanity and, often, nature." Barrett continues: "other terms with equivalent or similar meanings include ecotechnology and two terms most often used in the erosion control field: soil bioengineering and biotechnical engineering. However, ecological engineering should not be confused with 'biotechnology' when describing genetic engineering at the cellular level, or 'bioengineering' meaning construction of artificial body parts."
The applications in ecological engineering can be classified into 3 spatial scales: 1) mesocosms (~0.1 to hundreds of meters); 2) ecosystems (~one to tens of km); and 3) regional systems (>tens of km). The complexity of the design likely increases with the spatial scale. Applications are increasing in breadth and depth, and likely impacting the field's definition, as more opportunities to design and use ecosystems as interfaces between society and nature are explored. Implementation of ecological engineering has focused on the creation or restoration of ecosystems, from degraded wetlands to multi-celled tubs and greenhouses that integrate microbial, fish, and plant services to process human wastewater into products such as fertilizers, flowers, and drinking water. Applications of ecological engineering in cities have emerged from collaboration with other fields such as landscape architecture, urban planning, and urban horticulture, to address human health and biodiversity, as targeted by the UN Sustainable Development Goals, with holistic projects such as stormwater management. Applications of ecological engineering in rural landscapes have included wetland treatment and community reforestation through traditional ecological knowledge. Permaculture is an example of broader applications that have emerged as distinct disciplines from ecological engineering, where David Holmgren cites the influence of Howard Odum in development of permaculture.
Design guidelines, functional classes, and design principles
Ecological engineering design will combine systems ecology with the process of engineering design. Engineering design typically involves problem formulation (goal), problem analysis (constraints), alternative solutions search, decision among alternatives, and specification of a complete solution. A temporal design framework is provided by Matlock et al., stating the design solutions are considered in ecological time. In selecting between alternatives, the design should incorporate ecological economics in design evaluation and acknowledge a guiding value system which promotes biological conservation, benefiting society and nature.
Ecological engineering utilizes systems ecology with engineering design to obtain a holistic view of the interactions within and between society and nature. Ecosystem simulation with Energy Systems Language (also known as energy circuit language or energese) by Howard Odum is one illustration of this systems ecology approach. This holistic model development and simulation defines the system of interest, identifies the system's boundary, and diagrams how energy and material moves into, within, and out of, a system in order to identify how to use renewable resources through ecosystem processes and increase sustainability. The system it describes is a collection (i.e., group) of components (i.e., parts), connected by some type of interaction or interrelationship, that collectively responds to some stimulus or demand and fulfills some specific purpose or function. By understanding systems ecology the ecological engineer can more efficiently design with ecosystem components and processes within the design, utilize renewable energy and resources, and increase sustainability.
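As a hedged illustration of the kind of storage-and-flow simulation such diagrams imply, the sketch below integrates a renewable source driving a producer storage that feeds a consumer storage; all names, rate constants, and the seasonal forcing are invented for illustration and are not taken from any published Energy Systems Language model.

```python
# Minimal sketch of a storage-and-flow simulation in the spirit of Energy
# Systems Language: a seasonal renewable source drives a producer storage,
# which feeds a consumer storage; both lose energy to respiration. All rate
# constants and the forcing function are illustrative assumptions.

import math

def simulate(days=3650, dt=0.1):
    producer, consumer = 10.0, 2.0                 # energy storages (arbitrary units)
    for step in range(int(days / dt)):
        t = step * dt
        source = 1.0 + 0.5 * math.sin(2 * math.pi * t / 365)   # seasonal renewable input
        production = 0.8 * source                  # inflow captured by producers
        grazing = 0.02 * producer                  # donor-controlled flow to consumers
        producer += dt * (production - grazing - 0.05 * producer)   # minus respiration loss
        consumer += dt * (grazing - 0.10 * consumer)                # minus respiration loss
    return producer, consumer

print(simulate())   # storages settle near a seasonally varying steady state
```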
Mitsch and Jorgensen identified five Functional Classes for ecological engineering designs:
Ecosystem utilized to reduce/solve pollution problem. Example: phytoremediation, wastewater wetland, and bioretention of stormwater to filter excess nutrients and metals pollution
Ecosystem imitated or copied to address resource problem. Example: forest restoration, replacement wetlands, and installing street side rain gardens to extend canopy cover to optimize residential and urban cooling
Ecosystem recovered after disturbance. Example: mine land restoration, lake restoration, and channel aquatic restoration with mature riparian corridors
Ecosystem modified in ecologically sound way. Example: selective timber harvest, biomanipulation, and introduction of predator fish to reduce planktivorous fish, increase zooplankton, consume algae or phytoplankton, and clarify the water.
Ecosystems used for benefit without destroying balance. Example: sustainable agro-ecosystems, multispecies aquaculture, and introducing agroforestry plots into residential property to generate primary production at multiple vertical levels.
Mitsch and Jorgensen identified 19 Design Principles for ecological engineering, yet not all are expected to contribute to any single design:
Ecosystem structure & function are determined by forcing functions of the system;
Energy inputs to the ecosystems and available storage of the ecosystem is limited;
Ecosystems are open and dissipative systems (not thermodynamic balance of energy, matter, entropy, but spontaneous appearance of complex, chaotic structure);
Attention to a limited number of governing/controlling factors is most strategic in preventing pollution or restoring ecosystems;
Ecosystem have some homeostatic capability that results in smoothing out and depressing the effects of strongly variable inputs;
Match recycling pathways to the rates of ecosystems and reduce pollution effects;
Design for pulsing systems wherever possible;
Ecosystems are self-designing systems;
Processes of ecosystems have characteristic time and space scales that should be accounted for in environmental management;
Biodiversity should be championed to maintain an ecosystem's self design capacity;
Ecotones, transition zones, are as important for ecosystems as membranes for cells;
Coupling between ecosystems should be utilized wherever possible;
The components of an ecosystem are interconnected, interrelated, and form a network; consider direct as well as indirect efforts of ecosystem development;
An ecosystem has a history of development;
Ecosystems and species are most vulnerable at their geographical edges;
Ecosystems are hierarchical systems and are parts of a larger landscape;
Physical and biological processes are interactive, it is important to know both physical and biological interactions and to interpret them properly;
Eco-technology requires a holistic approach that integrates all interacting parts and processes as far as possible;
Information in ecosystems is stored in structures.
Mitsch and Jorgensen identified the following considerations prior to implementing an ecological engineering design:
Create a conceptual model to determine the parts of nature connected to the project;
Implement a computer model to simulate the impacts and uncertainty of the project;
Optimize the project to reduce uncertainty and increase beneficial impacts.
Relationship to other engineering disciplines
The field of Ecological Engineering is closely related to the fields of environmental engineering and civil engineering. The three broadly overlap in the area of water resources engineering, particularly the treatment and management of stormwater and wastewater. While the three disciplines of engineering are closely related to one another, there are distinct areas of expertise within each field.
Ecological engineering is primarily focused on the natural environment and natural infrastructure, emphasizing the mediation of the relationship between people and planet. In complementary disciplines, civil engineering is primarily focused on built infrastructure and public works while environmental engineering focuses on the protection of public and environmental health through the treatment and management of waste streams.
Academic curriculum (colleges)
An academic curriculum was proposed for ecological engineering in 2001. Key elements of the suggested curriculum are: environmental engineering; systems ecology; restoration ecology; ecological modeling; quantitative ecology; economics of ecological engineering, and technical electives. Complementing this set of courses were prerequisite courses in physical, biological, and chemical subject areas, and integrated design experiences. According to Matlock et al., the design should identify constraints, characterize solutions in ecological time, and incorporate ecological economics in design evaluation. Economics of ecological engineering has been demonstrated using energy principles for a wetland, and using nutrient valuation for a dairy farm. With these principles in mind, the world's first B.S. Ecological Engineering program was formalized in 2009 at Oregon State University.
In 2024, the US Accreditation Board for Engineering and Technology, Inc. (ABET) published criteria for accreditation of Ecological Engineering program for the first time. To be accredited, B.S. Ecological Engineering programs must include:
mathematics through differential equations, probability and statistics, calculus-based physics, and college-level chemistry;
earth science, fluid mechanics, hydraulics, and hydrology.
biological and advanced ecological sciences that focus on multi-organism self-sustaining systems at a range of scales, systems ecology, ecosystem services, and ecological modeling;
material and energy balances; fate and transport of substances in and between air, water, and soil; thermodynamics of living systems; and
applications of ecological principles to engineering design that include considerations of climate, species diversity, self-organization, uncertainty, sustainability, resilience, interactions between ecological and social systems, and system-scale impacts and benefits.
See also
Afforestation
Agroecology
Agroforestry
Analog forestry
Biomass (ecology)
Buffer strip
Constructed wetland
Energy-efficient landscaping
Environmental engineering
Forest farming
Forest gardening
Great Green Wall
Great Plains Shelterbelt (1934- )
Great Plan for the Transformation of Nature - an example of applied ecological engineering in the 1940s and 1950s
Hedgerow
Home gardens
Human ecology
Macro-engineering
Sand fence
Seawater greenhouse
Sustainable agriculture
Terra preta
Three-North Shelter Forest Program
Wildcrafting
Windbreak
Literature
Howard T. Odum (1963), "Man and Ecosystem" Proceedings, Lockwood Conference on the Suburban Forest and Ecology, in: Bulletin Connecticut Agric. Station.
W.J. Mitsch (1993), Ecological engineering—"a cooperative role with the planetary life–support systems. Environmental Science & Technology 27:438-445.
H.D. van Bohemen (2004), Ecological Engineering and Civil Engineering works, Doctoral thesis TU Delft, The Netherlands.
References
External links
What is "ecological engineering"? Webtext, Ecological Engineering Group, 2007.
Ecological Engineering Student Society Website, EESS, Oregon State University, 2011.
Ecological Engineering webtext by Howard T. Odum Center for Wetlands at the University of Florida, 2007.
Organizations
American Ecological Engineering Society, homepage.
Ecological Engineering Student Society Website, EESS, Oregon State University, 2011.
American Society of Professional Wetland Engineers, homepage, wiki.
Ecological Engineering Group, homepage.
International Ecological Engineering Society homepage.
Scientific journals
Ecological Engineering since 1992, with a general description of the field.
Landscape and Ecological Engineering since 2005.
Journal of Ecological Engineering Design Officially launched in 2021, this journal offers a diamond open access format (free to the reader, free to the authors). This is the official journal of the American Ecological Engineering Society with production support from the University of Vermont Libraries.
Ecological restoration
Environmental terminology
Environmental engineering
Environmental social science
Engineering disciplines
Climate change policy | Ecological engineering | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,470 | [
"Ecological restoration",
"Chemical engineering",
"Civil engineering",
"nan",
"Environmental engineering",
"Environmental social science"
] |
1,633,222 | https://en.wikipedia.org/wiki/Ferrochrome | Ferrochrome or ferrochromium (FeCr) is a type of ferroalloy, that is, an alloy of chromium and iron, generally containing 50 to 70% chromium by weight.
Ferrochrome is produced by electric arc carbothermic reduction of chromite. Most of the global output is produced in South Africa, Kazakhstan and India, which have large domestic chromite resources. Increasing amounts are coming from Russia and China. Production of steel, especially that of stainless steel with chromium content of 10 to 20%, is the largest consumer and the main application of ferrochrome.
Usage
Over 80% of the world's ferrochrome is utilised in the production of stainless steel. In 2006, 28,000,000 tons of stainless steel were produced.
Stainless steel depends on chromium for its appearance and resistance to corrosion. Average chrome content in stainless steel is approx. 18%. Ferrochrome is also used to add chromium to carbon steel. FeCr from South Africa, known as "charge chrome" and produced from a Cr-containing ore with a low carbon content, is most commonly used in stainless steel production. Alternatively, high carbon FeCr produced from high-grade ore found in Kazakhstan (among other places) is more commonly used in specialist applications such as engineering steels, where a high Cr/Fe ratio and minimum levels of other elements (sulfur, phosphorus, titanium etc.) are important and production of finished metals takes place in small electric arc furnaces rather than large-scale blast furnaces. In the past, ferrochrome alloys were used in the formulation of Type III Compact Cassettes.
Production
Ferrochrome production is essentially a carbothermic reduction operation taking place at high temperatures. Chromite (an oxide of Cr and Fe) is reduced by coal and coke to form the iron-chromium alloy. The heat for this reaction can come from several sources, but typically from the electric arc formed between the tips of electrodes in the bottom of the furnace and the furnace hearth. This arc creates extremely high temperatures. In the process of smelting, huge amounts of electricity are consumed, making production very expensive in countries where power costs are high.
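A simplified overall reaction for this carbothermic reduction, treating chromite as stoichiometric FeCr2O4 and the reductant as pure carbon, can be written as below; actual furnace chemistry is more complex, proceeding through chromium carbides and a silicate slag.

```latex
% Simplified overall carbothermic reduction of chromite:
\[
  \mathrm{FeCr_2O_4} + 4\,\mathrm{C} \;\longrightarrow\; \mathrm{Fe} + 2\,\mathrm{Cr} + 4\,\mathrm{CO}\uparrow
\]
```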
Tapping of the material from the furnace takes place intermittently. When enough smelted ferrochrome has accumulated in the furnace hearth, the tap hole is drilled open and a stream of molten metal and slag rushes down a trough into a chill or ladle. Ferrochrome solidifies in large castings which are crushed for sale or further processed.
Ferrochrome is generally classified by the amount of carbon and chrome it contains. The vast majority of FeCr produced is "charge chrome" from South Africa, with high carbon being the second largest segment followed by the smaller sectors of low carbon and intermediate carbon material.
Trading
In March 2021, the Shanghai Futures Exchange decided that it would list ferrochrome futures at some unknown date. At the time, ferrochrome spot 6–8% C, basis 50% Cr, ddp China was trading at $1,336–1,382. In January 2021 the spot price had been 25% lower.
References
Chromium alloys
Ferroalloys | Ferrochrome | [
"Chemistry"
] | 659 | [
"Alloys",
"Chromium alloys"
] |
1,633,227 | https://en.wikipedia.org/wiki/Flow%20net | A flow net is a graphical representation of two-dimensional steady-state groundwater flow through aquifers.
Construction of a flow net is often used for solving groundwater flow problems where the geometry makes analytical solutions impractical. The method is often used in civil engineering, hydrogeology or soil mechanics as a first check for problems of flow under hydraulic structures like dams or sheet pile walls. As such, a grid obtained by drawing a series of equipotential lines, together with the streamlines that cross them at right angles, is called a flow net. The flow net is an important tool for analysing two-dimensional irrotational flow problems. The flow net technique is a graphical solution method.
Basic method
The method consists of filling the flow area with stream and equipotential lines, which are everywhere perpendicular to each other, making a curvilinear grid. Typically there are two surfaces (boundaries) which are at constant values of potential or hydraulic head (upstream and downstream ends), and the other surfaces are no-flow boundaries (i.e., impermeable; for example the bottom of the dam and the top of an impermeable bedrock layer), which define the sides of the outermost streamtubes (see figure 1 for a stereotypical flow net example).
Mathematically, the process of constructing a flow net consists of contouring the two harmonic or analytic functions of potential and stream function. These functions both satisfy the Laplace equation and the contour lines represent lines of constant head (equipotentials) and lines tangent to flowpaths (streamlines). Together, the potential function and the stream function form the complex potential, where the potential is the real part, and the stream function is the imaginary part.
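Written out, the relations described in this paragraph take the following conventional form (a standard statement of the theory, not a formula quoted from the cited sources):

```latex
% Both the potential (phi) and the stream function (psi) satisfy Laplace's equation,
\[
\nabla^{2}\varphi = 0, \qquad \nabla^{2}\psi = 0,
\]
% and together they form the complex potential, with the potential as the
% real part and the stream function as the imaginary part:
\[
w(z) = \varphi(x,y) + i\,\psi(x,y), \qquad z = x + iy .
\]
```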
The construction of a flow net provides an approximate solution to the flow problem, but it can be quite good even for problems with complex geometries by following a few simple rules (initially developed by Philipp Forchheimer around 1900, and later formalized by Arthur Casagrande in 1937) and a little practice:
streamlines and equipotentials meet at right angles (including the boundaries),
diagonals drawn between the cornerpoints of a flow net will meet each other at right angles (useful when near singularities),
streamtubes and drops in equipotential can be halved and should still make squares (useful when squares get very large at the ends),
flow nets often have areas which consist of nearly parallel lines, which produce true squares; start in these areas — working towards areas with complex geometry,
many problems have some symmetry (e.g., radial flow to a well); only a section of the flow net needs to be constructed,
the sizes of the squares should change gradually; transitions are smooth and the curved paths should be roughly elliptical or parabolic in shape.
Example flow nets
The first flow net pictured here (modified from Craig, 1997) illustrates and quantifies the flow which occurs under the dam (flow is assumed to be invariant along the axis of the dam — valid near the middle of the dam); from the pool behind the dam (on the right) to the tailwater downstream from the dam (on the left).
There are 16 green equipotential lines (15 equal drops in hydraulic head) between the 5 m upstream head and the 1 m downstream head (4 m / 15 head drops = 0.267 m head drop between each green line). The blue streamlines (equal changes in the streamfunction between the two no-flow boundaries) show the flowpath taken by water as it moves through the system; the streamlines are everywhere tangent to the flow velocity.
The second flow net pictured here (modified from Ferris, et al., 1962) shows a flow net being used to analyze map-view flow (invariant in the vertical direction), rather than a cross-section. Note that this problem has symmetry, and only the left or right portions of it needed to have been done. To create a flow net to a point sink (a singularity), there must be a recharge boundary nearby to provide water and allow a steady-state flowfield to develop.
Flow net results
Darcy's law describes the flow of water through the flow net. Since the head drops are uniform by construction, the gradient is inversely proportional to the size of the blocks. Big blocks mean there is a low gradient, and therefore low discharge (hydraulic conductivity is assumed constant here).
An equivalent amount of flow is passing through each streamtube (defined by two adjacent blue lines in diagram), therefore narrow streamtubes are located where there is more flow. The smallest squares in a flow net are located at points where the flow is concentrated (in this diagram they are near the tip of the cutoff wall, used to reduce dam underflow), and high flow at the land surface is often what the civil engineer is trying to avoid, being concerned about piping or dam failure.
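A completed flow net is usually turned into a discharge estimate with the relation q = k·H·(Nf/Nd), where Nf is the number of streamtubes, Nd the number of equipotential drops, k the hydraulic conductivity and H the total head loss. The sketch below applies this to figures like those in the dam example above; the conductivity value and the streamtube count are illustrative assumptions, not values from the text:

```python
def flow_net_seepage(k, total_head_loss, n_flow_channels, n_head_drops):
    """Estimate seepage per unit length of structure (m^3/s per m of dam)
    from a hand-drawn flow net: q = k * H * (Nf / Nd)."""
    return k * total_head_loss * (n_flow_channels / n_head_drops)

# Numbers loosely based on the dam example in the text: 4 m of head loss
# over 15 equipotential drops. The hydraulic conductivity and the number
# of streamtubes are assumed values for illustration only.
q = flow_net_seepage(k=1e-6,               # m/s, assumed isotropic conductivity
                     total_head_loss=4.0,  # m (5 m upstream minus 1 m downstream)
                     n_flow_channels=4,    # assumed count of streamtubes
                     n_head_drops=15)      # equal head drops between green lines
print(f"Seepage per metre of dam: {q:.2e} m^3/s")
```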
Singularities
Irregular points (also called singularities) in the flow field occur when streamlines have kinks in them (the derivative doesn't exist at a point). This can happen where the bend is outward (e.g., the bottom of the cutoff wall in the figure above), and there is infinite flux at a point, or where the bend is inward (e.g., the corner just above and to the left of the cutoff wall in the figure above) where the flux is zero.
The second flow net illustrates a well, which is typically represented mathematically as a point source (the well shrinks to zero radius); this is a singularity because the flow is converging to a point, at that point the Laplace equation is not satisfied.
These points are mathematical artifacts of the equation used to solve the real-world problem, and do not actually mean that there is infinite or no flux at points in the subsurface. These types of points often do make other types of solutions (especially numeric) to these problems difficult, while the simple graphical technique handles them nicely.
Extensions to standard flow nets
Typically flow nets are constructed for homogeneous, isotropic porous media experiencing saturated flow to known boundaries. There are extensions to the basic method to allow some of these other cases to be solved:
inhomogeneous aquifer: matching conditions at boundaries between properties
anisotropic aquifer: drawing the flownet in a transformed domain, then scaling the results differently in the principal hydraulic conductivity directions, to return the solution
one boundary is a seepage face: iteratively solving for both the boundary condition and the solution throughout the domain
Although the method is commonly used for these types of groundwater flow problems, it can be used for any problem which is described by the Laplace equation (), for example electric current flow through the earth.
References
Casagrande, A., 1937. Seepage through dams, Journal of New England Water Works, 51, 295-336 (also listed as: Harvard Graduate School Eng. Pub. 209)
Cedergren, Harry R. (1977), Seepage, Drainage, and Flow Nets, Wiley.
Knappett, Jonathan and R.F. Craig, 2012. Craig's Soil Mechanics 8th edition, Spon Press.
Ferris, J.G., D.B. Knowles, R.H. Brown & R.W. Stallman, 1962. Theory of Aquifer Tests. US Geological Survey Water-Supply Paper 1536-E. (available from the USGS website as a pdf)
Harr, M.E., 1962. Groundwater and Seepage, Dover. — mathematical treatment of 2D groundwater flow, classic work on flow nets.
See also
Potential flow (the flow net is a method for solving potential flow problems)
Analytic function (the potential and streamfunction plotted in flow nets are examples of analytic functions)
Hydrology
Soil mechanics
Water
Fluid mechanics | Flow net | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,618 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Soil mechanics",
"Civil engineering",
"Environmental engineering",
"Water",
"Fluid mechanics"
] |
1,633,265 | https://en.wikipedia.org/wiki/RedLightGreen | RedLightGreen was a database of bibliographic descriptions on the Web created by Research Libraries Group (RLG). It used a set of four million records extracted from OCLC's WorldCat database, and was designed to help novice users make selections from the vast bibliographic resources they would encounter in such a large set. RedLightGreen also allowed users to create citations for works found.
Work on RedLightGreen began in 2001 with funding from the Andrew W. Mellon Foundation. It was one of the earliest experiments with the Functional Requirements for Bibliographic Records which provides a structured view of bibliographic data. On 1 July 2006, RLG was merged with OCLC, and it was announced that the RedLightGreen service would be replaced by WorldCat, via Open WorldCat, available at WorldCat.org.
References
External links
Open WorldCat site
Library 2.0
Bibliographic databases and indexes | RedLightGreen | [
"Technology"
] | 196 | [
"Computing stubs",
"World Wide Web stubs"
] |
1,633,290 | https://en.wikipedia.org/wiki/Cofinal%20%28mathematics%29 | In mathematics, a subset of a preordered set is said to be cofinal or frequent in if for every it is possible to find an element in that is "larger than " (explicitly, "larger than " means ).
Cofinal subsets are very important in the theory of directed sets and nets, where “cofinal subnet” is the appropriate generalization of "subsequence". They are also important in order theory, including the theory of cardinal numbers, where the minimum possible cardinality of a cofinal subset of A is referred to as the cofinality of A.
Definitions
Let ≤ be a homogeneous binary relation on a set A.
A subset B ⊆ A is said to be cofinal or frequent with respect to ≤ if it satisfies the following condition:
For every a ∈ A, there exists some b ∈ B such that a ≤ b.
A subset that is not frequent is called infrequent.
This definition is most commonly applied when (A, ≤) is a directed set, which is a preordered set with additional properties.
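For a finite preordered set the definition can be checked directly by brute force. The following sketch is only an illustration of the definition; none of the names or examples come from the article:

```python
def is_cofinal(A, B, leq):
    """Return True if B is a cofinal (frequent) subset of the preordered
    set (A, leq): every a in A has some b in B with a "less than" b."""
    return all(any(leq(a, b) for b in B) for a in A)

A = list(range(1, 13))                       # the set {1, ..., 12}
usual = lambda a, b: a <= b                  # the usual order
divides = lambda a, b: b % a == 0            # the divisibility partial order

print(is_cofinal(A, [12], usual))                     # True: 12 is the greatest element
print(is_cofinal(A, [12], divides))                   # False: 7 does not divide 12
print(is_cofinal(A, [7, 8, 9, 10, 11, 12], divides))  # True: every n <= 12 divides one of these
```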
Final functions
A map f : X → Y between two directed sets is said to be final if the image f(X) of X is a cofinal subset of Y.
Coinitial subsets
A subset B ⊆ A is said to be coinitial (or dense in the sense of forcing) if it satisfies the following condition:
For every a ∈ A, there exists some b ∈ B such that b ≤ a.
This is the order-theoretic dual to the notion of cofinal subset.
Cofinal (respectively coinitial) subsets are precisely the dense sets with respect to the right (respectively left) order topology.
Properties
The cofinal relation over partially ordered sets ("posets") is reflexive: every poset is cofinal in itself. It is also transitive: if B is a cofinal subset of a poset A, and C is a cofinal subset of B (with the partial ordering of A applied to B), then C is also a cofinal subset of A.
For a partially ordered set with maximal elements, every cofinal subset must contain all maximal elements, otherwise a maximal element that is not in the subset would fail to be less than or equal to any element of the subset, violating the definition of cofinal. For a partially ordered set with a greatest element, a subset is cofinal if and only if it contains that greatest element (this follows, since a greatest element is necessarily a maximal element). Partially ordered sets without greatest element or maximal elements admit disjoint cofinal subsets. For example, the even and odd natural numbers form disjoint cofinal subsets of the set of all natural numbers.
If a partially ordered set A admits a totally ordered cofinal subset, then we can find a subset B that is well-ordered and cofinal in A.
If (A, ≤) is a directed set and B is a cofinal subset of A, then (B, ≤) is also a directed set.
Examples and sufficient conditions
Any superset of a cofinal subset is itself cofinal.
If (A, ≤) is a directed set and if some union of (one or more) finitely many subsets S1 ∪ ⋯ ∪ Sn is cofinal, then at least one of the sets Si is cofinal. This property is not true in general without the hypothesis that (A, ≤) is directed.
Subset relations and neighborhood bases
Let X be a topological space and let 𝒩(x) denote the neighborhood filter at a point x ∈ X.
The superset relation ⊇ is a partial order on 𝒩(x): explicitly, for any sets S and T, declare that S ≤ T if and only if S ⊇ T (so in essence, ≤ is equal to ⊇).
A subset ℬ ⊆ 𝒩(x) is called a neighborhood base at x if (and only if) ℬ is a cofinal subset of (𝒩(x), ⊇);
that is, if and only if for every N ∈ 𝒩(x) there exists some B ∈ ℬ such that N ≤ B (i.e. such that B ⊆ N).
Cofinal subsets of the real numbers
For any x ∈ ℝ, the interval (x, ∞) is a cofinal subset of (ℝ, ≤), but it is not a cofinal subset of (ℝ, ≥).
The set ℕ of natural numbers (consisting of positive integers) is a cofinal subset of (ℝ, ≤), but this is not true of the set −ℕ of negative integers.
Similarly, for any y ∈ ℝ, the interval (−∞, y) is a cofinal subset of (ℝ, ≥), but it is not a cofinal subset of (ℝ, ≤).
The set −ℕ of negative integers is a cofinal subset of (ℝ, ≥), but this is not true of the natural numbers ℕ.
The set ℤ of all integers is a cofinal subset of (ℝ, ≤) and also a cofinal subset of (ℝ, ≥); the same is true of the set ℚ of rational numbers.
Cofinal set of subsets
A particular but important case is given if E is a subset of the power set ℘(A) of some set A, ordered by reverse inclusion ⊇. Given this ordering of E, a subset B ⊆ E is cofinal in E if for every T ∈ E there is an S ∈ B such that T ⊇ S.
For example, let G be a group and let E be the set of normal subgroups of finite index. The profinite completion of G is defined to be the inverse limit of the inverse system of finite quotients of G (which are parametrized by the set E).
In this situation, every cofinal subset of E is sufficient to construct and describe the profinite completion of G.
See also
Upper set – a subset U of a partially ordered set that contains every element y for which there is an x ∈ U with x ≤ y
References
Order theory | Cofinal (mathematics) | [
"Mathematics"
] | 991 | [
"Order theory"
] |
1,633,368 | https://en.wikipedia.org/wiki/Hensel%27s%20lemma | In mathematics, Hensel's lemma, also known as Hensel's lifting lemma, named after Kurt Hensel, is a result in modular arithmetic, stating that if a univariate polynomial has a simple root modulo a prime number , then this root can be lifted to a unique root modulo any higher power of . More generally, if a polynomial factors modulo into two coprime polynomials, this factorization can be lifted to a factorization modulo any higher power of (the case of roots corresponds to the case of degree for one of the factors).
By passing to the "limit" (in fact this is an inverse limit) when the power of tends to infinity, it follows that a root or a factorization modulo can be lifted to a root or a factorization over the -adic integers.
These results have been widely generalized, under the same name, to the case of polynomials over an arbitrary commutative ring, where is replaced by an ideal, and "coprime polynomials" means "polynomials that generate an ideal containing ".
Hensel's lemma is fundamental in -adic analysis, a branch of analytic number theory.
The proof of Hensel's lemma is constructive, and leads to an efficient algorithm for Hensel lifting, which is fundamental for factoring polynomials, and gives the most efficient known algorithm for exact linear algebra over the rational numbers.
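As a rough illustration of the lifting algorithm for a simple root (essentially the Newton-style recursion described later in the article), the sketch below lifts a root of f modulo p to a root modulo p^k. It assumes f(r) ≡ 0 (mod p) and f'(r) ≢ 0 (mod p), and it relies on Python's built-in modular inverse; it is not a transcription of any particular published implementation:

```python
def hensel_lift(coeffs, p, r, k):
    """Lift a simple root r of f modulo p to a root modulo p**k.
    coeffs: integer coefficients of f, lowest degree first.
    Requires f(r) == 0 (mod p) and f'(r) != 0 (mod p)."""
    f  = lambda x, m: sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m
    df = lambda x, m: sum(i * c * pow(x, i - 1, m) for i, c in enumerate(coeffs) if i) % m

    modulus = p
    root = r % p
    for _ in range(k - 1):
        modulus *= p
        # Newton step: root <- root - f(root) * f'(root)^(-1)  (mod modulus).
        # pow(x, -1, m) needs Python 3.8+ and x coprime to m, which holds here.
        inv = pow(df(root, modulus), -1, modulus)
        root = (root - f(root, modulus) * inv) % modulus
    return root

# Example: lift the root 3 of x^2 - 2 modulo 7 up to modulo 7**4.
print(hensel_lift([-2, 0, 1], p=7, r=3, k=4))   # a square root of 2 in Z/7^4
```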
Modular reduction and lifting
Hensel's original lemma concerns the relation between polynomial factorization over the integers and over the integers modulo a prime number and its powers. It can be straightforwardly extended to the case where the integers are replaced by any commutative ring, and is replaced by any maximal ideal (indeed, the maximal ideals of have the form where is a prime number).
Making this precise requires a generalization of the usual modular arithmetic, and so it is useful to define accurately the terminology that is commonly used in this context.
Let be a commutative ring, and an ideal of . Reduction modulo refers to the replacement of every element of by its image under the canonical map For example, if is a polynomial with coefficients in , its reduction modulo , denoted is the polynomial in obtained by replacing the coefficients of by their image in Two polynomials and in are congruent modulo , denoted if they have the same coefficients modulo , that is if If a factorization of modulo consists in two (or more) polynomials in such that
The lifting process is the inverse of reduction. That is, given objects depending on elements of the lifting process replaces these elements by elements of (or of for some ) that maps to them in a way that keeps the properties of the objects.
For example, given a polynomial and a factorization modulo expressed as lifting this factorization modulo consists of finding polynomials such that and Hensel's lemma asserts that such a lifting is always possible under mild conditions; see next section.
Statement
Originally, Hensel's lemma was stated (and proved) for lifting a factorization modulo a prime number of a polynomial over the integers to a factorization modulo any power of and to a factorization over the -adic integers. This can be generalized easily, with the same proof to the case where the integers are replaced by any commutative ring, the prime number is replaced by a maximal ideal, and the -adic integers are replaced by the completion with respect to the maximal ideal. It is this generalization, which is also widely used, that is presented here.
Let be a maximal ideal of a commutative ring , and
be a polynomial in with a leading coefficient not in
Since is a maximal ideal, the quotient ring is a field, and is a principal ideal domain, and, in particular, a unique factorization domain, which means that every nonzero polynomial in can be factorized in a unique way as the product of a nonzero element of and irreducible polynomials that are monic (that is, their leading coefficients are 1).
Hensel's lemma asserts that every factorization of modulo into coprime polynomials can be lifted in a unique way into a factorization modulo for every .
More precisely, with the above hypotheses, if where and are monic and coprime modulo then, for every positive integer there are monic polynomials and such that
and and are unique (with these properties) modulo
Lifting simple roots
An important special case is when In this case the coprimality hypothesis means that is a simple root of This gives the following special case of Hensel's lemma, which is often also called Hensel's lemma.
With above hypotheses and notations, if is a simple root of then can be lifted in a unique way to a simple root of for every positive integer . Explicitly, for every positive integer , there is a unique such that and is a simple root of
Lifting to adic completion
The fact that one can lift to for every positive integer suggests to "pass to the limit" when tends to the infinity. This was one of the main motivations for introducing -adic integers.
Given a maximal ideal of a commutative ring , the powers of form a basis of open neighborhoods for a topology on , which is called the -adic topology. The completion of this topology can be identified with the completion of the local ring and with the inverse limit This completion is a complete local ring, generally denoted When is the ring of the integers, and where is a prime number, this completion is the ring of -adic integers
The definition of the completion as an inverse limit, and the above statement of Hensel's lemma imply that every factorization into pairwise coprime polynomials modulo of a polynomial can be uniquely lifted to a factorization of the image of in Similarly, every simple root of modulo can be lifted to a simple root of the image of in
Proof
Hensel's lemma is generally proved incrementally by lifting a factorization over to either a factorization over (Linear lifting), or a factorization over (Quadratic lifting).
The main ingredient of the proof is that coprime polynomials over a field satisfy Bézout's identity. That is, if and are coprime univariate polynomials over a field (here ), there are polynomials and such that and
Bézout's identity allows defining coprime polynomials and proving Hensel's lemma, even if the ideal is not maximal. Therefore, in the following proofs, one starts from a commutative ring , an ideal , a polynomial that has a leading coefficient that is invertible modulo (that is its image in is a unit in ), and factorization of modulo or modulo a power of , such that the factors satisfy a Bézout's identity modulo . In these proofs, means
Linear lifting
Let be an ideal of a commutative ring , and be a univariate polynomial with coefficients in that has a leading coefficient that is invertible modulo (that is, the image of in is a unit in ).
Suppose that for some positive integer there is a factorization
such that and are monic polynomials that are coprime modulo , in the sense that there exist such that Then, there are polynomials such that and
Under these conditions, and are unique modulo
Moreover, and satisfy the same Bézout's identity as and , that is, This follows immediately from the preceding assertions, but is needed to apply iteratively the result with increasing values of .
The proof that follows is written for computing and by using only polynomials with coefficients in or When and this allows manipulating only integers modulo .
Proof: By hypothesis, is invertible modulo . This means that there exists and such that
Let of degree less than such that
(One may choose but other choices may lead to simpler computations. For example, if and it is possible and better to choose where the coefficients of are integers in the interval
As is monic, the Euclidean division of by is defined, and provides and such that and Moreover, both and are in Similarly, let with and
One has Indeed, one has
As is monic, the degree modulo of can be less than only if
Thus, considering congruences modulo one has
So, the existence assertion is verified with
Uniqueness
Let , , and as a in the preceding section. Let
be a factorization into coprime polynomials (in the above sense), such The application of linear lifting for shows the existence of and such that and
The polynomials and are uniquely defined modulo This means that, if another pair satisfies the same conditions, then one has
Proof: Since a congruence modulo implies the same concruence modulo one can proceed by induction and suppose that the uniqueness has been proved for , the case being trivial. That is, one can suppose that
By hypothesis, has
and thus
By induction hypothesis, the second term of the latter sum belongs to and the same is thus true for the first term. As is invertible modulo , there exist and such that Thus
using the induction hypothesis again.
The coprimality modulo implies the existence of such that Using the induction hypothesis once more, one gets
Thus one has a polynomial of degree less than that is congruent modulo to the product of the monic polynomial and another polynomial . This is possible only if and implies Similarly, is also in and this proves the uniqueness.
Quadratic lifting
Linear lifting allows lifting a factorization modulo to a factorization modulo Quadratic lifting allows lifting directly to a factorization modulo at the cost of lifting also the Bézout's identity and of computing modulo instead of modulo (if one uses the above description of linear lifting).
For lifting up to modulo for large one can use either method. If, say, a factorization modulo requires steps of linear lifting or only steps of quadratic lifting. However, in the latter case the size of the coefficients that have to be manipulated increase during the computation. This implies that the best lifting method depends on the context (value of , nature of , multiplication algorithm that is used, hardware specificities, etc.).
Quadratic lifting is based on the following property.
Suppose that for some positive integer there is a factorization
such that and are monic polynomials that are coprime modulo , in the sense that there exist such that Then, there are polynomials such that and
Moreover, and satisfy a Bézout's identity of the form
(This is required for allowing iterations of quadratic lifting.)
Proof: The first assertion is exactly that of linear lifting applied with to the ideal instead of
Let One has
where
Setting and one gets
which proves the second assertion.
Explicit example
Let
Modulo 2, Hensel's lemma cannot be applied since the reduction of f(x) modulo 2 is simply
with 6 factors not being relatively prime to each other. By Eisenstein's criterion, however, one can conclude that the polynomial is irreducible in
Over , on the other hand, one has
where is the square root of 2 in . As 4 is not a cube in these two factors are irreducible over . Hence the complete factorization of in and is
where is a square root of 2 in that can be obtained by lifting the above factorization.
Finally, in the polynomial splits into
with all factors relatively prime to each other, so that in and there are 6 factors with the (non-rational) 727-adic integers
Using derivatives for lifting roots
Let be a polynomial with integer (or -adic integer) coefficients, and let m, k be positive integers such that m ≤ k. If r is an integer such that
then, for every there exists an integer s such that
Furthermore, this s is unique modulo pk+m, and can be computed explicitly as the integer such that
where is an integer satisfying
Note that so that the condition is met. As an aside, if , then 0, 1, or several s may exist (see Hensel Lifting below).
Derivation
We use the Taylor expansion of f around r to write:
From we see that s − r = tpk for some integer t. Let
For we have:
The assumption that is not divisible by p ensures that has an inverse mod which is necessarily unique. Hence a solution for t exists uniquely modulo and s exists uniquely modulo
Observations
Criterion for irreducible polynomials
Using the above hypotheses, if we consider an irreducible polynomial
such that , then
In particular, for , we find in
but , hence the polynomial cannot be irreducible. Whereas in we have both values agreeing, meaning the polynomial could be irreducible. In order to determine irreducibility, the Newton polygon must be employed.
Frobenius
Note that given an the Frobenius endomorphism gives a nonzero polynomial that has zero derivative
hence the pth roots of do not exist in . For , this implies that cannot contain the root of unity .
Roots of unity
Although the pth roots of unity are not contained in , there are solutions of . Note that
is never zero, so if there exists a solution, it necessarily lifts to . Because the Frobenius gives all of the non-zero elements are solutions. In fact, these are the only roots of unity contained in
Hensel lifting
Using the lemma, one can "lift" a root r of the polynomial f modulo p^k to a new root s modulo p^{k+1} such that r ≡ s mod p^k (by taking m = 1; taking larger m follows by induction). In fact, a root modulo p^{k+1} is also a root modulo p^k, so the roots modulo p^{k+1} are precisely the liftings of roots modulo p^k. The new root s is congruent to r modulo p, so the new root also satisfies f'(s) ≡ f'(r) ≢ 0 (mod p). So the lifting can be repeated, and starting from a solution r_k of f(x) ≡ 0 (mod p^k) we can derive a sequence of solutions r_{k+1}, r_{k+2}, ... of the same congruence for successively higher powers of p, provided that f'(r_k) ≢ 0 (mod p) for the initial root r_k. This also shows that f has the same number of roots mod p^k as mod p^{k+1}, mod p^{k+2}, or any other higher power of p, provided that the roots of f mod p^k are all simple.
What happens to this process if r is not a simple root mod p? Suppose that f(r) ≡ 0 (mod p^k) and f'(r) ≡ 0 (mod p).
Then s ≡ r (mod p^k) implies f(s) ≡ f(r) (mod p^{k+1}). That is, f(r + tp^k) ≡ f(r) (mod p^{k+1}) for all integers t. Therefore, we have two cases:
If f(r) ≢ 0 (mod p^{k+1}) then there is no lifting of r to a root of f(x) modulo p^{k+1}.
If f(r) ≡ 0 (mod p^{k+1}) then every lifting of r to modulus p^{k+1} is a root of f(x) modulo p^{k+1}.
Example. To see both cases we examine two different polynomials with :
and . Then and We have which means that no lifting of 1 to modulus 4 is a root of f(x) modulo 4.
and . Then and However, since we can lift our solution to modulus 4 and both lifts (i.e. 1, 3) are solutions. The derivative is still 0 modulo 2, so a priori we don't know whether we can lift them to modulo 8, but in fact we can, since g(1) is 0 mod 8 and g(3) is 0 mod 8, giving solutions at 1, 3, 5, and 7 mod 8. Since of these only g(1) and g(7) are 0 mod 16 we can lift only 1 and 7 to modulo 16, giving 1, 7, 9, and 15 mod 16. Of these, only 7 and 9 give , so these can be raised giving 7, 9, 23, and 25 mod 32. It turns out that for every integer , there are four liftings of 1 mod 2 to a root of .
Hensel's lemma for p-adic numbers
In the -adic numbers, where we can make sense of rational numbers modulo powers of p as long as the denominator is not a multiple of p, the recursion from rk (roots mod pk) to rk+1 (roots mod pk+1) can be expressed in a much more intuitive way. Instead of choosing t to be an(y) integer which solves the congruence
let t be the rational number (the pk here is not really a denominator since f(rk) is divisible by pk):
Then set
This fraction may not be an integer, but it is a -adic integer, and the sequence of numbers rk converges in the -adic integers to a root of f(x) = 0. Moreover, the displayed recursive formula for the (new) number rk+1 in terms of rk is precisely Newton's method for finding roots to equations in the real numbers.
By working directly in the -adics and using the -adic absolute value, there is a version of Hensel's lemma which can be applied even if we start with a solution of f(a) ≡ 0 mod p such that We just need to make sure the number is not exactly 0. This more general version is as follows: if there is an integer a which satisfies:
then there is a unique -adic integer b such f(b) = 0 and The construction of b amounts to showing that the recursion from Newton's method with initial value a converges in the -adics and we let b be the limit. The uniqueness of b as a root fitting the condition needs additional work.
The statement of Hensel's lemma given above (taking ) is a special case of this more general version, since the conditions that f(a) ≡ 0 mod p and say that and
Examples
Suppose that p is an odd prime and a is a non-zero quadratic residue modulo p. Then Hensel's lemma implies that a has a square root in the ring of p-adic integers ℤ_p. Indeed, let f(x) = x² − a. If r is a square root of a modulo p then f(r) ≡ 0 (mod p) and f'(r) = 2r ≢ 0 (mod p),
where the second condition is dependent on the fact that p is odd. The basic version of Hensel's lemma tells us that starting from r1 = r we can recursively construct a sequence of integers r_k such that r_{k+1} ≡ r_k (mod p^k) and r_k² ≡ a (mod p^k).
This sequence converges to some p-adic integer b which satisfies b² = a. In fact, b is the unique square root of a in ℤ_p congruent to r1 modulo p. Conversely, if a is a perfect square in ℤ_p and it is not divisible by p then it is a nonzero quadratic residue mod p. Note that the quadratic reciprocity law allows one to easily test whether a is a nonzero quadratic residue mod p, thus we get a practical way to determine which p-adic numbers (for p odd) have a p-adic square root, and it can be extended to cover the case p = 2 using the more general version of Hensel's lemma (an example with 2-adic square roots of 17 is given later).
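The residue test just mentioned is easy to carry out in practice; Euler's criterion, shown in this sketch, is one direct way to do it. The function and its name are illustrative, not taken from any cited source, and p is assumed to be an odd prime not dividing a:

```python
def has_padic_sqrt(a, p):
    """For an odd prime p and an integer a not divisible by p, decide whether
    a has a square root in the p-adic integers, i.e. whether a is a nonzero
    quadratic residue mod p (Euler's criterion: a^((p-1)/2) == 1 mod p)."""
    assert p % 2 == 1 and a % p != 0
    return pow(a, (p - 1) // 2, p) == 1

print(has_padic_sqrt(2, 7))   # True: 3^2 = 2 (mod 7), matching the example below
print(has_padic_sqrt(3, 7))   # False: 3 is not a square mod 7
```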
To make the discussion above more explicit, let us find a "square root of 2" (the solution to ) in the 7-adic integers. Modulo 7 one solution is 3 (we could also take 4), so we set . Hensel's lemma then allows us to find as follows:
Based on which the expression
turns into:
which implies Now:
And sure enough, (If we had used the Newton method recursion directly in the 7-adics, then and )
We can continue and find r4 = 2166. Each time we carry out the calculation (that is, for each successive value of k), one more base-7 digit is added for the next higher power of 7. In the 7-adic integers this sequence converges, and the limit is a square root of 2 in ℤ₇ which has initial 7-adic expansion 3 + 7 + 2·7² + 6·7³ + ⋯.
If we started with the initial choice r1 = 4 then Hensel's lemma would produce a square root of 2 in ℤ₇ which is congruent to 4 (mod 7) instead of 3 (mod 7), and in fact this second square root would be the negative of the first square root (which is consistent with 4 ≡ −3 mod 7).
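The successive approximations in this example can be generated mechanically. The sketch below reuses the same Newton-style lifting idea to print the base-7 digits of the square root of 2 that starts from 3 mod 7; it is an illustration, not a transcription of the computation above, and the function name is an assumption:

```python
def padic_sqrt_digits(a, p, start, n_digits):
    """Base-p digits (lowest first) of the square root of a in the p-adic
    integers obtained by lifting the initial root `start` of x^2 - a (mod p)."""
    modulus, root = p, start % p
    digits = [root]
    for _ in range(n_digits - 1):
        prev_modulus, modulus = modulus, modulus * p
        inv = pow(2 * root, -1, modulus)           # (f'(root))^{-1} with f(x) = x^2 - a
        root = (root - (root * root - a) * inv) % modulus
        digits.append((root // prev_modulus) % p)  # the newly determined digit
    return digits

# Starting from 3 mod 7 this reproduces 3, 10, 108, ... as partial sums:
print(padic_sqrt_digits(2, 7, 3, 6))   # e.g. starts [3, 1, 2, ...]
```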
As an example where the original version of Hensel's lemma is not valid but the more general one is, let and Then and so
which implies there is a unique 2-adic integer b satisfying
i.e., b ≡ 1 mod 4. There are two square roots of 17 in the 2-adic integers, differing by a sign, and although they are congruent mod 2 they are not congruent mod 4. This is consistent with the general version of Hensel's lemma only giving us a unique 2-adic square root of 17 that is congruent to 1 mod 4 rather than mod 2. If we had started with the initial approximate root a = 3 then we could apply the more general Hensel's lemma again to find a unique 2-adic square root of 17 which is congruent to 3 mod 4. This is the other 2-adic square root of 17.
In terms of lifting the roots of x^2 − 17 from modulus 2^k to 2^{k+1}, the lifts starting with the root 1 mod 2 are as follows:
1 mod 2 → 1, 3 mod 4
1 mod 4 → 1, 5 mod 8 and 3 mod 4 → 3, 7 mod 8
1 mod 8 → 1, 9 mod 16 and 7 mod 8 → 7, 15 mod 16, while 3 mod 8 and 5 mod 8 don't lift to roots mod 16
9 mod 16 → 9, 25 mod 32 and 7 mod 16 → 7, 23 mod 32, while 1 mod 16 and 15 mod 16 don't lift to roots mod 32.
For every k at least 3, there are four roots of x^2 − 17 mod 2^k, but if we look at their 2-adic expansions we can see that in pairs they are converging to just two 2-adic limits. For instance, the four roots mod 32 break up into two pairs of roots which each look the same mod 16:
9 = 1 + 2^3 and 25 = 1 + 2^3 + 2^4.
7 = 1 + 2 + 2^2 and 23 = 1 + 2 + 2^2 + 2^4.
The 2-adic square roots of 17 have expansions
Another example where we can use the more general version of Hensel's lemma but not the basic version is a proof that any 3-adic integer c ≡ 1 mod 9 is a cube in Let and take initial approximation a = 1. The basic Hensel's lemma cannot be used to find roots of f(x) since for every r. To apply the general version of Hensel's lemma we want which means That is, if c ≡ 1 mod 27 then the general Hensel's lemma tells us f(x) has a 3-adic root, so c is a 3-adic cube. However, we wanted to have this result under the weaker condition that c ≡ 1 mod 9. If c ≡ 1 mod 9 then c ≡ 1, 10, or 19 mod 27. We can apply the general Hensel's lemma three times depending on the value of c mod 27: if c ≡ 1 mod 27 then use a = 1, if c ≡ 10 mod 27 then use a = 4 (since 4 is a root of f(x) mod 27), and if c ≡ 19 mod 27 then use a = 7. (It is not true that every c ≡ 1 mod 3 is a 3-adic cube, e.g., 4 is not a 3-adic cube since it is not a cube mod 9.)
In a similar way, after some preliminary work, Hensel's lemma can be used to show that for any odd prime number p, any -adic integer c congruent to 1 modulo p2 is a p-th power in (This is false for p = 2.)
Generalizations
Suppose A is a commutative ring, complete with respect to an ideal and let a ∈ A is called an "approximate root" of f, if
If f has an approximate root then it has an exact root b ∈ A "close to" a; that is,
Furthermore, if is not a zero-divisor then b is unique.
This result can be generalized to several variables as follows:
Theorem. Let A be a commutative ring that is complete with respect to ideal Let be a system of n polynomials in n variables over A. View as a mapping from An to itself, and let denote its Jacobian matrix. Suppose a = (a1, ..., an) ∈ An is an approximate solution to f = 0 in the sense that
Then there is some b = (b1, ..., bn) ∈ An satisfying f(b) = 0, i.e.,
Furthermore this solution is "close" to a in the sense that
As a special case, if for all i and is a unit in A then there is a solution to f(b) = 0 with for all i.
When n = 1, a = a is an element of A and The hypotheses of this multivariable Hensel's lemma reduce to the ones which were stated in the one-variable Hensel's lemma.
Related concepts
Completeness of a ring is not a necessary condition for the ring to have the Henselian property: Goro Azumaya in 1950 defined a commutative local ring satisfying the Henselian property for the maximal ideal m to be a Henselian ring.
Masayoshi Nagata proved in the 1950s that for any commutative local ring A with maximal ideal m there always exists a smallest ring Ah containing A such that Ah is Henselian with respect to mAh. This Ah is called the Henselization of A. If A is noetherian, Ah will also be noetherian, and Ah is manifestly algebraic as it is constructed as a limit of étale neighbourhoods. This means that Ah is usually much smaller than the completion  while still retaining the Henselian property and remaining in the same category.
See also
Hasse–Minkowski theorem
Newton polygon
Locally compact field
Lifting-the-exponent lemma
References
Modular arithmetic
Commutative algebra
Lemmas in algebra | Hensel's lemma | [
"Mathematics"
] | 5,486 | [
"Theorems in algebra",
"Lemmas in algebra",
"Fields of abstract algebra",
"Arithmetic",
"Commutative algebra",
"Lemmas",
"Modular arithmetic",
"Number theory"
] |
1,633,397 | https://en.wikipedia.org/wiki/Richard%20P.%20Stanley | Richard Peter Stanley (born June 23, 1944) is an Emeritus Professor of Mathematics at the Massachusetts Institute of Technology, and an Arts and Sciences Distinguished Scholar at the University of Miami. From 2000 to 2010, he was the Norman Levinson Professor of Applied Mathematics. He received his Ph.D. at Harvard University in 1971 under the supervision of Gian-Carlo Rota. He is an expert in the field of combinatorics and its applications to other mathematical disciplines.
Contributions
Stanley is known for his two-volume book Enumerative Combinatorics (1986–1999). He is also the author of Combinatorics and Commutative Algebra (1983) and well over 200 research articles in mathematics. He has served as thesis advisor to 60 doctoral students, many of whom have had distinguished careers in combinatorial research. Donald Knuth named Stanley as one of his combinatorial heroes in a 2023 interview.
Awards and honors
Stanley's distinctions include membership in the National Academy of Sciences (elected in 1995), the 2001 Leroy P. Steele Prize for Mathematical Exposition, the 2003 Schock Prize, a plenary lecture at the International Congress of Mathematicians (in Madrid, Spain), and election in 2012 as a fellow of the American Mathematical Society. In 2022 he was awarded the Leroy P. Steele Prize for Lifetime Achievement.
Selected publications
Stanley, Richard P. (1996). Combinatorics and Commutative Algebra, 2nd ed. .
Stanley, Richard P. (1997, 1999). Enumerative Combinatorics, Volumes 1 and 2. Cambridge University Press. , .
See also
Exponential formula
Order polynomial
Stanley decomposition
Stanley's reciprocity theorem
References
External links
Richard Stanley's Homepage
1944 births
Living people
Members of the United States National Academy of Sciences
Fellows of the American Mathematical Society
20th-century American mathematicians
21st-century American mathematicians
Combinatorialists
Harvard University alumni
Massachusetts Institute of Technology School of Science faculty
Rolf Schock Prize laureates
Educators from New York City
Mathematicians from New York (state) | Richard P. Stanley | [
"Mathematics"
] | 412 | [
"Combinatorialists",
"Combinatorics"
] |
1,633,417 | https://en.wikipedia.org/wiki/George%20Devol | George Charles Devol Jr. (February 20, 1912 – August 11, 2011) was an American inventor, best known for creating Unimate, the first industrial robot. The National Inventors Hall of Fame says, "Devol's patent for the first digitally operated programmable robotic arm represents the foundation of the modern robotics industry."
Early life
George Devol was born in an upper-middle-class family in Louisville, Kentucky. He attended Riordan Prep school.
United Cinephone
Foregoing higher education, Devol went into business in 1932, forming United Cinephone to produce variable area recording directly onto film for the new sound motion pictures ("talkies"). However, he later learned that companies like RCA and Western Electric were working in the same area, and discontinued the product.
World War II
In 1939, Devol applied for a patent for proximity controls for use in laundry press machines, based on a radio frequency field. This control would automatically open and close laundry presses when workers approached the machines. After World War II began, the patent office told Devol that his patent application would be placed on hold for the duration of the conflict.
Around that time, Devol sold his interest in United Cinephone and approached Sperry Gyroscope to pitch his ideas on radar technology. He was retained by Sperry as manager of the Special Projects Department, which developed radar devices and microwave test equipment.
In 1943, he organized General Electronics Industries in Greenwich, Connecticut, as a subsidiary of the Auto Ordnance Corporation. General Electronics produced counter-radar devices until the end of the war. General Electronics was one of the largest producers of radar and radar counter-measure equipment for the U.S. Navy, U.S. Army Air Force and other government agencies. The company's radar counter-measure systems were on Allied planes on D-Day.
Over a difference of opinion regarding the future of certain projects, Devol resigned from Auto Ordnance and joined RCA. After a short stint as eastern sales manager of electronics products, which he felt "wasn't his ball of wax", Devol left RCA to develop ideas that eventually led to the patent application for the first industrial robot. In 1946, he applied for a patent on a magnetic recording system for controlling machines and a digital playback device for machines.
Devol was part of the team that developed the first commercial use of microwave oven technology, the Speedy Weeny, which automatically cooked and dispensed hotdogs in places such as Grand Central Terminal.
In the early 1950s, Devol licensed his digital magnetic recording device to Remington Rand of Norwalk, Connecticut, and became manager of their magnetics department. There he worked with a team to develop his magnetic recording system for business data applications. He also worked on developing the first high-speed printing systems. While the magnetic recording system proved too slow for business data, Devol's invention was re-purposed as a machine control that would eventually become the "brains" of the Unimate robot.
The first industrial robot: Unimate
In the 1940s, Devol was focusing on manipulators and his magnetic recording patents, but he took note of the introduction of automation into factories. In 1954, he applied for his robotics patent. , issued in 1961 for Programmed Article Transfer, introduced the concept of universal automation, or Unimation. His wife Evelyn suggested the word "Unimate" to define the product, much the same as George Eastman had coined Kodak.
Devol wrote that his invention "makes available for the first time a more or less general purpose machine that has universal application to a vast diversity of applications where cyclic digital control is desired."
After applying for this patent Devol searched for a company willing to give him financial backing to develop his programmable articles transfer system. He talked with many major corporations in the United States during his search. Through family connections, Devol obtained an audience with a partner in the firm Manning, Maxwell and Moore in Stratford, Connecticut. Joseph F. Engelberger, chief of engineering in the company's aircraft products division, was very interested, and Devol agreed to license his patent and some future patents in the field to the company. But the company was sold that year and its aircraft division was slated to be closed. Engelberger sought a backer to buy out the aircraft division and found one in Consolidated Diesel Electric (Condec), which agreed to finance the continued development of the robot under a new division, Unimation Incorporated, with Engelberger as its president.
The first Unimate prototypes were controlled by vacuum tubes used as digital switches though later versions used transistors. Most off-the-shelf components available in the late 1950s, such as digital encoders, were inadequate for the Unimate. With Devol's guidance, a team of engineers at Unimation designed and machined practically every part in the first Unimates.
In 1960, Devol personally sold the first Unimate robot, which was shipped in 1961 to General Motors. GM first used the machine for die casting handling and spot welding. The first Unimate robot was installed at GM's Inland Fisher Guide Plant in Ewing Township, New Jersey, in 1961 to lift hot pieces of metal from a die-casting machine and stack them. Soon companies such as Chrysler, Ford, and Fiat saw the necessity for large Unimate purchases.
The company spent about $5 million to develop the first Unimate. In 1966, after many years of market surveys and field tests, full-scale production began in Connecticut. Unimation's first production robot was a materials handling robot and was soon followed by robots for welding and other applications.
In 1975, Unimation showed its first profit. In 1978, the PUMA (Programmable Universal Machine for Assembly) robot was developed by Unimation from Vicarm (Victor Scheinman) and with support from General Motors.
In 2005, Popular Mechanics magazine selected Devol's Unimate as one of the Top 50 Inventions of the Past 50 Years.
Additional work
Elected to honorary member of the Society of Manufacturing Engineers (1985)
Inducted into the National Inventor's Hall of Fame (2011)
Member of the Automation Hall of Fame
Henry Ford and Smithsonian Museum collections both include Unimate robots
Devol's archives are with the Henry Ford Museum in Dearborn, Michigan
Death
Devol died on August 11, 2011, aged 99, at his home in Wilton, Connecticut. He was survived by two daughters, two sons, five grandchildren and five great-grandchildren. His funeral service was held in a Methodist church and he was laid to rest in Wilton.
References
External links
1912 births
2011 deaths
People from Louisville, Kentucky
People from Wilton, Connecticut
American inventors
American roboticists
Industrial robotics
History of robotics | George Devol | [
"Technology"
] | 1,375 | [
"History of robotics",
"History of computing"
] |
1,633,547 | https://en.wikipedia.org/wiki/Total%20angular%20momentum%20quantum%20number | In quantum mechanics, the total angular momentum quantum number parametrises the total angular momentum of a given particle, by combining its orbital angular momentum and its intrinsic angular momentum (i.e., its spin).
If s is the particle's spin angular momentum and ℓ its orbital angular momentum vector, the total angular momentum j is j = ℓ + s.
The associated quantum number is the main total angular momentum quantum number j. It can take the following range of values, jumping only in integer steps: |ℓ − s| ≤ j ≤ ℓ + s,
where ℓ is the azimuthal quantum number (parameterizing the orbital angular momentum) and s is the spin quantum number (parameterizing the spin).
The relation between the total angular momentum vector j and the total angular momentum quantum number j is given by the usual relation (see angular momentum quantum number) |j| = √(j(j + 1)) ħ.
The vector's z-projection is given by j_z = m_j ħ,
where m_j is the secondary total angular momentum quantum number, and ħ is the reduced Planck constant. It ranges from −j to +j in steps of one. This generates 2j + 1 different values of m_j.
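The allowed values are easy to enumerate for any ℓ and s. The sketch below lists j in integer steps between |ℓ − s| and ℓ + s, and the 2j + 1 values of m_j for each j; it is only an illustration of the counting rule above, and uses Fractions so that half-integer spins are handled exactly:

```python
from fractions import Fraction

def total_angular_momentum_states(l, s):
    """Enumerate allowed j (from |l - s| to l + s in integer steps) and,
    for each j, the 2j + 1 projection quantum numbers m_j = -j, ..., +j."""
    l, s = Fraction(l), Fraction(s)
    j = abs(l - s)
    states = {}
    while j <= l + s:
        states[j] = [-j + step for step in range(int(2 * j) + 1)]
        j += 1
    return states

# Example: a p electron (l = 1, s = 1/2) gives j = 1/2 and j = 3/2.
for j, mjs in total_angular_momentum_states(1, Fraction(1, 2)).items():
    print(f"j = {j}: m_j = {', '.join(str(m) for m in mjs)}")
```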
The total angular momentum corresponds to the Casimir invariant of the Lie algebra so(3) of the three-dimensional rotation group.
See also
Principal quantum number
Orbital angular momentum quantum number
Magnetic quantum number
Spin quantum number
Angular momentum coupling
Clebsch–Gordan coefficients
Angular momentum diagrams (quantum mechanics)
Rotational spectroscopy
References
Albert Messiah, (1966). Quantum Mechanics (Vols. I & II), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons.
External links
Vector model of angular momentum
LS and jj coupling
Angular momentum
Atomic physics
Quantum numbers
Rotation in three dimensions
Rotational symmetry | Total angular momentum quantum number | [
"Physics",
"Chemistry",
"Mathematics"
] | 338 | [
"Quantum chemistry",
"Physical quantities",
"Quantity",
"Quantum physics stubs",
"Quantum mechanics",
"Quantum numbers",
"Momentum",
" molecular",
" and optical physics",
"Atomic",
"Atomic physics",
"Angular momentum",
"Moment (physics)",
"Symmetry",
"Rotational symmetry"
] |
1,633,721 | https://en.wikipedia.org/wiki/Near-term%20digital%20radio | The Near-term digital radio (NTDR) program provided a prototype mobile ad hoc network (MANET) radio system to the United States Army, starting in the 1990s. The MANET protocols were provided by Bolt, Beranek and Newman; the radio hardware was supplied by ITT. These systems have been fielded by the United Kingdom as the High-capacity data radio (HCDR) and by the Israelis as the Israeli data radio. They have also been purchased by a number of other countries for experimentation.
The NTDR protocols consist of two components: clustering and routing. The clustering algorithms dynamically organize a given network into cluster heads and cluster members. The cluster heads create a backbone; the cluster members use the services of this backbone to send and receive packets. The cluster heads use a link-state routing algorithm to maintain the integrity of their backbone and to track the locations of cluster members.
The NTDR routers also use a variant of Open Shortest Path First (OSPF) that is called Radio-OSPF (ROSPF). ROSPF does not use the OSPF hello protocol for link discovery, etc. Instead, OSPF adjacencies are created and destroyed as a function of MANET information that is distributed by the NTDR routers, both cluster heads and cluster members. It also supported multicasting.
References
Wireless networking | Near-term digital radio | [
"Technology",
"Engineering"
] | 278 | [
"Wireless networking",
"Computer networks engineering"
] |
1,633,733 | https://en.wikipedia.org/wiki/Species%20richness | Species richness is the number of different species represented in an ecological community, landscape or region. Species richness is simply a count of species, and it does not take into account the abundances of the species or their relative abundance distributions. Species richness is sometimes considered synonymous with species diversity, but the formal metric species diversity takes into account both species richness and species evenness.
Sampling considerations
Depending on the purposes of quantifying species richness, the individuals can be selected in different ways. They can be, for example, trees found in an inventory plot, birds observed from a monitoring point, or beetles collected in a pitfall trap. Once the set of individuals has been defined, its species richness can be exactly quantified, provided the species-level taxonomy of the organisms of interest is well enough known. Applying different species delimitations will lead to different species richness values for the same set of individuals.
In practice, people are usually interested in the species richness of areas so large that not all individuals in them can be observed and identified to species. Then applying different sampling methods will lead to different sets of individuals being observed for the same area of interest, and the species richness of each set may be different. When a new individual is added to a set, it may introduce a species that was not yet represented in the set, and thereby increase the species richness of the set. For this reason, sets with many individuals can be expected to contain more species than sets with fewer individuals.
If species richness of the obtained sample is taken to represent species richness of the underlying habitat or other larger unit, values are only comparable if sampling efforts are standardised in an appropriate way. Resampling methods can be used to bring samples of different sizes to a common footing. Properties of the sample, especially the number of species only represented by one or a few individuals, can be used to help estimating the species richness in the population from which the sample was drawn.
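One common resampling approach is rarefaction: repeatedly drawing subsamples of a fixed size and averaging the number of species they contain, so that samples of different sizes can be compared at a common number of individuals. The sketch below uses made-up data and is only an illustration of the idea, not a method taken from any cited source:

```python
import random

def rarefied_richness(individuals, subsample_size, n_draws=1000, seed=0):
    """Mean number of species in random subsamples of a fixed size, drawn
    without replacement from a list of observed individuals (each entry
    is the species identity of one individual)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_draws):
        total += len(set(rng.sample(individuals, subsample_size)))
    return total / n_draws

# Hypothetical sample: three common species and two rare ones (30 individuals).
sample = ["A"] * 12 + ["B"] * 9 + ["C"] * 7 + ["D"] + ["E"]
print("Observed richness:", len(set(sample)))                        # 5 species
print("Expected richness in a subsample of 10:", rarefied_richness(sample, 10))
```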
Trends in species richness
The observed species richness is affected not only by the number of individuals but also by the heterogeneity of the sample. If individuals are drawn from different environmental conditions (or different habitats), the species richness of the resulting set can be expected to be higher than if all individuals are drawn from similar environments. The accumulation of new species with increasing sampling effort can be visualised with a species accumulation curve. Such curves can be constructed in different ways. Increasing the area sampled increases observed species richness both because more individuals get included in the sample and because large areas are environmentally more heterogeneous than small areas.
Many organism groups have most species in the tropics, which leads to latitudinal gradients in species richness. There has been much discussion about the relationship between productivity and species richness. Results have varied among studies, such that no global consensus on either the pattern or its possible causes has emerged.
Applications
Species richness is often used as a criterion when assessing the relative conservation values of habitats or landscapes. However, species richness is blind to the identity of the species. An area with many endemic or rare species is generally considered to have higher conservation value than another area where species richness is similar, but all the species are common and widespread.
See also
Rapoport's rule
Scaling pattern of occupancy
Species-area curve
Species discovery curve
Storage effect
References
Further reading
Kevin J. Gaston & John I. Spicer. 2004. Biodiversity: an introduction, Blackwell Publishing. 2nd Ed., (pbk.)
Diaz, et al. Ecosystems and Human Well-being: Current State and Trends, Volume 1. Millennium Ecosystem Assessment. 2005. Island Press.
Measurement of biodiversity
"Biology"
] | 800 | [
"Biodiversity",
"Measurement of biodiversity"
] |
1,633,875 | https://en.wikipedia.org/wiki/Feedwater%20heater | A feedwater heater is a power plant component used to pre-heat water delivered to a steam generating boiler. Preheating the feedwater reduces the irreversibilities involved in steam generation and therefore improves the thermodynamic efficiency of the system. This reduces plant operating costs and also helps to avoid thermal shock to the boiler metal when the feedwater is introduced back into the steam cycle.
In a steam power plant (usually modeled as a modified Rankine cycle), feedwater heaters allow the feedwater to be brought up to the saturation temperature very gradually. This minimizes the inevitable irreversibilities associated with heat transfer to the working fluid (water). See the article on the second law of thermodynamics for a further discussion of such irreversibilities.
Cycle discussion and explanation
The energy used to heat the feedwater is usually derived from steam extracted between the stages of the steam turbine. Therefore, the steam that would be used to perform expansion work in the turbine (and therefore generate power) is not utilized for that purpose. The percentage of the total cycle steam mass flow used for the feedwater heater is termed the extraction fraction and must be carefully optimized for maximum power plant thermal efficiency since increasing this fraction causes a decrease in turbine power output.
Feedwater heaters can also be "open" or "closed" heat exchangers. An open heat exchanger is one in which extracted steam is allowed to mix with the feedwater. This kind of heater will normally require a feed pump at both the feed inlet and outlet since the pressure in the heater is between the boiler pressure and the condenser pressure. A deaerator is a special case of the open feedwater heater which is specifically designed to remove non-condensable gases from the feedwater.
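For an open feedwater heater the extraction fraction mentioned above follows from a steady-flow energy balance on the heater, y·h_extract + (1 − y)·h_fw,in = h_fw,out. The sketch below solves that balance; the enthalpy values are illustrative placeholders, not data from the text:

```python
def extraction_fraction(h_extraction_steam, h_feedwater_in, h_feedwater_out):
    """Fraction y of the total cycle mass flow extracted to an open (mixing)
    feedwater heater, from the energy balance
    y*h_extract + (1 - y)*h_fw_in = h_fw_out."""
    return (h_feedwater_out - h_feedwater_in) / (h_extraction_steam - h_feedwater_in)

# Illustrative enthalpies in kJ/kg (assumed values, not from the article):
y = extraction_fraction(h_extraction_steam=2800.0,  # extracted turbine steam
                        h_feedwater_in=200.0,       # condensate entering the heater
                        h_feedwater_out=700.0)      # saturated liquid leaving the heater
print(f"Extraction fraction: {y:.3f}")   # about 0.192 of the cycle mass flow
```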
Closed feedwater heaters are typically shell and tube heat exchangers where the feedwater passes throughout the tubes and is heated by turbine extraction steam. These do not require separate pumps before and after the heater to boost the feedwater to the pressure of the extracted steam as with an open heater. However, the extracted steam (which is most likely almost fully condensed after heating the feedwater) must then be throttled to the condenser pressure, an isenthalpic process that results in some entropy gain with a slight penalty on overall cycle efficiency.
Many power plants incorporate a number of feedwater heaters and may use both open and closed components. Feedwater heaters are used in both fossil- and nuclear-fueled power plants.
Economizer
An economizer serves a similar purpose to a feedwater heater, but is technically different as it does not use cycle steam for heating. In fossil-fuel plants, the economizer uses the lowest-temperature flue gas from the furnace to heat the water before it enters the boiler proper. This allows for the heat transfer between the furnace and the feedwater to occur across a smaller average temperature gradient (for the steam generator as a whole). System efficiency is therefore further increased when viewed with respect to actual energy content of the fuel.
Most nuclear power plants do not have an economizer. However, the Combustion Engineering System 80+ nuclear plant design and its evolutionary successors, (e.g. Korea Electric Power Corporation's APR-1400) incorporate an integral feedwater economizer. This economizer preheats the steam generator feedwater at the steam generator inlet using the lowest-temperature primary coolant.
Testing
A widely used Code providing the procedures, direction, and guidance for determining the thermo-hydraulic performance of a closed feedwater heater is the ASME PTC 12.1 Feedwater Heater Standard.
See also
Fossil fuel power plant
Thermal power plant
ASME Codes
The American Society of Mechanical Engineers (ASME), publishes the following Code:
PTC 4.4 Gas Turbine Heat Recovery Steam Generators
References
External links
Power plant diagram
High pressure feedwater heaters
Mechanical engineering
Chemical process engineering
"Physics",
"Chemistry",
"Engineering"
] | 847 | [
"Chemical process engineering",
"Chemical engineering",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
1,633,917 | https://en.wikipedia.org/wiki/C%20parity | In physics, the C parity or charge parity is a multiplicative quantum number of some particles that describes their behavior under the symmetry operation of charge conjugation.
Charge conjugation changes the sign of all quantum charges (that is, additive quantum numbers), including the electrical charge, baryon number and lepton number, and the flavor charges strangeness, charm, bottomness, topness and Isospin (I3). In contrast, it doesn't affect the mass, linear momentum or spin of a particle.
Formalism
Consider an operation C that transforms a particle into its antiparticle,
C |ψ⟩ = |ψ̄⟩.
Both states must be normalizable, so that
1 = ⟨ψ|ψ⟩ = ⟨ψ̄|ψ̄⟩ = ⟨ψ| C†C |ψ⟩,
which implies that C is unitary,
C C† = 1.
By acting on the particle twice with the C operator,
C² |ψ⟩ = C |ψ̄⟩ = |ψ⟩,
we see that C² = 1 and C⁻¹ = C. Putting this all together, we see that
C = C†,
meaning that the charge conjugation operator is Hermitian and therefore a physically observable quantity.
Eigenvalues
For the eigenstates of charge conjugation,
C |ψ⟩ = η_C |ψ⟩.
As with parity transformations, applying C twice must leave the particle's state unchanged,
C² |ψ⟩ = η_C C |ψ⟩ = η_C² |ψ⟩ = |ψ⟩,
allowing only eigenvalues η_C = ±1, the so-called C-parity or charge parity of the particle.
Eigenstates
The above implies that for C eigenstates, C |ψ⟩ = ±|ψ̄⟩. Since antiparticles and particles have charges of opposite sign, only states with all quantum charges equal to zero, such as the photon and particle–antiparticle bound states like π0, η, or positronium, are eigenstates of C.
Multiparticle systems
For a system of free particles, the C parity is the product of C parities for each particle.
In a pair of bound mesons there is an additional component due to the orbital angular momentum. For example, in a bound state of two pions, π⁺ π⁻, with an orbital angular momentum L, exchanging π⁺ and π⁻ inverts the relative position vector, which is identical to a parity operation. Under this operation, the angular part of the spatial wave function contributes a phase factor of (−1)^L, where L is the quantum number associated with the orbital angular momentum:
C |π⁺ π⁻⟩ = (−1)^L |π⁺ π⁻⟩.
With a two-fermion system, two extra factors appear: one factor comes from the spin part of the wave function, and the second from considering the intrinsic parities of both particles. Note that a fermion and an antifermion always have opposite intrinsic parity. Hence,
C |f f̄⟩ = (−1)^L (−1)^(S+1) (−1) |f f̄⟩ = (−1)^(L+S) |f f̄⟩.
Bound states can be described with the spectroscopic notation ^(2S+1)L_J (see term symbol), where S is the total spin quantum number (not to be confused with the S orbital), J is the total angular momentum quantum number, and L the total orbital momentum quantum number (with the quantum number L = 0, 1, 2, etc. replaced by the orbital letters S, P, D, etc.).
Example: positronium is a bound state of an electron and a positron, similar to a hydrogen atom. The names parapositronium and orthopositronium are given to the states 1S0 and 3S1.
With S = 0 the spins are anti-parallel, and with S = 1 they are parallel. This gives a multiplicity (2S + 1) of 1 (anti-parallel) or 3 (parallel).
The total orbital angular momentum quantum number is L = 0 (spectroscopic S orbital).
The total angular momentum quantum number is J = 0, 1.
The C parity is η_C = (−1)^(L+S), depending on L and S. Since charge parity is preserved, annihilation of these states into photons (each photon has η_C = −1) must be:
{|
|-
| Orbital state:
| 1S0 || → || γγ
|
| 3S1 || → || γγγ
|-
| η_C:
| +1 || = || (−1) × (−1)
|
| −1 || = || (−1) × (−1) × (−1)
|}
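As a quick illustration of the rule above, the following Python sketch evaluates η_C = (−1)^(L+S) for the two positronium ground states and checks which photon multiplicities are allowed by C-parity conservation; the state list is only the example discussed here.

```python
# Sketch: charge parity of a fermion-antifermion bound state, eta_C = (-1)**(L + S),
# and of an n-photon final state, (-1)**n (each photon has eta_C = -1).

def c_parity_ffbar(L, S):
    """C parity of a fermion-antifermion pair with orbital L and total spin S."""
    return (-1) ** (L + S)

def c_parity_photons(n):
    """C parity of a final state of n photons."""
    return (-1) ** n

# Positronium ground states (L = 0): para-positronium 1S0 (S = 0), ortho-positronium 3S1 (S = 1).
for name, S, n_gamma in [("1S0 (para)", 0, 2), ("3S1 (ortho)", 1, 3)]:
    ok = c_parity_ffbar(0, S) == c_parity_photons(n_gamma)
    print(f"{name}: eta_C = {c_parity_ffbar(0, S):+d}; "
          f"{n_gamma}-photon decay {'allowed' if ok else 'forbidden'} by C parity")
```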
Experimental tests of C-parity conservation
π0 → 3γ: The neutral pion, π0, is observed to decay to two photons, γγ. We can infer that the pion therefore has η_C = (−1) × (−1) = +1, but each additional γ introduces a factor of −1 to the overall C-parity of the pion. The decay to 3γ would violate C parity conservation. A search for this decay was conducted using pions created in the reaction π⁻ + p → π0 + n.
η → π⁺π⁻π0: Decay of the eta meson.
p̄p annihilations
See also
G-parity
References
Quantum mechanics
Quantum field theory | C parity | [
"Physics"
] | 851 | [
"Quantum field theory",
"Theoretical physics",
"Quantum mechanics"
] |
1,633,981 | https://en.wikipedia.org/wiki/Insect%20repellent | An insect repellent (also commonly called "bug spray") is a substance applied to the skin, clothing, or other surfaces to discourage insects (and arthropods in general) from landing or climbing on that surface. Insect repellents help prevent and control the outbreak of insect-borne (and other arthropod-bourne) diseases such as malaria, Lyme disease, dengue fever, bubonic plague, river blindness, and West Nile fever. Pest animals commonly serving as vectors for disease include insects such as flea, fly, and mosquito; and ticks (arachnids).
Some insect repellents are insecticides (bug killers), but most simply discourage insects and send them flying or crawling away. Nearly any would be fatal upon reaching the median lethal dose, but classification as an insecticide implies death even at lower doses.
Effectiveness
Synthetic repellents tend to be more effective and/or longer lasting than "natural" repellents.
For protection against ticks and mosquito bites, the U.S. Centers for Disease Control (CDC) recommends DEET, icaridin (picaridin, KBR 3023), oil of lemon eucalyptus (OLE), para-menthane-diol (PMD), IR3535 and 2-undecanone with the caveat that higher percentages of the active ingredient provide longer protection.
In 2015, researchers at New Mexico State University tested 10 commercially available products for their effectiveness at repelling mosquitoes. The known active ingredients tested included DEET (at various concentrations), geraniol, p-menthane-3-8-diol (found in lemon eucalyptus oil), thiamine, and several oils (soybean, rosemary, cinnamon, lemongrass, citronella, and lemon eucalyptus). Two of the products tested were fragrances where the active ingredients were unknown. On the mosquito Aedes aegypti, the vector of Zika virus, only one repellent that did not contain DEET had a strong effect for the duration of the 240-minute test: a lemon eucalyptus oil repellent. However, Victoria's Secret Bombshell, a perfume not advertised as an insect repellent, performed effectively during the first 120 minutes after application.
In one comparative study from 2004, IR3535 was as effective or better than DEET in protection against Aedes aegypti and Culex quinquefasciatus mosquitoes. Other sources (official publications of the associations of German physicians as well as of German druggists) suggest the contrary and state DEET is still the most efficient substance available and the substance of choice for stays in malaria regions, while IR3535 has little effect. However, some plant-based repellents may provide effective relief as well. Essential oil repellents can be short-lived in their effectiveness.
A test of various insect repellents by an independent consumer organization found that repellents containing DEET or icaridin are more effective than repellents with "natural" active ingredients. All the synthetics gave almost 100% repellency for the first 2 hours, where the natural repellent products were most effective for the first 30 to 60 minutes, and required reapplication to be effective over several hours.
Although highly toxic to cats, permethrin is recommended as protection against mosquitoes for clothing, gear, or bed nets. In an earlier report, the CDC found oil of lemon eucalyptus to be more effective than other plant-based treatments, with a similar effectiveness to low concentrations of DEET. However, a 2006 published study found in both cage and field studies that a product containing 40% oil of lemon eucalyptus was just as effective as products containing high concentrations of DEET. Research has also found that neem oil is mosquito repellent for up to 12 hours. Citronella oil's mosquito repellency has also been verified by research, including effectiveness in repelling Aedes aegypti, but requires reapplication after 30 to 60 minutes.
There are also products available based on sound production, particularly ultrasound (inaudibly high-frequency sounds) which purport to be insect repellents. However, these electronic devices have been shown to be ineffective based on studies done by the United States Environmental Protection Agency and many universities.
Safety issues
For humans
Children may be at greater risk for adverse reactions to repellents, in part, because their exposure may be greater.
Children can be at greater risk of accidental eye contact or ingestion.
As with chemical exposures in general, pregnant women should take care to avoid exposures to repellents when practical, as the fetus may be vulnerable.
Some experts also recommend against applying chemicals such as DEET and sunscreen simultaneously, since that would increase DEET penetration. Xiaochen Gu, a Canadian researcher and professor at the University of Manitoba's Faculty of Pharmacy who led a study on mosquitoes, advises that DEET should be applied 30 or more minutes after sunscreen. Gu also recommends insect repellent sprays instead of lotions, which are rubbed into the skin, "forcing molecules into the skin".
Regardless of which repellent product is used, it is recommended to read the label before use and carefully follow directions. Usage instructions for repellents vary from country to country. Some insect repellents are not recommended for use on younger children.
In the DEET Reregistration Eligibility Decision (RED) the United States Environmental Protection Agency (EPA) reported 14 to 46 cases of potential DEET associated seizures, including 4 deaths. The EPA states: "... it does appear that some cases are likely related to DEET toxicity," but observed that with 30% of the US population using DEET, the likely seizure rate is only about one per 100 million users.
The Pesticide Information Project of Cooperative Extension Offices of Cornell University states that, "Everglades National Park employees having extensive DEET exposure were more likely to have insomnia, mood disturbances and impaired cognitive function than were lesser exposed co-workers".
The EPA states that citronella oil shows little or no toxicity and has been used as a topical insect repellent for 60 years. However, the EPA also states that citronella may irritate skin and cause dermatitis in certain individuals. Canadian regulatory authorities concern with citronella based repellents is primarily based on data-gaps in toxicology, not on incidents.
Within countries of the European Union, implementation of Regulation 98/8/EC, commonly referred to as the Biocidal Products Directive, has severely limited the number and type of insect repellents available to European consumers. Only a small number of active ingredients have been supported by manufacturers in submitting dossiers to the EU Authorities.
In general, only formulations containing DEET, icaridin (sold under the trade name Saltidin and formerly known as Bayrepel or KBR3023), IR3535 and citriodiol (p-menthane-3,8-diol) are available. Most "natural" insect repellents such as citronella, neem oil, and herbal extracts are no longer permitted for sale as insect repellents in the EU due to their lack of effectiveness; this does not preclude them from being sold for other purposes, as long as the label does not indicate they are a biocide (insect repellent).
Toxicity for other animals
A 2018 study found that icaridin is highly toxic to salamander larvae at what the authors described as conservative exposure doses. The study additionally found the LC50 standard to be inadequate in the context of this result.
Permethrin is highly toxic to cats but not to dogs or humans.
Common insect repellents
Common synthetic insect repellents
Benzaldehyde, for bees
Butopyronoxyl (trade name Indalone). Widely used in a "6-2-2" mixture (60% Dimethyl phthalate, 20% Indalone, 20% Ethylhexanediol) during the 1940s and 1950s before the commercial introduction of DEET
DEET (N,N-diethyl-m-toluamide) the most common and effective insect repellent
Dimethyl carbate
Dimethyl phthalate, not as common as it once was but still occasionally an active ingredient in commercial insect repellents
Ethyl butylacetylaminopropionate (IR3535 or 3-[N-Butyl-N-acetyl]-aminopropionic acid, ethyl ester)
Ethylhexanediol, also known as Rutgers 612 or "6–12 repellent," discontinued in the US in 1991 due to evidence of causing developmental defects in animals
Icaridin, also known as picaridin, Bayrepel, and KBR 3023, considered equal in effectiveness to DEET
Methyl anthranilate and other anthranilate-based insect repellents
Metofluthrin
Permethrin is a contact insecticide rather than a repellent
SS220 is a repellent being researched that has shown promise to provide significantly better protection than DEET
Tricyclodecenyl allyl ether, a compound often found in synthetic perfumes
Common natural insect repellents
Beautyberry (Callicarpa) leaves
Birch tree bark is traditionally made into tar. Combined with another oil (e.g., fish oil) at 1/2 dilution, it is then applied to the skin for repelling mosquitos
Bog myrtle (Myrica gale)
Catnip oil whose active compound is Nepetalactone
Citronella oil (citronella candles are not effective)
Essential oil of the lemon eucalyptus (Corymbia citriodora) and its active compound p-menthane-3,8-diol (PMD)
Lemongrass
Neem oil
Tea tree oil from the leaves of Melaleuca alternifolia
Tobacco
Insect repellents from natural sources
Several natural ingredients are certified by the United States Environmental Protection Agency as insect repellents, namely catnip oil, oil of lemon eucalyptus (OLE) (and its active ingredient p-Menthane-3,8-diol), oil of citronella, and 2-Undecanone, which is usually produced synthetically but has also been isolated from many plant sources.
Many other studies have also investigated the potential of natural compounds from plants as insect repellents. Moreover, there are many preparations from naturally occurring sources that have been used as a repellent to certain insects. Some of these act as insecticides while others are only repellent. Below is a list of some natural products with repellent activity:
Achillea alpina (mosquitos)
alpha-terpinene (mosquitos)
Andrographis paniculata extracts (mosquito)
Basil
Sweet basil (Ocimum basilicum)
Breadfruit (Insect repellent, including mosquitoes)
Callicarpa americana (beautyberry)
Camphor (mosquitoes)
Carvacrol (mosquitos)
Castor oil (Ricinus communis) (mosquitos)
Catnip oil (Nepeta species) (nepetalactone against mosquitos)
Cedar oil (mosquitos, moths)
Celery extract (Apium graveolens) (mosquitos) In clinical testing an extract of celery was demonstrated to be at least equally effective to 25% DEET, although the commercial availability of such an extract is not known.
Cinnamon (leaf oil kills mosquito larvae)
Citronella oil (repels mosquitos) (contains insect repelling substances, such as citronellol and geraniol)
Clove oil (mosquitos)
D-Limonene (ticks, fleas, flies, mosquitoes, and other insects) (widely used in insect repellents for pets)
Eucalyptus oil (70%+ eucalyptol; cineol is a synonym) (mosquitos, flies, dust mites). In the U.S., eucalyptus oil was first registered in 1948 as an insecticide and miticide.
Fennel oil (Foeniculum vulgare) (mosquitos)
Garlic (Allium sativum) (Mosquito, rice weevil, wheat flour beetle)
Geranium oil (also known as Pelargonium graveolens)
Hinokitiol (ticks, mosquitos, larvae)
Lavender (ineffective alone, but measurable effect in certain repellent mixtures)
Lemon eucalyptus (Corymbia citriodora) essential oil and its active ingredient p-menthane-3,8-diol (PMD)
Lemongrass oil (Cymbopogon species) (mosquitos)
East-Indian lemon grass (Cymbopogon flexuosus)
Linalool (ticks, fleas, mites, mosquitoes, spiders, cockroach)
Marjoram (spider mites Tetranychus urticae and Eutetranychus orientalis)
Mint (menthol is active chemical.) (Mentha sp.)
Neem oil (Azadirachta indica) (Repels or kills mosquitos, their larvae and a plethora of other insects including those in agriculture)
Nootkatone (ticks, mosquitoes and other insects)
Oleic acid, repels bees and ants by simulating the "smell of death" produced by their decomposing corpses.
Pennyroyal (Mentha pulegium) (mosquitos, fleas), but very toxic to pets
Peppermint (Mentha x piperita) (mosquitos)
Pyrethrum (from Chrysanthemum species, particularly C. cinerariifolium and C. coccineum)
Rosemary (Rosmarinus officinalis) (mosquitos)
Spanish Flag (Lantana camara) (against Tea Mosquito Bug, Helopeltis theivora)
Tea tree oil from the leaves of Melaleuca alternifolia
Thyme (Thymus species) (mosquitos)
Yellow nightshade (Solanum villosum), berry juice (against Stegomyia aegypti mosquitos)
Less effective methods
Some old studies suggested that the ingestion of large doses of thiamine (vitamin B1) could be effective as an oral insect repellent against mosquito bites. However, there is now conclusive evidence that thiamin has no efficacy against mosquito bites. Some claim that plants such as wormwood or sagewort, lemon balm, lemon grass, lemon thyme, and the mosquito plant (Pelargonium) will act against mosquitoes. However, scientists have determined that these plants are "effective" for a limited time only when the leaves are crushed and applied directly to the skin.
There are several, widespread, unproven theories about mosquito control, such as the assertion that vitamin B, in particular B1 (thiamine), garlic, ultrasonic devices or incense can be used to repel or control mosquitoes. Moreover, manufacturers of "mosquito repelling" ultrasonic devices have been found to be fraudulent, and their devices were deemed "useless" according to a review of scientific studies.
Alternatives to repellent
People can reduce the number of mosquito bites they receive (to a greater or lesser degree) by:
Using a mosquito net
Wearing long clothing that covers the skin and is tucked in to seal up holes
Avoiding the outdoors during dawn and dusk, when mosquitos are most active
Keeping air moving to prevent mosquitos from landing, such as by using a fan
Wearing light-colored clothing (light objects are harder for mosquitos to detect)
Reducing exercise, which reduces output of carbon dioxide used by mosquitos for detection
History
Testing and scientific certainty were desired at the end of the 1940s. To that end, products meant to be used by humans were tested with model animals to speed trials. Eddy & McGregor 1949 and Wiesmann & Lotmar 1949 used mice, Wasicky et al. 1949 canaries and guinea pigs, Kasman et al. 1953 also guinea pigs, Starnes & Granett 1953 rabbits, and many used cattle.
See also
Fly spray (insecticide)
Mosquito coil
Mosquito control
Mosquito net
Pest control
RID Insect Repellent
Slug tape
VUAA1
Chemical ecology
References
External links
2011 review of studies of plant-based mosquito repellents – NIH
Aphid repellents
Choosing and Using Insect Repellents – National Pesticide Information Center
Dr. Duke's Phytochemical and Ethnobotanical Databases (plant parts with Insect-repellent Activity from the chemical Borneol)
Mosquito repellents; Florida U
Insect repellent active ingredients recommended by the CDC
Chemical ecology
Hiking equipment
Household chemicals
"Chemistry",
"Biology"
] | 3,469 | [
"Biochemistry",
"Chemical ecology"
] |
1,634,018 | https://en.wikipedia.org/wiki/Metallic%20fiber | Metallic fibers are manufactured fibers composed of metal, metallic alloys, plastic-coated metal, metal-coated plastic, or a core completely covered by metal.
Having their origin in textile and clothing applications, gold and silver fibers have been used since ancient times as yarns for fabric decoration. More recently, aluminium yarns, aluminized plastic yarns, and aluminized nylon yarns have replaced gold and silver.
Today's metal fiber industry mainly offers fibers in stainless steel, nickel, titanium, copper and aluminium for various applications. Metallic filaments can be coated with transparent films to minimize tarnishing.
Many methods exist to manufacture metallic fibers, and each comes with its own benefits and limitations. The most common methods include shaving from a larger stock, casting directly from molten metal, and growing around a seed. Multiple fibers can also be woven or intertwined to form larger strands.
History
Gold and silver have been used since ancient times as decoration in the clothing and textiles of kings, leaders, nobility and people of status. Many of these elegant textiles can be found in museums around the world. Historically, the metallic thread was constructed by wrapping a metal strip around a fiber core (cotton or silk), often in such a way as to reveal the color of the fiber core to enhance visual quality of the decoration. Ancient textiles and clothing woven from wholly or partly gold threads is sometimes referred to as cloth of gold. They have been woven on Byzantine looms from the 7th to the 9th century, and after that in Sicily, Cyprus, Lucca, and Venice. Weaving also flourished in the 12th century during the legacy of Genghis Khan when art and trade flourished under Mongol rule in China and some Middle Eastern areas. The Dobeckmum Company produced the first modern metallic fiber in 1946.
During the early 1960s, Brunswick Corp. conducted a research program to develop an economically viable process for forming metallic filaments. They started producing metallic filaments in a laboratory-scale pilot plant. By 1964 Brunswick was producing fine metal fibers as small as 1 μm from 304 type stainless steel. Their first large scale production facility, located in the US, was brought on stream in 1966. Metal fibers are now widely produced and used in all kinds of technology. With a wide range of applications, it is a mature sector.
In the past, aluminium was often the base in a metallic fiber. More recently stainless steel has become the dominant metal for metallic fibers. Depending on the alloy, the metallic fibers provide properties to the yarn which allow the use in more high tech applications.
Fiber properties
Metal fibers exists in different forms and diameters. Generally, the sector offers metal fiber diameters from 100μm down to 1μm.
Metallic fibers exists in both long, continuous fibers as well as short fibers (with a length/diameter ratio of less than 100).
Compared to other fiber types, like carbon, glass, aramid or natural fibers, metal fibers have a low electrical resistance. This makes them suitable for any application that requires electrical conductivity. Their excellent thermal resistance makes them withstand extreme temperatures. Corrosion resistance is achieved through the use of high-quality alloys in stainless steels or other metals. Other advantageous mechanical properties of metal fibers include high failure strain, ductility, shock resistance, fire resistance and sound insulation.
Sintered metal fiber structures and products display high porosity properties, while remaining structurally strong and durable. This benefits the function and structure of specific applications like filtration or electrodes.
Coating metallic filaments helps to minimize tarnishing. When suitable adhesives and films are used, the filaments are not affected by salt water, chlorinated water in swimming pools or climatic conditions. If there is no care label, anything made with metallic fibers should be dry-cleaned if possible. Ironing can be problematic because the heat from the iron, especially at high temperatures, can melt the fibers.
Production method
There are several processes which can be used for manufacturing metallic fibers.
The most common technology is known as bundle drawing. Several thousands of filaments are bundled together in a so-called composite wire, a tube which is drawn through a die to further reduce its diameter. The covering tube is later dissolved in acid, resulting in individual continuous metal fibers. This composite wire is drawn further until the desired diameter of the individual filaments within the bundle is obtained. Bundle drawing technology allows for the production of continuous metal fiber bundles with lengths of up to several kilometers. Due to the nature of the process, the cross-section of the fibers is octagonal. In order to achieve high-quality fibers, this technology can be fine-tuned, resulting in uniform, very thin fibers with a very narrow equivalent diameter spread. Special developments within the last couple of years have allowed this technology to be used for the production of fibers with diameters as small as 200 nm and below.
In the laminating process, one seals a layer of aluminium between two layers of acetate or polyester film. These fibers are then cut into lengthwise strips for yarns and wound onto bobbins. The metal can be colored and sealed in a clear film, the adhesive can be colored, or the film can be colored before laminating. There are many different variations of color and effect that can be made in metallic fibers, producing a wide range of looks.
With foil-shaving technology, fibers with diameters down to 14 μm and a more rectangular crosssection are feasible. This produces semicontinuous bundles of fibers or staple fibers.
Machining of staple fibers can produce semicontinuous bundles of fibers down to 10 μm. Improving staple fiber manufacturing allows a narrow diameter spread on these kinds of fibers as well as tuning of the geometry of the fiber. This technology is unique compared to foil shaving or fibers from melt spinning, due to the small diameters that can be reached and the relatively small diameter spread.
Metallic fibers can also be made by using the metalizing process. This process involves heating the metal until it vaporizes, then depositing it at high pressure onto a polyester film. This process produces thinner, more flexible, more durable, and more comfortable fibers.
Metal fiber may also be shaved from wire (steel wool), shaven from foil, or bundle drawn to form larger diameter wire.
Types of metallic fiber products
Sintered metal fibers
Metal fibers are converted into fiber media either as non-woven fleece or sintered structures composed of fibers ranging from 1.5 to 80 μm in diameter. These porous metal fiber media have been used for their uniqueness in highly demanding applications. The benefit of having the combination of an outstanding permeable material (porosities up to 90% for sintered and up to 99% for non-woven structures) combined with high corrosion and temperature resistance is highly valued. The sintered porous structure has no binder as the individual fibers are strongly bonded together by inter-metallic diffusion bonding. 3D sintered structures have also become a standard product. Recent developments include filter media using combinations of both metallic and non-metallic fibers.
Short fibers
A specially designed process allows the production of individual powder-like metal fibers known as short fibers with a length over diameter (L/D) range of 100. These short fibers can be used as such or in combination with metal powders to produce sintered filtration structures with ultra-high levels of filtration while allowing unique levels of permeability.
Polymer pellets
Other metal fiber products are polymer pellets or grains composed out of metal fibers. Several bundles of fibers are glued together with a variety of sizings and an adequate compatible extrusion coating is applied. After chopping these coated bundles into pellets they can be used as additives in the production of engineered conductive/ shielding plastic pieces by injection molding and extrusion. The unique benefit of metal fibers is the conductive network formation with a relatively limited volume of conductive additives.
Nonwoven fibers
Non-wovens or felts can be produced with metal fibers, just like with traditional textile fibers. In a very limited number of cases, needle punching can be applied to entangle the fibers and obtain needle-punched felt.
Metal yarns
Bundles of stainless steel fibers can be converted to yarns by textile spinning processes. There are two forms of yarn: one with a low amount of fibers and one with a high amount of fibers. The former, with a number of filaments of around 275, can be converted into a filament yarn by adding twist to the bundle. Bundles with several thousands of fibers are typically used to convert fibers into spun yarn. That can be done by stretch breaking and subsequent traditional yarn spinning technologies. This results in 100% metal yarns. During the spinning process, tows can be blended and blended yarns can also be produced. Blends with cotton, polyester and wool are possible. Subsequently, metal yarns can be further converted into various textile products using textile processes. Knitting (circular, flat, warp) and weaving are possible, as well as braiding. Blended textile products can be obtained by combining metal yarns with other yarns, or by using yarns that have two kinds of fibers inside and hence are already blends by themselves.
Electrical cables
To make cables, two or more filaments are twisted together a number of times. During the process, a cable's torsion and straightness are monitored. The cable can be fine-tuned for a certain application by combining different filament strengths, diameters or the number of twists, or by preforming.
Fiber Reinforced Composites
Metal fiber can be used as reinforcement fiber for composite materials, improving breaking behavior upon impact and electrical conductivity. Traditional carbon or glass fiber reinforcement fibers have very limited elongation possibilities, which results in a brittle and explosive breaking behavior. Metal fibers act perfectly complementary to this, and can absorb much more energy before breaking. Processing is no different from any other reinforcement fiber for composite material. It is even possible to combine metal fibers with other fibers into a 'hybrid' composite structure, which combines all the benefits of carbon, glass and steel.
Producers
Currently metallic fibers are manufactured primarily in Europe. The largest and most integrated metal fiber producer worldwide is the multinational company Bekaert, headquartered in Belgium, but with a manufacturing footprint in Europe, Asia and the Americas. Three manufacturers are still producing metallic yarn in the United States; Metlon Corporation is one of the remaining U.S. manufacturers and stocks a wide variety of laminated and non-laminated metallic yarns. Other producers, including Brightex Corporation and Reiko Co. of Japan and South Korean companies such as Hwa Young, also manufacture metallic fibers. China also produces metallic yarns; the city of Dongyang contains more than 100 factories, though some of these are home-based production sites rather than conventional factories. Two of the more popular factories are Salu Metallic Yarn and Aoqi Textile.
In 2020, Fibrecoat, a German start-up from Aachen, started producing aluminium-coated basalt fibres in Germany; their patented coating technique allows for an exponential increase in production speed and a decrease in process steps, energy consumption and price.
Trademarks
Bekaert manufactures metal fibers and many derived products such as continuous fiber, sintered media, nonwoven structures, polymer pellets, braids, woven fabrics, cables, yarns and short fibers. Well established brand names are Bekipor, Beki-shield and Bekinox.
The Lurex Company has manufactured metallic fibers in Europe for over fifty years. They produce a wide variety of metallic fiber products including fibers used in apparel fabric, embroidery, braids, knitting, military regalia, trimmings, ropes, cords, and lace surface decoration. The majority of Lurex fibers have a polyamide film covering the metal strand but polyester and viscose are also used. The fibers are also treated with a lubricant called P.W., a mineral-based oil, which helps provide ease of use.
Metlon Corporation is a trademark of Metallic Yarns in the United States and has been producing metallic yarns for over sixty years. Metlon produces their metallic yarn by wrapping single slit yarns with two ends of nylon. One end of nylon is wrapped clockwise and the other end is wrapped counterclockwise around the metallic yarn. The most commonly used nylon is either 15 denier or 20 denier, but heavier deniers are used for special purposes.
Uses
Metallic fibers are used in a wide range of sectors and segments.
Automotive
Metal fiber sintered sheets are used for diesel and gasoline particulate filtration and crankcase ventilation filters.
Heat-resistant textile materials are made from metal fibers for automotive glass bending processes. These metal fiber cloths protect the glass during the bending process with highly elevated temperatures and high pressures.
Also heating cables for car seat heating and Selective Catalytic Reduction tubes, adblue tanks. Metal fiber heating cables show an extremely high flexibility and durability when compared to copper wire.
Aerospace
Metal fiber filters are used for Hydraulic fluid filtration in aircraft hydraulic systems. When compared to glass fiber filtration media, metal fibers show excellent durability, as the fibers are metallically bonded together by sintering, instead of kept together by a binder material.
Metal fiber sintered porous sheets are used as a sound attenuation medium in the aircraft cabin, reducing HVAC sounds, and auxiliary power unit noise.
Technical textiles
Metal fibers can serve as antistatic fibers for textiles, which can be used in, amongst others, electrical protective clothing or antistatic big bags.
Not only antistatic, but also shielding from electromagnetic interference (EMI) can be achieved by metal fiber textiles.
Stainless steel fiber textiles can be heated by applying electrical current and can also be used for cut resistant clothing (gloves).
Power
Metal fiber filters can reach very high porosity, at very low pore sizes, which makes them suitable for HEPA and ULPA filtration. These filters are used in, amongst others, nuclear power plants as a safety measure to prevent eventual release of radio-active steam.
Marine
Metal fiber filters are used for the purification of marine fuel and lube oil.
Other uses of metal fibers
Another common use for metallic fibers is upholstery fabric and textiles such as lamé and brocade. Many people also use metallic fibers in weaving and needlepoint. Increasingly common today are metallic fibers in clothing, anything from party and evening wear to club clothing, cold weather and survival clothing, and everyday wear. Metallic yarns are woven, braided, and knit into many fashionable fabrics and trims. For additional variety, metallic yarns are twisted with other fibers such as wool, nylon, cotton, and synthetic blends to produce yarns which add novelty effects to the end cloth or trim.
Stainless steel and other metal fibers are used in communication lines such as phone lines and cable television lines.
Stainless steel fibers are also used in carpets. They are dispersed throughout the carpet with other fibers so they are not detected. The presence of the fibers helps to conduct electricity so that the static shock is reduced. These types of carpets are often used in computer-use areas or other areas where static build-up could damage equipment. Other uses include tire cord, missile nose cones, work clothing such as protective suits, space suits, and cut resistant gloves for butchers and other people working near bladed or dangerous machinery.
Metal fibers can be used as a reinforcement or electrical conductivity fiber for fiber reinforced composites.
References
Synthetic fibers
Technical fabrics | Metallic fiber | [
"Physics",
"Chemistry"
] | 3,155 | [
"Synthetic fibers",
"Synthetic materials",
"Metallic objects",
"Physical objects",
"Matter"
] |
1,634,352 | https://en.wikipedia.org/wiki/Atomic%20battery | An atomic battery, nuclear battery, radioisotope battery or radioisotope generator uses energy from the decay of a radioactive isotope to generate electricity. Like a nuclear reactor, it generates electricity from nuclear energy, but it differs by not using a chain reaction. Although commonly called batteries, atomic batteries are technically not electrochemical and cannot be charged or recharged. Although they are very costly, they have extremely long lives and high energy density, so they are typically used as power sources for equipment that must operate unattended for long periods, such as spacecraft, pacemakers, underwater systems, and automated scientific stations in remote parts of the world.
Nuclear batteries began in 1913, when Henry Moseley first demonstrated a current generated by charged-particle radiation. In the 1950s and 1960s, this field of research received much attention for applications requiring long-life power sources for spacecraft. In 1954, RCA researched a small atomic battery for small radio receivers and hearing aids. Since RCA's initial research and development in the early 1950s, many types and methods have been designed to extract electrical energy from nuclear sources. The scientific principles are well known, but modern nano-scale technology and new wide-bandgap semiconductors have allowed the making of new devices and interesting material properties not previously available.
Nuclear batteries can be classified by their means of energy conversion into two main groups: thermal converters and non-thermal converters. The thermal types convert some of the heat generated by the nuclear decay into electricity; an example is the radioisotope thermoelectric generator (RTG), often used in spacecraft. The non-thermal converters, such as betavoltaic cells, extract energy directly from the emitted radiation, before it is degraded into heat; they are easier to miniaturize and do not need a thermal gradient to operate, so they can be used in small machines.
Atomic batteries usually have an efficiency of 0.1–5%. High-efficiency betavoltaic devices can reach 6–8% efficiency.
Thermal conversion
Thermionic conversion
A thermionic converter consists of a hot electrode, which thermionically emits electrons over a space-charge barrier to a cooler electrode, producing a useful power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization) to neutralize the electron space charge.
Thermoelectric conversion
A radioisotope thermoelectric generator (RTG) uses thermocouples. Each thermocouple is formed from two wires of different metals (or other materials). A temperature gradient along the length of each wire produces a voltage gradient from one end of the wire to the other; but the different materials produce different voltages per degree of temperature difference. By connecting the wires at one end, heating that end but cooling the other end, a usable, but small (millivolts), voltage is generated between the unconnected wire ends. In practice, many are connected in series (or in parallel) to generate a larger voltage (or current) from the same heat source, as heat flows from the hot ends to the cold ends. Metal thermocouples have low thermal-to-electrical efficiency. However, the carrier density and charge can be adjusted in semiconductor materials such as bismuth telluride and silicon germanium to achieve much higher conversion efficiencies.
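As a rough illustration (not a design calculation), the following Python sketch estimates the open-circuit voltage of a series thermocouple stack from the Seebeck relation V = N·S·ΔT, and the power delivered into a load matched to the stack's internal resistance. All numerical values are assumed purely for illustration.

```python
# Sketch: open-circuit voltage and matched-load power of a thermoelectric stack.
# Seebeck coefficient, couple count, resistance, and temperatures are illustrative only.

def thermo_stack(n_couples, seebeck_uV_per_K, t_hot, t_cold, internal_ohm):
    dT = t_hot - t_cold
    v_oc = n_couples * seebeck_uV_per_K * 1e-6 * dT       # open-circuit voltage, volts
    p_matched = v_oc ** 2 / (4 * internal_ohm)            # watts, load = internal resistance
    return v_oc, p_matched

v, p = thermo_stack(n_couples=500, seebeck_uV_per_K=200.0,
                    t_hot=800.0, t_cold=400.0, internal_ohm=2.0)
print(f"open-circuit voltage ~ {v:.1f} V, matched-load power ~ {p:.1f} W")
```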
Thermophotovoltaic conversion
Thermophotovoltaic (TPV) cells work by the same principles as a photovoltaic cell, except that they convert infrared light (rather than visible light) emitted by a hot surface, into electricity. Thermophotovoltaic cells have an efficiency slightly higher than thermoelectric couples and can be overlaid on thermoelectric couples, potentially doubling efficiency. The University of Houston TPV Radioisotope Power Conversion Technology development effort is aiming at combining thermophotovoltaic cells concurrently with thermocouples to provide a 3- to 4-fold improvement in system efficiency over current thermoelectric radioisotope generators.
Stirling generators
A Stirling radioisotope generator is a Stirling engine driven by the temperature difference produced by a radioisotope. A more efficient version, the advanced Stirling radioisotope generator, was under development by NASA, but was cancelled in 2013 due to large-scale cost overruns.
Non-thermal conversion
Non-thermal converters extract energy from emitted radiation before it is degraded into heat. Unlike thermoelectric and thermionic converters their output does not depend on the temperature difference. Non-thermal generators can be classified by the type of particle used and by the mechanism by which their energy is converted.
Electrostatic conversion
Energy can be extracted from emitted charged particles when their charge builds up in a conductor, thus creating an electrostatic potential. Without a dissipation mode the voltage can increase up to the energy of the radiated particles, which may range from several kilovolts (for beta radiation) up to megavolts (alpha radiation). The built up electrostatic energy can be turned into usable electricity in one of the following ways.
Direct-charging generator
A direct-charging generator consists of a capacitor charged by the current of charged particles from a radioactive layer deposited on one of the electrodes. Spacing can be either vacuum or dielectric. Negatively charged beta particles or positively charged alpha particles, positrons or fission fragments may be utilized. Although this form of nuclear-electric generator dates back to 1913, few applications have been found in the past for the extremely low currents and inconveniently high voltages provided by direct-charging generators. Oscillator/transformer systems are employed to reduce the voltages, then rectifiers are used to transform the AC power back to direct current.
English physicist H. G. J. Moseley constructed the first of these. Moseley's apparatus consisted of a glass globe silvered on the inside with a radium emitter mounted on the tip of a wire at the center. The charged particles from the radium created a flow of electricity as they moved quickly from the radium to the inside surface of the sphere. As late as 1945 the Moseley model guided other efforts to build experimental batteries generating electricity from the emissions of radioactive elements.
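The charging behaviour can be illustrated with a simple sketch: treating the device as an ideal capacitor collecting one elementary charge per decay (ignoring leakage and the particles' kinetic energy), the voltage grows linearly with time. The activity, capacitance, and collection time below are assumptions chosen only to show the order of magnitude.

```python
# Sketch: charging of a direct-collection capacitor by beta particles.
# Assumes every decay deposits one elementary charge on the collector (no leakage),
# which overestimates a real device; numbers are illustrative.

E_CHARGE = 1.602e-19          # coulombs per elementary charge

def collector_voltage(activity_bq, capacitance_f, seconds):
    current = activity_bq * E_CHARGE          # amperes of collected beta current
    return current * seconds / capacitance_f  # V = Q / C

# 1 GBq source, 100 pF gap, one hour of collection:
v = collector_voltage(activity_bq=1e9, capacitance_f=100e-12, seconds=3600)
print(f"ideal collector voltage after 1 h: ~{v:.0f} V")   # several kilovolts
```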
Electromechanical conversion
Electromechanical atomic batteries use the buildup of charge between two plates to pull one bendable plate towards the other, until the two plates touch, discharge, equalizing the electrostatic buildup, and spring back. The mechanical motion produced can be used to produce electricity through flexing of a piezoelectric material or through a linear generator. Milliwatts of power are produced in pulses depending on the charge rate, in some cases multiple times per second (35 Hz).
Radiovoltaic conversion
A radiovoltaic (RV) device converts the energy of ionizing radiation directly into electricity using a semiconductor junction, similar to the conversion of photons into electricity in a photovoltaic cell. Depending on the type of radiation targeted, these devices are called alphavoltaic (AV, αV), betavoltaic (BV, βV) and/or gammavoltaic (GV, γV). Betavoltaics have traditionally received the most attention since (low-energy) beta emitters cause the least amount of radiative damage, thus allowing a longer operating life and less shielding. Interest in alphavoltaic and (more recently) gammavoltaic devices is driven by their potential higher efficiency.
Alphavoltaic conversion
Alphavoltaic devices use a semiconductor junction to produce electrical energy from energetic alpha particles.
Betavoltaic conversion
Betavoltaic devices use a semiconductor junction to produce electrical energy from energetic beta particles (electrons). A commonly used source is the hydrogen isotope tritium, which is employed in City Labs' NanoTritium batteries.
Betavoltaic devices are particularly well-suited to low-power electrical applications where long life of the energy source is needed, such as implantable medical devices or military and space applications.
The Chinese startup Betavolt claimed in January 2024 to have a miniature device in the pilot testing stage. It is allegedly generating 100 microwatts of power and a voltage of 3V and has a lifetime of 50 years without any need for charging or maintenance. Betavolt claims it to be the first such miniaturised device ever developed.
It gains its energy from the isotope nickel-63, held in a module the size of a very small coin.
As it is consumed, the nickel-63 decays into stable, non-radioactive isotopes of copper, which pose no environmental threat. It contains a thin wafer of nickel-63 providing beta particle electrons sandwiched between two thin crystallographic diamond semiconductor layers.
Gammavoltaic conversion
Gammavoltaic devices use a semiconductor junction to produce electrical energy from energetic gamma particles (high-energy photons). They have only been considered in the 2010s but were proposed as early as 1981.
A gammavoltaic effect has been reported in perovskite solar cells. Another patented design involves scattering of the gamma particle until its energy has decreased enough to be absorbed in a conventional photovoltaic cell. Gammavoltaic designs using diamond and Schottky diodes are also being investigated.
Radiophotovoltaic (optoelectric) conversion
In a radiophotovoltaic (RPV) device the energy conversion is indirect: the emitted particles are first converted into light using a radioluminescent material (a scintillator or phosphor), and the light is then converted into electricity using a photovoltaic cell. Depending on the type of particle targeted, the conversion type can be more precisely specified as alphaphotovoltaic (APV or α-PV), betaphotovoltaic (BPV or β-PV) or gammaphotovoltaic (GPV or γ-PV).
Radiophotovoltaic conversion can be combined with radiovoltaic conversion to increase the conversion efficiency.
Pacemakers
Medtronic and Alcatel developed a plutonium-powered pacemaker, the Numec NU-5, powered by a 2.5 Ci slug of plutonium-238, first implanted in a human patient in 1970. The 139 Numec NU-5 nuclear pacemakers implanted in the 1970s are expected to never need replacing, an advantage over non-nuclear pacemakers, which require surgical replacement of their batteries every 5 to 10 years. The plutonium "batteries" are expected to produce enough power to drive the circuit for longer than the 88-year half-life of the plutonium-238.
The last of these units was implanted in 1988, as lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, made these units obsolete.
Betavoltaic batteries are also being considered as long-lasting power sources for lead-free pacemakers.
Radioisotopes used
Atomic batteries use radioisotopes that produce low energy beta particles or sometimes alpha particles of varying energies. Low energy beta particles are needed to prevent the production of high energy penetrating Bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as tritium, nickel-63, promethium-147, and technetium-99 have been tested. Plutonium-238, curium-242, curium-244 and strontium-90 have been used. Besides the nuclear properties of the used isotope, there are also the issues of chemical properties and availability. A product deliberately produced via neutron irradiation or in a particle accelerator is more difficult to obtain than a fission product easily extracted from spent nuclear fuel.
Plutonium-238 must be deliberately produced via neutron irradiation of Neptunium-237 but it can be easily converted into a stable plutonium oxide ceramic. Strontium-90 is easily extracted from spent nuclear fuel but must be converted into the perovskite form strontium titanate to reduce its chemical mobility, cutting power density in half. Caesium-137, another high yield nuclear fission product, is rarely used in atomic batteries because it is difficult to convert into chemically inert substances. Another undesirable property of Cs-137 extracted from spent nuclear fuel is that it is contaminated with other isotopes of Caesium which reduce power density further.
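For a sense of scale, the decay heat available to any of these converters can be estimated from the decay constant and the energy released per decay, P = λ·N·E. The sketch below uses approximate literature values for plutonium-238 (roughly an 88-year half-life and about 5.6 MeV per alpha decay); treat the numbers as illustrative rather than authoritative.

```python
import math

# Sketch: specific decay power of a radioisotope, P = lambda * N * E_decay.
# Half-life and decay energy below are approximate literature values for Pu-238.

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
YEAR_S = 3.156e7

def specific_power_w_per_g(half_life_years, decay_energy_mev, molar_mass_g):
    lam = math.log(2) / (half_life_years * YEAR_S)     # decay constant, 1/s
    n_per_gram = AVOGADRO / molar_mass_g               # atoms per gram
    return lam * n_per_gram * decay_energy_mev * MEV_TO_J

# Pu-238: ~88-year half-life, ~5.6 MeV per alpha decay
print(f"Pu-238: ~{specific_power_w_per_g(87.7, 5.6, 238):.2f} W/g of decay heat")
```

The result, roughly half a watt of heat per gram, is the thermal input that the converters described above turn into electricity at their respective efficiencies.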
Micro-batteries
In the field of microelectromechanical systems (MEMS), nuclear engineers at the University of Wisconsin–Madison have explored the possibilities of producing minuscule batteries which exploit radioactive nuclei of substances such as polonium or curium to produce electric energy. As an example of an integrated, self-powered application, the researchers have created an oscillating cantilever beam that is capable of consistent, periodic oscillations over very long time periods without the need for refueling. Ongoing work demonstrates that this cantilever is capable of radio frequency transmission, allowing MEMS devices to communicate with one another wirelessly.
These micro-batteries are very light and deliver enough energy to function as power supply for use in MEMS devices and further for supply for nanodevices.
The radiation energy released is transformed into electric energy, which is restricted to the area of the device that contains the processor and the micro-battery that supplies it with energy.
See also
References
External links
Betavoltaic Historical Review
Cantilever Electromechanical Atomic Battery
Types of Radioisotopic Batteries
Americium Battery Concept Proposed for Space Applications- TFOT article
Nuclear Batteries (25 MW)
Tiny 'nuclear batteries' unveiled, BBC article about the research of Jae Wan Kwon et al. from the University of Missouri.
Battery types
Electrical generators
Nuclear technology
Nuclear power in space | Atomic battery | [
"Physics",
"Technology"
] | 2,872 | [
"Electrical generators",
"Machines",
"Nuclear technology",
"Physical systems",
"Nuclear physics"
] |
1,634,427 | https://en.wikipedia.org/wiki/Zymography | Zymography is an electrophoretic technique for the detection of hydrolytic enzymes, based on the substrate repertoire of the enzyme. Three types of zymography are used; in gel zymography, in situ zymography and in vivo zymography. For instance, gelatin embedded in a polyacrylamide gel will be digested by active gelatinases run through the gel. After Coomassie staining, areas of degradation are visible as clear bands against a darkly stained background.
Modern usage of the term zymography has been adapted to define the study and cataloging of fermented products, such as beer or wine, often by specific brewers or winemakers or within an identified category of fermentation such as with a particular strain of yeast or species of bacteria.
Zymography also refers to a collection of related, fermented products, considered as a body of work. For example, all of the beers produced by a particular brewery could collectively be referred to as its zymography.
See also zymology, the applied science of fermentation. Zymology relates to the biochemical processes of fermentation, especially the selection of fermenting yeast and bacteria in brewing, winemaking, and other fermented foods. For example, beer-making involves the application of top-fermenting (ale) or bottom-fermenting (lager) yeast to produce the desired variety of beer. The yeast's metabolic by-products can impact the flavor profile of the beer, e.g. diacetyl (a buttery or butterscotch taste or aroma).
Gel zymography
Samples are prepared in a standard, non-reducing loading buffer for SDS-PAGE. No reducing agent or boiling are necessary since these would interfere with refolding of the enzyme. A suitable substrate (e.g. gelatin or casein for protease detection) is embedded in the resolving gel during preparation of the acrylamide gel. Following electrophoresis, the SDS is removed from the gel (or zymogram) by incubation in unbuffered Triton X-100, followed by incubation in an appropriate digestion buffer, for an optimized length of time at 37 °C. The zymogram is subsequently stained (commonly with Amido Black or Coomassie brilliant blue), and areas of digestion appear as clear bands against a darkly stained background where the substrate has been degraded by the enzyme.
Variations on the standard protocol
The standard protocol may require modifications depending on the sample enzyme; for instance, D. melanogaster digestive glycosidases generally survive reducing conditions (i.e. the presence of 2-mercaptoethanol or DTT), and to an extent, heating. Indeed, the separations following heating to 50 °C tend to exhibit a substantial increase in band resolution, without appreciable loss of activity.
A common protocol used in the past for zymography of α-amylase activity was the so-called starch film protocol of W.W. Doane. Here a native PAGE gel was run to separate the proteins in a homogenate. Subsequently, a thin gel with starch dissolved (or more properly, suspended) in it was overlaid for a period of time on top of the original gel. The starch was then stained with Lugol's iodine.
Gel zymography is often used for the detection and analysis of enzymes produced by microorganisms. This has led to variations on the standard protocol e.g. mixed-substrate zymography.
Reverse zymography copolymerizes both the substrate and the enzyme with the acrylamide, and is useful for the demonstration of enzyme inhibitor activity. Following staining, areas of inhibition are visualized as dark bands against a clear (or lightly stained) background.
In imprint technique, the enzyme is separated by native gel electrophoresis and the gel is laid on top of a substrate treated agarose.
Zymography can also be applied to other types of enzymes, including xylanases, lipases and chitinases.
See also
SDS-PAGE
References
Molecular biology techniques | Zymography | [
"Chemistry",
"Biology"
] | 871 | [
"Molecular biology techniques",
"Molecular biology"
] |
1,634,778 | https://en.wikipedia.org/wiki/Rough%20set | In computer science, a rough set, first described by Polish computer scientist Zdzisław I. Pawlak, is a formal approximation of a crisp set (i.e., conventional set) in terms of a pair of sets which give the lower and the upper approximation of the original set. In the standard version of rough set theory described in Pawlak (1991), the lower- and upper-approximation sets are crisp sets, but in other variations, the approximating sets may be fuzzy sets.
Definitions
The following section contains an overview of the basic framework of rough set theory, as originally proposed by Zdzisław I. Pawlak, along with some of the key definitions. More formal properties and boundaries of rough sets can be found in and cited references. The initial and basic theory of rough sets is sometimes referred to as "Pawlak Rough Sets" or "classical rough sets", as a means to distinguish it from more recent extensions and generalizations.
Information system framework
Let I = (U, A) be an information system (attribute–value system), where U is a non-empty, finite set of objects (the universe) and A is a non-empty, finite set of attributes such that a : U → V_a for every a ∈ A. V_a is the set of values that attribute a may take. The information table assigns a value a(x) from V_a to each attribute a and object x in the universe U.
With any P ⊆ A there is an associated equivalence relation IND(P):
IND(P) = {(x, y) ∈ U × U | ∀ a ∈ P, a(x) = a(y)}
The relation IND(P) is called a P-indiscernibility relation. The partition of U is a family of all equivalence classes of IND(P) and is denoted by U/IND(P) (or U/P).
If (x, y) ∈ IND(P), then x and y are indiscernible (or indistinguishable) by attributes from P.
The equivalence classes of the P-indiscernibility relation are denoted [x]_P.
Example: equivalence-class structure
For example, consider the following information table:
{| class="wikitable" style="text-align:center; width:30%" border="1"
|+ Sample Information System
! Object !! !! !! !! !!
|-
!
| 1 || 2 || 0 || 1 || 1
|-
!
| 1 || 2 || 0 || 1 || 1
|-
!
| 2 || 0 || 0 || 1 || 0
|-
!
| 0 || 0 || 1 || 2 || 1
|-
!
| 2 || 1 || 0 || 2 || 1
|-
!
| 0 || 0 || 1 || 2 || 2
|-
!
| 2 || 0 || 0 || 1 || 0
|-
!
| 0 || 1 || 2 || 2 || 1
|-
!
| 2 || 1 || 0 || 2 || 2
|-
!
| 2 || 0 || 0 || 1 || 0
|}
When the full set of attributes is considered, we see that we have the following seven equivalence classes:
Thus, the two objects within the first equivalence class, , cannot be distinguished from each other based on the available attributes, and the three objects within the second equivalence class, , cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects.
It is apparent that different attribute subset selections will in general lead to different indiscernibility classes. For example, if attribute alone is selected, we obtain the following, much coarser, equivalence-class structure:
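These partitions can be computed mechanically. The following Python sketch is purely illustrative: it assumes the sample table above is encoded as a dictionary, with the objects and attributes named O1–O10 and P1–P5 (the names themselves are not given in the table and are an assumption made here).

<syntaxhighlight lang="python">
from collections import defaultdict

# The sample information system above, with objects O1..O10 and attributes P1..P5.
table = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1),
    "O3": (2, 0, 0, 1, 0), "O4": (0, 0, 1, 2, 1),
    "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1),
    "O9": (2, 1, 0, 2, 2), "O10": (2, 0, 0, 1, 0),
}
attributes = ["P1", "P2", "P3", "P4", "P5"]

def partition(table, attrs):
    """Equivalence classes of the attrs-indiscernibility relation."""
    idx = [attributes.index(a) for a in attrs]
    classes = defaultdict(set)
    for obj, values in table.items():
        # objects sharing the same restriction to attrs are indiscernible
        classes[tuple(values[i] for i in idx)].add(obj)
    return list(classes.values())

print(partition(table, attributes))   # seven classes, including {'O1', 'O2'} and {'O3', 'O7', 'O10'}
print(partition(table, ["P1"]))       # a much coarser partition induced by a single attribute
</syntaxhighlight>

The second call restricts the partition to a single attribute (P1 here, as one possible choice) and reproduces the kind of coarser equivalence-class structure described above.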
Definition of a rough set
Let be a target set that we wish to represent using attribute subset ; that is, we are told that an arbitrary set of objects comprises a single class, and we wish to express this class (i.e., this subset) using the equivalence classes induced by attribute subset . In general, cannot be expressed exactly, because the set may include and exclude objects which are indistinguishable on the basis of attributes .
For example, consider the target set , and let attribute subset , the full available set of features. The set cannot be expressed exactly, because in , objects are indiscernible. Thus, there is no way to represent any set which includes but excludes objects and .
However, the target set can be approximated using only the information contained within by constructing the -lower and -upper approximations of :
Lower approximation and positive region
The -lower approximation, or positive region, is the union of all equivalence classes in which are contained by (i.e., are subsets of) the target set – in the example, , the union of the two equivalence classes in which are contained in the target set. The lower approximation is the complete set of objects in that can be positively (i.e., unambiguously) classified as belonging to target set .
Upper approximation and negative region
The -upper approximation is the union of all equivalence classes in which have non-empty intersection with the target set – in the example, , the union of the three equivalence classes in that have non-empty intersection with the target set. The upper approximation is the complete set of objects that cannot be positively (i.e., unambiguously) classified as belonging to the complement () of the target set . In other words, the upper approximation is the complete set of objects that are possibly members of the target set .
The set therefore represents the negative region, containing the set of objects that can be definitely ruled out as members of the target set.
Boundary region
The boundary region, given by set difference , consists of those objects that can neither be ruled in nor ruled out as members of the target set .
In summary, the lower approximation of a target set is a conservative approximation consisting of only those objects which can positively be identified as members of the set. (These objects have no indiscernible "clones" which are excluded by the target set.) The upper approximation is a liberal approximation which includes all objects that might be members of target set. (Some objects in the upper approximation may not be members of the target set.) From the perspective of , the lower approximation contains objects that are members of the target set with certainty (probability = 1), while the upper approximation contains objects that are members of the target set with non-zero probability (probability > 0).
The rough set
The tuple composed of the lower and upper approximation is called a rough set; thus, a rough set is composed of two crisp sets, one representing a lower boundary of the target set , and the other representing an upper boundary of the target set .
The accuracy of the rough-set representation of the set can be given by the following:
That is, the accuracy of the rough set representation of , , , is the ratio of the number of objects which can positively be placed in to the number of objects that can possibly be placed in – this provides a measure of how closely the rough set is approximating the target set. Clearly, when the upper and lower approximations are equal (i.e., boundary region empty), then , and the approximation is perfect; at the other extreme, whenever the lower approximation is empty, the accuracy is zero (regardless of the size of the upper approximation).
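Continuing the Python sketch above (and reusing its table, attributes and partition), the lower and upper approximations, the boundary region and the accuracy can be computed directly. The target set X below is an illustrative choice consistent with the counts described in the text; the exact set used in the text is an assumption here.

<syntaxhighlight lang="python">
def lower_approximation(classes, target):
    """Union of the equivalence classes contained in the target set (positive region)."""
    return set().union(*(c for c in classes if c <= target))

def upper_approximation(classes, target):
    """Union of the equivalence classes that intersect the target set."""
    return set().union(*(c for c in classes if c & target))

classes = partition(table, attributes)
X = {"O1", "O2", "O3", "O4"}                 # illustrative target set

lower = lower_approximation(classes, X)      # {'O1', 'O2', 'O4'}: the union of two whole classes
upper = upper_approximation(classes, X)      # also includes the class {'O3', 'O7', 'O10'}
boundary = upper - lower                     # objects that can be neither ruled in nor ruled out
accuracy = len(lower) / len(upper)           # 0.5 here; equals 1 exactly when X is definable
</syntaxhighlight>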
Objective analysis
Rough set theory is one of many methods that can be employed to analyse uncertain (including vague) systems, although less common than more traditional methods of probability, statistics, entropy and Dempster–Shafer theory. However, a key difference, and a unique strength, of using classical rough set theory is that it provides an objective form of analysis. Unlike other methods, such as those given above, classical rough set analysis requires no additional information, external parameters, models, functions, grades or subjective interpretations to determine set membership – instead it only uses the information presented within the given data. More recent adaptations of rough set theory, such as dominance-based, decision-theoretic and fuzzy rough sets, have introduced more subjectivity to the analysis.
Definability
In general, the upper and lower approximations are not equal; in such cases, we say that target set is undefinable or roughly definable on attribute set . When the upper and lower approximations are equal (i.e., the boundary is empty), , then the target set is definable on attribute set . We can distinguish the following special cases of undefinability:
Set is internally undefinable if and . This means that on attribute set , there are no objects which we can be certain belong to target set , but there are objects which we can definitively exclude from set .
Set is externally undefinable if and . This means that on attribute set , there are objects which we can be certain belong to target set , but there are no objects which we can definitively exclude from set .
Set is totally undefinable if and . This means that on attribute set , there are no objects which we can be certain belong to target set , and there are no objects which we can definitively exclude from set . Thus, on attribute set , we cannot decide whether any object is, or is not, a member of .
Reduct and core
An interesting question is whether there are attributes in the information system (attribute–value table) which are more important to the knowledge represented in the equivalence class structure than other attributes. Often, we wonder whether there is a subset of attributes which can, by itself, fully characterize the knowledge in the database; such an attribute set is called a reduct.
Formally, a reduct is a subset of attributes such that
= , that is, the equivalence classes induced by the reduced attribute set are the same as the equivalence class structure induced by the full attribute set .
the attribute set is minimal, in the sense that for any attribute ; in other words, no attribute can be removed from set without changing the equivalence classes .
A reduct can be thought of as a sufficient set of features – sufficient, that is, to represent the category structure. In the example table above, attribute set is a reduct – the information system projected on just these attributes possesses the same equivalence class structure as that expressed by the full attribute set:
Attribute set is a reduct because eliminating any of these attributes causes a collapse of the equivalence-class structure, with the result that .
The reduct of an information system is not unique: there may be many subsets of attributes which preserve the equivalence-class structure (i.e., the knowledge) expressed in the information system. In the example information system above, another reduct is , producing the same equivalence-class structure as .
The set of attributes which is common to all reducts is called the core: the core is the set of attributes which is possessed by every reduct, and therefore consists of attributes which cannot be removed from the information system without causing collapse of the equivalence-class structure. The core may be thought of as the set of necessary attributes – necessary, that is, for the category structure to be represented. In the example, the only such attribute is ; any one of the other attributes can be removed singly without damaging the equivalence-class structure, and hence these are all dispensable. However, removing by itself does change the equivalence-class structure, and thus is the indispensable attribute of this information system, and hence the core.
It is possible for the core to be empty, which means that there is no indispensable attribute: any single attribute in such an information system can be deleted without altering the equivalence-class structure. In such cases, there is no essential or necessary attribute which is required for the class structure to be represented.
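A brute-force search over attribute subsets makes the notions of reduct and core concrete. The sketch below again reuses table, attributes and partition from the earlier example; it is only practical for small attribute sets, since the number of candidate subsets grows exponentially.

<syntaxhighlight lang="python">
from itertools import combinations

def same_structure(attrs):
    """True if attrs induces the same equivalence classes as the full attribute set."""
    full = {frozenset(c) for c in partition(table, attributes)}
    return {frozenset(c) for c in partition(table, list(attrs))} == full

def reducts():
    """All minimal attribute subsets that preserve the equivalence-class structure."""
    found = []
    for size in range(1, len(attributes) + 1):
        for attrs in combinations(attributes, size):
            # skip supersets of an already-found (hence smaller) reduct to keep minimality
            if same_structure(attrs) and not any(set(r) <= set(attrs) for r in found):
                found.append(attrs)
    return found

all_reducts = reducts()
core = set.intersection(*(set(r) for r in all_reducts))   # attributes shared by every reduct
</syntaxhighlight>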
Attribute dependency
One of the most important aspects of database analysis or data acquisition is the discovery of attribute dependencies; that is, we wish to discover which variables are strongly related to which other variables. Generally, it is these strong relationships that will warrant further investigation, and that will ultimately be of use in predictive modeling.
In rough set theory, the notion of dependency is defined very simply. Let us take two (disjoint) sets of attributes, set and set , and inquire what degree of dependency obtains between them. Each attribute set induces an (indiscernibility) equivalence class structure, the equivalence classes induced by given by , and the equivalence classes induced by given by .
Let , where is a given equivalence class from the equivalence-class structure induced by attribute set . Then, the dependency of attribute set on attribute set , , is given by the following expression, written here in the standard notation with the two attribute sets denoted <math>P</math> and <math>Q</math>, the equivalence classes induced by <math>Q</math> denoted <math>Q_1, \ldots, Q_N</math>, and the universe denoted <math>\mathbb{U}</math>:
<math>\gamma_{P}(Q) = \frac{\sum_{i=1}^{N} \left| \underline{P} Q_i \right|}{\left| \mathbb{U} \right|}</math>
That is, for each equivalence class in , we add up the size of its lower approximation by the attributes in , i.e., . This approximation (as above, for arbitrary set ) is the number of objects which on attribute set can be positively identified as belonging to target set . Added across all equivalence classes in , the numerator above represents the total number of objects which – based on attribute set – can be positively categorized according to the classification induced by attributes . The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects. The dependency "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in to determine the values of attributes in ".
Another, intuitive, way to consider dependency is to take the partition induced by as the target class , and consider as the attribute set we wish to use in order to "re-construct" the target class . If can completely reconstruct , then depends totally upon ; if results in a poor and perhaps a random reconstruction of , then does not depend upon at all.
Thus, this measure of dependency expresses the degree of functional (i.e., deterministic) dependency of attribute set on attribute set ; it is not symmetric. The relationship of this notion of attribute dependency to more traditional information-theoretic (i.e., entropic) notions of attribute dependence has been discussed in a number of sources, e.g. Pawlak, Wong, & Ziarko (1988), Yao & Yao (2002), Wong, Ziarko, & Ye (1986), and Quafafou & Boussouf (2000).
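As a rough sketch, this dependency can be computed by summing the sizes of the lower approximations of the Q-classes with respect to the P-partition and dividing by the number of objects. The helper functions are those defined in the earlier sketches, and the attribute sets passed in below are an arbitrary illustrative choice.

<syntaxhighlight lang="python">
def dependency(p_attrs, q_attrs):
    """Degree (between 0 and 1) to which the partition by q_attrs is determined by p_attrs."""
    p_classes = partition(table, p_attrs)
    positively_classified = sum(
        len(lower_approximation(p_classes, q_class))   # objects certainly assignable to this Q-class
        for q_class in partition(table, q_attrs)
    )
    return positively_classified / len(table)

gamma = dependency(["P3", "P4", "P5"], ["P1"])          # illustrative choice of P and Q
</syntaxhighlight>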
Rule extraction
The category representations discussed above are all extensional in nature; that is, a category or complex class is simply the sum of all its members. To represent a category is, then, just to be able to list or identify all the objects belonging to that category. However, extensional category representations have very limited practical use, because they provide no insight for deciding whether novel (never-before-seen) objects are members of the category.
What is generally desired is an intensional description of the category, a representation of the category based on a set of rules that describe the scope of the category. The choice of such rules is not unique, and therein lies the issue of inductive bias. See Version space and Model selection for more about this issue.
There are a few rule-extraction methods. We will start from a rule-extraction procedure based on Ziarko & Shan (1995).
Decision matrices
Let us say that we wish to find the minimal set of consistent rules (logical implications) that characterize our sample system. For a set of condition attributes and a decision attribute , these rules should have the form , or, spelled out,
where are legitimate values from the domains of their respective attributes. This is a form typical of association rules, and the number of items in which match the condition/antecedent is called the support for the rule. The method for extracting such rules given in is to form a decision matrix corresponding to each individual value of decision attribute . Informally, the decision matrix for value of decision attribute lists all attribute–value pairs that differ between objects having and .
This is best explained by example (which also avoids a lot of notation). Consider the table above, and let be the decision variable (i.e., the variable on the right side of the implications) and let be the condition variables (on the left side of the implication). We note that the decision variable takes on two different values, namely . We treat each case separately.
First, we look at the case , and we divide up into objects that have and those that have . (Note that objects with in this case are simply the objects that have , but in general, would include all objects having any value for other than , and there may be several such classes of objects (for example, those having ).) In this case, the objects having are while the objects which have are . The decision matrix for lists all the differences between the objects having and those having ; that is, the decision matrix lists all the differences between and . We put the "positive" objects () as the rows, and the "negative" objects as the columns.
{| class="wikitable" style="text-align:center; width:30%" border="1"
|+ Decision matrix for
! Object !! !! !! !! !!
|-
!
| || || || ||
|-
!
| || || || ||
|-
!
| || || || ||
|-
!
| || || || ||
|-
!
| || || || ||
|}
To read this decision matrix, look, for example, at the intersection of row and column , showing in the cell. This means that with regard to decision value , object differs from object on attributes and , and the particular values on these attributes for the positive object are and . This tells us that the correct classification of as belonging to decision class rests on attributes and ; although one or the other might be dispensable, we know that at least one of these attributes is indispensable.
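A small Python sketch can generate such a decision matrix automatically. It reuses table and attributes from the earlier sketch; taking P4 as the decision attribute, with 1 as the "positive" decision value and the remaining attributes as conditions, is an assumption made purely for illustration (it is at least consistent with the five-by-five shape of the matrix above).

<syntaxhighlight lang="python">
def decision_matrix(table, condition_attrs, decision_attr, positive_value):
    """For every (positive, negative) pair of objects, list the condition attribute/value
    pairs of the positive object on which the two objects differ."""
    d = attributes.index(decision_attr)
    positives = [o for o, v in table.items() if v[d] == positive_value]
    negatives = [o for o, v in table.items() if v[d] != positive_value]
    matrix = {}
    for p in positives:
        for n in negatives:
            matrix[(p, n)] = [
                (a, table[p][attributes.index(a)])
                for a in condition_attrs
                if table[p][attributes.index(a)] != table[n][attributes.index(a)]
            ]
    return matrix

# Illustrative call: P4 as the decision attribute, value 1 as the positive class.
dm = decision_matrix(table, ["P1", "P2", "P3", "P5"], "P4", positive_value=1)
</syntaxhighlight>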
Next, from each decision matrix we form a set of Boolean expressions, one expression for each row of the matrix. The items within each cell are aggregated disjunctively, and the individual cells are then aggregated conjunctively. Thus, for the above table we have the following five Boolean expressions:
Each statement here is essentially a highly specific (probably too specific) rule governing the membership in class of the corresponding object. For example, the last statement, corresponding to object , states that all the following must be satisfied:
Either must have value 2, or must have value 0, or both.
must have value 0.
Either must have value 2, or must have value 0, or both.
Either must have value 2, or must have value 0, or must have value 0, or any combination thereof.
must have value 0.
It is clear that there is a large amount of redundancy here, and the next step is to simplify using traditional Boolean algebra. The statement corresponding to objects simplifies to , which yields the implication
Likewise, the statement corresponding to objects simplifies to . This gives us the implication
The above implications can also be written as the following rule set:
It can be noted that each of the first two rules has a support of 1 (i.e., the antecedent matches two objects), while each of the last two rules has a support of 2. To finish writing the rule set for this knowledge system, the same procedure as above (starting with writing a new decision matrix) should be followed for the case of , thus yielding a new set of implications for that decision value (i.e., a set of implications with as the consequent). In general, the procedure will be repeated for each possible value of the decision variable.
LERS rule induction system
The data system LERS (Learning from Examples based on Rough Sets) may induce rules from inconsistent data, i.e., data with conflicting objects. Two objects are conflicting when they are characterized by the same values of all attributes, but they belong to different concepts (classes). LERS uses rough set theory to compute lower and upper approximations for concepts involved in conflicts with other concepts.
Rules induced from the lower approximation of the concept certainly describe the concept, hence such rules are called certain. On the other hand, rules induced from the upper approximation of the concept describe the concept possibly, so these rules are called possible. For rule induction LERS uses three algorithms: LEM1, LEM2, and IRIM.
The LEM2 algorithm of LERS is frequently used for rule induction and is used not only in LERS but also in other systems, e.g., in RSES. LEM2 explores the search space of attribute–value pairs. Its input data set is a lower or upper approximation of a concept, so its input data set is always consistent. In general, LEM2 computes a local covering and then converts it into a rule set. We will quote a few definitions to describe the LEM2 algorithm.
The LEM2 algorithm is based on an idea of an attribute–value pair block. Let be a nonempty lower or upper approximation of a concept represented by a decision-value pair . Set depends on a set of attribute–value pairs if and only if
Set is a minimal complex of if and only if depends on and no proper subset of exists such that depends on . Let be a nonempty collection of nonempty sets of attribute–value pairs. Then is a local covering of if and only if the following three conditions are satisfied:
each member of is a minimal complex of ,
the union of the blocks of all members of is equal to ,
is minimal, i.e., has the smallest possible number of members.
For our sample information system, LEM2 will induce the following rules:
Other rule-learning methods can be found, e.g., in Pawlak (1991), Stefanowski (1998), Bazan et al. (2004), etc.
Incomplete data
Rough set theory is useful for rule induction from incomplete data sets. Using this approach we can distinguish between three types of missing attribute values: lost values (the values that were recorded but currently are unavailable), attribute-concept values (these missing attribute values may be replaced by any attribute value limited to the same concept), and "do not care" conditions (the original values were irrelevant). A concept (class) is a set of all objects classified (or diagnosed) the same way.
Two special data sets with missing attribute values were extensively studied: in the first case, all missing attribute values were lost, in the second case, all missing attribute values were "do not care" conditions.
In the attribute-concept value interpretation of a missing attribute value, the missing value may be replaced by any value of the attribute domain restricted to the concept to which the object with the missing value belongs. For example, if the value of the attribute Temperature is missing for a patient who is sick with flu, and all remaining patients sick with flu have values high or very-high for Temperature, then, using this interpretation, the missing value is replaced by high or very-high. Additionally, the characteristic relation (see, e.g., ) makes it possible to process data sets with all three kinds of missing attribute values at the same time: lost values, "do not care" conditions, and attribute-concept values.
Applications
Rough set methods can be applied as a component of hybrid solutions in machine learning and data mining. They have been found to be particularly useful for rule induction and feature selection (semantics-preserving dimensionality reduction). Rough set-based data analysis methods have been successfully applied in bioinformatics, economics and finance, medicine, multimedia, web and text mining, signal and image processing, software engineering, robotics, and engineering (e.g. power systems and control engineering). More recently, the three regions of rough sets have been interpreted as regions of acceptance, rejection and deferment. This leads to a three-way decision-making approach, a model which can potentially lead to interesting future applications.
History
The idea of rough set was proposed by Pawlak (1981) as a new mathematical tool to deal with vague concepts. Comer, Grzymala-Busse, Iwinski, Nieminen, Novotny, Pawlak, Obtulowicz, and Pomykala have studied algebraic properties of rough sets. Different algebraic semantics have been developed by P. Pagliani, I. Duntsch, M. K. Chakraborty, M. Banerjee and A. Mani; these have been extended to more generalized rough sets by G. Cattaneo and A. Mani, in particular. Rough sets can be used to represent ambiguity, vagueness and general uncertainty.
Extensions and generalizations
Since the development of rough sets, extensions and generalizations have continued to evolve. Initial developments focused on the relationship - both similarities and differences - with fuzzy sets. While some literature contends these concepts are different, other literature considers that rough sets are a generalization of fuzzy sets - as represented through either fuzzy rough sets or rough fuzzy sets. Pawlak (1995) considered that fuzzy and rough sets should be treated as being complementary to each other, addressing different aspects of uncertainty and vagueness.
Three notable extensions of classical rough sets are:
Dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński (2001). The main change in this extension of classical rough sets is the substitution of the indiscernibility relation by a dominance relation, which permits the formalism to deal with inconsistencies typical in consideration of criteria and preference-ordered decision classes.
Decision-theoretic rough sets (DTRS) is a probabilistic extension of rough set theory introduced by Yao, Wong, and Lingras (1990). It utilizes a Bayesian decision procedure for minimum risk decision making. Elements are included into the lower and upper approximations based on whether their conditional probability is above thresholds and . These upper and lower thresholds determine region inclusion for elements. This model is unique and powerful since the thresholds themselves are calculated from a set of six loss functions representing classification risks.
Game-theoretic rough sets (GTRS) is a game theory-based extension of rough set that was introduced by Herbert and Yao (2011). It utilizes a game-theoretic environment to optimize certain criteria of rough sets based classification or decision making in order to obtain effective region sizes.
Rough membership
Rough sets can be also defined, as a generalisation, by employing a rough membership function instead of objective approximation. The rough membership function expresses a conditional probability that belongs to given . This can be interpreted as a degree that belongs to in terms of information about expressed by .
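In the usual notation, writing <math>[x]_P</math> for the equivalence class of an object <math>x</math> under the attribute set <math>P</math> and <math>X</math> for the target set (the symbols are chosen here for readability), the rough membership function is commonly defined as
<math>\mu_X^P(x) = \frac{\left| X \cap [x]_P \right|}{\left| [x]_P \right|}</math>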
Rough membership primarily differs from the fuzzy membership in that the membership of union and intersection of sets cannot, in general, be computed from their constituent membership as is the case of fuzzy sets. In this, rough membership is a generalization of fuzzy membership. Furthermore, the rough membership function is grounded more in probability than the conventionally held concepts of the fuzzy membership function.
Other generalizations
Several generalizations of rough sets have been introduced, studied and applied to solving problems. Here are some of these generalizations:
Rough multisets
Fuzzy rough sets extend the rough set concept through the use of fuzzy equivalence classes
Alpha rough set theory (α-RST) - a generalization of rough set theory that allows approximation using fuzzy concepts
Intuitionistic fuzzy rough sets
Generalized rough fuzzy sets
Rough intuitionistic fuzzy sets
Soft rough fuzzy sets and soft fuzzy rough sets
Composite rough sets
See also
Algebraic semantics
Alternative set theory
Analog computer
Description logic
Fuzzy logic
Fuzzy set theory
Granular computing
Near sets
Rough fuzzy hybridization
Type-2 fuzzy sets and systems
Decision-theoretic rough sets
Version space
Dominance-based rough set approach
References
Further reading
Gianpiero Cattaneo and Davide Ciucci, "Heyting Wajsberg Algebras as an Abstract Environment Linking Fuzzy and Rough Sets" in J.J. Alpigini et al. (Eds.): RSCTC 2002, LNAI 2475, pp. 77–84, 2002.
Pawlak, Zdzisław Rough Sets Research Report PAS 431, Institute of Computer Science, Polish Academy of Sciences (1981)
Zhang J., Wong J-S, Pan Y, Li T. (2015). A parallel matrix-based method for computing approximations in incomplete information systems, IEEE Transactions on Knowledge and Data Engineering, 27(2): 326-339
Burgin M. (1990). Theory of Named Sets as a Foundational Basis for Mathematics, In Structures in mathematical theories: Reports of the San Sebastian international symposium, September 25–29, 1990 (http://www.blogg.org/blog-30140-date-2005-10-26.html)
Burgin, M. (2004). Unified Foundations of Mathematics, Preprint Mathematics LO/0403186, p39. (electronic edition: https://arxiv.org/ftp/math/papers/0403/0403186.pdf)
Burgin, M. (2011), Theory of Named Sets, Mathematics Research Developments, Nova Science Pub Inc,
Chen H., Li T., Luo C., Horng S-J., Wang G. (2015). A decision-theoretic rough set approach for dynamic data mining. IEEE Transactions on Fuzzy Systems, 23(6): 1958-1970
Chen H., Li T., Luo C., Horng S-J., Wang G. (2014). A rough set-based method for updating decision rules on attribute values' coarsening and refining, IEEE Transactions on Knowledge and Data Engineering, 26(12): 2886-2899
Chen H., Li T., Ruan D., Lin J., Hu C, (2013) A rough-set based incremental approach for updating approximations under dynamic maintenance environments. IEEE Transactions on Knowledge and Data Engineering, 25(2): 274-284
External links
The International Rough Set Society
Rough set tutorial
Rough Sets: A Quick Tutorial
Rough Set Exploration System
Rough Sets in Data Warehousing
Systems of set theory
Theoretical computer science
Approximations | Rough set | [
"Mathematics"
] | 6,215 | [
"Theoretical computer science",
"Applied mathematics",
"Mathematical relations",
"Approximations"
] |
1,634,790 | https://en.wikipedia.org/wiki/Grothendieck%20group | In mathematics, the Grothendieck group, or group of differences, of a commutative monoid is a certain abelian group. This abelian group is constructed from in the most universal way, in the sense that any abelian group containing a homomorphic image of will also contain a homomorphic image of the Grothendieck group of . The Grothendieck group construction takes its name from a specific case in category theory, introduced by Alexander Grothendieck in his proof of the Grothendieck–Riemann–Roch theorem, which resulted in the development of K-theory. This specific case is the monoid of isomorphism classes of objects of an abelian category, with the direct sum as its operation.
Grothendieck group of a commutative monoid
Motivation
Given a commutative monoid , "the most general" abelian group that arises from is to be constructed by introducing inverse elements to all elements of . Such an abelian group always exists; it is called the Grothendieck group of . It is characterized by a certain universal property and can also be concretely constructed from .
If does not have the cancellation property (that is, there exists and in such that and ), then the Grothendieck group cannot contain . In particular, in the case of a monoid operation denoted multiplicatively that has a zero element satisfying for every the Grothendieck group must be the trivial group (group with only one element), since one must have
for every .
Universal property
Let M be a commutative monoid. Its Grothendieck group is an abelian group K with a monoid homomorphism satisfying the following universal property: for any monoid homomorphism from M to an abelian group A, there is a unique group homomorphism such that
This expresses the fact that any abelian group A that contains a homomorphic image of M will also contain a homomorphic image of K, K being the "most general" abelian group containing a homomorphic image of M.
Explicit constructions
To construct the Grothendieck group K of a commutative monoid M, one forms the Cartesian product . The two coordinates are meant to represent a positive part and a negative part, so corresponds to in K.
Addition on is defined coordinate-wise:
.
Next one defines an equivalence relation on , such that is equivalent to if, for some element k of M, m1 + n2 + k = m2 + n1 + k (the element k is necessary because the cancellation law does not hold in all monoids). The equivalence class of the element (m1, m2) is denoted by [(m1, m2)]. One defines K to be the set of equivalence classes. Since the addition operation on M × M is compatible with our equivalence relation, one obtains an addition on K, and K becomes an abelian group. The identity element of K is [(0, 0)], and the inverse of [(m1, m2)] is [(m2, m1)]. The homomorphism sends the element m to [(m, 0)].
Alternatively, the Grothendieck group K of M can also be constructed using generators and relations: denoting by the free abelian group generated by the set M, the Grothendieck group K is the quotient of by the subgroup generated by . (Here +′ and −′ denote the addition and subtraction in the free abelian group while + denotes the addition in the monoid M.) This construction has the advantage that it can be performed for any semigroup M and yields a group which satisfies the corresponding universal properties for semigroups, i.e. the "most general and smallest group containing a homomorphic image of M". This is known as the "group completion of a semigroup" or "group of fractions of a semigroup".
Properties
In the language of category theory, any universal construction gives rise to a functor; one thus obtains a functor from the category of commutative monoids to the category of abelian groups which sends the commutative monoid M to its Grothendieck group K. This functor is left adjoint to the forgetful functor from the category of abelian groups to the category of commutative monoids.
For a commutative monoid M, the map i : M → K is injective if and only if M has the cancellation property, and it is bijective if and only if M is already a group.
Example: the integers
The easiest example of a Grothendieck group is the construction of the integers from the (additive) natural numbers .
First one observes that the natural numbers (including 0) together with the usual addition indeed form a commutative monoid. Now, when one uses the Grothendieck group construction, one obtains the formal differences between natural numbers as elements n − m, and one has the equivalence relation
for some .
Now define
This defines the integers . Indeed, this is the usual construction to obtain the integers from the natural numbers. See "Construction" under Integers for a more detailed explanation.
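The pair construction can be made concrete in a few lines of Python. The class below is only a sketch, specialised to the cancellative monoid of natural numbers under addition, so the auxiliary element k in the equivalence relation is not needed.

<syntaxhighlight lang="python">
class FormalDifference:
    """Element of the Grothendieck group of (N, +): a pair (plus, minus) standing for plus - minus."""

    def __init__(self, plus, minus):
        self.plus, self.minus = plus, minus

    def __add__(self, other):
        # addition is coordinate-wise, exactly as in the construction above
        return FormalDifference(self.plus + other.plus, self.minus + other.minus)

    def __neg__(self):
        # swapping the coordinates gives the inverse, so every element becomes invertible
        return FormalDifference(self.minus, self.plus)

    def __eq__(self, other):
        # (m1, n1) ~ (m2, n2) iff m1 + n2 == m2 + n1 (no extra k is needed, since N is cancellative)
        return self.plus + other.minus == other.plus + self.minus

two = FormalDifference(5, 3)                      # represents 5 - 3
assert two == FormalDifference(2, 0)              # the same integer, 2
assert two + (-two) == FormalDifference(0, 0)     # identity element [(0, 0)]
</syntaxhighlight>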
Example: the positive rational numbers
Similarly, the Grothendieck group of the multiplicative commutative monoid (starting at 1) consists of formal fractions with the equivalence
for some
which of course can be identified with the positive rational numbers.
Example: the Grothendieck group of a manifold
The Grothendieck group is the fundamental construction of K-theory. The group of a compact manifold M is defined to be the Grothendieck group of the commutative monoid of all isomorphism classes of vector bundles of finite rank on M with the monoid operation given by direct sum. This gives a contravariant functor from manifolds to abelian groups. This functor is studied and extended in topological K-theory.
Example: The Grothendieck group of a ring
The zeroth algebraic K group of a (not necessarily commutative) ring R is the Grothendieck group of the monoid consisting of isomorphism classes of finitely generated projective modules over R, with the monoid operation given by the direct sum. Then is a covariant functor from rings to abelian groups.
The two previous examples are related: consider the case where is the ring of complex-valued smooth functions on a compact manifold M. In this case the projective R-modules are dual to vector bundles over M (by the Serre–Swan theorem). Thus and are the same group.
Grothendieck group and extensions
Definition
Another construction that carries the name Grothendieck group is the following: Let R be a finite-dimensional algebra over some field k or more generally an artinian ring. Then define the Grothendieck group as the abelian group generated by the set of isomorphism classes of finitely generated R-modules and the following relations: For every short exact sequence
of R-modules, add the relation
This definition implies that for any two finitely generated R-modules M and N, , because of the split short exact sequence
Examples
Let K be a field. Then the Grothendieck group is an abelian group generated by symbols for any finite-dimensional K-vector space V. In fact, is isomorphic to whose generator is the element . Here, the symbol for a finite-dimensional K-vector space V is defined as , the dimension of the vector space V. Suppose one has the following short exact sequence of K-vector spaces.
Since any short exact sequence of vector spaces splits, it holds that . In fact, for any two finite-dimensional vector spaces V and W the following holds:
The above equality hence satisfies the condition of the symbol in the Grothendieck group.
Note that any two isomorphic finite-dimensional K-vector spaces have the same dimension. Also, any two finite-dimensional K-vector spaces V and W of same dimension are isomorphic to each other. In fact, every finite n-dimensional K-vector space V is isomorphic to . The observation from the previous paragraph hence proves the following equation:
Hence, every symbol is generated by the element with integer coefficients, which implies that is isomorphic to with the generator .
More generally, let be the set of integers. The Grothendieck group is an abelian group generated by symbols for any finitely generated abelian groups A. One first notes that any finite abelian group G satisfies that . The following short exact sequence holds, where the map is multiplication by n.
The exact sequence implies that , so every cyclic group has its symbol equal to 0. This in turn implies that every finite abelian group G satisfies by the fundamental theorem of finite abelian groups.
Observe that by the fundamental theorem of finitely generated abelian groups, every abelian group A is isomorphic to a direct sum of a torsion subgroup and a torsion-free abelian group isomorphic to for some non-negative integer r, called the rank of A and denoted by . Define the symbol as . Then the Grothendieck group is isomorphic to with generator Indeed, the observation made from the previous paragraph shows that every abelian group A has its symbol the same to the symbol where . Furthermore, the rank of the abelian group satisfies the conditions of the symbol of the Grothendieck group. Suppose one has the following short exact sequence of abelian groups:
Then tensoring with the rational numbers implies the following equation.
Since the above is a short exact sequence of -vector spaces, the sequence splits. Therefore, one has the following equation.
On the other hand, one also has the following relation; for more information, see Rank of an abelian group.
Therefore, the following equation holds:
Hence one has shown that is isomorphic to with generator
Universal property
The Grothendieck group satisfies a universal property. One makes a preliminary definition: A function from the set of isomorphism classes to an abelian group is called additive if, for each exact sequence , one has Then, for any additive function , there is a unique group homomorphism such that factors through and the map that takes each object of to the element representing its isomorphism class in Concretely this means that satisfies the equation for every finitely generated -module and is the only group homomorphism that does that.
Examples of additive functions are the character function from representation theory: If is a finite-dimensional -algebra, then one can associate the character to every finite-dimensional -module is defined to be the trace of the -linear map that is given by multiplication with the element on .
By choosing a suitable basis and writing the corresponding matrices in block triangular form one easily sees that character functions are additive in the above sense. By the universal property this gives us a "universal character" such that .
If and is the group ring of a finite group then this character map even gives a natural isomorphism of and the character ring . In the modular representation theory of finite groups, can be a field, the algebraic closure of the finite field with p elements. In this case the analogously defined map that associates to each -module its Brauer character is also a natural isomorphism onto the ring of Brauer characters. In this way Grothendieck groups show up in representation theory.
This universal property also makes the 'universal receiver' of generalized Euler characteristics. In particular, for every bounded complex of objects in
one has a canonical element
In fact the Grothendieck group was originally introduced for the study of Euler characteristics.
Grothendieck groups of exact categories
A common generalization of these two concepts is given by the Grothendieck group of an exact category . Simply put, an exact category is an additive category together with a class of distinguished short sequences A → B → C. The distinguished sequences are called "exact sequences", hence the name. The precise axioms for this distinguished class do not matter for the construction of the Grothendieck group.
The Grothendieck group is defined in the same way as before as the abelian group with one generator [M ] for each (isomorphism class of) object(s) of the category and one relation
for each exact sequence
.
Alternatively and equivalently, one can define the Grothendieck group using a universal property: A map from into an abelian group X is called "additive" if for every exact sequence one has ; an abelian group G together with an additive mapping is called the Grothendieck group of iff every additive map factors uniquely through .
Every abelian category is an exact category if one just uses the standard interpretation of "exact". This gives the notion of a Grothendieck group in the previous section if one chooses the category of finitely generated R-modules as . This is really abelian because R was assumed to be artinian (and hence noetherian) in the previous section.
On the other hand, every additive category is also exact if one declares those and only those sequences to be exact that have the form with the canonical inclusion and projection morphisms. This procedure produces the Grothendieck group of the commutative monoid in the first sense (here means the "set" [ignoring all foundational issues] of isomorphism classes in .)
Grothendieck groups of triangulated categories
Generalizing even further it is also possible to define the Grothendieck group for triangulated categories. The construction is essentially similar but uses the relations [X] − [Y] + [Z] = 0 whenever there is a distinguished triangle X → Y → Z → X[1].
Further examples
In the abelian category of finite-dimensional vector spaces over a field k, two vector spaces are isomorphic if and only if they have the same dimension. Thus, for a vector space V
Moreover, for an exact sequence
m = l + n, so
Thus
and is isomorphic to and is generated by Finally for a bounded complex of finite-dimensional vector spaces V *,
where is the standard Euler characteristic defined, in the usual convention, by
<math>\chi(V^{*}) = \sum_{i} (-1)^{i} \dim V^{i}</math>
For a ringed space , one can consider the category of all locally free sheaves over X. is then defined as the Grothendieck group of this exact category and again this gives a functor.
For a ringed space , one can also define the category to be the category of all coherent sheaves on X. This includes the special case (if the ringed space is an affine scheme) of being the category of finitely generated modules over a noetherian ring R. In both cases is an abelian category and a fortiori an exact category so the construction above applies.
In the case where R is a finite-dimensional algebra over some field, the Grothendieck groups (defined via short exact sequences of finitely generated modules) and (defined via direct sum of finitely generated projective modules) coincide. In fact, both groups are isomorphic to the free abelian group generated by the isomorphism classes of simple R-modules.
There is another Grothendieck group of a ring or a ringed space which is sometimes useful. The category in the case is chosen to be the category of all quasi-coherent sheaves on the ringed space which reduces to the category of all modules over some ring R in case of affine schemes. is not a functor, but nevertheless it carries important information.
Since the (bounded) derived category is triangulated, there is a Grothendieck group for derived categories too. This has applications in representation theory for example. For the unbounded category the Grothendieck group however vanishes. For a derived category of some complex finite-dimensional positively graded algebra there is a subcategory in the unbounded derived category containing the abelian category A of finite-dimensional graded modules whose Grothendieck group is the q-adic completion of the Grothendieck group of A.
See also
Field of fractions
Localization
Topological K-theory
Atiyah–Hirzebruch spectral sequence for computing topological K-theory
References
Michael F. Atiyah, K-Theory, (Notes taken by D.W.Anderson, Fall 1964), published in 1967, W.A. Benjamin Inc., New York.
.
The Grothendieck Group of Algebraic Vector Bundles; Calculations of Affine and Projective Space
Grothendieck Group of a Smooth Projective Complex Curve
Algebraic structures
Homological algebra
K-theory | Grothendieck group | [
"Mathematics"
] | 3,450 | [
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Algebraic structures",
"Homological algebra"
] |
631,063 | https://en.wikipedia.org/wiki/Key%20management | Key management refers to management of cryptographic keys in a cryptosystem. This includes dealing with the generation, exchange, storage, use, crypto-shredding (destruction) and replacement of keys. It includes cryptographic protocol design, key servers, user procedures, and other relevant protocols.
Key management concerns keys at the user level, either between users or systems. This is in contrast to key scheduling, which typically refers to the internal handling of keys within the operation of a cipher.
Successful key management is critical to the security of a cryptosystem. It is the more challenging side of cryptography in the sense that it involves aspects of social engineering such as system policy, user training, organizational and departmental interactions, and coordination between all of these elements, in contrast to pure mathematical practices that can be automated.
Types of keys
Cryptographic systems may use different types of keys, with some systems using more than one. These may include symmetric keys or asymmetric keys. In a symmetric key algorithm the keys involved are identical for both encrypting and decrypting a message. Keys must be chosen carefully, and distributed and stored securely. Asymmetric keys, also known as public keys, in contrast are two distinct keys that are mathematically linked. They are typically used together to communicate. Public key infrastructure (PKI), the implementation of public key cryptography, requires an organization to establish an infrastructure to create and manage public and private key pairs along with digital certificates.
Inventory
The starting point in any certificate and private key management strategy is to create a comprehensive inventory of all certificates, their locations and responsible parties. This is not a trivial matter because certificates from a variety of sources are deployed in a variety of locations by different individuals and teams - it's simply not possible to rely on a list from a single certificate authority. Certificates that are not renewed and replaced before they expire can cause serious downtime and outages. Some other considerations:
Regulations and requirements, like PCI-DSS, demand stringent security and management of cryptographic keys and auditors are increasingly reviewing the management controls and processes in use.
Private keys used with certificates must be kept secure or unauthorised individuals can intercept confidential communications or gain unauthorised access to critical systems. Failure to ensure proper segregation of duties means that admins who generate the encryption keys can use them to access sensitive, regulated data.
If a certificate authority is compromised or an encryption algorithm is broken, organizations must be prepared to replace all of their certificates and keys in a matter of hours.
Management steps
Once keys are inventoried, key management typically consists of three steps: exchange, storage and use.
Key exchange
Prior to any secured communication, users must set up the details of the cryptography. In some instances this may require exchanging identical keys (in the case of a symmetric key system). In others it may require possessing the other party's public key. While public keys can be openly exchanged (their corresponding private key is kept secret), symmetric keys must be exchanged over a secure communication channel. Formerly, exchange of such a key was extremely troublesome, and was greatly eased by access to secure channels such as a diplomatic bag. Clear text exchange of symmetric keys would enable any interceptor to immediately learn the key, and any encrypted data.
The advance of public key cryptography in the 1970s has made the exchange of keys less troublesome. Since the Diffie-Hellman key exchange protocol was published in 1976, it has become possible to exchange a key over an insecure communications channel, which has substantially reduced the risk of key disclosure during distribution. It is possible, using something akin to a book code, to include key indicators as clear text attached to an encrypted message. The encryption technique used by Richard Sorge's code clerk was of this type, referring to a page in a statistical manual, though it was in fact a code. The German Army Enigma symmetric encryption key was a mixed type early in its use; the key was a combination of secretly distributed key schedules and a user-chosen session key component for each message.
In more modern systems, such as OpenPGP compatible systems, a session key for a symmetric key algorithm is distributed encrypted by an asymmetric key algorithm. This approach avoids even the necessity for using a key exchange protocol like Diffie-Hellman key exchange.
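As an illustration of this pattern (not of OpenPGP itself), the sketch below uses the Python cryptography package: a fresh AES session key encrypts the message, and only that session key is encrypted under the recipient's RSA public key. The key sizes and the message are arbitrary example values.

<syntaxhighlight lang="python">
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair; in practice the public key comes from a certificate or keyring.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. Encrypt the message with a fresh symmetric session key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"example message", None)

# 2. Encrypt (wrap) the session key under the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_session_key = public_key.encrypt(session_key, oaep)

# The recipient recovers the session key with the private key, then decrypts the message.
recovered_key = private_key.decrypt(wrapped_session_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"example message"
</syntaxhighlight>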
Another method of key exchange involves encapsulating one key within another. Typically a master key is generated and exchanged using some secure method. This method is usually cumbersome or expensive (breaking a master key into multiple parts and sending each with a trusted courier for example) and not suitable for use on a larger scale. Once the master key has been securely exchanged, it can then be used to securely exchange subsequent keys with ease. This technique is usually termed key wrap. A common technique uses block ciphers and cryptographic hash functions.
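A minimal sketch of such key wrapping, assuming the Python cryptography package and the RFC 3394 AES key wrap construction (one common choice among several):

<syntaxhighlight lang="python">
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

master_key = os.urandom(32)    # key-encryption key, exchanged once by a secure method
new_key = os.urandom(32)       # a subsequent key to be distributed under the master key

wrapped = aes_key_wrap(master_key, new_key)
assert aes_key_unwrap(master_key, wrapped) == new_key
</syntaxhighlight>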
A related method is to exchange a master key (sometimes termed a root key) and derive subsidiary keys as needed from that key and some other data (often referred to as diversification data). The most common use for this method is probably in smartcard-based cryptosystems, such as those found in banking cards. The bank or credit network embeds their secret key into the card's secure key storage during card production at a secured production facility. Then at the point of sale the card and card reader are both able to derive a common set of session keys based on the shared secret key and card-specific data (such as the card serial number). This method can also be used when keys must be related to each other (i.e., departmental keys are tied to divisional keys, and individual keys tied to departmental keys). However, tying keys to each other in this way increases the damage which may result from a security breach as attackers will learn something about more than one key. This reduces entropy, with regard to an attacker, for each key involved.
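Key diversification of this kind is often implemented with a key derivation function. The sketch below uses HKDF from the Python cryptography package; the card serial number and key length are illustrative placeholders, and real payment schemes specify their own derivation methods.

<syntaxhighlight lang="python">
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_key = os.urandom(32)            # root key held by the issuer
card_serial = b"1234-5678-9012"        # illustrative diversification data

# Derive a card-specific key from the master key and the card's serial number.
card_key = HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=card_serial).derive(master_key)
</syntaxhighlight>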
A recent method uses an oblivious pseudorandom function to issue keys without the key management system ever being in a position to see the keys.
Key storage
However distributed, keys must be stored securely to maintain communications security. Security is a big concern and hence there are various techniques in use to do so. Likely the most common is that an encryption application manages keys for the user and depends on an access password to control use of the key. Likewise, in the case of smartphone keyless access platforms, they keep all identifying door information off mobile phones and servers and encrypt all data, where just like low-tech keys, users give codes only to those they trust.
In terms of regulation, there are few that address key storage in depth. "Some contain minimal guidance like 'don’t store keys with encrypted data' or suggest that 'keys should be kept securely.'" The notable exceptions to that are PCI DSS 3.2.1, NIST 800-53 and NIST 800–57.
For optimal security, keys may be stored in a Hardware Security Module (HSM) or protected using technologies such as Trusted Execution Environment (TEE, e.g. Intel SGX) or Multi-Party Computation (MPC). Additional alternatives include utilizing Trusted Platform Modules (TPM), virtual HSMs, aka "Poor Man's Hardware Security Modules" (pmHSM), or non-volatile Field-Programmable-Gate-Arrays (FPGA) with supporting System-on-Chip configurations. In order to verify the integrity of a key stored without compromising its actual value a KCV algorithm can be used.
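One widely used convention for an AES key check value (KCV), sketched here with the Python cryptography package, is to encrypt an all-zero block under the key and retain only the first few bytes; the three-byte truncation below is a common choice rather than a universal rule.

<syntaxhighlight lang="python">
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def key_check_value(key: bytes) -> bytes:
    """Encrypt one all-zero block (ECB is used only for this single-block check) and keep 3 bytes."""
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return (encryptor.update(b"\x00" * 16) + encryptor.finalize())[:3]

kcv = key_check_value(os.urandom(32))   # the KCV can be stored or compared; the key itself never is
</syntaxhighlight>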
Key encryption use
The major issue is length of time a key is to be used, and therefore frequency of replacement. Because it increases any attacker's required effort, keys should be frequently changed. This also limits loss of information, as the number of stored encrypted messages which will become readable when a key is found will decrease as the frequency of key change increases. Historically, symmetric keys have been used for long periods in situations in which key exchange was very difficult or only possible intermittently. Ideally, the symmetric key should change with each message or interaction, so that only that message will become readable if the key is learned (e.g., stolen, cryptanalyzed, or social engineered).
Challenges
Several challenges IT organizations face when trying to control and manage their encryption keys are:
Scalability: Managing a large number of encryption keys.
Security: Vulnerability of keys from outside hackers, malicious insiders.
Availability: Ensuring data accessibility for authorized users.
Heterogeneity: Supporting multiple databases, applications and standards.
Governance: Defining policy-driven access control and protection for data. Governance includes compliance with data protection requirements.
Compliance
Key management compliance refers to the oversight, assurance, and capability of being able to demonstrate that keys are securely managed. This includes the following individual compliance domains:
Physical security – the most visible form of compliance, which may include locked doors to secure system equipment and surveillance cameras. These safeguards can prevent unauthorized access to printed copies of key material and computer systems that run key management software.
Logical security – protects the organization against the theft or unauthorized access of information. This is where the use of cryptographic keys comes in by encrypting data, which is then rendered useless to those who do not have the key to decrypt it.
Personnel security – this involves assigning specific roles or privileges to personnel to access information on a strict need-to-know basis. Background checks should be performed on new employees along with periodic role changes to ensure security.
Compliance can be achieved with respect to national and international data protection standards and regulations, such as Payment Card Industry Data Security Standard, Health Insurance Portability and Accountability Act, Sarbanes–Oxley Act, or General Data Protection Regulation.
Management and compliance systems
Key management system
A key management system (KMS), also known as a cryptographic key management system (CKMS) or enterprise key management system (EKMS), is an integrated approach for generating, distributing and managing cryptographic keys for devices and applications. They may cover all aspects of security - from the secure generation of keys over the secure exchange of keys up to secure key handling and storage on the client. Thus, a KMS includes the backend functionality for key generation, distribution, and replacement as well as the client functionality for injecting keys, storing and managing keys on devices.
Standards-based key management
Many specific applications have developed their own key management systems with home grown protocols. However, as systems become more interconnected keys need to be shared between those different systems. To facilitate this, key management standards have evolved to define the protocols used to manage and exchange cryptographic keys and related information.
Key Management Interoperability Protocol (KMIP)
KMIP is an extensible key management protocol that has been developed by many organizations working within the OASIS standards body. The first version was released in 2010, and it has been further developed by an active technical committee.
The protocol allows for the creation of keys and their distribution among disparate software systems that need to utilize them. It covers the full key life cycle of both symmetric and asymmetric keys in a variety of formats, the wrapping of keys, provisioning schemes, and cryptographic operations as well as meta data associated with the keys.
The protocol is backed by an extensive series of test cases, and interoperability testing is performed between compliant systems each year.
A list of some 80 products that conform to the KMIP standard can be found on the OASIS website.
Closed source
Non-KMIP-compliant key management
Open source
Barbican, the OpenStack security API.
KeyBox - web-based SSH access and key management.
EPKS - Echo Public Key Share, system to share encryption keys online in a p2p community.
Kmc-Subset137 - key management system implementing UNISIG Subset-137 for ERTMS/ETCS railway application.
privacyIDEA - two factor management with support for managing SSH keys.
StrongKey - open source, last updated on SourceForge in 2016. There is no more maintenance on this project according to its home page.
Vault - secret server from HashiCorp.
NuCypher
SecretHub - end-to-end encrypted SaaS key management
Infisical - end-to-end open-source secret management platform.
Closed source
Amazon Web Service (AWS) Key Management Service (KMS)
Bell ID Key Manager
Bloombase KeyCastle
Cryptomathic CKMS
Doppler SecretOps Platform
Encryptionizer Key Manager (Windows only)
Google Cloud Key Management
IBM Cloud Key Protect
Microsoft Azure Key Vault
Porticor Virtual Private Data
SSH Communications Security Universal SSH Key Manager
Akeyless Vault
KMS security policy
The security policy of a key management system provides the rules that are to be used to protect keys and metadata that the key management system supports. As defined by the National Institute of Standards and Technology (NIST), the policy shall establish and specify rules for this information that will protect its:
Confidentiality
Integrity
Availability
Authentication of source
This protection covers the complete key life-cycle from the time the key becomes operational to its elimination.
Bring your own encryption / key
Bring your own encryption (BYOE)—also called bring your own key (BYOK)—refers to a cloud-computing security model to allow public-cloud customers to use their own encryption software and manage their own encryption keys.
This security model is usually considered a marketing stunt, as critical keys are being handed over to third parties (cloud providers) and key owners are still left with the operational burden of generating, rotating and sharing their keys.
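The following sketch illustrates the basic BYOK idea of envelope encryption using the open-source Python cryptography package: the customer retains the key-encryption key, and only a wrapped copy of the data key is stored next to the ciphertext. The variable names and storage layout are illustrative assumptions.

from cryptography.fernet import Fernet

# Customer-held key-encryption key (KEK); it never leaves the customer's control.
customer_kek = Fernet.generate_key()

# Data-encryption key (DEK) used to encrypt the actual payload.
dek = Fernet.generate_key()
ciphertext = Fernet(dek).encrypt(b"sensitive record")

# Only the wrapped DEK is stored alongside the ciphertext; the provider never sees the KEK.
wrapped_dek = Fernet(customer_kek).encrypt(dek)

# To decrypt: unwrap the DEK with the customer KEK, then decrypt the payload.
recovered_dek = Fernet(customer_kek).decrypt(wrapped_dek)
assert Fernet(recovered_dek).decrypt(ciphertext) == b"sensitive record"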
Public-key infrastructure (PKI)
A public-key infrastructure is a type of key management system that uses hierarchical digital certificates to provide authentication, and public keys to provide encryption. PKIs are used in World Wide Web traffic, commonly in the form of SSL and TLS.
Multicast group key management
Group key management means managing the keys in a group communication. Most of the group communications use multicast communication so that if the message is sent once by the sender, it will be received by all the users. The main problem in multicast group communication is its security. In order to improve the security, various keys are given to the users. Using the keys, the users can encrypt their messages and send them secretly. IETF.org released RFC 4046, entitled Multicast Security (MSEC) Group Key Management Architecture, which discusses the challenges of group key management.
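As a simplified sketch of this idea (real schemes such as logical key hierarchies are more elaborate), a group controller can distribute one group key wrapped under each member's individual key, and exclude a departing member simply by generating and redistributing a fresh group key. The helper names below are illustrative.

from cryptography.fernet import Fernet

member_keys = {name: Fernet.generate_key() for name in ("alice", "bob", "carol")}

def distribute_group_key(members):
    # Wrap a fresh group key under each member's individual key.
    group_key = Fernet.generate_key()
    wrapped = {name: Fernet(key).encrypt(group_key) for name, key in members.items()}
    return group_key, wrapped

group_key, wrapped = distribute_group_key(member_keys)

# Rekeying after "carol" leaves the group: she cannot unwrap the new group key.
remaining = {name: key for name, key in member_keys.items() if name != "carol"}
new_group_key, new_wrapped = distribute_group_key(remaining)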
See also
References
External links
Recommendation for Key Management — Part 1: general, NIST Special Publication 800-57
NIST Cryptographic Toolkit
The IEEE Security in Storage Working Group (SISWG) that is creating the P1619.3 standard for Key Management
American National Standards Institute - ANSI X9.24, Retail Financial Services Symmetric Key Management
The OASIS Key Management Interoperability Protocol (KMIP) Technical Committee
The OASIS Enterprise Key Management Infrastructure (EKMI) Technical Committee
"Key Management with a Powerful Keystore"
"Intelligent Key Management System - KeyGuard | Senergy Intellution"
IBM Security Key Lifecycle Manager, SKLM
NeoKeyManager - Hancom Intelligence Inc.
KMS Key
Data security | Key management | [
"Engineering"
] | 3,133 | [
"Cybersecurity engineering",
"Data security"
] |
631,159 | https://en.wikipedia.org/wiki/Art%20director | Art director is a title for a variety of similar job functions in theater, advertising, marketing, publishing, fashion, live-action and animated film and television, the Internet, and video games.
It is the charge of a sole art director to supervise and unify the vision of an artistic production. In particular, they are in charge of its overall visual appearance and how it communicates visually, stimulates moods, contrasts features, and psychologically appeals to a target audience. The art director makes decisions about visual elements, what artistic style(s) to use, and when to use motion. One of the biggest challenges art directors face is translating desired moods, messages, concepts, and underdeveloped ideas into imagery. In the brainstorming process, art directors, colleagues and clients explore ways the finished piece or scene could look. At times, the art director is responsible for solidifying the vision of the collective imagination while resolving conflicting agendas and inconsistencies between contributors' ideas.
In eastern animated works, such as Japanese anime and Chinese animation, the title of art director specifically refers to the artist in charge of supervising and directing the background art and the background art staff of a particular work, rather than a role unifying a work's overall artistic vision.
In advertising
Despite the title, an advertising art director is not necessarily the head of an art department. In modern advertising practice, an art director typically works with a copywriter as a creative team. In advertising, an art director makes sure the client's message is conveyed to their desired audience. They are responsible for the advertising's visual aspects, while working with other team members such as the graphic designer. They work together to devise an overall concept (also known as the "creative" or "big idea") for the commercial, mailer, brochure, or other advertisements. The copywriter is responsible for the textual content, and the art director for the visual aspects. But the art director may come up with the headline or other copy, and the copywriter may suggest a visual or aesthetic approach. Each usually welcomes suggestions and constructive criticism from the other, as such collaboration often improves the work.
Although a good art director is expected to have good graphic design judgment and technical knowledge of production, it may not be necessary for an art director to hand-render comprehensive layouts, or even be able to draw, now that virtually all but the most preliminary work is done on computer.
Except in the smallest organizations, the art director/copywriter team is overseen by a creative director, senior media creative or chief creative director. In a large organization, an art director may oversee other art directors and a team of junior designers, image developers and/or production artists, and coordinate with a separate production department. In a smaller organization, the art director may fill all these roles, including overseeing printing and other production.
In film
An art director, in the hierarchical structure of a film art department, works directly below the production designer, in collaboration with the set decorator and the set designers. A large part of their duties include the administrative aspects of the art department. They are responsible for assigning tasks to personnel such as the art department coordinator and the construction coordinator, keeping track of the art department budget and scheduling, and overall quality control. They are often also a liaison to other departments, especially construction, special effects, property, transportation (graphics), and locations departments. The art director also attends all production meetings and tech scouts in order to provide information to the set designers in preparation for all departments to have a visual floor plan of each location visited.
The term "art director" was first used in 1914 by Wilfred Buckland when this title was used to denote the head of the art department (hence the Academy Award for Best Art Direction), which also included the set decorator. Now the award includes the production designer and set decorator. On the movie Gone with the Wind, David O. Selznick felt that William Cameron Menzies had such a significant role in the look of the film that the title art director was not sufficient, and so he gave Menzies the title of production designer. This title is now commonly used as the title for the head of the art department, although the title actually implies control over every visual aspect of a film, including costumes.
On films with smaller art departments, such as small independent films and short films, the terms "production designer" and "art director" are often synonymous, and the person taking on the role may be credited as either.
In publishing
Art directors in publishing typically work with the publication's editors. Together, they work on a concept for sections and pages of a publication. Individually, the art director is mostly responsible for the visual look and feel of the publication, and the editor has ultimate responsibility for the publication's verbal and textual contents.
See also
Production designer
VFX creative director
Scenography
References
External links
ADG Art Direction Wiki — Online community and knowledge base relating to new and classic technologies relevant to the art of film design
Advertising occupations
Filmmaking occupations
Theatrical occupations
Theatrical management
Computer occupations | Art director | [
"Technology"
] | 1,035 | [
"Computer occupations"
] |
631,188 | https://en.wikipedia.org/wiki/Halton%20sequence | In statistics, Halton sequences are sequences used to generate points in space for numerical methods such as Monte Carlo simulations. Although these sequences are deterministic, they are of low discrepancy, that is, appear to be random for many purposes. They were first introduced in 1960 and are an example of a quasi-random number sequence. They generalize the one-dimensional van der Corput sequences.
Example of Halton sequence used to generate points in (0, 1) × (0, 1) in R2
The Halton sequence is constructed according to a deterministic method that uses coprime numbers as its bases. As a simple example, let's take one dimension of the two-dimensional Halton sequence to be based on 2 and the other dimension on 3. To generate the sequence for 2, we start by dividing the interval (0,1) in half, then in fourths, eighths, etc., which generates
1/2,
1/4, 3/4,
1/8, 5/8, 3/8, 7/8,
1/16, 9/16,...
Equivalently, the nth number of this sequence is the number n written in binary representation, inverted, and written after the decimal point. This is true for any base. As an example, to find the sixth element of the above sequence, we'd write 6 = 1*2^2 + 1*2^1 + 0*2^0 = 110 (in binary), which can be inverted and placed after the decimal point to give 0.011 (in binary) = 0*2^-1 + 1*2^-2 + 1*2^-3 = 3/8. So the sequence above is the same as
0.1, 0.01, 0.11, 0.001, 0.101, 0.011, 0.111, 0.0001, 0.1001,...
To generate the sequence for 3 for the other dimension, we divide the interval (0,1) in thirds, then ninths, twenty-sevenths, etc., which generates
1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27,...
When we pair them up, we get a sequence of points in a unit square:
(1/2, 1/3), (1/4, 2/3), (3/4, 1/9), (1/8, 4/9), (5/8, 7/9), (3/8, 2/9), (7/8, 5/9), (1/16, 8/9), (9/16, 1/27).
Even though standard Halton sequences perform very well in low dimensions, correlation problems have been noted between sequences generated from higher primes. For example, if we started with the primes 17 and 19, the first 16 pairs of points: (1/17, 1/19), (2/17, 2/19), (3/17, 3/19) ... (16/17, 16/19) would have perfect linear correlation. To avoid this, it is common to drop the first 20 entries, or some other predetermined quantity depending on the primes chosen. Several other methods have also been proposed. One of the most prominent solutions is the scrambled Halton sequence, which uses permutations of the coefficients used in the construction of the standard sequence. Another solution is the leaped Halton, which skips points in the standard sequence. Using, e.g., only every 409th point (other prime numbers not used in the Halton core sequence are also possible) can achieve significant improvements.
Implementation
In pseudocode:
algorithm Halton-Sequence is
    inputs: index i
            base b
    output: result r
    f ← 1
    r ← 0
    while i > 0 do
        f ← f / b
        r ← r + f * (i mod b)
        i ← ⌊i / b⌋
    return r
An alternative implementation that produces subsequent numbers of a Halton sequence for base b is given in the following generator function (in Python). This algorithm uses only integer numbers internally, which makes it robust against round-off errors.
def halton_sequence(b):
"""Generator function for Halton sequence."""
n, d = 0, 1
while True:
x = d - n
if x == 1:
n = 1
d *= b
else:
y = d // b
while x <= y:
y //= b
n = (b + 1) * y - x
yield n / d
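For instance, pairing this generator across the coprime bases 2 and 3 reproduces the two-dimensional points listed earlier; the helper name halton_2d below is illustrative.

def halton_2d(n_points, bases=(2, 3)):
    # One generator per coordinate; coprime bases avoid obvious correlation.
    gens = [halton_sequence(b) for b in bases]
    return [tuple(next(g) for g in gens) for _ in range(n_points)]

print(halton_2d(4))
# [(0.5, 0.333...), (0.25, 0.666...), (0.75, 0.111...), (0.125, 0.444...)]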
See also
Constructions of low-discrepancy sequences
References
Low-discrepancy sequences
Sequences and series
Articles with example pseudocode
Articles with example Python (programming language) code | Halton sequence | [
"Mathematics"
] | 836 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Mathematical objects"
] |
631,196 | https://en.wikipedia.org/wiki/Golgi%27s%20method | Golgi's method is a silver staining technique that is used to visualize nervous tissue under light microscopy. The method was discovered by Camillo Golgi, an Italian physician and scientist, who published the first picture made with the technique in 1873. It was initially named the black reaction (la reazione nera) by Golgi, but it became better known as the Golgi stain or later, Golgi method.
Golgi staining was used by Spanish neuroanatomist Santiago Ramón y Cajal (1852–1934) to discover a number of novel facts about the organization of the nervous system, inspiring the birth of the neuron doctrine. Ultimately, Ramón y Cajal improved the technique by using a method he termed "double impregnation". Ramón y Cajal's staining technique, still in use, is called Cajal's Stain.
Mechanism
The cells in nervous tissue are densely packed and little information on their structures and interconnections can be obtained if all the cells are stained. Furthermore, the thin filamentary extensions of neural cells, including the axon and the dendrites of neurons, are too slender and transparent to be seen with normal staining techniques. Golgi's method stains a limited number of cells at random in their entirety. The mechanism by which this happens is still largely unknown. Dendrites, as well as the cell soma, are clearly stained in brown and black and can be followed in their entire length, which allowed neuroanatomists to track connections between neurons and to make visible the complex networking structure of many parts of the brain and spinal cord.
Golgi's staining is achieved by impregnating aldehyde fixed nervous tissue with potassium dichromate and silver nitrate. Cells thus stained are filled by microcrystallization of silver chromate.
Technique
According to SynapseWeb, this is the recipe for Golgi's staining technique:
Immerse a block (approx. 10x5 mm) of formaldehyde-fixed (or paraformaldehyde–glutaraldehyde-perfused) brain tissue into a 2% aqueous solution of potassium dichromate for 2 days.
Dry the block shortly with filter paper.
Immerse the block into a 2% aqueous solution of silver nitrate for another 2 days.
Cut sections approx. 20–100 μm thick.
Dehydrate quickly in ethanol, clear and mount (e.g., into Depex or Enthalan).
This technique has since been refined to substitute the silver precipitate with gold by immersing the sample in gold chloride then oxalic acid, followed by removal of the silver by sodium thiosulphate. This preserves a greater degree of fine structure with the ultrastructural details marked by small particles of gold.
Quote
Ramón y Cajal said of the Golgi method:
I expressed the surprise which I experienced upon seeing with my own eyes the wonderful revelatory powers of the chrome-silver reaction and the absence of any excitement in the scientific world aroused by its discovery.
Recuerdos de mi vida, Vol. 2, Historia de mi labor científica. Madrid: Moya, 1917, p. 76.
References
External links
Photomicrograph of a cortex cell stained with Golgi's. IHC Image Gallery.
Golgi impregnations. Images of the brain of flies.
Visualization of dendritic spines using Golgi Method. SynapseWeb. Includes a time-lapse study of Golgi impregnation.
Berrebi, Albert: Cell Biology of Neurons: Structure and Methods of Study. (in PDF)
Genetics techniques
Staining
History of neuroscience
Neurohistology | Golgi's method | [
"Chemistry",
"Engineering",
"Biology"
] | 793 | [
"Genetics techniques",
"Staining",
"Genetic engineering",
"Microbiology techniques",
"Microscopy",
"Cell imaging"
] |
631,308 | https://en.wikipedia.org/wiki/Animal%20euthanasia | Animal euthanasia (euthanasia from ; "good death") is the act of killing an animal humanely, most commonly with injectable drugs. Reasons for euthanasia include incurable (and especially painful) conditions or diseases, lack of resources to continue supporting the animal, or laboratory test procedures. Euthanasia methods are designed to cause minimal pain and distress. Euthanasia is distinct from animal slaughter and pest control.
In domesticated animals, the discussion of animal euthanasia may be substituted with euphemisms, such as "put down" or "put to sleep" to make the wording less harsh.
Methods
The methods of euthanasia can be divided into pharmacological and physical methods. Acceptable pharmacological methods include injected drugs and gases that first depress the central nervous system and then cardiovascular activity. Acceptable physical methods must first cause rapid loss of consciousness by disrupting the central nervous system. The most common methods are discussed here, but there are other acceptable methods used in different situations.
Intravenous anesthetic
Upon administration of intravenous anesthetic, unconsciousness, respiratory then cardiac arrest follow rapidly, usually within 30 seconds.
The two-stage process that some veterinarians use includes a first shot that is a sedative to make the animal more comfortable and then a second shot that euthanizes the animal. This allows the owner the chance to say goodbye to a live pet without their emotions stressing the animal. It also greatly mitigates any tendency toward spasm and other involuntary movement which tends to increase the emotional upset that the pet's owner experiences.
For large animals, the volumes of barbiturates required are considered by some to be impractical, although this is standard practice in the United States. For horses and cattle, other drugs may be available. Some specially formulated combination products are available, such as Somulose (secobarbital/cinchocaine) and Tributame (embutramide/chloroquine/lidocaine), which cause deep unconsciousness and cardiac arrest independently with a lower volume of injection, thus making the process faster, safer, and more effective.
Occasionally, a horse injected with these mixtures may display apparent seizure activity before death. This may be due to premature cardiac arrest. However, if normal precautions (e.g., sedation with detomidine) are taken, this is rarely a problem. Anecdotal reports that long-term use of phenylbutazone increases the risk of this reaction are unverified.
After the animal has died, it is not uncommon for the body to have posthumous body jerks or a sudden bladder or bowel outburst. This is caused by the muscles of the deceased animal's body relaxing.
Inhalants
Gas anesthetics such as isoflurane and sevoflurane can be used for euthanasia of very small animals. The animals are placed in sealed chambers where high levels of anesthetic gas are introduced. Death may also be caused using carbon dioxide once unconsciousness has been achieved by inhaled anesthetic. Carbon dioxide is often used on its own for euthanasia of wild animals. There are mixed opinions on whether it causes distress when used on its own, with human experiments lending support to the evidence that it can cause distress and equivocal results in non-humans. In 2013, the American Veterinary Medical Association (AVMA) issued new guidelines for carbon dioxide induction, stating that a flow rate of 10% to 30% volume/min is optimal for the humane euthanasia of small rodents.
Carbon monoxide is often used, but some states in the US have banned its use in animal shelters: although carbon monoxide poisoning is not particularly painful, the conditions in the gas chamber are often not humane. Nitrogen has been shown to be effective, although some young animals are more resistant to the effects, and it currently is not widely used.
The use of gas chambers is not the most humane form of euthanasia as it can take up to 20 minutes to fully euthanize the animal. If the chambers are not calibrated correctly or the animal is ill, the process is only delayed further which can cause more harm to the animal.
Cervical dislocation
Cervical dislocation, or displacement (breaking or fracturing) of the neck, is an older and less common method of killing small animals such as mice. Performed properly it is intended to cause as painless a death as possible and has no cost or equipment involved. The handler must know the proper method of executing the movement which will cause the cervical displacement and without proper training and method education there is a risk of not causing death and can cause severe pain and suffering. It is unknown how long an animal remains conscious, or the level of suffering it goes through after a correct snapping of the neck, which is why it has become less common and often substituted with inhalants.
Intracardiac or intraperitoneal injection
When intravenous injection is not possible, euthanasia drugs such as pentobarbital can be injected directly into a heart chamber or body cavity. With regard to state and federal laws, one of the most humane forms of euthanizing animals is through the injection of sodium pentobarbital. This is typically the second shot, given after a sedative, when euthanizing animals.
While intraperitoneal injection is fully acceptable (although it may take up to 15 minutes to take effect in dogs and cats), an intracardiac (IC) injection may only be performed on an unconscious or deeply sedated animal. Performing IC injections on a fully conscious animal in places with humane laws for animal handling is often a criminal offense.
Shooting
This can be a means of euthanasia for large animals—such as horses, cattle, and deer—if performed properly.
This may be performed by means of:
Firearms – Traditionally used in the field for euthanizing horses, deer or other large game animals. The animal is shot in the forehead with the bullet directed down the spine through the medulla oblongata, resulting in instant death. The risks are minimal if carried out by skilled personnel in a suitable location.
Captive bolt gun – Commonly used by the meat packing industry to slaughter cattle and other livestock. The bolt is fired through the forehead causing massive disruption of the cerebral cortex. In cattle, this stuns the animal, though if left for a prolonged period it will die from cerebral oedema. Death should therefore be rapidly brought about by pithing or exsanguination. Horses are killed outright by the captive bolt, making pithing and exsanguination unnecessary.
Reasons
The reasons for euthanasia of pets and other animals include:
Terminal illness, e.g. cancer or rabies
Illness or accident that is not terminal but would cause suffering for the animal to live with, or when the owner cannot afford the treatment or has a moral objection to the treatment
Old age and deterioration leading to loss of major bodily functions, resulting in severe impairment of the quality of life.
Dementia in pets leading to loss of cognitive function and normal daily behaviour and interactions with owner. Dementia resulting in unsocial and repetitive behaviour causing prolonged stress for both pets and their owners.
A hunter's coup de grâce
Behavioural problems (usually ones that cannot be corrected), e.g. aggression – canines that have caused grievous bodily harm (severe injuries or death) to either humans or other animals through mauling are usually seized and euthanised ('destroyed' in British legal terms)
Lack of home or caretaker or resources for feeding
"Convenience euthanasia", if the owner no longer wants to care for the pet
Research and testing – In the course of scientific research or testing, animals may be euthanized in order to be dissected, to prevent suffering after testing, to prevent the spread of disease, or other reasons
Small animal euthanasia is typically performed in a veterinary clinic or hospital, at animal shelter, or at the pet owner's home and is usually carried out by a veterinarian or a veterinary technician working under the veterinarian's supervision. Often animal shelter workers are trained to perform euthanasia as well. Knowing when it's time to put a pet down can be difficult. A licensed veterinarian can help an owner determine when in the course of an illness or behavioral problem euthanasia is appropriate.
In the case of large animals which have sustained injuries, this will also occur at the site of the accident, for example, on a racecourse.
Some animal rights organizations support animal euthanasia in certain circumstances and practice euthanasia at shelters that they operate.
Legal status
In the U.S., for companion animals euthanized in animal shelters, most states prescribe intravenous injection as the required method. These laws date to 1990, when Georgia's Humane Euthanasia Act became the first state law to mandate this method. Before that, gas chambers and other means were commonly employed. The Georgia law was resisted by the Georgia Commissioner of Agriculture, Tommy Irvin, who was charged with enforcing the act. In March 2007, he was sued by former State Representative Chesley V. Morton, who wrote the law, and subsequently ordered by the court to enforce all provisions of the Act.
Some states allow the use of carbon monoxide chambers for euthanasia.
Remains
Many pet owners choose to have their pets cremated or buried after the pet is euthanized, and there are pet funeral homes that specialize in animal burial or cremation. Otherwise, the animal facility will often freeze the body and subsequently send it to the local landfill.
In some instances, animals euthanized at shelters or animal control agencies have been sent to meat rendering facilities to be processed for use in cosmetics, fertilizer, gelatin, poultry feed, pharmaceuticals and pet food. It was proposed that the presence of pentobarbital in dog food may have caused dogs to become less responsive to the drug when being euthanized. However, a 2002 FDA study found no dog or cat DNA in the foods they tested, so it was theorized that the drug found in dog food came from euthanized cattle and horses. Furthermore, the level of the drug found in pet food was safe.
See also
Animal chaplains
Animal loss
Animal slaughter
Animal welfare
British Pet Massacre
Chick culling
Dysthanasia (animal)
Insect euthanasia
Overpopulation in companion animals
Pet
Rainbow Bridge (pets)
References
External links
AVMA Guidelines on Euthanasia
Questions Every Pet Owner Has About Dog Euthanasia But is Afraid to Ask
Euthanasia of Animals Used for Scientific Purposes at The University of Adelaide
Putting to Sleep Your Pet Dog Cat or Rabbit at Home.
World Internet News chronicles what happens to abandoned dogs.
Reasons to euthanize your pet at home
National Agricultural Library, United States Department of Agriculture
No Kill Advocacy Center – "no kill" shelter advocacy organization
Recommendations for euthanasia of experimental animals: Part1
Recommendations for euthanasia of experimental animals: Part2
Chesley V. Morton v. Georgia Department of Agriculture and Tommy Irvin in his Official Capacity as Commissioner
Is It Time Checklist
Animal euthanasia | Animal euthanasia | [
"Chemistry"
] | 2,296 | [
"Animal testing",
"Animal euthanasia"
] |
631,310 | https://en.wikipedia.org/wiki/Mark%20%28unit%29 | The Mark (from Middle High German: Marc, march, brand) is originally a medieval weight or mass unit, which supplanted the pound weight as a precious metals and coinage weight in parts of Europe in the 11th century. The Mark is traditionally divided into 8 ounces or 16 lots. The Cologne mark corresponded to about 234 grams.
Like the German systems, the French poids de marc weight system considered one "Marc" equal to 8 troy ounces.
Just as the pound of 12 troy ounces (373 g) lent its name to the pound unit of currency, the mark lent its name to the mark unit of currency.
Origin of the term
The Etymological Dictionary of the German Language by Friedrich Kluge derives the word from the Proto-Germanic term marka, "weight and value unit" (originally "division, shared") (Kluge, Friedrich (2012). Etymological Dictionary of the German Language. 25th edition, edited by Elmar Seebold, Berlin/Boston, ISBN 978-3-11-022364-4, p. 602).
The etymological dictionary by Wolfgang Pfeifer sees the Old High German marc, "delimitation, sign", as the stem and assumes that marc originally meant "minting" (marking of a certain weight), later denoting the ingot itself and its weight, and finally a coin of a certain weight and value.
According to an 1848 trade lexicon, the term Gewichtsmark comes from the fact that "the piece of metal used for weighing was stamped with a sign or symbol". Meyer's 1905 Konversationslexikon similarly traces the origin of the word to the emergence of the mark from the Roman pound of to 11 ounces. Charlemagne, as King of the Franks, carried out a monetary and measures reform towards the end of the 8th century. In particular, he had introduced the Karlspfund ("Charles pound") as the basic unit of coinage and trade which, however, weighed only 8 ounces. In order to prevent a further reduction in the weight of a pound, a sign, the mark, was now stamped on the new weights. The actual weight of these weights, known as marca, is said to have fluctuated between 196 g and 280 g.
References
Units of mass
Obsolete units of measurement
Units of measurement of the Holy Roman Empire | Mark (unit) | [
"Physics",
"Mathematics"
] | 505 | [
"Obsolete units of measurement",
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
631,336 | https://en.wikipedia.org/wiki/Atmospheric%20diffraction | Atmospheric diffraction is manifested in the following principal ways:
Optical atmospheric diffraction
Radio wave diffraction is the scattering of radio frequency or lower frequencies from the Earth's ionosphere, resulting in the ability to achieve greater distance radio broadcasting.
Sound wave diffraction is the bending of sound waves, as the sound travels around edges of geometric objects. This produces the effect of being able to hear even when the source is blocked by a solid object. The sound waves bend appreciably around the solid object.
However, if the object has a diameter greater than the acoustic wavelength, a 'sound shadow' is cast behind the object where the sound is inaudible. (Note: some sound may be propagated through the object depending on material).
Optical atmospheric diffraction
When light travels through thin clouds made up of nearly uniform sized water or aerosol droplets or ice crystals, diffraction or bending of light occurs as the light is diffracted by the edges of the particles. This degree of bending of light depends on the wavelength (color) of light and the size of the particles. The result is a pattern of rings, which seem to emanate from the Sun, the Moon, a planet, or another astronomical object. The most distinct part of this pattern is a central, nearly white disk. This resembles an atmospheric Airy disc but is not actually an Airy disk. It is different from rainbows and halos, which are mainly caused by refraction.
The left photo shows a diffraction ring around the rising Sun caused by a veil of aerosol. This effect dramatically disappeared when the Sun rose high enough until the pattern was no longer visible on the Earth's surface. This phenomenon is sometimes called the corona effect, not to be confused with the solar corona.
On the right is a 1/10-second exposure showing an overexposed full moon. The Moon is seen through thin vaporous clouds, which glow with a bright disk surrounded by an illuminated red ring. A longer exposure would show more faint colors beyond the outside red ring.
Another form of atmospheric diffraction or bending of light occurs when light moves through fine layers of particulate dust trapped primarily in the middle layers of the troposphere. This effect differs from water-based atmospheric diffraction because the dust material is opaque, whereas water allows light to pass through it. This has the effect of tinting the light the color of the dust particles. This tinting can vary from red to yellow depending on geographical location. The other primary difference is that dust-based diffraction acts as a magnifier instead of creating a distinct halo. This occurs because the opaque matter does not share the lensing properties of water. The effect is to make an object visibly larger while being more indistinct as the dust distorts the image. This effect varies largely based on the amount and type of dust in the atmosphere.
Radio wave propagation in the ionosphere
The ionosphere is a layer of partially ionized gases high above the majority of the Earth's atmosphere; these gases are ionized by cosmic rays originating on the sun. When radio waves travel into this zone, which commences about 80 kilometers above the earth, they experience diffraction in a manner similar to the visible light phenomenon described above. In this case some of the electromagnetic energy is bent in a large arc, such that it can return to the Earth's surface at a very distant point (on the order of hundreds of kilometers from the broadcast source). More remarkably, some of this radio wave energy bounces off the Earth's surface and reaches the ionosphere for a second time, at a distance even farther away than the first time. Consequently, a high powered transmitter can effectively broadcast over 1000 kilometers by using multiple "skips" off of the ionosphere. At times of favorable atmospheric conditions, when good "skip" occurs, even a low power transmitter can be heard halfway around the world. This often happens for "novice" radio amateurs ("hams"), who are limited by law to transmitters with no more than 65 watts. The Kon-Tiki expedition communicated regularly with a 6 watt transmitter from the middle of the Pacific. For more details see the "communications" part of the "Kon-Tiki expedition" entry in Wikipedia.
An exotic variant of this radio wave propagation has been examined to show that, theoretically, the ionospheric bounce could be greatly exaggerated if a high powered spherical acoustical wave were created in the ionosphere from a source on earth.
Acoustical diffraction near the Earth's surface
In the case of sound waves travelling near the Earth's surface, the waves are diffracted or bent as they pass by a geometric edge, such as a wall or building. This phenomenon leads to a very important practical effect: that we can hear "around corners". Because of the frequencies involved, a considerable amount of the sound energy (on the order of ten percent) actually travels into this would-be sound "shadow zone". Visible light exhibits a similar effect, but, due to its much shorter wavelength, only a minute amount of light energy travels around a corner.
A useful branch of acoustics dealing with the design of noise barriers examines this acoustical diffraction phenomenon in quantitative detail to calculate the optimum height and placement of a soundwall or berm adjacent to a highway.
This phenomenon is also inherent in calculating the sound levels from aircraft noise, so that an accurate determination of topographic features may be understood. In that way one can produce sound level isopleths, or contour maps, which faithfully depict outcomes over variable terrain.
Bibliography
See also
Atmospheric refraction
Noise barrier
External links
Explanation and image gallery - Atmospheric Optics by Les Cowley
Diffraction
Atmosphere
Sound
Acoustics | Atmospheric diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,176 | [
"Spectrum (physical sciences)",
"Classical mechanics",
"Acoustics",
"Diffraction",
"Crystallography",
"Spectroscopy"
] |
631,443 | https://en.wikipedia.org/wiki/Seismic%20moment | Seismic moment is a quantity used by seismologists to measure the size of an earthquake. The scalar seismic moment is defined by the equation
, where
is the shear modulus of the rocks involved in the earthquake (in pascals (Pa), i.e. newtons per square meter)
is the area of the rupture along the geologic fault where the earthquake occurred (in square meters), and
is the average slip (displacement offset between the two sides of the fault) on (in meters).
thus has dimensions of torque, measured in newton meters. The connection between seismic moment and a torque is natural in the body-force equivalent representation of seismic sources as a double-couple (a pair of force couples with opposite torques): the seismic moment is the torque of each of the two couples. Despite having the same dimensions as energy, seismic moment is not a measure of energy. The relations between seismic moment, potential energy drop and radiated energy are indirect and approximative.
The seismic moment of an earthquake is typically estimated using whatever information is available to constrain its factors. For modern earthquakes, moment is usually estimated from ground motion recordings of earthquakes known as seismograms. For earthquakes that occurred in times before modern instruments were available, moment may be estimated from geologic estimates of the size of the fault rupture and the slip.
Seismic moment is the basis of the moment magnitude scale introduced by Caltech's Thomas C. Hanks and Hiroo Kanamori, which is often used to compare the size of different earthquakes and is especially useful for comparing the sizes of large (great) earthquakes.
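As a numerical illustration, the defining equation can be evaluated for a hypothetical rupture; the fault parameters below are assumptions, and the conversion Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m, gives the corresponding moment magnitude.

import math

mu = 3.0e10          # shear modulus of crustal rock in Pa (assumed typical value)
area = 50e3 * 15e3   # rupture area: 50 km x 15 km, in m^2 (hypothetical)
slip = 1.5           # average slip in m (hypothetical)

m0 = mu * area * slip                        # scalar seismic moment, N·m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)    # standard moment magnitude conversion
print(f"M0 = {m0:.2e} N·m, Mw = {mw:.1f}")   # roughly 3.4e19 N·m, Mw of about 7.0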
The seismic moment is not restricted to earthquakes. For a more general seismic source described by a seismic moment tensor Mjk (a symmetric tensor, but not necessarily a double couple tensor), the seismic moment is M0 = (1/√2) (Σj,k Mjk^2)^(1/2).
See also
Richter scale
Moment magnitude scale
Sources
Seismology measurement
Moment (physics) | Seismic moment | [
"Physics",
"Mathematics"
] | 392 | [
"Quantity",
"Physical quantities",
"Moment (physics)"
] |
631,494 | https://en.wikipedia.org/wiki/Moment%20magnitude%20scale | The moment magnitude scale (MMS; denoted explicitly with or Mwg, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Similar to the local magnitude/Richter scale () defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often use the term "Richter scale" when referring to the moment magnitude scale.
Moment magnitude (Mw) is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate; that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the United States Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude (ML) and surface-wave magnitude (Ms) scales. Subtypes of the moment magnitude scale (Mww, etc.) reflect different ways of estimating the seismic moment.
History
Richter scale: the original measure of earthquake magnitude
At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the Earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnitude" that was internally consistent and corresponded roughly with estimates of an earthquake's energy. He established a reference point and the ten-fold (exponential) scaling of each degree of magnitude, and in 1935 published what he called the "magnitude scale", now called the local magnitude scale, labeled ML. (This scale is also known as the Richter scale, but news media sometimes use that term indiscriminately to refer to other similar scales.)
The local magnitude scale was developed on the basis of shallow, moderate-sized earthquakes recorded at regional distances, conditions where the surface waves are predominant. At greater depths, distances, or magnitudes the surface waves are greatly reduced, and the local magnitude scale underestimates the magnitude, a problem called saturation. Additional scales were developed – a surface-wave magnitude scale (Ms) by Beno Gutenberg in 1945, a body-wave magnitude scale (mb) by Gutenberg and Richter in 1956, and a number of variants – to overcome the deficiencies of the ML scale, but all are subject to saturation. A particular problem was that the Ms scale (which in the 1970s was the preferred magnitude scale) saturates around Ms 8.0 and therefore underestimates the energy release of "great" earthquakes such as the 1960 Chilean and 1964 Alaskan earthquakes. These had Ms magnitudes of 8.5 and 8.4 respectively but were notably more powerful than other M 8 earthquakes; their moment magnitudes were closer to 9.6 and 9.3, respectively.
Single couple or double couple
The study of earthquakes is challenging as the source events cannot be observed directly, and it took many years to develop the mathematics for understanding what the seismic waves from an earthquake can tell about the source event. An early step was to determine how different systems of forces might generate seismic waves equivalent to those observed from earthquakes.
The simplest force system is a single force acting on an object. If it has sufficient strength to overcome any resistance it will cause the object to move ("translate"). A pair of forces, acting on the same "line of action" but in opposite directions, will cancel; if they cancel (balance) exactly there will be no net translation, though the object will experience stress, either tension or compression. If the pair of forces are offset, acting along parallel but separate lines of action, the object experiences a rotational force, or torque. In mechanics (the branch of physics concerned with the interactions of forces) this model is called a couple, also simple couple or single couple. If a second couple of equal and opposite magnitude is applied their torques cancel; this is called a double couple. A double couple can be viewed as "equivalent to a pressure and tension acting simultaneously at right angles".
In 1923 Hiroshi Nakano showed that certain aspects of seismic waves could be explained in terms of a double couple model. This led to a three-decade-long controversy over the best way to model the seismic source: as a single couple, or a double couple. While Japanese seismologists favored the double couple, most seismologists favored the single couple. Although the single couple model had some shortcomings, it seemed more intuitive, and there was a belief – mistaken, as it turned out – that the elastic rebound theory for explaining why earthquakes happen required a single couple model. In principle these models could be distinguished by differences in the radiation patterns of their S waves, but the quality of the observational data was inadequate for that.
The controversy was eventually resolved in favor of the double couple, and this was confirmed as better and more plentiful data coming from the World-Wide Standard Seismograph Network (WWSSN) permitted closer analysis of seismic waves. Notably, in 1966 Keiiti Aki showed that the seismic moment of the 1964 Niigata earthquake as calculated from the seismic waves on the basis of a double couple was in reasonable agreement with the seismic moment calculated from the observed physical dislocation.
Dislocation theory
A double couple model suffices to explain an earthquake's far-field pattern of seismic radiation, but tells us very little about the nature of an earthquake's source mechanism or its physical features. While slippage along a fault was theorized as the cause of earthquakes (other theories included movement of magma, or sudden changes of volume due to phase changes), observing this at depth was not possible, and understanding what could be learned about the source mechanism from the seismic waves requires an understanding of the source mechanism.
Modeling the physical process by which an earthquake generates seismic waves required much theoretical development of dislocation theory, first formulated by the Italian Vito Volterra in 1907, with further developments by E. H. Love in 1927. More generally applied to problems of stress in materials, an extension by F. Nabarro in 1951 was recognized by the Russian geophysicist A. V. Vvedenskaya as applicable to earthquake faulting. In a series of papers starting in 1956 she and other colleagues used dislocation theory to determine part of an earthquake's focal mechanism, and to show that a dislocation – a rupture accompanied by slipping – was indeed equivalent to a double couple.
In a pair of papers in 1958, J. A. Steketee worked out how to relate dislocation theory to geophysical features. Numerous other researchers worked out other details, culminating in a general solution in 1964 by Burridge and Knopoff, which established the relationship between double couples and the theory of elastic rebound, and provided the basis for relating an earthquake's physical features to seismic moment.
Seismic moment
Seismic moment – symbol M0 – is a measure of the fault slip and area involved in the earthquake. Its value is the torque of each of the two force couples that form the earthquake's equivalent double-couple. (More precisely, it is the scalar magnitude of the second-order moment tensor that describes the force components of the double-couple.) Seismic moment is measured in units of newton meters (N·m) or joules, or (in the older CGS system) dyne-centimeters (dyn-cm).
The first calculation of an earthquake's seismic moment from its seismic waves was by Keiiti Aki for the 1964 Niigata earthquake. He did this two ways. First, he used data from distant stations of the WWSSN to analyze long-period (200 second) seismic waves (wavelength of about 1,000 kilometers) to determine the magnitude of the earthquake's equivalent double couple. Second, he drew upon the work of Burridge and Knopoff on dislocation to determine the amount of slip, the energy released, and the stress drop (essentially how much of the potential energy was released). In particular, he derived an equation that relates an earthquake's seismic moment to its physical parameters:
M0 = μūS,
with μ being the rigidity (or resistance to moving) of a fault with a surface area of S over an average dislocation (distance) of ū. (Modern formulations replace ūS with the equivalent D̄A, known as the "geometric moment" or "potency".) By this equation the moment determined from the double couple of the seismic waves can be related to the moment calculated from knowledge of the surface area of fault slippage and the amount of slip. In the case of the Niigata earthquake the dislocation estimated from the seismic moment reasonably approximated the observed dislocation.
Seismic moment is a measure of the work (more precisely, the torque) that results in inelastic (permanent) displacement or distortion of the Earth's crust. It is related to the total energy released by an earthquake. However, the power or potential destructiveness of an earthquake depends (among other factors) on how much of the total energy is converted into seismic waves. This is typically 10% or less of the total energy, the rest being expended in fracturing rock or overcoming friction (generating heat).
Nonetheless, seismic moment is regarded as the fundamental measure of earthquake size, representing more directly than other parameters the physical size of an earthquake. As early as 1975 it was considered "one of the most reliably determined instrumental earthquake source parameters".
Introduction of an energy-motivated magnitude Mw
Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy Es could be estimated as
log10 Es ≈ 4.8 + 1.5 Ms
(in Joules). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of Ms. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were only assigned a saturated Ms that understated their true size. Caltech seismologist Hiroo Kanamori recognized this deficiency and took the simple but important step of defining a magnitude based on estimates of radiated energy, Mw, where the "w" stood for work (energy):
Mw = (2/3) log10 Es − 3.2
Kanamori recognized that measurement of radiated energy is technically difficult since it involves the integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, M0. Using an approximate relation between radiated energy and seismic moment (which assumes stress drop is complete and ignores fracture energy),
Es ≈ M0 / (2 × 10^4)
(where Es is in Joules and M0 is in N·m), Kanamori approximated Mw by
Mw = (log10 M0 − 9.1) / 1.5
Moment magnitude scale
The formula above made it much easier to estimate the energy-based magnitude Mw, but it changed the fundamental nature of the scale into a moment magnitude scale. USGS seismologist Thomas C. Hanks noted that Kanamori's Mw scale was very similar to a relationship between ML and M0 that had been reported earlier. Hanks and Kanamori combined their work to define a new magnitude scale based on estimates of seismic moment,
Mw = (2/3) (log10 M0 − 9.1),
where M0 is defined in newton meters (N·m).
Current use
Moment magnitude is now the most common measure of earthquake size for medium to large earthquake magnitudes, but in practice, seismic moment (M0), the seismological parameter it is based on, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which includes the great majority of quakes.
Popular press reports most often deal with significant earthquakes. For these events, the preferred magnitude is the moment magnitude Mw, not Richter's local magnitude ML.
Definition
The symbol for the moment magnitude scale is Mw, with the subscript "w" meaning mechanical work accomplished. The moment magnitude Mw is a dimensionless value defined by Hiroo Kanamori as
Mw = (2/3) log10 M0 − 10.7,
where M0 is the seismic moment in dyne⋅cm (10^−7 N⋅m). The constant values in the equation are chosen to achieve consistency with the magnitude values produced by earlier scales, such as the local magnitude and the surface wave magnitude. Thus, a magnitude zero microearthquake has a seismic moment of approximately 1.1×10^9 N⋅m, while the Great Chilean earthquake of 1960, with an estimated moment magnitude of 9.4–9.6, had a seismic moment between roughly 1.4×10^23 and 2.8×10^23 N⋅m.
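A direct transcription of this definition, with the moment supplied in dyne-centimeters as above, might look like the following sketch.

import math

def moment_magnitude(m0_dyne_cm):
    """Moment magnitude from a scalar seismic moment given in dyne-centimeters."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# A moment of about 1.1e16 dyn·cm (roughly 1.1e9 N·m) corresponds to magnitude ~0.
print(round(moment_magnitude(1.12e16), 2))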
Seismic moment magnitude (M wg or Das Magnitude Scale ) and moment magnitude (M w) scales
To understand the magnitude scales based on M0, a detailed background of the Mwg and Mw scales is given below.
Mw scale
Hiroo Kanamori defined a magnitude scale (Log W0 = 1.5 Mw + 11.8, where W0 is the minimum strain energy) for great earthquakes using the Gutenberg–Richter energy relation, Eq. (A):
Log Es = 1.5 Ms + 11.8 (A)
Hiroo Kanamori used W0 in place of Es (dyn·cm), assumed a constant ratio (W0/M0 = 5 × 10^−5) in Eq. (A), and estimated Ms, denoting the result Mw (dyn·cm). The energy Eq. (A) is derived by substituting m = 2.5 + 0.63M into the energy equation Log E = 5.8 + 2.4m (Richter 1958), where m is the Gutenberg unified magnitude and M is a least-squares approximation to the magnitude determined from surface-wave magnitudes. After substituting the ratio of seismic energy (E) to seismic moment (M0), i.e., E/M0 = 5 × 10^−5, into the Gutenberg–Richter energy–magnitude Eq. (A), Hanks and Kanamori obtained Eq. (B):
Log M0 = 1.5 Ms + 16.1 (B)
Note that Eq. (B) had already been derived by Hiroo Kanamori, who termed the resulting scale Mw. Since Eq. (B) was based on large earthquakes, Hanks and Kanamori (1979) compared it with Eq. (1) of Percaru and Berckhemer (1978) for the range 5.0 ≤ Ms ≤ 7.5 in order to validate it for intermediate and smaller earthquakes. Note that Eq. (1) of Percaru and Berckhemer (1978) for the magnitude range 5.0 ≤ Ms ≤ 7.5 is not reliable, owing to the inconsistency of the defined magnitude range (moderate to large earthquakes defined as Ms ≤ 7.0 and Ms = 7–7.5) and scarce data in the lower magnitude range (≤ 7.0), which rarely represents the global seismicity (e.g., see Figs. 1A, B, 4 and Table 2 of Percaru and Berckhemer 1978). Furthermore, Eq. (1) of Percaru and Berckhemer (1978) is only valid for Ms ≤ 7.0.
Relations between seismic moment, potential energy released and radiated energy
Seismic moment is not a direct measure of energy changes during an earthquake. The relations between seismic moment and the energies involved in an earthquake depend on parameters that have large uncertainties and that may vary between earthquakes. Potential energy is stored in the crust in the form of elastic energy due to built-up stress and gravitational energy. During an earthquake, a portion of this stored energy is transformed into
energy dissipated in frictional weakening and inelastic deformation in rocks by processes such as the creation of cracks
heat
radiated seismic energy
The potential energy drop caused by an earthquake is related approximately to its seismic moment by
ΔW ≈ (σ̄/μ) M0
where σ̄ is the average of the absolute shear stresses on the fault before and after the earthquake and μ is the average of the shear moduli of the rocks that constitute the fault. Currently, there is no technology to measure absolute stresses at all depths of interest, nor a method to estimate it accurately, and σ̄ is thus poorly known. It could vary highly from one earthquake to another. Two earthquakes with identical M0 but different σ̄ would have released different ΔW.
The radiated energy caused by an earthquake is approximately related to seismic moment by
Es ≈ ηR (Δσs / 2μ) M0
where ηR is radiated efficiency and Δσs is the static stress drop, i.e., the difference between shear stresses on the fault before and after the earthquake. These two quantities are far from being constants. For instance, ηR depends on rupture speed; it is close to 1 for regular earthquakes but much smaller for slower earthquakes such as tsunami earthquakes and slow earthquakes. Two earthquakes with identical M0 but different ηR or Δσs would have radiated different Es.
Because Es and M0 are fundamentally independent properties of an earthquake source, and since Es can now be computed more directly and robustly than in the 1970s, introducing a separate magnitude associated to radiated energy was warranted. Choy and Boatwright defined in 1995 the energy magnitude
Me = (2/3) log10 Es − 3.2
where Es is in J (N·m).
Comparative energy released by two earthquakes
Assuming the values of σ̄/μ are the same for all earthquakes, one can consider M0 as a measure of the potential energy change ΔW caused by earthquakes. Similarly, if one assumes ηR Δσs/(2μ) is the same for all earthquakes, one can consider M0 as a measure of the energy Es radiated by earthquakes.
Under these assumptions, the following formula, obtained by solving for M0 the equation defining Mw, allows one to assess the ratio of energy release (potential or radiated) between two earthquakes of different moment magnitudes, m1 and m2:
E1/E2 ≈ 10^(1.5 (m1 − m2)).
As with the Richter scale, an increase of one step on the logarithmic scale of moment magnitude corresponds to a 10^1.5 ≈ 32 times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 = 1,000 times increase in energy. Thus, an earthquake of Mw 7.0 contains 1,000 times as much energy as one of 5.0 and about 32 times that of 6.0.
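These step ratios can be checked directly from the definition; the helper below is a minimal sketch.

def energy_ratio(m1, m2):
    # Ratio of energy release between events of moment magnitude m1 and m2.
    return 10 ** (1.5 * (m1 - m2))

print(round(energy_ratio(6.0, 5.0)))  # about 32
print(round(energy_ratio(7.0, 5.0)))  # 1000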
Comparison with TNT equivalents
To make the significance of the magnitude value plausible, the seismic energy released during the earthquake is sometimes compared to the effect of the conventional chemical explosive TNT.
The seismic energy results from the above-mentioned formula according to Gutenberg and Richter as
ES = 10^(1.5·MS + 4.8) J,
which is sometimes also expressed as an equivalent number of Hiroshima bombs.
For comparison of seismic energy (in joules) with the corresponding explosion energy, a value of 4.2 × 10^9 joules per ton of TNT applies. The table illustrates the relationship between seismic energy and moment magnitude.
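Combining the energy relation with the conversion factor just quoted gives a rough TNT equivalent for a given surface-wave magnitude; this is an illustration only, since just a fraction of an earthquake's total energy is radiated seismically.

def tnt_equivalent_tons(ms):
    energy_joules = 10 ** (1.5 * ms + 4.8)  # radiated seismic energy in joules
    return energy_joules / 4.2e9            # 4.2e9 joules per ton of TNT

print(f"{tnt_equivalent_tons(6.0):.1e} tons of TNT")  # about 1.5e4 tons for Ms 6.0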
The end of the scale is at the value 10.6, corresponding to the assumption that at this value the Earth's crust would have to break apart completely.
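As a rough illustration of this conversion, the following C sketch evaluates the Gutenberg–Richter energy formula and divides by the TNT energy stated above; the assumed yield of 13,000 tons of TNT per Hiroshima bomb is an assumption of this sketch rather than a figure from the source table:

#include <math.h>
#include <stdio.h>

/* Seismic energy Es = 10^(1.5*Mw + 4.8) J, converted to tons of TNT using
   4.2e9 J per ton (as stated in the text). The Hiroshima equivalence below
   assumes roughly 13,000 tons of TNT per bomb; that value is an assumption. */
int main(void)
{
    const double joules_per_ton_tnt = 4.2e9;
    const double tons_tnt_per_hiroshima = 13000.0;

    for (double mw = 5.0; mw <= 9.0; mw += 1.0) {
        double es = pow(10.0, 1.5 * mw + 4.8);        /* joules */
        double tnt = es / joules_per_ton_tnt;          /* tons of TNT */
        double bombs = tnt / tons_tnt_per_hiroshima;   /* bomb equivalents */
        printf("Mw %.1f: Es = %.2e J = %.2e t TNT = %.2e Hiroshima bombs\n",
               mw, es, tnt, bombs);
    }
    return 0;
}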
Subtypes of Mw
Various ways of determining moment magnitude have been developed, and several subtypes of the scale can be used to indicate the basis used.
Mwb – Based on moment tensor inversion of long-period (~10 – 100 s) body-waves.
Mwr – From a moment tensor inversion of complete waveforms at regional distances (~1,000 miles). Sometimes called RMT.
Mwc – Derived from a centroid moment tensor inversion of intermediate- and long-period body- and surface-waves.
Mww – Derived from a centroid moment tensor inversion of the W-phase.
Mwp (Mi) – Developed by Seiji Tsuboi for quick estimation of the tsunami potential of large near-coastal earthquakes from measurements of the P waves, and later extended to teleseismic earthquakes in general.
Mwpd – A duration-amplitude procedure which takes into account the duration of the rupture, providing a fuller picture of the energy released by longer lasting ("slow") ruptures than seen with Mw.
– Rapidly estimates earthquake magnitude by combining maximum displacements of teleseismic P waves and source durations.
See also
Earthquake engineering
Lists of earthquakes
Seismic magnitude scales
Notes
Sources
External links
USGS: Measuring earthquakes
Perspective: a graphical comparison of earthquake energy release – Pacific Tsunami Warning Center
Seismic magnitude scales
Geophysics
Logarithmic scales of measurement | Moment magnitude scale | [
"Physics",
"Mathematics"
] | 4,266 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Logarithmic scales of measurement",
"Geophysics"
] |
631,654 | https://en.wikipedia.org/wiki/Reflecting%20pool | A reflecting pool, also called a reflection pool, is a water feature found in gardens, parks and memorial sites. It usually consists of a shallow pool of water with a reflective surface, undisturbed by fountain jets.
Design
Reflecting pools are often designed with the outer basin floor at the rim slightly deeper than the central area to suppress wave formation. They can range from as small as a birdbath to as large as a major civic element. Their origins lie in ancient Persian gardens.
List of notable pools
The Miroir d'eau (Water mirror) on Place de la Bourse in Bordeaux, France.
The Mughal garden reflecting pools at the Taj Mahal in Agra, India
Chehel Sotoun in Iran
The Lincoln Memorial Reflecting Pool and Capitol Reflecting Pool, in Washington, D.C.
Mary Gibbs and Jesse H. Jones Reflection Pool, Hermann Park, Houston, Texas, U.S.
The modernist Palácio do Planalto and Palácio da Alvorada in Brasília, Brazil
Martin Luther King Jr. National Historical Park in Atlanta, Georgia
The Oklahoma City National Memorial, at the site of the Oklahoma City bombing
The Hollywood Bowl in Los Angeles, California, where a former reflecting pool was located in front of the stage until 1972
The National September 11 Memorial & Museum, located at the World Trade Center site in New York City, with two reflecting pools on the location where the Twin Towers stood
Gallery
References
Garden features
Bodies of water
Architectural elements
Islamic architectural elements
Persian gardens | Reflecting pool | [
"Technology",
"Engineering"
] | 297 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
631,671 | https://en.wikipedia.org/wiki/Angiosperm%20Phylogeny%20Group | The Angiosperm Phylogeny Group (APG) is an informal international group of systematic botanists who collaborate to establish a consensus on the taxonomy of flowering plants (angiosperms) that reflects new knowledge about plant relationships discovered through phylogenetic studies.
Four incremental versions of a classification system have resulted from this collaboration, published in 1998, 2003, 2009 and 2016. An important motivation for the group was what they considered deficiencies in prior angiosperm classifications since they were not based on monophyletic groups (i.e., groups that include all the descendants of a common ancestor).
APG publications are increasingly influential, with a number of major herbaria changing the arrangement of their collections to match the latest APG system.
Angiosperm classification and the APG
In the past, classification systems were typically produced by an individual botanist or by a small group. The result was a large number of systems (see List of systems of plant taxonomy). Different systems and their updates were generally favoured in different countries. Examples are the Engler system in continental Europe, the Bentham & Hooker system in Britain (particularly influential because it was used by Kew), the Takhtajan system in the former Soviet Union and countries within its sphere of influence and the Cronquist system in the United States.
Before the availability of genetic evidence, the classification of angiosperms (also known as flowering plants, Angiospermae, Anthophyta or Magnoliophyta) was based on their morphology (particularly of their flower) and biochemistry (the kinds of chemical compounds in the plant).
After the 1980s, detailed genetic evidence analysed by phylogenetic methods became available and while confirming or clarifying some relationships in existing classification systems, it radically changed others. This genetic evidence created a rapid increase in knowledge that led to many proposed changes; stability was "rudely shattered". This posed problems for all users of classification systems (including encyclopaedists). The impetus came from a major molecular study published in 1993 based on 5000 flowering plants and a photosynthesis gene (rbcL). This produced a number of surprising results in terms of the relationships between groupings of plants, for instance the dicotyledons were not supported as a distinct group. At first there was a reluctance to develop a new system based entirely on a single gene. However, subsequent work continued to support these findings. These research studies involved an unprecedented collaboration between a very large number of scientists. Therefore, rather than naming all the individual contributors a decision was made to adopt the name Angiosperm Phylogeny Group classification, or APG for short. The first publication under this name was in 1998, and attracted considerable media attention. The intention was to provide a widely accepted and more stable point of reference for angiosperm classification.
Since then, three revisions have been published: in 2003 (APG II), in 2009 (APG III) and in 2016 (APG IV), each superseding the previous system. Thirteen researchers have been listed as authors to the three papers, and a further 43 as contributors (see Members of the APG below).
A classification presents a view at a particular point in time, based on a particular state of research. Independent researchers, including members of the APG, continue to publish their own views on areas of angiosperm taxonomy. Classifications change, however inconvenient this is to users. However, the APG publications are increasingly regarded as an authoritative point of reference and the following are some examples of the influence of the APG system:
A significant number of major herbaria, including Kew, are changing the order of their collections in accordance with APG.
The influential World Checklist of Selected Plant Families (also from Kew) is being updated to the APG III system.
In the United States, a 2006 photographic survey of the plants of the US and Canada is organized according to the APG II system.
In the UK, the 2010 edition of the standard flora of the British Isles (by Stace) is based on the APG III system. The previous editions were based on the Cronquist system.
Principles of the APG system
The principles of the APG's approach to classification were set out in the first paper of 1998, and have remained unchanged in subsequent revisions. Briefly, these are:
The Linnean system of orders and families should be retained. "The family is central in flowering plant systematics." An ordinal classification of families is proposed as a "reference tool of broad utility". Orders are considered to be of particular value in teaching and in studying family relationships.
Groups should be monophyletic (i.e. consist of all descendants of a common ancestor). The main reason why existing systems are rejected is because they do not have this property, they are not phylogenetic.
A broad approach is taken to defining the limits of groups such as orders and families. Thus of orders, it is said that a limited number of larger orders will be more useful. Families containing only a single genus and orders containing only a single family are avoided where this is possible without violating the over-riding requirement for monophyly.
Above or parallel to the level of orders and families, the term clades is used more freely. (Some clades have later been given formal names in a paper associated with the 2009 revision of the APG system.) The authors say that it is "not possible, nor is it desirable" to name all clades in a phylogenetic tree; however, systematists need to agree on names for some clades, particularly orders and families, to facilitate communication and discussion.
(For a detailed discussion on phylogenetic nomenclature, see Cantino et al. (2007).)
APG I (1998)
The initial 1998 paper by the APG made angiosperms the first large group of organisms to be systematically re-classified primarily on the basis of genetic characteristics. The paper explained the authors' view that there is a need for a classification system for angiosperms at the level of families, orders and above, but that existing classifications were "outdated". The main reason why existing systems were rejected was because they were not phylogenetic, i.e. not based on strictly monophyletic groups (groups which consist of all descendants of a common ancestor). An ordinal classification of flowering plant families was proposed as a "reference tool of broad utility". The broad approach adopted to defining the limits of orders resulted in the recognition of 40 orders, compared to, for example, 232 in Takhtajan's 1997 classification.
In 1998 only a handful of families had been adequately studied, but the primary aim was to obtain a consensus on the naming of higher orders. Such a consensus proved relatively easy to achieve but the resultant tree was highly unresolved. That is, while the relationship of orders was established, their composition was not.
Other features of the proposed classification included:
Formal, scientific names are not used above the level of order, named clades being used instead. Thus eudicots and monocots are not given a formal rank on the grounds that "it is not yet clear at which level they should be recognized".
A substantial number of taxa whose classification had traditionally been uncertain are given places, although there still remain 25 families of "uncertain position".
Alternative classifications are provided for some groups, in which a number of families can either be regarded as separate or can be merged into a single larger family. For example, the Fumariaceae can either be treated as a separate family or as part of Papaveraceae.
A major outcome of the classification was the disappearance of the traditional division of the flowering plants into two groups, monocots and dicots. The monocots were recognized as a clade, but the dicots were not, with a number of former dicots being placed in separate groups basal to both monocots and the remaining dicots, the eudicots or 'true dicots'. The overall scheme was relatively simple. This consisted of a grade consisting of isolated taxa (referred to as ANITA), followed by the major angiosperm radiation, clades of monocots, magnolids and eudicots. The last being a large clade with smaller subclades and two main groupings, rosids and asterids, each in turn having two major subclades.
APG II (2003)
As the overall relationship between groups of flowering plants became clearer, the focus shifted to the family level, in particular those families generally accepted as problematic. Again, consensus was achieved relatively easily resulting in an updated classification at the family level. The second paper published by the APG in 2003 presented an update to the original classification of 1998. The authors stated that changes were proposed only when there was "substantial new evidence" which supported them.
The classification continued the tradition of seeking broad circumscriptions of taxa, for example trying to place small families containing only one genus in a larger group. The authors stated that they have generally accepted the views of specialists, although noting that specialists "nearly always favour splitting of groups" regarded as too varied in their morphology.
APG II continued and indeed extended the use of alternative 'bracketed' taxa, allowing the choice of either a large family or a number of smaller ones. For example, the large family Asparagaceae includes seven 'bracketed' families which can either be considered as part of the Asparagaceae or as separate families.
Some of the main changes in APG II were:
New orders are proposed, particularly to accommodate the 'basal clades' left as families in the first system.
Many of the previously unplaced families are now located within the system.
Several major families are re-structured.
In 2007, a paper was published giving a linear ordering of the families in APG II, suitable for ordering herbarium specimens, for example.
APG III (2009)
The third paper from the APG updates the system described in the 2003 paper. The broad outline of the system remains unchanged, but the number of previously unplaced families and genera is significantly reduced. This requires the recognition of both new orders and new families compared to the previous classification. The number of orders goes up from 45 to 59; only 10 families are not placed in an order and only two of these (Apodanthaceae and Cynomoriaceae) are left entirely outside the classification. The authors say that they have tried to leave long-recognized families unchanged, while merging families with few genera. They "hope the classification [...] will not need much further change."
A major change is that the paper discontinues the use of 'bracketed' families in favour of larger, more inclusive families. As a result, the APG III system contains only 415 families, rather than the 457 of APG II. For example, the agave family (Agavaceae) and the hyacinth family (Hyacinthaceae) are no longer regarded as distinct from the broader asparagus family (Asparagaceae). The authors say that alternative circumscriptions, as in APG I and II, are likely to cause confusion and that major herbaria which are re-arranging their collections in accordance with the APG approach have all agreed to use the more inclusive families. This approach is being increasingly used in collections in herbaria and botanic gardens.
In the same volume of the journal, two related papers were published. One gives a linear ordering of the families in APG III; as with the linear ordering published for APG II, this is intended for ordering herbarium specimens, for example. The other paper gives, for the first time, a classification of the families in APG III which uses formal taxonomic ranks; previously only informal clade names were used above the ordinal level.
APG IV (2016)
In the development of a fourth version there was some controversy over the methodology, and the development of a consensus proved more difficult than in previous iterations. In particular Peter Stevens questioned the validity of discussions regarding family delimitation in the absence of changes of phylogenetic relationships.
Further progress was made by the use of large banks of genes, including those of plastid, mitochondrial and nuclear ribosomal origin, such as that of Douglas Soltis and colleagues (2011). The fourth version was finally published in 2016. It arose from an international conference hosted at the Royal Botanical Gardens in September 2015 and also an online survey of botanists and other users. The broad outline of the system remains unchanged but several new orders are included (Boraginales, Dilleniales, Icacinales, Metteniusales and Vahliales), some new families are recognised (Kewaceae, Macarthuriaceae, Maundiaceae, Mazaceae, Microteaceae, Nyssaceae, Peraceae, Petenaeaceae and Petiveriaceae) and some previously recognised families are lumped (Aristolochiaceae now includes Lactoridaceae and Hydnoraceae; Restionaceae now re-includes Anarthriaceae and Centrolepidaceae; and Buxaceae now includes Haptanthaceae). Due to nomenclatural issues, the family name Asphodelaceae is used instead of Xanthorrhoeaceae, and Francoaceae is used instead of Melianthaceae (and now also includes Vivianiaceae). This brings the total number of orders and families recognized in the APG system to 64 and 416, respectively. Two additional informal major clades, superrosids and superasterids, that each comprise the additional orders that are included in the larger clades dominated by the rosids and asterids are also included. APG IV also uses the linear approach (LAPG) as advocated by Haston et al. (2009) In a supplemental file Byng et al. provide an alphabetical list of families by orders.
Updates
Peter Stevens, one of the authors of all four of the APG papers, maintains a web site, the Angiosperm Phylogeny Website (APWeb), hosted by the Missouri Botanical Garden, which has been regularly updated since 2001, and is a useful source for the latest research in angiosperm phylogeny which follows the APG approach. Other sources include the Angiosperm Phylogeny Poster and The Flowering Plants Handbook.
Members of the APG
a = listed as an author; c = listed as a contributor
References
Bibliography
APG
APG I-IV (1998–2016)
External links
Angiosperm Phylogeny Website hosted by the Missouri Botanical Garden Website
Botany organizations
Plant taxonomy
Taxonomy (biology) organizations
International scientific organizations
Scientific organizations established in 1998 | Angiosperm Phylogeny Group | [
"Biology"
] | 3,032 | [
"Taxonomy (biology) organizations",
"Plant taxonomy",
"Taxonomy (biology)",
"Plants"
] |
631,680 | https://en.wikipedia.org/wiki/Giuseppe%20Mercalli | Giuseppe Mercalli (21 May 1850 – 19 March 1914) was an Italian volcanologist and Catholic priest. He is known best for the Mercalli intensity scale for measuring earthquake intensity.
Biography
Born in Milan, Mercalli was ordained a Roman Catholic priest and soon became a professor of Natural Sciences at the seminary of Milan. The Italian government appointed him a professor at Domodossola, followed by a job at Reggio di Calabria. He was professor of geology at the University of Catania during the late 1880s and finally was given a job at Naples University. He was also director of the Vesuvius Observatory until the time of his death.
Giuseppe Mercalli also observed eruptions of the volcanoes Stromboli and Vulcano in the Aeolian Islands. His descriptions of these eruptions became the basis for two indices of the volcanic explosivity index: 1 – Strombolian eruption, and 2 – Vulcanian eruption. He also photographed Vesuvius immediately after its eruption in 1906.
In 1914, Mercalli burnt to death under suspicious circumstances, allegedly after knocking over a paraffin lamp in his bedroom. He is thought to have been working through the night, as he often did (he once was found working at 11 a.m. when he had set an examination, upon hearing which he replied, "It surely can't be daylight yet!"), when the fatal accident occurred. His body was found, carbonized, by his bed, holding a blanket which he apparently attempted to use to fend off the flames. The authorities, however, stated a few days later that the professor was quite possibly murdered by strangling and soaked in petrol and burned to conceal the crime because they determined that some money (now worth about $1,400) was missing from the professor's apartment.
Intensity scales
Mercalli devised two earthquake intensity scales, both modifications of the Rossi–Forel scale. The first, now largely forgotten, had six degrees whereas the Rossi–Forel scale had ten. The second, now known as the Mercalli intensity scale, had ten degrees, and elaborated the descriptions in the Rossi–Forel scale.
The Mercalli intensity scale is, in modified form, still used. Unlike the Richter scale, which measures the energy released by an earthquake, the Mercalli intensity scale measures the effects of an earthquake on structures and people. It is poorly suited for measuring earthquakes in sparsely populated areas but useful for comparing damage done by various tremors and historical earthquakes, and for earthquake engineering. The scale currently in use assigns indices ranging from I ("Not felt, except by a few under favorable conditions"), to XII ("Damage total; objects thrown into the air").
Italian physicist Adolfo Cancani expanded the ten-degree Mercalli scale with the addition of two degrees at the more intense end of the scale: XI (catastrophe) and XII (enormous catastrophe). This was later modified by the German geophysicist August Heinrich Sieberg and became known as the Mercalli–Cancani–Sieberg (MCS) scale. This was modified again and published in English by Harry O. Wood and Frank Neumann in 1931 as the Mercalli–Wood–Neumann (MWN) scale. It was later improved by Charles Richter, developer of the Richter scale.
References
External links
1850 births
1914 deaths
Clergy from Milan
19th-century Italian geologists
Seismologists
Italian volcanologists
Earthquake engineering
Catholic clergy scientists
19th-century Italian Roman Catholic priests
20th-century Italian Roman Catholic priests
Scientists from Milan
20th-century Italian geologists | Giuseppe Mercalli | [
"Engineering"
] | 734 | [
"Earthquake engineering",
"Civil engineering",
"Structural engineering"
] |
631,710 | https://en.wikipedia.org/wiki/Zhang%20Daqing | Zhang Daqing () (born October 23, 1969) is a Chinese amateur astronomer. He is from Henan province.
He co-discovered periodic comet 153P/Ikeya-Zhang. He is the first Chinese amateur astronomer to have a comet named after him. He is also a telescope maker. Periodic comet 153P/Ikeya-Zhang was discovered with his self-made telescope on February 1, 2002.
External links
http://comet.lamost.org/comet/zhang.htm
Discoverers of comets
1969 births
Living people
People from Kaifeng
Scientists from Henan
21st-century Chinese astronomers | Zhang Daqing | [
"Astronomy"
] | 124 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
631,721 | https://en.wikipedia.org/wiki/Tractography | In neuroscience, tractography is a 3D modeling technique used to visually represent nerve tracts using data collected by diffusion MRI. It uses special techniques of magnetic resonance imaging (MRI) and computer-based diffusion MRI. The results are presented in two- and three-dimensional images called tractograms.
In addition to the long tracts that connect the brain to the rest of the body, there are complicated neural circuits formed by short connections among different cortical and subcortical regions. The existence of these tracts and circuits has been revealed by histochemistry and biological techniques on post-mortem specimens. Nerve tracts are not identifiable by direct exam, CT, or MRI scans. This difficulty explains the paucity of their description in neuroanatomy atlases and the poor understanding of their functions.
The most advanced tractography algorithm can produce 90% of the ground truth bundles, but it still contains a substantial amount of invalid results.
MRI technique
Tractography is performed using data from diffusion MRI. The free water diffusion is termed "isotropic" diffusion. If the water diffuses in a medium with barriers, the diffusion will be uneven, which is termed anisotropic diffusion. In such a case, the relative mobility of the molecules from the origin has a shape different from a sphere. This shape is often modeled as an ellipsoid, and the technique is then called diffusion tensor imaging. Barriers can be many things: cell membranes, axons, myelin, etc.; but in white matter the principal barrier is the myelin sheath of axons. Bundles of axons provide a barrier to perpendicular diffusion and a path for parallel diffusion along the orientation of the fibers.
Anisotropic diffusion is expected to be increased in areas of high mature axonal order. Conditions where the myelin or the structure of the axon are disrupted, such as trauma, tumors, and inflammation reduce anisotropy, as the barriers are affected by destruction or disorganization.
Anisotropy is measured in several ways. One way is by a ratio called fractional anisotropy (FA). An FA of 0 corresponds to a perfect sphere, whereas 1 is an ideal linear diffusion. Few regions have FA larger than 0.90. The number gives information about how aspherical the diffusion is but says nothing of the direction.
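For illustration, fractional anisotropy is commonly computed from the three eigenvalues of the diffusion tensor as FA = sqrt(3/2)·sqrt(Σ(λi − λ̄)²)/sqrt(Σλi²); the following C sketch applies this formula to made-up eigenvalues, which are illustrative rather than measured values:

#include <math.h>
#include <stdio.h>

/* Fractional anisotropy from the three eigenvalues of the diffusion tensor.
   FA is 0 for perfectly isotropic diffusion (l1 = l2 = l3) and approaches 1
   as diffusion becomes purely linear along one axis. */
static double fractional_anisotropy(double l1, double l2, double l3)
{
    double lmean = (l1 + l2 + l3) / 3.0;
    double num = (l1 - lmean) * (l1 - lmean)
               + (l2 - lmean) * (l2 - lmean)
               + (l3 - lmean) * (l3 - lmean);
    double den = l1 * l1 + l2 * l2 + l3 * l3;
    if (den == 0.0)
        return 0.0;
    return sqrt(1.5) * sqrt(num / den);
}

int main(void)
{
    /* Illustrative eigenvalues (in mm^2/s); not taken from real data. */
    printf("isotropic diffusion:  FA = %.2f\n", fractional_anisotropy(1.0e-3, 1.0e-3, 1.0e-3));
    printf("elongated diffusion:  FA = %.2f\n", fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3));
    return 0;
}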
Each anisotropy is linked to an orientation of the predominant axis (predominant direction of the diffusion). Post-processing programs are able to extract this directional information.
This additional information is difficult to represent on 2D grey-scaled images. To overcome this problem, a color code is introduced. Basic colors can tell the observer how the fibers are oriented in a 3D coordinate system, this is termed an "anisotropic map". The software could encode the colors in this way:
Red indicates directions in the X axis: right to left or left to right.
Green indicates directions in the Y axis: posterior to anterior or from anterior to posterior.
Blue indicates directions in the Z axis: inferior to superior or vice versa.
The technique is unable to discriminate the "positive" or "negative" direction in the same axis.
Mathematics
Using diffusion tensor MRI, one can measure the apparent diffusion coefficient at each voxel in the image, and after multilinear regression across multiple images, the whole diffusion tensor can be reconstructed.
Suppose there is a fiber tract of interest in the sample. Following the Frenet–Serret formulas, we can formulate the space-path of the fiber tract as a parameterized curve r(s) with

dr(s)/ds = t(s)

where t(s) is the tangent vector of the curve. The reconstructed diffusion tensor D can be treated as a matrix, and we can compute its eigenvalues λ1 ≥ λ2 ≥ λ3 and eigenvectors e1, e2, e3. By equating the eigenvector corresponding to the largest eigenvalue with the direction of the curve:

dr(s)/ds = e1(r(s))

we can solve for r(s) given the data for e1. This can be done using numerical integration, e.g., using Runge–Kutta, and by interpolating the principal eigenvectors.
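A minimal sketch of such a streamline integration is shown below in C; the principal_direction() function is a placeholder for looking up and interpolating the principal eigenvector field from the fitted tensors, and the fixed Euler step is only illustrative:

#include <stdio.h>

/* Fixed-step (Euler) streamline integration: starting from a seed point,
   repeatedly step along the principal eigenvector e1 of the local diffusion
   tensor. A real implementation would interpolate e1 from image voxels,
   enforce sign consistency between steps, and stop on low FA or high
   curvature; here principal_direction() is a dummy stand-in. */
typedef struct { double x, y, z; } vec3;

static vec3 principal_direction(vec3 p)
{
    (void)p;                    /* position unused in this dummy field */
    vec3 e1 = {1.0, 0.0, 0.0};  /* field pointing uniformly along x */
    return e1;
}

int main(void)
{
    vec3 r = {0.0, 0.0, 0.0};   /* seed point */
    const double h = 0.5;       /* step size, e.g. in mm */

    for (int i = 0; i < 10; i++) {
        vec3 e1 = principal_direction(r);
        r.x += h * e1.x;        /* r(s + h) = r(s) + h * e1(r(s)) */
        r.y += h * e1.y;
        r.z += h * e1.z;
        printf("step %2d: (%.1f, %.1f, %.1f)\n", i + 1, r.x, r.y, r.z);
    }
    return 0;
}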
See also
Connectome
Diffusion MRI
Connectogram
References
Magnetic resonance imaging | Tractography | [
"Chemistry"
] | 838 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
631,930 | https://en.wikipedia.org/wiki/Membrane%20transport | In cellular biology, membrane transport refers to the collection of mechanisms that regulate the passage of solutes such as ions and small molecules through biological membranes, which are lipid bilayers that contain proteins embedded in them. The regulation of passage through the membrane is due to selective membrane permeability – a characteristic of biological membranes which allows them to separate substances of distinct chemical nature. In other words, they can be permeable to certain substances but not to others.
The movements of most solutes through the membrane are mediated by membrane transport proteins which are specialized to varying degrees in the transport of specific molecules. As the diversity and physiology of the distinct cells is highly related to their capacities to attract different external elements, it is postulated that there is a group of specific transport proteins for each cell type and for every specific physiological stage. This differential expression is regulated through the differential transcription of the genes coding for these proteins and its translation, for instance, through genetic-molecular mechanisms, but also at the cell biology level: the production of these proteins can be activated by cellular signaling pathways, at the biochemical level, or even by being situated in cytoplasmic vesicles. The cell membrane regulates the transport of materials entering and exiting the cell.
Background
Thermodynamically the flow of substances from one compartment to another can occur in the direction of a concentration or electrochemical gradient or against it. If the exchange of substances occurs in the direction of the gradient, that is, in the direction of decreasing potential, there is no requirement for an input of energy from outside the system; if, however, the transport is against the gradient, it will require the input of energy, metabolic energy in this case.
For example, a classic chemical mechanism for separation that does not require the addition of external energy is dialysis. In this system a semipermeable membrane separates two solutions of different concentration of the same solute. If the membrane allows the passage of water but not the solute the water will move into the compartment with the greatest solute concentration in order to establish an equilibrium in which the energy of the system is at a minimum. This takes place because the water moves from a high solvent concentration to a low one (in terms of the solute, the opposite occurs) and because the water is moving along a gradient there is no need for an external input of energy.
The nature of biological membranes, especially that of its lipids, is amphiphilic, as they form bilayers that contain an internal hydrophobic layer and an external hydrophilic layer. This structure makes transport possible by simple or passive diffusion, which consists of the diffusion of substances through the membrane without expending metabolic energy and without the aid of transport proteins. If the transported substance has a net electrical charge, it will move not only in response to a concentration gradient, but also to an electrochemical gradient due to the membrane potential.
As few molecules are able to diffuse through a lipid membrane the majority of the transport processes involve transport proteins. These transmembrane proteins possess a large number of alpha helices immersed in the lipid matrix. In bacteria these proteins are present in the beta lamina form. This structure probably involves a conduit through hydrophilic protein environments that cause a disruption in the highly hydrophobic medium formed by the lipids. These proteins can be involved in transport in a number of ways: they act as pumps driven by ATP, that is, by metabolic energy, or as channels of facilitated diffusion.
Thermodynamics
A physiological process can only take place if it complies with basic thermodynamic principles. Membrane transport obeys physical laws that define its capabilities and therefore its biological utility.
A general principle of thermodynamics that governs the transfer of substances through membranes and other surfaces is that the exchange of free energy, ΔG, for the transport of a mole of a substance of concentration C1 in a compartment to another compartment where it is present at C2 is:

ΔG = RT ln(C2/C1)
When C2 is less than C1, ΔG is negative, and the process is thermodynamically favorable. As the energy is transferred from one compartment to another, except where other factors intervene, an equilibrium will be reached where C2=C1, and where ΔG = 0. However, there are three circumstances under which this equilibrium will not be reached, circumstances which are vital for the in vivo functioning of biological membranes:
The macromolecules on one side of the membrane can bond preferentially to a certain component of the membrane or chemically modify it. In this way, although the concentration of the solute may actually be different on both sides of the membrane, the availability of the solute is reduced in one of the compartments to such an extent that, for practical purposes, no gradient exists to drive transport.
A membrane electrical potential can exist which can influence ion distribution. For example, for the transport of ions from the exterior to the interior, it is possible that:

ΔG = RT ln(C2/C1) + ZFΔP

where Z is the electrical charge of the ion, F is Faraday's constant and ΔP the membrane potential in volts. If ΔP is negative and Z is positive, the contribution of the term ZFΔP to ΔG will be negative, that is, it will favor the transport of cations towards the interior of the cell. So, if the potential difference is maintained, the equilibrium state ΔG = 0 will not correspond to an equimolar concentration of ions on both sides of the membrane (a numerical sketch is given after this list).
If a process with a negative ΔG is coupled to the transport process then the global ΔG will be modified. This situation is common in active transport and is described thus:

ΔG = RT ln(C2/C1) + ΔGb
Where ΔGb corresponds to a favorable thermodynamic reaction, such as the hydrolysis of ATP, or the co-transport of a compound that is moved in the direction of its gradient.
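As a numerical illustration of the expressions above, the following C sketch evaluates ΔG = RT ln(C2/C1) + ZFΔP for a charged and an uncharged solute; the concentrations, membrane potential and temperature are illustrative values chosen for the example, not data from the source:

#include <math.h>
#include <stdio.h>

/* Free energy change (J/mol) for moving one mole of solute from concentration
   c1 into a compartment at concentration c2, across a membrane with potential
   dP (volts), for an ion of charge z. Setting z = 0 recovers the uncharged case. */
#define R_GAS   8.314   /* J / (mol K) */
#define FARADAY 96485.0 /* C / mol */

static double delta_g(double c1, double c2, int z, double dP, double T)
{
    return R_GAS * T * log(c2 / c1) + z * FARADAY * dP;
}

int main(void)
{
    double T = 310.0; /* about 37 degrees C */

    /* Illustrative numbers only: a monovalent cation moving from 5 mM outside
       to 140 mM inside, across a -70 mV membrane potential. */
    printf("cation, uphill in concentration: dG = %.1f kJ/mol\n",
           delta_g(5e-3, 140e-3, +1, -0.070, T) / 1000.0);

    /* An uncharged solute moving down its gradient (10 mM -> 1 mM): dG < 0. */
    printf("uncharged, downhill:             dG = %.1f kJ/mol\n",
           delta_g(10e-3, 1e-3, 0, 0.0, T) / 1000.0);
    return 0;
}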
Transport types
Passive diffusion and active diffusion
As mentioned above, passive diffusion is a spontaneous phenomenon that increases the entropy of a system and decreases the free energy. The transport process is influenced by the characteristics of the transport substance and the nature of the bilayer. The diffusion velocity of a pure phospholipid membrane will depend on:
concentration gradient,
hydrophobicity,
size,
charge, if the molecule has a net charge.
temperature
Active and co-transport
In active transport a solute is moved against a concentration or electrochemical gradient; in doing so the transport proteins involved consume metabolic energy, usually ATP. In primary active transport the hydrolysis of the energy provider (e.g. ATP) takes place directly in order to transport the solute in question, for instance, when the transport proteins are ATPase enzymes. Where the hydrolysis of the energy provider is indirect as is the case in secondary active transport, use is made of the energy stored in an electrochemical gradient. For example, in co-transport use is made of the gradients of certain solutes to transport a target compound against its gradient, causing the dissipation of the solute gradient. It may appear that, in this example, there is no energy use, but hydrolysis of the energy provider is required to establish the gradient of the solute transported along with the target compound. The gradient of the co-transported solute will be generated through the use of certain types of proteins called biochemical pumps.
The discovery of the existence of this type of transporter protein came from the study of the kinetics of cross-membrane molecule transport. For certain solutes it was noted that the transport velocity reached a plateau at a particular concentration above which there was no significant increase in uptake rate, indicating a log curve type response. This was interpreted as showing that transport was mediated by the formation of a substrate-transporter complex, which is conceptually the same as the enzyme-substrate complex of enzyme kinetics. Therefore, each transport protein has an affinity constant for a solute that is equal to the concentration of the solute when the transport velocity is half its maximum value. This is equivalent in the case of an enzyme to the Michaelis–Menten constant.
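By analogy with enzyme kinetics, this saturating behaviour can be written as v = Vmax·S/(Km + S), where Km is the half-saturating concentration; the following C sketch evaluates such a curve for made-up values of Vmax and Km, purely for illustration:

#include <stdio.h>

/* Saturable carrier-mediated transport, analogous to Michaelis-Menten
   kinetics: v = Vmax * S / (Km + S). Km is the solute concentration at
   which the transport velocity is half of Vmax. Values are illustrative. */
static double transport_rate(double s, double vmax, double km)
{
    return vmax * s / (km + s);
}

int main(void)
{
    double vmax = 100.0; /* arbitrary units, e.g. nmol per minute */
    double km = 2.0;     /* mM; half-saturating concentration */
    double concentrations[] = {0.5, 1.0, 2.0, 5.0, 10.0, 50.0};

    for (int i = 0; i < 6; i++) {
        double s = concentrations[i];
        printf("S = %5.1f mM -> v = %5.1f (plateau near Vmax = %.0f)\n",
               s, transport_rate(s, vmax, km), vmax);
    }
    return 0;
}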
Some important features of active transport, in addition to its ability to intervene even against a gradient, its kinetics and the use of ATP, are its high selectivity and ease of selective pharmacological inhibition.
Secondary active transporter proteins
Secondary active transporter proteins move two molecules at the same time: one against a gradient and the other with its gradient. They are distinguished according to the directionality of the two molecules:
antiporter (also called exchanger or counter-transporter): move a molecule against its gradient and at the same time displaces one or more ions along its gradient. The molecules move in opposite directions.
symporter: move a molecule against its gradient while displacing one or more different ions along their gradient. The molecules move in the same direction.
Both can be referred to as co-transporters.
Pumps
A pump is a protein that hydrolyses ATP to transport a particular solute through a membrane, and in doing so generates an electrochemical gradient and membrane potential. This gradient is of interest as an indicator of the state of the cell through parameters such as the Nernst potential. In terms of membrane transport the gradient is of interest as it contributes to decreased system entropy in the co-transport of substances against their gradient.
One of the most important pumps in animal cells is the sodium potassium pump, that operates through the following mechanism:
binding of three Na+ ions to their active sites on the pump which are bound to ATP.
ATP is hydrolyzed leading to phosphorylation of the cytoplasmic side of the pump, this induces a structure change in the protein. The phosphorylation is caused by the transfer of the terminal phosphate group of ATP to a residue of aspartate in the transport protein and the subsequent release of ADP.
the structure change in the pump exposes the Na+ to the exterior. The phosphorylated form of the pump has a low affinity for Na+ ions so they are released.
once the Na+ ions are liberated, the pump binds two molecules of K+ to their respective bonding sites on the extracellular face of the transport protein. This causes the dephosphorylation of the pump, reverting it to its previous conformational state, transporting the K+ ions into the cell.
The unphosphorylated form of the pump has a higher affinity for Na+ ions than K+ ions, so the two bound K+ ions are released into the cytosol. ATP binds, and the process starts again.
Membrane selectivity
As the main characteristic of transport through a biological membrane is its selectivity and its subsequent behavior as a barrier for certain substances, the underlying physiology of the phenomenon has been studied extensively. Investigation into membrane selectivity have classically been divided into those relating to electrolytes and non-electrolytes.
Electrolyte selectivity
The ionic channels define an internal diameter that permits the passage of small ions that is related to various characteristics of the ions that could potentially be transported. As the size of the ion is related to its chemical species, it could be assumed a priori that a channel whose pore diameter was sufficient to allow the passage of one ion would also allow the transfer of others of smaller size, however, this does not occur in the majority of cases. There are two characteristics alongside size that are important in the determination of the selectivity of the membrane pores: the facility for dehydration and the interaction of the ion with the internal charges of the pore.
In order for an ion to pass through a pore it must dissociate itself from the water molecules that cover it in successive layers of solvation. The tendency to dehydrate, or the facility to do this, is related to the size of the ion: larger ions can do it more easily than the smaller ions, so that a pore with weak polar centres will preferentially allow passage of larger ions over the smaller ones.
When the interior of the channel is composed of polar groups from the side chains of the component amino acids, the interaction of a dehydrated ion with these centres can be more important than the facility for dehydration in conferring the specificity of the channel. For example, a channel made up of histidines and arginines, with positively charged groups, will selectively repel ions of the same polarity, but will facilitate the passage of negatively charged ions. Also, in this case, the smallest ions will be able to interact more closely due to the spatial arrangement of the molecule (stericity), which greatly increases the charge-charge interactions and therefore exaggerates the effect.
Non-electrolyte selectivity
Non-electrolytes, substances that generally are hydrophobic and lipophilic, usually pass through the membrane by dissolution in the lipid bilayer, and therefore, by passive diffusion. For those non-electrolytes whose transport through the membrane is mediated by a transport protein the ability to diffuse is, generally, dependent on the partition coefficient K.
Partially charged non-electrolytes, that are more or less polar, such as ethanol, methanol or urea, are able to pass through the membrane through aqueous channels immersed in the membrane. There is no effective regulation mechanism that limits this transport, which indicates an intrinsic vulnerability of the cells to the penetration of these molecules.
Creation of membrane transport proteins
There are several databases which attempt to construct phylogenetic trees detailing the creation of transporter proteins. One such resource is the Transporter Classification database.
See also
Cellular transport
Scramblases
References
Membrane transport | Membrane transport | [
"Chemistry"
] | 2,786 | [
"Membrane biology",
"Molecular biology"
] |
632,030 | https://en.wikipedia.org/wiki/Allelopathy | Allelopathy is a biological phenomenon by which an organism produces one or more biochemicals that influence the germination, growth, survival, and reproduction of other organisms. These biochemicals are known as allelochemicals and can have beneficial (positive allelopathy) or detrimental (negative allelopathy) effects on the target organisms and the community. Allelopathy is often used narrowly to describe chemically-mediated competition between plants; however, it is sometimes defined more broadly as chemically-mediated competition between any type of organisms. The original concept developed by Hans Molisch in 1937 seemed focused only on interactions between plants, between microorganisms and between microorganisms and plants. Allelochemicals are a subset of secondary metabolites, which are not directly required for metabolism (i.e. growth, development and reproduction) of the allelopathic organism.
Allelopathic interactions are an important factor in determining species distribution and abundance within plant communities, and are also thought to be important in the success of many invasive plants. For specific examples, see black walnut (Juglans nigra), tree of heaven (Ailanthus altissima), black crowberry (Empetrum nigrum), spotted knapweed (Centaurea stoebe), garlic mustard (Alliaria petiolata), Casuarina/Allocasuarina spp., and nutsedge.
It can often be difficult in practice to distinguish allelopathy from resource competition. While the former is caused by the addition of a harmful chemical agent to the environment, the latter is caused by the removal of essential resources (nutrients, light, water, etc.). Often, both mechanisms can act simultaneously. Moreover, some allelochemicals may function by reducing nutrient availability. Further confounding the issue, the production of allelochemicals can itself be affected by environmental factors such as nutrient availability, temperature and pH. Today, most ecologists recognize the existence of allelopathy, however many particular cases remain controversial. Furthermore, the specific modes of action of allelochemicals on different organisms are largely open to speculation and investigation.
History
The term allelopathy, from the Greek allēlōn ("one another") and pathos ("suffering"), together meaning "mutual harm" or "suffering", was first used in 1937 by the Austrian professor Hans Molisch in the book Der Einfluss einer Pflanze auf die andere - Allelopathie (The Effect of Plants on Each Other - Allelopathy) published in German. He used the term to describe biochemical interactions by means of which a plant inhibits the growth of neighbouring plants. In 1971, Whittaker and Feeny published a review in the journal Science, which proposed an expanded definition of allelochemical interactions that would incorporate all chemical interactions among organisms. In 1984, Elroy Leon Rice in his monograph on allelopathy enlarged the definition to include all direct positive or negative effects of a plant on another plant or on micro-organisms by the liberation of biochemicals into the natural environment. Over the next ten years, the term was used by other researchers to describe broader chemical interactions between organisms, and by 1996 the International Allelopathy Society (IAS) defined allelopathy as "Any process involving secondary metabolites produced by plants, algae, bacteria and fungi that influences the growth and development of agriculture and biological systems." In more recent times, plant researchers have begun to switch back to the original definition of substances that are produced by one plant that inhibit another plant. Confusing the issue more, zoologists have borrowed the term to describe chemical interactions between invertebrates like corals and sponges.
Long before the term allelopathy was used, people observed the negative effects that one plant could have on another. Theophrastus, who lived around 300 BC noticed the inhibitory effects of pigweed on alfalfa. In China around the first century CE, the author of Shennong Ben Cao Jing, a book on agriculture and medicinal plants, described 267 plants that had pesticidal abilities, including those with allelopathic effects. In 1832, the Swiss botanist De Candolle suggested that crop plant exudates were responsible for an agriculture problem called soil sickness.
Allelopathy is not universally accepted among ecologists. Many have argued that its effects cannot be distinguished from the exploitation competition that occurs when two (or more) organisms attempt to use the same limited resource, to the detriment of one or both. In the 1970s, great effort went into distinguishing competitive and allelopathic effects by some researchers, while in the 1990s others argued that the effects were often interdependent and could not readily be distinguished. However, by 1994, D. L. Liu and J. V. Lowett at the Department of Agronomy and Soil Science, University of New England in Armidale, New South Wales, Australia, wrote two papers in the Journal of Chemical Ecology that developed methods to separate the allelochemical effects from other competitive effects, using barley plants and inventing a process to examine the allelochemicals directly. In 1994, M.C. Nilsson at the Swedish University of Agricultural Sciences in Umeå showed in a field study that allelopathy exerted by Empetrum hermaphroditum reduced growth of Scots pine seedlings by ~ 40%, and that below-ground resource competition by E. hermaphroditum accounted for the remaining growth reduction. For this work she inserted PVC-tubes into the ground to reduce below-ground competition or added charcoal to soil surface to reduce the impact of allelopathy, as well as a treatment combining the two methods. However, the use of activated carbon to make inferences about allelopathy has itself been criticized because of the potential for the charcoal to directly affect plant growth by altering nutrient availability.
Some high profile work on allelopathy has been mired in controversy. For example, the discovery that (−)-catechin was purportedly responsible for the allelopathic effects of the invasive weed Centaurea stoebe was greeted with much fanfare after being published in Science in 2003. One scientist, Dr. Alastair Fitter, was quoted as saying that this study was "so convincing that it will 'now place allelopathy firmly back on center stage.'" However, many of the key papers associated with these findings were later retracted or majorly corrected, after it was found that they contained fabricated data showing unnaturally high levels of catechin in soils surrounding C. stoebe. Subsequent studies from the original lab have not been able to replicate the results from these retracted studies, nor have most independent studies conducted in other laboratories. Thus, it is doubtful whether the levels of (−)-catechin found in soils are high enough to affect competition with neighboring plants. The proposed mechanism of action (acidification of the cytoplasm through oxidative damage) has also been criticized, on the basis that (−)-catechin is actually an antioxidant.
Examples
Plants
Many invasive plant species interfere with native plants through allelopathy. A famous case of purported allelopathy is in desert shrubs. One of the most widely known early examples was Salvia leucophylla, because it was on the cover of the journal Science in 1964. Bare zones around the shrubs were hypothesized to be caused by volatile terpenes emitted by the shrubs. However, like many allelopathy studies, it was based on artificial lab experiments and unwarranted extrapolations to natural ecosystems. In 1970, Science published a study where caging the shrubs to exclude rodents and birds allowed grass to grow in the bare zones.
A detailed history of this story can be found in Halsey 2004.
Garlic mustard is another invasive plant species that may owe its success partly to allelopathy. Its success in North American temperate forests may be partly due to its excretion of glucosinolates like sinigrin that can interfere with mutualisms between native tree roots and their mycorrhizal fungi.
Allelopathy has been shown to play a crucial role in forests, influencing the composition of the vegetation growth, and also provides an explanation for the patterns of forest regeneration. The black walnut (Juglans nigra) produces the allelochemical juglone, which affects some species greatly while others not at all. However, most of the evidence for allelopathic effects of juglone come from laboratory assays and it thus remains controversial to what extent juglone affects the growth of competitors under field conditions. The leaf litter and root exudates of some Eucalyptus species are allelopathic for certain soil microbes and plant species. The tree of heaven, Ailanthus altissima, produces allelochemicals in its roots that inhibit the growth of many plants. Spotted knapweed (Centaurea) is considered an invasive plant that also utilizes allelopathy.
Another example of allelopathy is seen in Leucaena leucocephala, known as the miracle tree. This plant contains toxic amino acids that inhibit the growth of other plants but not that of its own species. Different crops react differently to these allelochemicals: in the presence of L. leucocephala, wheat yield decreases while rice yield increases.
Capsaicin is an allelochemical found in many peppers that are cultivated by humans as a spice/food source. It is considered an allelochemical because it is not required for plant growth and survival, but instead deters herbivores and prevents other plants from sprouting in its immediate vicinity. Among the plants it has been studied on are grasses, lettuce, and alfalfa, and on average, it will inhibit the growth of these plants by about 50%. Capsaicin has been shown to deter herbivores and to reduce the performance of certain parasites. Herbivores such as caterpillars show decreased development when fed a diet high in capsaicin.
Applications
Allelochemicals are a useful tool in sustainable farming due to their ability to control weeds. The possible application of allelopathy in agriculture is the subject of much research. Using allelochemical-producing plants in agriculture results in significant suppression of weeds and various pests. Some plants will even reduce the germination rate of other plants by 50%. Current research is focused on the effects of weeds on crops, crops on weeds, and crops on crops. This research furthers the possibility of using allelochemicals as growth regulators and natural herbicides, to promote sustainable agriculture. Agricultural practices may be enhanced through the utilization of allelochemical-producing plants. When used correctly, these plants can provide pesticide, herbicide, and antimicrobial qualities to crops. A number of such allelochemicals are commercially available or in the process of large-scale manufacture. For example, leptospermone is an allelochemical in lemon bottlebrush (Callistemon citrinus). Although it was found to be too weak as a commercial herbicide, a chemical analog of it, mesotrione (tradename Callisto), was found to be effective. It is sold to control broadleaf weeds in corn but also seems to be an effective control for crabgrass in lawns. Sheeja (1993) reported the allelopathic interaction of the weeds Chromolaena odorata (Eupatorium odoratum) and Lantana camara on selected major crops.
Many crop cultivars show strong allelopathic properties, of which rice (Oryza sativa) has been most studied. Rice allelopathy depends on variety and origin: Japonica rice is more allelopathic than Indica and Japonica-Indica hybrids. More recently, a critical review on rice allelopathy and the possibility for weed management reported that allelopathic characteristics in rice are quantitatively inherited and several allelopathy-involved traits have been identified. The use of allelochemicals in agriculture provides for a more environmentally friendly approach to weed control, as they do not leave behind residues. Currently used pesticides and herbicides leak into waterways and result in unsafe water quality. This problem could be eliminated or significantly reduced by using allelochemicals instead of harsh herbicides. The use of cover crops also results in less soil erosion and lessens the need for nitrogen-heavy fertilizers.
See also
Forest pathology
Allomone
Phytochemical
Semiochemical
References
Further reading
anon. (Inderjit). 2002. Multifaceted approach to study allelochemicals in an ecosystem. In: Allelopathy, from Molecules to Ecosystems, M.J. Reigosa and N. Pedrol, Eds. Science Publishers, Enfield, New Hampshire.
Bhowmick N, Mani A, Hayat A (2016), "Allelopathic effect of litchi leaf extract on seed germination of Pea and lafa", Journal of Agricultural Engineering and Food Technology, 3 (3): 233-235.
Einhellig, F.A. 2002. The physiology of allelochemical action: clues and views. In: Allelopathy, from Molecules to Ecosystems, M.J. Reigosa and N. Pedrol, Eds. Science Publishers, Enfield, New Hampshire.
Harper, J. L. 1977. Population Biology of Plants. Academic Press, London.
Jose S. 2002. Black walnut allelopathy: current state of the science. In: Chemical Ecology of Plants: Allelopathy in aquatic and terrestrial ecosystems, A. U. Mallik and anon. (Inderjit), Eds. Birkhauser Verlag, Basel, Switzerland.
Mallik, A. U. and anon. (Inderjit). 2002. Problems and prospects in the study of plant allelochemicals: a brief introduction. In: Chemical Ecology of Plants: Allelopathy in aquatic and terrestrial ecosystems, Mallik, A.U. and anon., Eds. Birkhauser Verlag, Basel, Switzerland.
Reigosa, M. J., N. Pedrol, A. M. Sanchez-Moreiras, and L. Gonzales. 2002. Stress and allelopathy. In: Allelopathy, from Molecules to Ecosystems, M.J. Reigosa and N. Pedrol, Eds. Science Publishers, Enfield, New Hampshire.
Rice, E.L. 1974. Allelopathy. Academic Press, New York.
Sheeja B.D. 1993. Allelopathic effects of Eupatorium odoratum L. and Lantana camara, L. on four major crops. M. Phil dissertation submitted to Manonmaniam Sundaranar University, Tirunelveli.
Webster 1983. Webster's Ninth New Collegiate Dictionary. Merriam-Webster, Inc., Springfield, Mass.
Willis, R. J. 1999. Australian studies on allelopathy in Eucalyptus: a review. In: Principles and practices in plant ecology: Allelochemical interactions, anon. (Inderjit), K.M.M. Dakshini, and C.L. Foy, Eds. CRC Press, and Boca Raton, FL.
External links
Allelopathy Journal
International Allelopathy Society
Botany
Chemical ecology | Allelopathy | [
"Chemistry",
"Biology"
] | 3,144 | [
"Biochemistry",
"Chemical ecology",
"Plants",
"Botany"
] |
632,224 | https://en.wikipedia.org/wiki/Compare-and-swap | In computer science, compare-and-swap (CAS) is an atomic instruction used in multithreading to achieve synchronization. It compares the contents of a memory location with a given value and, only if they are the same, modifies the contents of that memory location to a new given value. This is done as a single atomic operation. The atomicity guarantees that the new value is calculated based on up-to-date information; if the value had been updated by another thread in the meantime, the write would fail. The result of the operation must indicate whether it performed the substitution; this can be done either with a simple boolean response (this variant is often called compare-and-set), or by returning the value read from the memory location (not the value written to it), thus "swapping" the read and written values.
Overview
A compare-and-swap operation is an atomic version of the following pseudocode, where denotes access through a pointer:
function cas(p: pointer to int, old: int, new: int) is
if *p ≠ old
return false
*p ← new
return true
This operation is used to implement synchronization primitives like semaphores and mutexes, as well as more sophisticated lock-free and wait-free algorithms. Maurice Herlihy (1991) proved that CAS can implement more of these algorithms than atomic read, write, or fetch-and-add, and assuming a fairly large amount of memory, that it can implement all of them. CAS is equivalent to load-link/store-conditional, in the sense that a constant number of invocations of either primitive can be used to implement the other one in a wait-free manner.
Algorithms built around CAS typically read some key memory location and remember the old value. Based on that old value, they compute some new value. Then they try to swap in the new value using CAS, where the comparison checks for the location still being equal to the old value. If CAS indicates that the attempt has failed, it has to be repeated from the beginning: the location is re-read, a new value is re-computed and the CAS is tried again. Instead of immediately retrying after a CAS operation fails, researchers have found that total system performance can be improved in multiprocessor systems—where many threads constantly update some particular shared variable—if threads that see their CAS fail use exponential backoff—in other words, wait a little before retrying the CAS.
Example application: atomic adder
As an example use case of compare-and-swap, here is an algorithm for atomically incrementing or decrementing an integer. This is useful in a variety of applications that use counters. The function add performs the action *p ← *p + a, atomically (again denoting pointer indirection by *, as in C) and returns the final value stored in the counter. Unlike in the cas pseudocode above, there is no requirement that any sequence of operations is atomic except for cas.
function add(p: pointer to int, a: int) returns int
done ← false
while not done
value ← *p // Even this operation doesn't need to be atomic.
done ← cas(p, value, value + a)
return value + a
In this algorithm, if the value of *p changes after (or while!) it is fetched and before the CAS does the store, CAS will notice and report this fact, causing the algorithm to retry.
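The same retry loop can be expressed against the standard C11 <stdatomic.h> interface. The sketch below is only one way to write it, mirroring the add pseudocode above; the choice of the weak compare-exchange (which may fail spuriously) is an assumption that works here because the surrounding loop retries anyway.

#include <stdatomic.h>

/* Atomically performs *p ← *p + a and returns the value stored.
   On failure, atomic_compare_exchange_weak writes the current
   contents of *p back into 'value', so the next iteration simply
   recomputes value + a from fresh data. */
int add(atomic_int *p, int a)
{
    int value = atomic_load(p);
    while (!atomic_compare_exchange_weak(p, &value, value + a)) {
        /* retry with the freshly observed value */
    }
    return value + a;
}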
ABA problem
Some CAS-based algorithms are affected by and must handle the problem of a false positive match, or the ABA problem. It is possible that between the time the old value is read and the time CAS is attempted, some other processors or threads change the memory location two or more times such that it acquires a bit pattern which matches the old value. The problem arises if this new bit pattern, which looks exactly like the old value, has a different meaning: for instance, it could be a recycled address, or a wrapped version counter.
A general solution to this is to use a double-length CAS (DCAS). E.g., on a 32-bit system, a 64-bit CAS can be used. The second half is used to hold a counter. The compare part of the operation compares the previously read value of the pointer and the counter, with the current pointer and counter. If they match, the swap occurs - the new value is written - but the new value has an incremented counter. This means that if ABA has occurred, although the pointer value will be the same, the counter is exceedingly unlikely to be the same (for a 32-bit value, a multiple of 2^32 operations would have to have occurred, causing the counter to wrap and at that moment, the pointer value would have to also by chance be the same).
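A rough C11 sketch of this idea packs a 32-bit value (for example a freelist index) and a 32-bit version counter into a single 64-bit word, so that one wide compare-and-swap checks and updates both. The packing layout and names are illustrative assumptions, not a production lock-free structure, and whether a 64-bit atomic is actually lock-free on a given 32-bit target depends on the platform.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Lower 32 bits hold the value, upper 32 bits hold the version counter. */
static uint64_t pack(uint32_t value, uint32_t counter)
{
    return ((uint64_t)counter << 32) | value;
}

/* Succeeds only if neither the value nor the counter has changed since
   'expected' was read; the counter is incremented on every successful
   update, so an A->B->A change of the value alone no longer matches. */
bool versioned_cas(_Atomic uint64_t *slot, uint64_t expected, uint32_t new_value)
{
    uint32_t counter = (uint32_t)(expected >> 32);
    uint64_t desired = pack(new_value, counter + 1);
    return atomic_compare_exchange_strong(slot, &expected, desired);
}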
An alternative form of this (useful on CPUs which lack DCAS) is to use an index into a freelist, rather than a full pointer, e.g. with a 32-bit CAS, use a 16-bit index and a 16-bit counter. However, the reduced counter lengths begin to make ABA possible at modern CPU speeds.
One simple technique which helps alleviate this problem is to store an ABA counter in each data structure element, rather than using a single ABA counter for the whole data structure.
A more complicated but more effective solution is to implement safe memory reclamation (SMR). This is in effect lock-free garbage collection. The advantage of using SMR is the assurance a given pointer will exist only once at any one time in the data structure, thus the ABA problem is completely solved. (Without SMR, something like a freelist will be in use, to ensure that all data elements can be accessed safely (no memory access violations) even when they are no longer present in the data structure. With SMR, only elements actually currently in the data structure will be accessed).
Costs and benefits
CAS, and other atomic instructions, are sometimes thought to be unnecessary in uniprocessor systems, because the atomicity of any sequence of instructions can be achieved by disabling interrupts while executing it. However, disabling interrupts has numerous downsides. For example, code that is allowed to do so must be trusted not to be malicious and monopolize the CPU, as well as to be correct and not accidentally hang the machine in an infinite loop or page fault. Further, disabling interrupts is often deemed too expensive to be practical. Thus, even programs only intended to run on uniprocessor machines will benefit from atomic instructions, as in the case of Linux's futexes.
In multiprocessor systems, it is usually impossible to disable interrupts on all processors at the same time. Even if it were possible, two or more processors could be attempting to access the same semaphore's memory at the same time, and thus atomicity would not be achieved. The compare-and-swap instruction allows any processor to atomically test and modify a memory location, preventing such multiple-processor collisions.
On server-grade multi-processor architectures of the 2010s, compare-and-swap is cheap relative to a simple load that is not served from cache. A 2013 paper points out that a CAS is only 1.15 times more expensive than a non-cached load on Intel Xeon (Westmere-EX) and 1.35 times on AMD Opteron (Magny-Cours).
Implementations
Compare-and-swap (and compare-and-swap-double) has been an integral part of the IBM 370 (and all successor) architectures since 1970. The operating systems that run on these architectures make extensive use of this instruction to facilitate process (i.e., system and user tasks) and processor (i.e., central processors) parallelism while eliminating, to the greatest degree possible, the "disabled spinlocks" which had been employed in earlier IBM operating systems. Similarly, the use of test-and-set was also eliminated. In these operating systems, new units of work may be instantiated "globally", into the global service priority list, or "locally", into the local service priority list, by the execution of a single compare-and-swap instruction. This substantially improved the responsiveness of these operating systems.
In the x86 (since 80486) and Itanium architectures this is implemented as the compare and exchange (CMPXCHG) instruction (on a multiprocessor the LOCK prefix must be used).
As of 2013, most multiprocessor architectures support CAS in hardware, and the compare-and-swap operation is the most popular synchronization primitive for implementing both lock-based and non-blocking concurrent data structures.
The atomic counter and atomic bitmask operations in the Linux kernel typically use a compare-and-swap instruction in their implementation.
The SPARC-V8 and PA-RISC architectures are two of the very few recent architectures that do not support CAS in hardware; the Linux port to these architectures uses a spinlock.
Implementation in C
Many C compilers support using compare-and-swap either with the C11 <stdatomic.h> functions, or some non-standard C extension of that particular C compiler, or by calling a function written directly in assembly language using the compare-and-swap instruction.
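As one example of the non-standard route, GCC and Clang provide the older __sync family of builtins. The snippet below is a minimal sketch of the value-returning variant; the function and variable names are illustrative rather than taken from any particular code base.

static int shared;

/* Attempts to change 'shared' from 'expected' to 'desired'.
   __sync_val_compare_and_swap returns the value 'shared' held before
   the attempt, so the caller knows the swap succeeded exactly when the
   return value equals 'expected'. */
int try_claim(int expected, int desired)
{
    return __sync_val_compare_and_swap(&shared, expected, desired);
}

Newer code would typically prefer the C11 atomics or the __atomic builtins, but the shape of the call is the same.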
The following C function shows the basic behavior of a compare-and-swap variant that returns the old value of the specified memory location; however, this version does not provide the crucial guarantees of atomicity that a real compare-and-swap operation would:
int compare_and_swap(int* reg, int oldval, int newval)
{
    ATOMIC();                      /* placeholder: begin an uninterruptible region */
    int old_reg_val = *reg;        /* read the current value */
    if (old_reg_val == oldval)
        *reg = newval;             /* store only if the location still held oldval */
    END_ATOMIC();                  /* placeholder: end the uninterruptible region */
    return old_reg_val;            /* the value read, whether or not the swap happened */
}
old_reg_val is always returned; by testing it against oldval after the call, the caller can tell whether the swap succeeded or whether another process succeeded in a competing compare_and_swap and changed the value of reg from oldval first.
For example, an election protocol can be implemented such that every process checks the result of compare_and_swap against its own PID (= newval). The winning process finds the compare_and_swap returning the initial non-PID value (e.g., zero). For the losers it will return the winning PID.
This is the logic in the Intel Software Manual Vol 2A:
bool compare_and_swap(int *accum, int *dest, int newval)
{
    if (*accum == *dest) {
        *dest = newval;    /* values matched: the destination takes the new value */
        return true;
    } else {
        *accum = *dest;    /* no match: report the current value back to the caller */
        return false;
    }
}
Extensions
Since CAS operates on a single pointer-sized memory location, while most lock-free and wait-free algorithms need to modify multiple locations, several extensions have been implemented.
Double compare-and-swap (DCAS) compares two unrelated memory locations with two expected values, and if they're equal, sets both locations to new values. The generalization of DCAS to multiple (non-adjacent) words is called MCAS or CASN. DCAS and MCAS are of practical interest in the convenient (concurrent) implementation of some data structures like deques or binary search trees. DCAS and MCAS may, however, be implemented using the more expressive hardware transactional memory present in some recent processors such as IBM POWER8 or in Intel processors supporting Transactional Synchronization Extensions (TSX).
Double-wide compare-and-swap operates on two adjacent pointer-sized locations (or, equivalently, one location twice as big as a pointer). On later x86 processors, the CMPXCHG8B and CMPXCHG16B instructions serve this role, although early 64-bit AMD CPUs did not support CMPXCHG16B (modern AMD CPUs do). Some Intel motherboards from the Core 2 era also hamper its use, even though the processors support it. These issues came into the spotlight at the launch of Windows 8.1 because it required hardware support for CMPXCHG16B.
Single compare, double swap compares one pointer but writes two. The Itanium's cmp8xchg16 instruction implements this, where the two written pointers are adjacent.
Multi-word compare-and-swap is a generalisation of normal compare-and-swap. It can be used to atomically swap an arbitrary number of arbitrarily located memory locations. Usually, multi-word compare-and-swap is implemented in software using normal double-wide compare-and-swap operations. The drawback of this approach is a lack of scalability.
Persistent compare-and-swap is a combination of a persist operation and the normal compare-and-swap. It can be used to atomically compare-and-swap a value and then persist the value, so there is no gap between concurrent visibility and crash visibility. The extension solves the read-of-non-persistent-write problem.
See also
References
External links
Basic algorithms implemented using CAS
2003 discussion "Lock-Free using cmpxchg8b..." on Intel x86, with pointers to various papers and source code
Implementations of CAS
AIX compare_and_swap Kernel Service
Java package implements 'compareAndSet' in various classes
.NET Class methods Interlocked::CompareExchange
Windows API InterlockedCompareExchange
Computer arithmetic
Concurrency control | Compare-and-swap | [
"Mathematics"
] | 2,836 | [
"Computer arithmetic",
"Arithmetic"
] |
632,312 | https://en.wikipedia.org/wiki/Storm%20cellar | A storm shelter or storm cellar is a type of underground bunker designed to protect the occupants from severe weather, particularly tornadoes. They are most frequently seen in the Midwest ("Tornado Alley") and Southeastern ("Dixie Alley") United States where tornadoes are generally frequent and the low water table permits underground livings.
Average storm shelter
An average storm cellar for a single family is built close enough to the home to allow instant access in an emergency, but not so close that the house could tumble on the door during a storm, trapping the occupants inside. This is also the reason the main door on most storm cellars is mounted at an angle rather than perpendicular with the ground. An angled door allows for debris to blow up and over the door, or sand to slide off, without blocking it, and the angle also reduces the force necessary to open the door if rubble has piled up on top. The floor area is generally small, with an arched roof like that of a Quonset hut, but entirely underground. In most cases the entire structure is built of blocks faced with cement, with rebar run through the blocks for reinforcement against the storm; this makes it nearly impossible for the walls to collapse. New ones are sometimes made of septic tanks that have been modified with a steel door and vents. Some new shelters are rotationally molded from polyethylene.
Most storm cellars are accessible by a covered stairwell, and at the opposite end of the structure there can be conduits for air that reach the surface, and perhaps a small window to serve as an emergency exit and also to provide some light. Storm cellars, when connected to the house, may potentially compromise security.
Fully enclosed underground storm shelters offer superior tornado protection to that of a traditional basement (cellar) because they provide separate overhead cover without the risk of occupants being trapped or killed by collapsing rubble from above. For this reason they also provide the only reliable form of shelter against "violent" (EF4 and EF5) tornadoes which tend to rip the house from its foundation, removing the overhead cover which was protecting the occupant.
There are several different styles of storm cellar: the generic underground storm/tornado cellar, also called a storm or tornado shelter, and the newer above-ground safe room. Strictly speaking, a "cellar" is an underground unit, but because the purpose of a storm cellar is protection from high-wind storms, safe rooms are relevant here as well. There are two basic styles of underground storm cellar: the "hillside" or "embankment" type and the "flat ground" type.
One other style of shelter is the under garage. While similar to other underground shelters, its main difference is that it is installed in a garage rather than outside. Having it installed in the garage allows access to it without having to go outside during a storm. It is sometimes not an option to have a shelter installed outside either due to insufficient space, or local ordinances.
Hillside/embankment shelters
Hillside or embankment models are usually installed in one of two ways. It can be installed in an existing hill/embankment or dirt is built up around a freestanding unit, forming a hill around it. The door can be set at an angle or vertically. There can be steps leading into the unit, or it can be installed to where the floor is level with the ground outside. The embankment storm cellar can be made from concrete, steel, fiberglass, or any other structurally sound material or composite and is usually installed in a hill or embankment, leaving only the door exposed. In some situations, they can hold an entire neighborhood or town as with a community shelter. More often, they are built to hold one or two families, specified as a residential shelter. All underground "storm or tornado" shelters must be properly anchored.
Above ground shelters
Above ground shelters are used in many areas of the country and by a wide variety of homeowners and businesses. Groundwater tables may make it impossible to install or build a shelter below ground, the elderly or people with limited mobility may be unable to access a below ground shelter, or people may have significant phobias pertaining to below ground sheltering. FEMA P-320, Taking Shelter from the Storm: Building a Safe Room for Your Home or Small Business (2014) and the ICC/NSSA Standard for the Design and Construction of Storm Shelters provide engineering and testing requirements to ensure that above ground shelters manufactured to the published specifications will withstand winds of EF5 tornado strength. Above ground shelters may be built of different materials such as steel reinforced concrete or 1/8" 10 ga. hot rolled steel and may be installed inside a home, garage, or outbuilding, or as a stand-alone unit. These types of shelters are typically prefabricated and installed on a home site or commercial location. Walls can be provided which form a deflector baffle entry so that the path of the storm debris must touch two impact resistant surfaces before it penetrates into the protected area of the occupants.
Wind engineering specialists from Texas Tech University's National Wind Institute have done extensive research that concludes that sheltering in an above ground storm shelter that meets the engineering criteria outlined in FEMA Pub. 320 and 361 and ICC/NSSA Standard for the Design and Construction of Storm Shelters is as safe as seeking below ground shelter during massive EF4 and EF5 tornadoes. TTU engineer Joseph Dannemiller presented the research findings at a TEDxTexasTechUniversity symposium in February 2014.
Below-ground shelters
The below-ground shelters are designed so that the door is flat with the ground and can be made from any one of the materials previously described. This unit is put in a hole deep enough to cover the bottom section, and then the excavated dirt is filled in around the top and packed down. Storm shelters must be designed, built, tested, and installed properly for them to meet any of the US FEMA-320, FEMA-361, ICC-500, NPCTS (National Performance Criteria for Tornado Shelters), or ICC/NSSA Standards.
Geolocation services
Many storm shelter manufacturers include geolocation services or incorporate GPS technologies to assist in ensuring recovery from the shelter after a storm or other catastrophic event. In addition, shelter owners may opt to incorporate their own geolocation services in their shelter. Shelter owners can provide their shelter's GPS coordinates to an emergency response center that is linked to a nationwide severe weather notification system. If a storm occurs, the emergency response center places a phone call to the shelter owner and then secondary contacts, lastly contacting local emergency response if unable to contact the shelter owner.
Additional uses
Functionally underground bunkers, storm cellars are readily provisioned as bomb shelters and/or fallout shelters (although they are not usually dug as deeply or equipped with filtered ventilation, respectively). In addition, since their underground construction makes them steadily cool and dark, storm cellars on farmsteads in the Midwest and elsewhere have traditionally been used as root cellars to store seasonal canned goods for consumption during the winter.
See also
Storm door
Storm drain
Storm room
Storm windows
Tornado preparedness
Hurricane preparedness
References
Further reading
Agricultural buildings
Civil defense
Farms
Rooms
Security
Subterranea (geography) | Storm cellar | [
"Engineering"
] | 1,477 | [
"Rooms",
"Architecture"
] |
632,331 | https://en.wikipedia.org/wiki/Transit%20of%20Venus | A transit of Venus takes place when Venus passes directly between the Sun and the Earth (or any other superior planet), becoming visible against (and hence obscuring a small portion of) the solar disk. During a transit, Venus is visible as a small black circle moving across the face of the Sun.
Transits of Venus recur periodically. A pair of transits takes place eight years apart in December (Gregorian calendar) followed by a gap of 121.5 years, before another pair occurs eight years apart in June, followed by another gap, of 105.5 years. The dates advance by about two days per 243-year cycle. The periodicity is a reflection of the fact that the orbital periods of Earth and Venus are close to 8:13 and 243:395 commensurabilities. The last pair of transits occurred on 8 June 2004 and 5–6 June 2012. The next pair of transits will occur on 10–11 December 2117 and 8 December 2125.
Transits of Venus were in the past used to determine the size of the Solar System. The 2012 transit has provided research opportunities, particularly in the refinement of techniques to be used in the search for exoplanets.
Conjunctions
The orbit of Venus has an inclination of 3.39° relative to that of the Earth, and so passes under (or over) the Sun when viewed from the Earth. A transit occurs when Venus reaches conjunction with the Sun whilst also passing through the Earth's orbital plane, and passes directly across the face of the Sun. Sequences of transits usually repeat every 243 years, after which Venus and Earth have returned to nearly the same point in their respective orbits. During the Earth's 243 sidereal orbital periods, which total 88,757.3 days, Venus completes 395 sidereal orbital periods of 224.701 days each, which is equal to 88,756.9 Earth days. This period of time corresponds to 152 synodic periods of Venus.
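The near-commensurability can be checked with back-of-the-envelope arithmetic. The following sketch simply multiplies out the two orbit counts; the sidereal-year figure of 365.25636 days is an assumed standard value rather than something stated in this section.

#include <stdio.h>

int main(void)
{
    const double earth_year = 365.25636;  /* Earth's sidereal year in days (assumed) */
    const double venus_year = 224.701;    /* Venus's sidereal period in days */

    double earth_total = 243.0 * earth_year;  /* about 88,757.3 days */
    double venus_total = 395.0 * venus_year;  /* about 88,756.9 days */

    printf("243 Earth orbits: %.1f days\n", earth_total);
    printf("395 Venus orbits: %.1f days\n", venus_total);
    printf("mismatch:         %.1f days\n", earth_total - venus_total);
    return 0;
}

The mismatch of roughly 0.4 days is the slight offset that gradually shifts the transit dates and eventually ends each 243-year series.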
A pair of transits takes place eight years apart in December, followed by a gap of 121.5 years, before another pair occurs eight years apart in June, followed by another gap, of 105.5 years. Other patterns are possible within the 243-year cycle, because of the slight mismatch between the times when the Earth and Venus arrive at the point of conjunction. Prior to 1518, the pattern of transits was 8, 113.5, and 121.5 years, and the eight inter-transit gaps before the AD 546 transit were 121.5 years apart. The current pattern will continue until 2846, when it will be replaced by a pattern of 105.5, 129.5, and 8 years. Thus, the 243-year cycle is relatively stable, but the number of transits and their timing within the cycle vary over time. Since the 243:395 Earth:Venus commensurability is only approximate, there are different sequences of transits occurring 243 years apart, each extending for several thousand years, which are eventually replaced by other sequences. For instance, there is a series which ended in 541 BC, and the series which includes 2117 only started in AD 1631.
History of observation of the transits
Ancient Indian, Greek, Egyptian, Babylonian, and Chinese observers knew of Venus and recorded the planet's motions. Pythagoras is credited with realizing that the so-called morning and evening stars were really both the planet Venus. There is no evidence that any of these cultures observed planetary transits. It has been proposed that frescoes found at the Maya site at Mayapan may contain a pictorial representation of the 12th or 13th century transits.
The Persian polymath Avicenna claimed to have observed Venus as a spot on the Sun. There was a transit on 24 May 1032, but Avicenna did not give the date of his observation, and modern scholars have questioned whether he could have observed the transit from his location; he may have mistaken a sunspot for Venus. He used his alleged transit observation to help establish that Venus was, at least sometimes, below the Sun in Ptolemaic cosmology, i.e., the sphere of Venus comes before the sphere of the Sun when moving out from the Earth in the then prevailing geocentric model.
1631 and 1639 transits
The German astronomer Johannes Kepler predicted the 1631 transit in 1627, but his methods were not sufficiently accurate to predict that it could not be seen throughout most of Europe. As a consequence, astronomers were unable to use his prediction to observe the event.
The first recorded observation of a transit of Venus was made by the English astronomer Jeremiah Horrocks from his home at Carr House in Much Hoole, near Preston, on 4 December 1639 (24 November O.S.). His friend William Crabtree observed the transit from nearby Broughton. Kepler had predicted transits in 1631 and 1761 and a near miss in 1639. Horrocks corrected Kepler's calculation for the orbit of Venus, realized that transits of Venus would occur in pairs 8 years apart, and so predicted the transit of 1639. Although he was uncertain of the exact time, he calculated that the transit was to begin at approximately 15:00. Horrocks focused the image of the Sun through a simple telescope and onto paper, where he could observe the Sun without damaging his eyesight. After waiting for most of the day, he eventually saw the transit when clouds obscuring the Sun cleared at about 15:15, half an hour before sunset. His observations allowed him to make a well-informed guess for the diameter of Venus and an estimate of the mean distance between the Earth and the Sun. His observations were not published until 1661, well after Horrocks's death. Horrocks based his calculation on the (false) presumption that each planet's size was proportional to its rank from the Sun, not on the parallax effect as used in the 1761, 1769, and later experiments.
1761 transit
In 1663, the Scottish mathematician James Gregory had suggested in his Optica Promota that observations of a transit of Mercury, at widely spaced points on the surface of the Earth, could be used to calculate the solar parallax, and hence the astronomical unit by means of triangulation. Aware of this, the English astronomer Edmond Halley made observations of such a transit on 28 October O.S. 1677 from the island of Saint Helena, but was disappointed to find that only Richard Towneley in Burnley, Lancashire had made another accurate observation of the event, whilst Gallet, at Avignon, had simply recorded that it had occurred. Halley was not satisfied that the resulting calculation of the solar parallax of 45" was accurate.
In a paper published in 1691, and a more refined one in 1716, Halley proposed that more accurate calculations could be made using measurements of a transit of Venus, although the next such event was not due until 1761 (6 June N.S., 26 May O.S.). In an attempt to observe the first transit of the pair, astronomers from Britain (William Wales and Captain James Cook), Austria (Maximilian Hell), and France (Jean-Baptiste Chappe d'Auteroche and Guillaume Le Gentil) took part in expeditions to places that included Siberia, Newfoundland, and Madagascar. Most of them observed at least part of the transit. Jeremiah Dixon and Charles Mason succeeded in observing the transit at the Cape of Good Hope, but Nevil Maskelyne and Robert Waddington were less successful on Saint Helena, although they put their voyage to good use by trialling the lunar-distance method of finding longitude.
Venus was generally thought to possess an atmosphere prior to the transit of 1761, but the possibility that it could be detected during a transit seems not to have been considered. The discovery of the planet’s atmosphere has long been attributed to the Russian scientist Mikhail Lomonosov, after he observed the 1761 transit from the Imperial Academy of Sciences of St. Petersburg. The attribution to Lomonosov seems to have arisen from comments made in 1966 by the astronomy writer Willy Ley, who wrote that Lomonosov had inferred the existence of an atmosphere from his observation of a luminous arc. The attribution has since then been questioned.
1769 transit
For the 1769 transit, scientists travelled to places all over the world. The Czech astronomer Christian Mayer was invited by the Russian empress Catherine the Great to observe the transit in Saint Petersburg with Anders Johan Lexell, while other members of the Russian Academy of Sciences went to eight other locations in the Russian Empire under the general coordination of Stepan Rumovsky. King George III of the United Kingdom had the King's Observatory built near his summer residence at Richmond Lodge, so that he and the Astronomer Royal, Stephen Demainbray, could observe the transit. Hell and his assistant János Sajnovics travelled to Vardø, Norway. Wales and Joseph Dymond went to Hudson Bay to observe the event. In Philadelphia, the American Philosophical Society erected three temporary observatories and appointed a committee led by David Rittenhouse. Observations were made by a group led by Dr. Benjamin West in Providence, Rhode Island. Observations were also made from Tahiti by James Cook and Charles Green at a location still known as Point Venus.
D'Auteroche went to San José del Cabo in what was then New Spain to observe the transit with two Spanish astronomers (Vicente de Doz and Salvador de Medina). For his trouble he died in an epidemic of yellow fever there shortly after completing his observations. Only 9 of 28 in the entire party returned home alive. Le Gentil spent over eight years travelling in an attempt to observe either of the transits. Whilst abroad he was declared dead, and as a result he lost his wife and possessions. Upon his return he regained his seat in the French Academy and remarried. Under the influence of the Royal Society, the astronomer Ruđer Bošković travelled to Istanbul, but arrived after the transit had happened.
In 1771, using the combined 1761 and 1769 transit data, the French astronomer Jérôme Lalande calculated a value for the astronomical unit. The precision was less than had been hoped for because of the black drop effect, but the value obtained was still an improvement on the calculations made by Horrocks. Hell published his results in 1770, which included his own value for the astronomical unit. Lalande challenged the accuracy and authenticity of observations obtained by the Hell expedition, but later wrote an article in Journal des sçavans (1778), in which he retracted his comments.
1874 and 1882 transits
Observations of the transits of 1874 and 1882 were used to refine the value obtained for the astronomical unit. Three expeditions—from Germany, the United Kingdom, and the United States—were sent to the Kerguelen Archipelago for the 1874 observations. The American astronomer Simon Newcomb combined the data from the last four transits to arrive at a refined value.
2004 and 2012 transits
Scientific organisations led by the European Southern Observatory organised a network of amateur astronomers and students to measure Earth's distance from the Sun during the transit of 2004. The participants' observations allowed a calculation of the astronomical unit (AU) that differed from the accepted value by only 0.007%.
During the 2004 transit, scientists attempted to measure the loss of light as Venus blocked out some of the Sun's light, in order to refine techniques for discovering extrasolar planets.
The 2012 transit of Venus provided scientists with research opportunities as well, in particular in regard to the study of exoplanets. The event additionally was the first of its kind to be documented from space, photographed aboard the International Space Station by NASA astronaut Don Pettit. The measurement of the dips in a star's brightness during a transit is one observation that can help astronomers find exoplanets. Unlike the 2004 Venus transit, the 2012 transit occurred during an active phase of the 11-year activity cycle of the Sun, and it gave astronomers an opportunity to practise picking up a planet's signal around a "spotty" variable star. Measurements made of the apparent diameter of a planet such as Venus during a transit allow scientists to estimate exoplanet sizes. Observations made of the atmosphere of Venus from Earth-based telescopes and the Venus Express gave scientists a better opportunity to understand the intermediate level of Venus's atmosphere than was possible from either viewpoint alone, and provided new information about the climate of the planet. Spectrographic data of the atmosphere of Venus can be compared to studies of the atmospheres of exoplanets. The Hubble Space Telescope used the Moon as a mirror to study light from the atmosphere of Venus, and so determine its composition.
Future transits
Transits usually occur in pairs, because the length of eight Earth years is almost the same as 13 years on Venus. This approximate conjunction is not precise enough to produce a triplet, as Venus arrives 22 hours earlier each time. The last transit not to be part of a pair was in 1396 (the planet passed slightly above the disc of the Sun in 1388); the next one will be in 3089.
After 243 years the transits of Venus return. The 1874 transit is a member of the 243-year cycle #1. The 1882 transit is a member of #2. The 2004 transit is a member of #3, and the 2012 transit is a member of #4. The 2117 transit is a member of #1, and so on. However, the ascending node (December transits) of the orbit of Venus moves backwards after each 243 years so the transit of 2854 is the last member of series #3 instead of series #1. The descending node (June transits) moves forwards, so the transit of 3705 is the last member of #2.
Over longer periods of time, new series of transits will start and old series will end. Unlike the saros series for lunar eclipses, it is possible for a transit series to restart after a hiatus. The transit series also vary much more in length than the saros series.
Grazing and simultaneous transits
Sometimes Venus only grazes the Sun during a transit. In this case it is possible that in some areas of the Earth a full transit can be seen while in other regions there is only a partial transit (no second or third contact). The last transit of this type was on 6 December 1631, and the next such transit will occur on 13 December 2611. It is also possible that a transit of Venus can be seen in some parts of the world as a partial transit, while in others Venus misses the Sun. Such a transit last occurred on 19 November 541 BC, and the next transit of this type will occur on 14 December 2854. These effects are due to parallax, since the size of the Earth affords different points of view with slightly different lines of sight to Venus and the Sun. It can be demonstrated by closing an eye and holding a finger in front of a smaller more distant object; when the viewer opens the other eye and closes the first, the finger will no longer be in front of the object.
The simultaneous occurrence of transits of Mercury and Venus does occur, but extremely infrequently. Such an event last occurred on 22 September 373,173 BC and will next occur on 26 July 69,163, and again on 29 March 224,504. The simultaneous occurrence of a solar eclipse and a transit of Venus is currently possible, but very rare. The next solar eclipse occurring during a transit of Venus will be on 5 April 15,232.
In popular culture
The Canadian rock band Three Days Grace titled their fourth studio album Transit of Venus and announced the album title and release date on June 5, 2012, the date of the last transit of Venus. The album's first song, "Sign of the Times", references the transit in the lyric "Venus is passing by".
The progressive rock band Big Big Train have a song titled "The Transit of Venus Across the Sun". It is the fifth track on their ninth album Folklore (Big Big Train album).
The Transit of Venus March was written by John Philip Sousa in 1883 to commemorate the 1882 transit.
See also
Transit of Mercury
Transit of minor planets
Notes
References
Sources
Further reading
External links
Chasing Venus: Observing the Transits of Venus, 1631–2004 (Smithsonian Libraries)
2012 Transit of Venus – International Astronomical Union
National Solar Observatory – Transit of Venus 5–6 June 2012
Venus
Stellar occultation | Transit of Venus | [
"Astronomy"
] | 3,397 | [
"Astronomical events",
"Stellar occultation",
"Astronomical transits"
] |
632,374 | https://en.wikipedia.org/wiki/Reinsurance | Reinsurance is insurance that an insurance company purchases from another insurance company to insulate itself (at least in part) from the risk of a major claims event. With reinsurance, the company passes on ("cedes") some part of its own insurance liabilities to the other insurance company. The company that purchases the reinsurance policy is referred to as the "ceding company" or "cedent". The company issuing the reinsurance policy is referred to as the "reinsurer". In the classic case, reinsurance allows insurance companies to remain solvent after major claims events, such as major disasters like hurricanes or wildfires. In addition to its basic role in risk management, reinsurance is sometimes used to reduce the ceding company's capital requirements, or for tax mitigation or other purposes.
The reinsurer may be either a specialist reinsurance company, which only undertakes reinsurance business, or another insurance company. Insurance companies that accept reinsurance refer to the business as "assumed reinsurance".
There are two basic methods of reinsurance:
Facultative Reinsurance, which is negotiated separately for each insurance policy that is reinsured. Facultative reinsurance is normally purchased by ceding companies for individual risks not covered, or insufficiently covered, by their reinsurance treaties, for amounts in excess of the monetary limits of their reinsurance treaties and for unusual risks. Underwriting expenses, and in particular personnel costs, are higher for such business because each risk is individually underwritten and administered. However, as they can separately evaluate each risk reinsured, the reinsurer's underwriter can price the contract more accurately to reflect the risks involved. Ultimately, a facultative certificate is issued by the reinsurance company to the ceding company reinsuring that one policy, and is used for high-value or hazardous risks.
Treaty Reinsurance means that the ceding company and the reinsurer negotiate and execute a reinsurance contract under which the reinsurer covers the specified share of all the insurance policies issued by the ceding company which come within the scope of that contract. The reinsurance contract may obligate the reinsurer to accept reinsurance of all contracts within the scope (known as "obligatory" reinsurance), or it may allow the insurer to choose which risks it wants to cede, with the reinsurer obligated to accept such risks (known as "facultative-obligatory" or "fac oblig" reinsurance). These types of contracts are typically annual.
There are two main types of treaty reinsurance, proportional and non-proportional, which are detailed below. Under proportional reinsurance, the reinsurer's share of the risk is defined for each separate policy, while under non-proportional reinsurance the reinsurer's liability is based on the aggregate claims incurred by the ceding office. In the past 30 years there has been a major shift from proportional to non-proportional reinsurance in the property and casualty fields.
Functions
Almost all insurance companies have a reinsurance program. The ultimate goal of that program is to reduce their exposure to loss by passing part of the risk of loss to a reinsurer or a group of reinsurers.
Risk transfer
With reinsurance, the insurer can issue policies with higher limits than would otherwise be allowed, thus being able to take on more risk because some of that risk is now transferred to the re-insurer.
Income smoothing
Reinsurance can make an insurance company's results more predictable by absorbing large losses. This is likely to reduce the amount of capital needed to provide coverage. The risks are spread, with the reinsurer or reinsurers bearing some of the loss incurred by the insurance company. The income smoothing arises because the losses of the cedent are limited. This fosters stability in claim payouts and caps indemnification costs.
Surplus relief
Proportional Treaties (or "pro-rata" treaties) provide the cedent with "surplus relief"; surplus relief being the capacity to write more business and/or at larger limits.
Arbitrage
The insurance company may be motivated by arbitrage in purchasing reinsurance coverage at a lower rate than they charge the insured for the underlying risk, whatever the class of insurance.
In general, the reinsurer may be able to cover the risk at a lower premium than the insurer because:
The reinsurer may have some intrinsic cost advantage due to economies of scale or some other efficiency.
Reinsurers may operate under weaker regulation than their clients. This enables them to use less capital to cover any risk, and to make less conservative assumptions when valuing the risk.
Reinsurers may operate under a more favourable tax regime than their clients.
Reinsurers will often have better access to underwriting expertise and to claims experience data, enabling them to assess the risk more accurately and reduce the need for contingency margins in pricing the risk
Even if the regulatory standards are the same, the reinsurer may be able to hold smaller actuarial reserves than the cedent if it thinks the premiums charged by the cedent are excessively conservative.
The reinsurer may have a more diverse portfolio of assets and especially liabilities than the cedent. This may create opportunities for hedging that the cedent could not exploit alone. Depending on the regulations imposed on the reinsurer, this may mean they can hold fewer assets to cover the risk.
The reinsurer may have a greater risk appetite than the insurer.
Reinsurer's expertise
The insurance company may want to avail itself of the expertise of a reinsurer, or the reinsurer's ability to set an appropriate premium, in regard to a specific (specialised) risk. The reinsurer will also wish to apply this expertise to the underwriting in order to protect their own interests. This is especially the case in Facultative Reinsurance.
Creating a manageable and profitable portfolio of insured risks
By choosing a particular type of reinsurance method, the insurance company may be able to create a more balanced and homogeneous portfolio of insured risks. This would make its results more predictable on a net basis (i.e. allowing for the reinsurance). This is usually one of the objectives of reinsurance arrangements for the insurance companies.
Types of reinsurance
Proportional
Under proportional reinsurance, one or more reinsurers take a stated percentage share of each policy that an insurer issues ("writes"). The reinsurer will then receive that stated percentage of the premiums and will pay the stated percentage of claims. In addition, the reinsurer will allow a "ceding commission" to the insurer to cover the costs incurred by the ceding insurer (mainly acquisition and administration, as well as the expected profit that the cedent is giving up).
The arrangement may be "quota share" or "surplus reinsurance" (also known as surplus of line or variable quota share treaty) or a combination of the two. Under a quota share arrangement, a fixed percentage (say 75%) of each insurance policy is reinsured. Under a surplus share arrangement, the ceding company decides on a "retention limit": say $100,000. The ceding company retains the full amount of each risk, up to a maximum of $100,000 per policy or per risk, and the excess over this retention limit is reinsured.
The ceding company may seek a quota share arrangement for several reasons. First, it may not have sufficient capital to prudently retain all of the business that it can sell. For example, it may only be able to offer a total of $100 million in coverage, but by reinsuring 75% of it, it can sell four times as much, and retain some of the profits on the additional business via the ceding commission.
The ceding company may seek surplus reinsurance to limit the losses it might incur from a small number of large claims as a result of random fluctuations in experience. In a 9 line surplus treaty the reinsurer would then accept up to $900,000 (9 lines). So if the insurance company issues a policy for $100,000, they would keep all of the premiums and losses from that policy. If they issue a $200,000 policy, they would give (cede) half of the premiums and losses to the reinsurer (1 line each). The maximum automatic underwriting capacity of the cedent would be $1,000,000 in this example. Any policy larger than this would require facultative reinsurance.
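As a sketch of the surplus-share arithmetic above, the function below computes the fraction of a policy ceded under a treaty defined by a retention and a number of lines; the function name and the simple pro-rata treatment of premiums and losses are illustrative assumptions.

#include <stdio.h>

/* Fraction of a policy ceded under a surplus treaty. Total automatic
   capacity is retention * (1 + lines); any amount above that would need
   facultative reinsurance and is ignored here. */
double ceded_fraction(double sum_insured, double retention, int lines)
{
    double capacity = retention * (1 + lines);
    if (sum_insured <= retention)
        return 0.0;                                   /* retained in full */
    double treaty_part = sum_insured < capacity ? sum_insured : capacity;
    return (treaty_part - retention) / sum_insured;   /* premiums and losses follow this share */
}

int main(void)
{
    /* The $100,000 retention and 9 lines match the example in the text. */
    printf("%.2f\n", ceded_fraction(100000.0, 100000.0, 9));  /* 0.00 */
    printf("%.2f\n", ceded_fraction(200000.0, 100000.0, 9));  /* 0.50 */
    return 0;
}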
Non-proportional
Under non-proportional reinsurance the reinsurer only pays out if the total claim(s) suffered by the insurer exceed a stated amount, which is called the "retention" or "priority". For instance the insurer may be prepared to accept a total loss up to $1 million, and purchases a layer of reinsurance of $4 million in excess of this $1 million. If a loss of $3 million were then to occur, the insurer would bear $1 million of the loss and would recover $2 million from its reinsurer. In this example, the insurer also retains any loss over $5 million unless it has purchased a further excess layer of reinsurance.
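The recovery under such a layer follows one simple rule: the reinsurer pays the part of the loss above the retention, capped at the layer limit. A minimal sketch of that rule (the function name is illustrative), which is applied per risk, per event, or to an annual aggregate depending on the form of cover described below:

#include <stdio.h>

/* Reinsurer's payment from a layer of 'limit' in excess of 'retention'. */
double layer_recovery(double loss, double retention, double limit)
{
    double excess = loss - retention;
    if (excess <= 0.0)
        return 0.0;        /* the loss sits entirely within the cedent's retention */
    if (excess > limit)
        excess = limit;    /* anything above retention + limit stays with the cedent */
    return excess;
}

int main(void)
{
    /* The text's example: $4 million in excess of $1 million, loss of $3 million. */
    printf("%.0f\n", layer_recovery(3e6, 1e6, 4e6));  /* prints 2000000 */
    return 0;
}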
The main forms of non-proportional reinsurance are excess of loss and stop loss.
Excess of loss reinsurance can have three forms - "Per Risk XL" (Working XL), "Per Occurrence or Per Event XL" (Catastrophe or Cat XL), and "Aggregate XL".
In per risk, the cedent's insurance policy limits are greater than the reinsurance retention. For example, an insurance company might insure commercial property risks with policy limits up to $10 million, and then buy per risk reinsurance of $5 million in excess of $5 million. In this case a loss of $6 million on that policy will result in the recovery of $1 million from the reinsurer. These contracts usually contain event limits to prevent their misuse as a substitute for Catastrophe XLs.
In catastrophe excess of loss, the cedent's retention is usually a multiple of the underlying policy limits, and the reinsurance contract usually contains a two risk warranty (i.e. they are designed to protect the cedent against catastrophic events that involve more than one policy, usually very many policies). For example, an insurance company issues homeowners' policies with limits of up to $500,000 and then buys catastrophe reinsurance of $22,000,000 in excess of $3,000,000. In that case, the insurance company would only recover from reinsurers in the event of multiple policy losses in one event (e.g., hurricane, earthquake, flood).
Aggregate XL affords a frequency protection to the reinsured. For instance if the company retains $1 million net any one vessel, $5 million annual aggregate limit in excess of $5m annual aggregate deductible, the cover would equate to 5 total losses (or more partial losses) in excess of 5 total losses (or more partial losses). Aggregate covers can also be linked to the cedent's gross premium income during a 12-month period, with limit and deductible expressed as percentages and amounts. Such covers are then known as "stop loss" contracts.
Risks attaching basis
A basis under which reinsurance is provided for claims arising from policies commencing during the period to which the reinsurance relates. The insurer knows there is coverage during the whole policy period even if claims are only discovered or made later on.
All claims from cedent underlying policies incepting during the period of the reinsurance contract are covered even if they occur after the expiration date of the reinsurance contract. Any claims from cedent underlying policies incepting outside the period of the reinsurance contract are not covered even if they occur during the period of the reinsurance contract.
Losses occurring basis
A Reinsurance treaty under which all claims occurring during the period of the contract, irrespective of when the underlying policies incepted, are covered. Any losses occurring after the contract expiration date are not covered.
As opposed to claims-made or risks attaching contracts. Insurance coverage is provided for losses occurring in the defined period. This is the usual basis of cover for short tail business.
Claims-made basis
A policy which covers all claims reported to an insurer within the policy period irrespective of when they occurred.
Contracts
Most of the above examples concern reinsurance contracts (treaty contracts) that cover more than one policy. Reinsurance can also be purchased on a per policy basis, in which case it is known as facultative reinsurance. Facultative reinsurance can be written on either a proportional or excess of loss basis. Facultative reinsurance contracts are commonly memorialized in relatively brief contracts known as facultative certificates and often are used for large or unusual risks that do not fit within standard reinsurance treaties due to their exclusions. The term of a facultative agreement coincides with the term of the policy. Facultative reinsurance is usually purchased by the insurance underwriter who underwrote the original insurance policy, whereas treaty reinsurance is typically purchased by an outwards reinsurance manager, or other senior executive at the insurance company.
The reinsurer's liability will usually cover the whole lifetime of the original insurance, once it is written. However, the question arises of when either party can choose to cease the reinsurance in respect of future new business. Reinsurance treaties can either be written on a "continuous" or "term" basis. A continuous contract has no predetermined end date, but generally either party can give 90 days' notice to cancel or amend the treaty for new business. A term agreement has a built-in expiration date. It is common for insurers and reinsurers to have long-term relationships that span many years. Reinsurance treaties are typically longer documents than facultative certificates, containing many of their own terms that are distinct from the terms of the direct insurance policies that they reinsure. However, even most reinsurance treaties are relatively short documents considering the number and variety of risks and lines of business that the treaties reinsure and the dollars involved in the transactions. They rely heavily on industry practice. There are no "standard" reinsurance contracts. However, many reinsurance contracts do include some commonly used provisions and provisions shaped by considerable industry custom and practice.
Fronting
Sometimes insurance companies wish to offer insurance in jurisdictions where they are not licensed, or where they consider that local regulations are too onerous: for example, an insurer may wish to offer an insurance programme to a multinational company, to cover property and liability risks in many countries around the world. In such situations, the insurance company may find a local insurance company which is authorised in the relevant country, arrange for the local insurer to issue an insurance policy covering the risks in that country, and enter into a reinsurance contract with the local insurer to transfer the risks to itself. In the event of a loss, the policyholder would claim against the local insurer under the local insurance policy, the local insurer would pay the claim and would claim reimbursement under the reinsurance contract. Such an arrangement is called "fronting". Fronting is also sometimes used where an insurance buyer requires its insurers to have a certain financial strength rating and the prospective insurer does not satisfy that requirement: the prospective insurer may be able to persuade another insurer, with the requisite credit rating, to provide the coverage to the insurance buyer, and to take out reinsurance in respect of the risk. An insurer which acts as a "fronting insurer" receives a fronting fee for this service to cover administration and the potential default of the reinsurer. The fronting insurer is taking a risk in such transactions, because it has an obligation to pay its insurance claims even if the reinsurer becomes insolvent and fails to reimburse the claims.
Many reinsurance placements are not placed with a single reinsurer but are shared between a number of reinsurers. For example, a $30,000,000 excess of $20,000,000 layer may be shared by 30 or more reinsurers. The reinsurer who sets the terms (premium and contract conditions) for the reinsurance contract is called the lead reinsurer; the other companies subscribing to the contract are called following reinsurers. Alternatively, one reinsurer can accept the whole of the reinsurance and then retrocede it (pass it on in a further reinsurance arrangement) to other companies.
Using game-theoretic modeling, Professors Michael R. Powers (Temple University) and Martin Shubik (Yale University) have argued that the number of active reinsurers in a given national market should be approximately equal to the square-root of the number of primary insurers active in the same market. Econometric analysis has provided empirical support for the Powers-Shubik rule.
Ceding companies often choose their reinsurers with great care as they are exchanging insurance risk for credit risk. Risk managers monitor reinsurers' financial ratings (S&P, A.M. Best, etc.) and aggregated exposures regularly.
Because of the governance effect insurance/cedent companies can have on society, reinsurers can indirectly have societal impact as well, due to reinsurer underwriting and claims philosophies imposed on those underlying carriers which affects how the cedents offer coverage in the market. However, reinsurer governance is voluntarily accepted by cedents via contract to allow cedents the opportunity to rent reinsurer capital to expand cedent market share or limit their risk.
See also
Assumption reinsurance
Catastrophe bond
Catastrophe modeling
Financial reinsurance
Industry Loss Warranties
Life insurance securitization
Reinsurance sidecar
Stop-loss insurance
Year loss table
References
External links
Captive Review Captive Review
Reinsurance, by Gary Patrick
Youtube: "What is Reinsurance?", by Sebastian Lischewski
Actuarial science | Reinsurance | [
"Mathematics"
] | 3,696 | [
"Applied mathematics",
"Actuarial science"
] |
632,394 | https://en.wikipedia.org/wiki/Transit%20of%20Mercury | A transit of Mercury across the Sun takes place when the planet Mercury passes directly between the Sun and a superior planet. During a transit, Mercury appears as a tiny black dot moving across the Sun as the planet obscures a small portion of the solar disk. Because of orbital alignments, transits viewed from Earth occur in May or November. The last four such transits occurred on May 7, 2003; November 8, 2006; May 9, 2016; and November 11, 2019. The next will occur on November 13, 2032. A typical transit lasts several hours. Mercury transits are much more frequent than transits of Venus, with about 13 or 14 per century, primarily because Mercury is closer to the Sun and orbits it more rapidly.
On June 3, 2014, the Mars rover Curiosity observed the planet Mercury transiting the Sun, marking the first time a planetary transit had been observed from a celestial body besides Earth.
Scientific investigation
The orbit of the planet Mercury lies interior to that of the Earth, and thus it can come into an inferior conjunction with the Sun. When Mercury is near the node of its orbit, it passes through the orbital plane of the Earth. If an inferior conjunction occurs as Mercury is passing through its orbital node, the planet can be seen to pass across the disk of the Sun in an event called a transit. Depending on the chord of the transit and the position of the planet Mercury in its orbit, the maximum length of this event is 7h 50m.
Transit events are useful for studying the planet and its orbit. Examples of the scientific investigations based on transits of Mercury are:
Measuring the scale of the Solar System.
Investigations of the variability of the Earth's rotation and of the tidal acceleration of the Moon.
Measuring the mass of Venus from secular variations in Mercury's orbit.
Looking for long term variations in the solar radius.
Investigating the black drop effect, including calling into question the purported discovery of the atmosphere of Venus during the 1761 transit.
Assessing the likely drop in light level in an exoplanet transit.
Occurrence
Transits of Mercury can only occur when the Earth is aligned with a node of Mercury's orbit. Currently that alignment occurs within a few days of May 8 (descending node) and November 10 (ascending node), with the angular diameter of Mercury being about 12″ for May transits, and 10″ for November transits. The average date for a transit increases over centuries as a result of Mercury's nodal precession and Earth's axial precession.
Transits of Mercury occur on a regular basis. As explained in 1882 by Newcomb, the interval between passages of Mercury through the ascending node of its orbit is 87.969 days, and the interval between the Earth's passage through that same longitude is 365.254 days. Using continued fraction approximations of the ratio of these values, it can be shown that Mercury will make an almost integral number of revolutions about the Sun over intervals of 6, 7, 13, 33, 46, and 217 years.
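The continued-fraction argument can be reproduced numerically. The short sketch below expands the ratio of the two periods quoted above and prints the successive convergents; their denominators recover the 6-, 7-, 13-, 33-, 46- and 217-year intervals. The program is illustrative only and uses the rounded period values given in the text.

#include <math.h>
#include <stdio.h>

/* Convergents p/q of 365.254 / 87.969: each q is a number of years after
   which Mercury has completed almost exactly p node-to-node periods,
   i.e. a possible interval between transits. */
int main(void)
{
    double x = 365.254 / 87.969;
    long p_prev = 1, q_prev = 0;          /* conventional seed convergent */
    long p = (long)floor(x), q = 1;       /* first convergent: 4/1 */
    double frac = x - floor(x);

    for (int i = 0; i < 6 && frac > 1e-9; i++) {
        x = 1.0 / frac;
        long a = (long)floor(x);
        frac = x - floor(x);
        long p_next = a * p + p_prev, q_next = a * q + q_prev;
        p_prev = p; q_prev = q;
        p = p_next; q = q_next;
        printf("%ld revolutions in %ld years\n", p, q);
    }
    return 0;
}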
In 1894 Crommelin noted that at these intervals, the successive paths of Mercury relative to the Sun are consistently displaced northwards or southwards. He noted the displacements as:
{| class="wikitable"
|+Displacements at subsequent transits
! Interval!! May transits !! November transits
|-
| After 6 years|| 65′ 37″ S|| 31′ 35″ N
|-
| After 7 years|| 48′ 21″ N|| 23′ 16″ S
|-
| Hence after 13 years (6 + 7)|| 17′ 16″ S|| 8′ 19″ N
|-
| ... 20 years (6 + 2 × 7)|| 31′ 05″ N|| 14′ 57″ S
|-
| ... 33 years (2 × 6 + 3 × 7)|| 13′ 49″ N|| 6′ 38″ S
|-
| ... 46 years (3 × 13 + 7)|| 3′ 27″ S|| 1′ 41″ N
|-
| ... 217 years (14 × 13 + 5 × 7)|| 0′ 17″ N || 0′ 14″ N
|}
Comparing these displacements with the solar diameter (about 31.7′ in May, and 32.4′ in November) the following may be deduced about the interval between transits:
For May transits, intervals of 6 and 7 years are not possible. For November transits, an interval of 6 years is possible but rare (the last such pair was 1993 and 1999, with both transits being very close to the solar limb), while an interval of 7 years is to be expected.
An interval of 13 years is to be expected for both May and November transits.
An interval of 20 years is possible but rare for a May transit, but is to be expected for November transits.
An interval of 33 years is to be expected for both May and November transits.
A transit having a similar path across the sun will occur 46 (and 171) years later – for both November and May transits.
A transit having an almost identical path across the Sun will occur 217 years later – for both November and May transits.
Transits that occur 46 years apart can be grouped into a series. For November transits each series includes about 20 transits over 874 years, with the path of Mercury across the Sun passing further north than for the previous transit. For May transits each series includes about 10 transits over 414 years, with the path of Mercury across the Sun passing further south than for the previous transit. Some authors have allocated a series number to transits on the basis of this 46-year grouping.
Similarly transits that occur 217 years apart can be grouped into a series. For November transits each series would include about 135 transits over 30,000 years. For May transits each series would include about 110 transits over 24,000 years. For both the May and November series, the path of Mercury across the Sun passes further north than for the previous transit. Series numbers have not been traditionally allocated on the basis of the 217 year grouping.
Predictions of transits of Mercury covering many years are available at NASA, SOLEX, and Fourmilab.
Observation
At inferior conjunction, the planet Mercury subtends an angle of only about 10 to 12 arcseconds, which, during a transit, is too small to be seen without a telescope. A common observation made at a transit is recording the times when the disk of Mercury appears to be in contact with the limb of the Sun. Those contacts are traditionally referred to as the 1st, 2nd, 3rd and 4th contacts, with the 2nd and 3rd contacts occurring when the disk of Mercury is fully on the disk of the Sun. As a general rule, 1st and 4th contacts cannot be accurately detected, while 2nd and 3rd contacts are readily visible within the constraints of the black drop effect, irradiation, atmospheric conditions, and the quality of the optics being used.
Observed contact times for transits between 1677 and 1881 are given in S Newcomb's analysis of transits of Mercury. Observed 2nd and 3rd contacts times for transits between 1677 and 1973 are given in Royal Greenwich Observatory Bulletin No.181, 359-420 (1975).
Partial
Sometimes Mercury appears to only graze the Sun during a transit. There are two possible scenarios:
Firstly, it is possible for a transit to occur such that, at mid-transit, the disk of Mercury has fully entered the disk of the Sun as seen from some parts of the world, while as seen from other parts of the world the disk of Mercury has only partially entered the disk of the Sun. The transit of November 15, 1999 was such a transit: it was a full transit for most of the world, but only a partial transit for Australia, New Zealand, and Antarctica. The previous such transit was on October 28, 743 and the next will be on May 11, 2391. While these events are very rare, two such transits will occur less than three years apart, in December 6149 and June 6152.
Secondly, it is possible for a transit to occur in which, at mid-transit, the disk of Mercury has partially entered the disk of the Sun as seen from some parts of the world, while as seen from other parts of the world Mercury completely misses the Sun. Such a transit last occurred on May 11, 1937, when a partial transit occurred in southern Africa and southern Asia and no transit was visible from Europe and northern Asia. The previous such transit was on October 21, 1342 and the next will be on May 13, 2608.
The possibility that, at mid-transit, Mercury is seen to be fully on the solar disk from some parts of the world, and completely miss the Sun as seen from other parts of the world cannot occur.
History
The first observation of a transit of Mercury was made on November 7, 1631 by Pierre Gassendi. He was surprised by the small size of the planet compared to the Sun. Johannes Kepler had predicted the occurrence of transits of Mercury and Venus in his ephemerides published in 1630.
Images of the November 15, 1999 transit from the Transition Region and Coronal Explorer (TRACE) satellite were featured on Astronomy Picture of the Day (APOD) on November 19. Three APODs featured the May 9, 2016 transit.
1832 event
The Shuckburgh telescope of the Royal Observatory, Greenwich in London was used for the 1832 Mercury transit. It was equipped with a micrometer by Dollond and was used to report the event as seen through the small refractor. By observing and timing the transit and taking micrometer measurements, a diameter for the planet was derived. The observers also reported a peculiar effect, which they compared to pressing a coin into the Sun.
1907 event
For the 1907 Mercury transit, telescopes used at the Paris Observatory included:
Foucault-Eichens reflector ( aperture)
Foucault-Eichens reflector ( aperture)
Martin-Eichens reflector ( aperture)
Several small refractors
The telescopes were mobile and were placed on the terrace for the several observations.
Chronology
The table below includes all historical transits of Mercury from 1605 on:
See also
Mercury Passing Before the Sun, 1914 painting
Transit of Mercury from Mars
Transit of minor planets
Transit of Venus
Vulcan (hypothetical planet)
Gallery
References
External links
NASA: Transits of Mercury, Seven Century Catalog: 1601 CE to 2300 CE
Shadow & Substance.com: Transit of Mercury Animated for November 8, 2006
Transits of Mercury – Fourteen-century catalog: 1601 AD – 3000 AD
Transits of Mercury on Earth – Fifteen-millennium catalog: 5000 BC – 10000 AD
Scroll down and click on 40540 to obtain a table covering −125,000 to +125,000.
Time Lapse of the 9th May 2016 Transit of Mercury
Links to high-resolution video from a major solar telescope and more about several transits
Mercury
Stellar occultation | Transit of Mercury | [
"Astronomy"
] | 2,262 | [
"Astronomical events",
"Stellar occultation",
"Astronomical transits"
] |
632,451 | https://en.wikipedia.org/wiki/AEG%20%28German%20company%29 | Allgemeine Elektricitäts-Gesellschaft (AEG) was a German producer of electrical equipment. It was established in 1883 by Emil Rathenau as the Deutsche Edison-Gesellschaft für angewandte Elektricität in Berlin.
The company's initial focus was driven by electrical lighting, as in 1881, Rathenau had acquired the rights to the electric light bulb at the International Exposition of Electricity in Paris. Using small power stations, his company introduced electrical lighting to cafés, restaurants, and theaters, despite the high costs and limitations. By the end of the 19th century, AEG had constructed 248 power stations, providing a total of 210,000 hp of electricity for lighting, tramways, and household devices.
During the Second World War, AEG worked with the Nazi Party and benefited from forced labour from concentration camps. After the war, its headquarters moved to Frankfurt am Main.
In 1967, AEG joined with its subsidiary Telefunken AG, creating Allgemeine Elektricitäts-Gesellschaft AEG-Telefunken. In 1985, Daimler-Benz purchased the AEG-Telefunken Aktiengesellschaft (which was renamed to AEG Aktiengesellschaft) and wholly integrated the company in 1996 into Daimler-Benz AG (1998: DaimlerChrysler). The remains of AEG became part of Adtranz (later Bombardier Transportation) and Deutsche Aerospace (1998: DASA, today part of Airbus SE).
After acquiring the AEG household subsidiary AEG Hausgeräte GmbH in 1994, Electrolux obtained the rights to the AEG brand name in 2005, which it now uses on some of its products. The AEG name is also licensed to various brand partners under the Electrolux Global Brand Licensing program.
History
Summary
In 1883, Emil Rathenau founded Deutsche Edison-Gesellschaft für angewandte Elektricität in Berlin. In 1888, it was renamed as Allgemeine Elektricitäts-Gesellschaft. Initially producing electrical equipment (such as light bulbs, motors and generators), the company soon became involved in AC electric transmission systems. In 1907, Peter Behrens was appointed as artistic consultant to AEG. This led to the creation of the company's initial corporate identity, with products and advertising sharing common design features.
The company expanded in the first half of the 20th century, and it is credited with a number of firsts and inventions in electrical engineering. During the same period, it entered the automobile and airplane markets. Electrical equipment for railways was produced during this time, beginning a long history of supplying the German railways with electrical equipment. According to the 1930 Encyclopedia Britannica: "Prior to 1923 it was the largest electrical manufacturing concern in Germany and one of the most important industrial undertakings in the world."
During the Second World War, AEG joined with other large companies such as IG Farben, Thyssen and Krupp in their support of the Nazis. The company benefited from the use of large numbers of forced labourers as well as concentration camp prisoners, under inhuman conditions of work.
After WWII, the company lost its businesses in the eastern part of Germany. After a merger in 1967, the company was renamed Allgemeine Elektricitäts-Gesellschaft AEG-Telefunken (from 1979 on only AEG-Telefunken). The company experienced financial difficulties during the 1970s, resulting in the sale of some assets. In 1983, the consumer electronics division Telefunken Fernseh und Rundfunk GmbH was sold. In 1985, the company re-took the name AEG and the remainder of the company was acquired by Daimler-Benz; the parts that remained were primarily related to electric power distribution and electric motor technology. Under Daimler-Benz ownership, the former AEG companies eventually became part of the newly named Adtranz in 1995, and the AEG name was no longer used. Electrolux, which had already acquired the household subsidiary AEG Hausgeräte GmbH in 1994, now own the rights to use and license the AEG brand.
Foundation to 1940
The company originated in 1882, when Emil Rathenau acquired licences to use some of Thomas Edison's lamp patents in Germany. The Deutsche Edison Gesellschaft ("German Edison Company") was founded in 1883 with the financial backing of banks and private individuals, with Emil Rathenau as company director.
In 1884, Munich-born engineer Oskar von Miller (who later founded Deutsches Museum) joined the executive board. The same year, the company entered negotiations with the Berlin Magistrat (the municipal body) to supply electricity to a large area from a central supply, which resulted in the formation of the Städtischen Elektrizitätswerke (A.G.StEW) ("City electricity works company (Berlin)") on 8 May 1884.
The original factory was located near Stettiner Bahnhof. In 1887 the company acquired land in the Berlin-Gesundbrunnen area on which the Weddingsche Maschinenfabrik (founded by Wilhelm Wedding) was previously located. In the same year, in addition to a restructuring and expansion of the production range, the AEG name was adopted.
In 1887 Mikhail Dolivo-Dobrowolsky joined the company as chief engineer, later becoming vice-director. His work on polyphase electric power led him to become the world's leading engineer in three-phase electric power systems at the end of the 1880s.
In 1891 Miller and Dobrovolski demonstrated the transmission of electrical power over a distance of more than 170 km from a hydroelectric power plant in Lauffen am Neckar to Frankfurt, where it lit 1000 light bulbs and drove an artificial waterfall at the International Electrotechnical Exhibition in Frankfurt am Main. This success marked one of the beginnings of the general use of alternating current for electrification in Germany, and showed that long-distance transmission of electric power could be economically useful. In the same year the Stadtbahn Halle/Saale (city railway Halle–Saale) opened the first electric tram system of notable size in Germany.
Paul Tropp worked for AEG from 1889/90 until 1893, and Franz Schwechten designed the facades on Acker- and Hussitenstraße in 1894–95.
In 1894 the site of the former Berlin Viehmarktgasse (cattle market alley) was purchased. This had a railroad siding connecting to the Berlin rail network, but there was no rail connection between the two plants. In 1895 an underground railway link between the two plots was built in a tunnel 270 meters long. The tunnel was built by Siemens & Halske (S & H) (later to become Siemens) under the direction of C. Schwebel and Wilhelm Lauter, who were also involved in building what is now the Spree tunnel at Stralau, used by the U-Bahn.
By 1889 AEG was known as a specialist in the construction of portable industrial drilling machines, some of which were driven by flexible shafts from electric motors. AEG also developed a toothed belt drive to reduce motor speed down to that required by machine tools.
In 1903 the competing radio companies AEG and Siemens & Halske merged, forming a joint subsidiary named Telefunken.
In 1907 the architect Peter Behrens became an artistic adviser. Responsible for the design of all products, advertising and architecture, he has since come to be considered the world's first corporate designer. Behrens's philosophy was to create a building that is solid, strong and simple in its structure, well suited to its job of producing large, heavy machinery. The dimensions of the building were chosen to allow turbines to be transported above other machinery.
In the 1920s AEG became a global supplier of electrical know-how and equipment. In 1923, for example, it provided most of the essential materials and a team of engineers to oversee the electrification of British-ruled Palestine; British firms at the time could not compete with AEG's prices.
The activity of the company soon extended to all areas of electrical power engineering, including electric lighting, electric power, electric railways, electro-chemical plants, as well as the construction of steam turbines, automobiles, cables and cable materials. In the first decades, the company had many factories in and around Berlin:
Maschinenfabrik Brunnenstrasse (steam turbines, dynamos, electric motors)
Apparatewerk Ackerstrasse (carbon-filament and metal thread light bulbs, Nernst lamps, switches, fuses, resistors, electrical measuring equipment, dynamos, electric motors)
Kabelwerk Oberspree (KWO, cables, copper and metal works, rubber fabrication, insulator fabrication)
Transformatorenwerk Oberspree (TRO, transformers)
Glühlampenfabrik Moabit (1907–1912, carbon-filament and metal thread light bulbs, Nernst lamps, Vacuum tubes) — later became part of Osram, from 1939 on Telefunken
Turbinenfabrik (1909, steam turbines) — famous as an example of industrial architecture
Apparate-Werke Treptow (AT - 1926, arc lamps, switches, fuses, controls, starters, electrical measuring equipment)
A number of other notable events involving AEG occurred in this period:
1900: Invention of the hairdryer.
1901: The Neue Automobil Gesellschaft ("New Automotive Company") became part of AEG through the takeover of Allgemeine Automobil-Gesellschaft
27 October 1903: An AEG-equipped experimental three-phase railcar achieved a speed of 210.2 km/h (130.6 mph) on the test track of the Königlich Preußische Militär-Eisenbahn (Royal Prussian Military Railway) between Marienfelde and Zossen. This world speed record for rail vehicles was held until 1931.
1904: Merger of AEG with the Union-Elektricitäts-Gesellschaft (UEG) (literal: Union-electricity Company)
1910: Factory Hennigsdorf. Entry into the aircraft building market.
1929: AEG produced its first compressor-driven refrigerators and temperature controlled irons.
1933: AEG joined other large manufacturing companies in supporting Adolf Hitler.
1935: Presentation of the world's first tape recorder, the Magnetophon K1, based on work by Eduard Schüller, at the Berlin Radio Show.
1941: AEG bought Siemens & Halske shares in Telefunken and the company became a subsidiary.
On 20 June 1915, founder Emil Rathenau died at age 77.
The Nazi era and World War II
AEG donated 60,000 Reichsmarks to the Nazi party after the Secret Meeting of 20 February 1933 at which the twin goals of complete power and national rearmament were explained by Hitler. They joined with other large companies, such as IG Farben, Thyssen and Krupp, in their support of the Nazis, especially in promoting re-armament of the Wehrmacht, Luftwaffe, and Kriegsmarine. During the war itself, they were to use large numbers of forced labourers as well as concentration camp prisoners, under inhuman conditions of work.
AEG worked extensively with the Nazi party in Poland. AEG was forced to relinquish Kabelwerk Krakow, a cable manufacturing plant, to the Nazi party. Kabelwerk Krakow was located in Krakow-Plaszow and used forced Jewish labor manufacturing cables from 1942 to 1944. In 1943, AEG began to relocate goods and evacuate workers. Goods were relocated to various places, including Berlin and Sudetenland. When installing electric and lighting systems for the Waffen-SS training grounds in Dębica, AEG used forced labor from Jews placed in the Pustkow labor camp located in south east Poland.
During World War II, an AEG factory near Riga used female slave labour. AEG was also contracted for the production of electrical equipment at Auschwitz concentration camp.
AEG used slave labour from Camp No. 36 at the new sub-camp of Auschwitz III and also known as Monowitz, called "Arbeitslager Blechhammer". Most of them would die in 1945 during the death marches and finally in Buchenwald.
AEG was a major supplier of grips for P38 pistols manufactured by Walther Arms, Mauser, as well as on the early wartime Spreewerk P38s.
In an effort to express regret for its use of Jewish slave labour in World War II, AEG joined with Rheinmetall, Siemens, Krupp, and I G Farben to pay DEM75 million in reparations to the Jewish Claims Conference.
1945 to 1970
In 1945, after the Second World War, production resumed in the factories in the western sectors of Berlin (one of which today houses the headquarters of Deutsche Welle's television service) and in Nuremberg, Stuttgart and Mülheim an der Ruhr, and further new works were erected, among them an electricity meter plant in Hameln.
The steam and electric locomotive plant in Hennigsdorf (Fabriken Hennigsdorf) became a Volkseigener Betrieb (VEB) (people owned enterprise) as the Lokomotivbau Elektrotechnische Werke (LEW) ("electric locomotive works"). The cable plant (Draht-, Kabel- und Metallwerk Oberspree) and apparatus factory (Apparatefabrik Treptow) and other facilities also lay in East Germany and became Sowjetische Aktiengesellschaft (SAG) (Soviet joint stock companies). Over 90% of assets in Berlin lay in the Russian occupied zone and were lost.
The headquarters for the non-expropriated parts of the company was moved first to Hamburg and then finally to Frankfurt am Main, the headquarters in Berlin having been destroyed.
1948: The AEG factories Kassel (FK) were founded on the site of the former MWK Motorenbau Werk Kassel at Lilienthalstrasse 150 in Kassel, Hesse, Germany. The first part was the high-voltage switchgear factory (HSF); later the refrigerator factory (KSF), the ticket-printer factory (FDF), the insulating-material factory (IF) and the widely recognised high-voltage institute (HI) were added. In the early sixties more than 5,000 people worked for AEG in Kassel. Today, the Lilienthalstrasse site still produces high-voltage switchgear.
1950: The new corporate headquarters was established at the Friedensbrücke (Peace Bridge) in Frankfurt am Main. The number of employees in the Group rose from 20,900 in September 1948 to 55,400 in September 1957. In the same year turnover exceeded one billion DM for the first time; however, the high level of investment in rebuilding the company (over 500 million DM from 1948 to 1956) placed a considerable strain on the balance sheet.
1958: The slogan "Aus Erfahrung Gut" (benefit from experience) is introduced to explain the company name and acronym, leading to unflattering parodies such as "Auspacken, Einschalten, Geht nicht" (unpack, switch on, does not work) or "Alles Ein Gammel" (everything is 'gammy').
1962: The Group has 127,000 employees and generates annual sales of 3.1 billion DM. In February 1962 a new factory for the production of fluid control units, with 200 employees, is opened in Springe.
1962: Walter Bruch at Telefunken in Hannover develops PAL color television.
1966: The largest industrial space in Europe is created (175 m long, 45 m wide and 26 m high) for the crane-assisted assembly of motors and generators weighing up to 400 tonnes. Robert F. Kennedy attends the opening.
1 January 1967: Merger with Telefunken creates AEG-Telefunken, headquartered in Frankfurt am Main.
1970s onwards
In 1970, AEG-Telefunken had 178,000 employees worldwide, and was the 12th largest electrical company in the world. The company was burdened by, among other things, unsuccessful projects such as an automated baggage conveyor system at Frankfurt Airport and nuclear power plant construction. In particular, the nuclear power plant at Würgassen, the commissioning of which was delayed by several years due to technical problems, cost AEG hundreds of millions of DM. As a result, the company paid its last dividend in 1972.
The entertainment arm (Telefunken Fernseh und Rundfunk GmbH), headquartered in Hanover, was sold. This was followed by the sale to Siemens of the mainframe computer business (TR 4, TR 10), which had been operated as Telefunken Computer GmbH in partnership with Nixdorf. The process computer business (TR 84, TR 86, AEG 60–10, AEG 80–20, AEG 80–60) continued as the Geschäftsbereich Automatisierungstechnik (after 1980 as ATM Computer GmbH).
In 1975 the former Telefunken headquarters at Berlin-Charlottenburg, Ernst-Reuter-Platz 7, was sold. The building had previously been rented to the Technische Universität Berlin.
In 1976, to circumvent the requirement of equal participation of employees on the supervisory board, Dr. Walter Cipa (Dipl.-Geol.) (AEG boss from 1976 to 1980) created four further wholly owned joint stock companies in addition to the two household appliance companies (the numbers in parentheses refer to the percentage of turnover in 1980):
AEG-Telefunken Anlagentechnik AG (37%)
AEG-Telefunken Serienprodukte AG (16%)
AEG-Telefunken Kommunikationstechnik AG (6%)
Olympia-Werke AG (business office technology, 7%)
AEG-Hausgeräte GmbH (22%)
Telefunken Fernseh und Rundfunk GmbH (12%)
In 1979 Allgemeine Elektricitäts-Gesellschaft AEG-Telefunken was renamed AEG-Telefunken AG by dropping the supplement "Allgemeine Elektricitäts-Gesellschaft", used since 1887. In February 1980, Heinz Dürr became board Chairman (until 1990).
In August 1982 a restructuring plan, backed with federal guarantees of 600 million DM and new bank loans of 275 million DM, fell apart at the first disagreement between the banks. A banking consortium provided an administrative loan of DM 1.1 billion to the AEG Group until June 1983, of which 400 million was to be available only against a guarantee by the federal government. Not only was AEG-Telefunken AG affected, but also its subsidiaries Küppersbusch AG in Gelsenkirchen, Hermann Zanker Maschinenfabrik GmbH & Co. KG in Tübingen and Carl Neff GmbH in Bretten.
The Alno-Möbelwerke GmbH & Co. KG in Pfullendorf was taken over by the minority shareholders, and separated from the group.
The suppliers to AEG were affected and some filed for bankruptcy—including Becher & Co. Möbelfabriken KG in Bühlertann—with lack of continuity of company policy a factor. The site at Brunnenstraße in the former Berlin district of Wedding was also sold, as were the firms AEG-Fabrik Essen and Bauknecht.
1983/84: The consumer electronics division (Telefunken Fernseh und Rundfunk GmbH) was sold to the French group Thomson-Brandt.
1985: AEG was taken over by Daimler-Benz AG. Daimler-Benz executive Edzard Reuter (Daimler CEO from 1987) decided that the two companies should form an "integrated technology group" with beneficial synergy.
1988: On its 60th anniversary the AEG-Forschungsinstitut (AEG Research Institute) creates the Carl-Ramsauer Prize for scientific and technical dissertations.
1990: AEG Westinghouse Transportation Systems GmbH is formed in association with Westinghouse Transportation Systems Inc.
1992: Merger (or re-uniting) of the railway business with the Lokomotivbau Elektrotechnische Werke (LEW) in Hennigsdorf, resulting in the formation of AEG Schienenfahrzeuge GmbH (AEG locomotives)
1992: The Swedish company Atlas Copco acquires AEG Power Tools Ltd; divested in 2004 to Techtronic Industries.
1994: sale of the Automation division to Schneider Electric and of AEG Hausgeräte AG to Electrolux.
1995: AEG Schienenfahrzeuge GmbH becomes part of ABB Daimler-Benz Transportation (Adtranz) (subsequently becoming part of Bombardier Transportation in 2001; and more latterly becoming Alstom on January 29, 2021).
1996: The Annual General Meeting of Daimler-Benz AG chaired by Juergen Schrempp decides upon the dissolution of the lossmaking group.
1996: GEC ALSTHOM acquires AEG Power T&D business
September 1996: The company is deleted from the commercial register.
Products
Locomotives and railway technology
AEG played an important role in the history of the German railways; the company was involved in the development and manufacture of the electrical parts of almost all German electric locomotive series and contributed to the introduction of electrical power in German railways.
Additionally many steam locomotives were made in AEG factories. In 1931 the company acquired Borsig and transferred locomotive production from the Borsig plant in Tegel to the AEG-Borsig works (Borsig Lokomotiv-Werke GmbH). In 1948 the plant became VEB Lokomotivbau Elektrotechnische Werke. In addition to the numerous electric locomotives produced for the DR, steam locomotive production continued until 1954.
When the Federal Republic of Germany began implementing AC propulsion systems, AEG found itself in competition with Brown, Boveri & Cie. The prototype DB Class E320 was built with Krupp as a dual-voltage (15 kV and 25 kV AC) test machine, the technology ultimately leading to locomotives such as the DB Class 120 and the ICE 1.
Only after German reunification and the takeover of the LEW plant in Hennigsdorf did AEG's name return to whole-locomotive manufacturing, and then only for a short time. AEG Schienenfahrzeuge GmbH (AEG locomotives) became part of ABB Daimler-Benz Transportation (later Adtranz), and technology developed there now contributes, in part, to Alstom's successful Traxx series of locomotives.
AEG also built the Hellenic Railways TRAINOSE Class 520 DMUs between 1989/1990/1991 and 1994/1995/1996.
Aircraft
AEG manufactured a range of aircraft from 1912 to 1918. The first aircraft in 1912 was of wooden construction and modeled after the Wright brothers biplane. It had a wingspan of ; was powered by an eight-cylinder engine producing 75 hp; unloaded weight was 850 kg; and could attain a speed of . From 1912, the construction of airplanes proceeded in mixed wood and steel tube construction with fabric covering.
One of the planes designed and built was a Riesenflugzeug ("giant aircraft") AEG R.I. This aircraft was powered by four Mercedes D.IVa engines linked to a combination leather cone and dog clutch. The first flight tests were satisfactory, but on 3 September 1918, the R.I broke up in the air killing its seven crewmen.
The most successful of all the AEG aircraft designs in terms of production figures was the G.IV Grossflugzeug ("large aircraft") heavy tactical bomber; of the 320 built, one still survives, the sole surviving German multi-engine bomber of the First World War.
During the Second World War AEG produced machines for reconnaissance purposes, including a helicopter platform driven by an AC motor. This was a tethered craft that could not fly freely; the power supply was carried by three cables from the ground. The machine reached an altitude of 300 m.
Cars
AEG bought Kühlstein in 1902, founding the division Neue Automobil Gesellschaft (New Automobile Company), to make cars. AEG withdrew from car production in 1908.
Models produced include:
AAG (1900 automobile)
NAG Typ A
NAG Typ B
NAG Typ B2
Film projectors
AEG also produced for a long period a series of film projectors:
Stillstandsmaschine 1919 Projektor 35 mm
Theatermaschine 1920 Projektor 35 mm
Triumphator I–III 1924–1935 Projektor 35 mm ACR 0710
Successor (Lehrmeister) 1925–1935 Projektor 35 mm
Kofferkino 1927 encased Projektor 35 mm
Lehrmeister 1929 Projektor 35 mm ACR 0709 (Leitz)
Mechau Modell 4 1929 – 1934 Projektor 35 mm
Euro K 1938–42 Projektor 35 mm
Euro M 1936 Projektor 35 mm
Euro G 1938 Projektor 35 mm, Interlock-Version (G-MB)
Euro M2 1939–1944 Projektor 35 mm
The AEG brand today
As a result of the breakup and dissolution of the original company, Electrolux acquired the brand rights in 2005, and the name is also licensed to various companies. The brand is currently promoted actively by Electrolux and covers many of the same product areas the original company manufactured, such as power solutions and energy devices, telecommunication devices (phones and mobile phones), automation, car accessories, home appliances, power tools, projectors, printing equipment and supplies, water treatment devices, and personal care devices under the AEG brand.
AEG Hausgeräte - became part of Electrolux, produces white goods, such as washing machines, dishwashers, ovens, fridges etc.
ITM Technology AG produces consumer electronics and telecommunication (mobile phone, home phone, etc.) equipment under the AEG name.
Binatone manufactures mobile accessories, mobile phones, landline phones and two way radios under the AEG brand.
AEG Elektrowerkzeuge (AEG Power Tools), licensed to Techtronic Industries (TTI) since 2009, produces hand power tools.
AEG Haustechnik (licensed to Stiebel Eltron) produces home heating and climate control (humidifiers, airconditioners) products
AEG Industrial engineering produces electrical power equipment, including generators up to 55MW, control gear and switchgear, electrical motors, transformers etc. as well as high power inverters and DC supplies for industrial use.
AEG SVS Schweiss-Technik: manufactures resistance welding machines and equipment
AEG Gesellschaft fur moderne Informationssysteme mbH (AEG-MIS): Develops custom LCDs for information systems
AEG ID: produces RFID tags and readers
AEG Power Solutions (formerly Saft Power systems or AEG Power Supply Systems): produces uninterruptible/backup/stable power supply systems for electric supply sensitive equipment (e.g. computers)
AEG Professional Printing Equipment and Supplies: Produces wide format printers, inks, and media products for printing, as well as photoconductor drums and toners for printing applications (e.g. laser printer/photocopier)
AViTEQ Vibrationstechnik GmbH
Lloyd Dynamowerke GmbH & Co KG
Lafert Group
References
Notes
Bibliography
Gerd Flaig, Firmengeschichte der AEG ("History of AEG") Compiled by former AEG employee from AEG Telefunken archives gerdflaig.de
Further reading
Buddensieg, Tilman. Industriekkultur: Peter Behrens and the AEG, 1907-1914 (1984)
Buse, Dieter K. and Doerr, Juergen C., eds. Modern Germany: An Encyclopedia of History, People, and Culture, 1871-1990 (2 vol. Garland, 1998) pp 10–11.
Flaningam, M. L. "International Co-operation and Control in the Electrical Industry: The General Electric Company and Germany, 1919-1944." American Journal of Economics and Sociology 5.1 (1945): 7-25.
Erdmann Thiele (ed.): Telefunken nach 100 Jahren — Das Erbe einer deutschen Weltmarke. Nicolaische Verlagsbuchhandlung Berlin, 2003.
Aus der Geschichte der AEG: Vor 25 Jahren: Bau der ersten AEG-Flugzeuge. In: AEG-Mitteilungen. Jahrgang 1937, Heft 10 (Oktober), pp. 359–362.
Peter Obst: Die Industrie am Humboldthain (Maschinenfabrik), AEG 1896–1984. Innovations-Zentrum Berlin Management (IZBM) GmbH.
S. Müller, K. Wittig, S. Hoffmann (2006): Empirische Befunde zum Konsumentenboykott. Der Fall AEG/Electrolux. Dresdner Beiträge zur Betriebswirtschaftslehre Nr. 116/06. Marketing-Verein, TU Dresden.
Hans-Heinrich von Fersen: Autos in Deutschland 1920–1939.
50 Jahre AEG, als Manuskript gedruckt. Allgemeine Elektricitäts-Gesellschaft Abt. Presse, Berlin 1956.
Gert Hautsch: Das Imperium AEG-Telefunken, ein multinationaler Konzern. Frankfurt/Main 1979.
Felix Pinner: Emil Rathenau und das elektrische Zeitalter. Akademische Verlagsgesellschaft mbH, Leipzig 1918.
Harri Czepuck: Ein Symbol zerbricht, zur Geschichte und Politik der AEG. Dietz Verlag, Berlin 1983.
Tilmann Buddensieg: Peter Behrens und die AEG, Neue Dokumente zur Baugeschichte der Fabriken am Humboldthain. In: Schloss Charlottenburg Berlin-Preußen. Deutscher Kunstverlag, München 1971.
Peter Strunk: Die AEG. Aufstieg und Niedergang einer Industrielegende. Nicolai, Berlin 2000.
Jahresringe Verband für Vorruhestand und aktives Alter, Land Brandenburg e. V. (ed.): Zeitzeugnisse 1945–1990. Part I (1999) and II (2000).
External links
AEG-Electrolux — company website aeg.de (in German)
AEG Design case History of AEG logos goodlogo.com
AEG — Allgemeine Elektricitäts Gesellschaft AEG — general electric company — brief history of the company, with images of old products and share certificates (German language)
Aufstieg und Fall der AEG: Nur die drei Buchstaben haben überlebt Rise and Fall of AEG: only three letters remain. Article about history of AEG. heise.de
Seidel/Dame: 1920 – Versorgungsbauten für Groß-Berlin (AEG-Bauten); eine ausführliche und bebilderte Darstellung zu AEG in Berlin Architectural history of AEG buildings. Authors : Cira López Miró, Gladys Griffault, Eric Sommerlatte, Christoph Bickenbach laufwerk-b.de
AEG - A brand makes history (12 min documentation on YouTube)
Electronics companies of Germany
Defunct manufacturing companies of Germany
Defunct aircraft manufacturers of Germany
Defunct mobile phone manufacturers
Defunct motor vehicle manufacturers of Germany
Electrical engineering companies of Germany
Electrical equipment manufacturers
Electrolux brands
Home appliance manufacturers
Marine engine manufacturers
Power tool manufacturers
Tool manufacturing companies of Germany
Manufacturing companies based in Frankfurt
Manufacturing companies established in 1883
Manufacturing companies disestablished in 1996
German brands
Companies involved in the Holocaust
German companies established in 1883
German companies disestablished in 1996
Companies formerly in the MDAX
1996 mergers and acquisitions | AEG (German company) | [
"Engineering"
] | 6,675 | [
"Electrical engineering organizations",
"Electrical equipment manufacturers"
] |
632,487 | https://en.wikipedia.org/wiki/List%20of%20algorithm%20general%20topics | This is a list of algorithm general topics.
Analysis of algorithms
Ant colony algorithm
Approximation algorithm
Best and worst cases
Big O notation
Combinatorial search
Competitive analysis
Computability theory
Computational complexity theory
Embarrassingly parallel problem
Emergent algorithm
Evolutionary algorithm
Fast Fourier transform
Genetic algorithm
Graph exploration algorithm
Heuristic
Hill climbing
Implementation
Las Vegas algorithm
Lock-free and wait-free algorithms
Monte Carlo algorithm
Numerical analysis
Online algorithm
Polynomial time approximation scheme
Problem size
Pseudorandom number generator
Quantum algorithm
Random-restart hill climbing
Randomized algorithm
Running time
Sorting algorithm
Search algorithm
Stable algorithm (disambiguation)
Super-recursive algorithm
Tree search algorithm
See also
List of algorithms for specific algorithms
List of computability and complexity topics for more abstract theory
List of complexity classes, complexity class
List of data structures.
Mathematics-related lists | List of algorithm general topics | [
"Mathematics"
] | 161 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
632,489 | https://en.wikipedia.org/wiki/Quantum%20algorithm | In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
Problems that are undecidable using classical computers remain undecidable using quantum computers. What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (see Quantum supremacy).
The best-known algorithms are Shor's algorithm for factoring and Grover's algorithm for searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the best-known classical algorithm for factoring, the general number field sieve. Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task, a linear search.
Overview
Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model.
Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems.
Algorithms based on the quantum Fourier transform
The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates.
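As an illustration, the transform can be written down explicitly as an N × N unitary matrix with entries ω^(jk)/√N, where ω = e^(2πi/N) and N = 2^n. The following Python/NumPy sketch (not from the source) builds that matrix, checks unitarity, and confirms that the one-qubit case reduces to the Hadamard gate. Note that this dense classical construction costs exponential resources in n, whereas the quantum circuit implements the same unitary with only a polynomial number of gates.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Dense matrix of the quantum Fourier transform on n_qubits qubits."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F @ F.conj().T, np.eye(8)))   # True: F is unitary
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.allclose(qft_matrix(1), H))            # True: the 1-qubit QFT is the Hadamard gate
```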
Deutsch–Jozsa algorithm
The Deutsch–Jozsa algorithm solves a black-box problem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with small probability of error. The algorithm determines whether a function f is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half).
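The single-query structure can be seen in a small state-vector simulation. The Python/NumPy sketch below is illustrative only (the oracle functions are hypothetical examples): Hadamards on all qubits, one application of a phase oracle for f, Hadamards again, and a measurement; the all-zero outcome has probability 1 if f is constant and probability 0 if f is balanced.

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Return 'constant' or 'balanced' for a promise function f: {0,...,2^n - 1} -> {0,1}."""
    N = 2 ** n
    # H^n applied to |0...0>: uniform superposition
    state = np.full(N, 1 / np.sqrt(N), dtype=complex)
    # Phase oracle: |x> -> (-1)^f(x) |x>   (the single query to f)
    state *= np.array([(-1) ** f(x) for x in range(N)])
    # Apply H^n again; the amplitude of |0...0> is the mean of the amplitudes.
    amp_zero = state.sum() / np.sqrt(N)
    return "constant" if np.isclose(abs(amp_zero) ** 2, 1.0) else "balanced"

n = 4
print(deutsch_jozsa(lambda x: 0, n))        # constant function -> 'constant'
print(deutsch_jozsa(lambda x: x & 1, n))    # parity of the last bit is balanced -> 'balanced'
```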
Bernstein–Vazirani algorithm
The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create an oracle separation between BQP and BPP.
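A sketch of the same single-query structure for the Bernstein–Vazirani problem (Python/NumPy, illustrative only, with a made-up hidden string): for the oracle f(x) = s·x mod 2, the final layer of Hadamards concentrates all amplitude on the hidden string s, so one quantum query recovers s, whereas a classical algorithm needs n queries.

```python
import numpy as np

def bernstein_vazirani(s_bits):
    """Recover the hidden bit string s from a single query to f(x) = s.x mod 2."""
    n = len(s_bits)
    N = 2 ** n
    s = int("".join(map(str, s_bits)), 2)
    dot = lambda x: bin(x & s).count("1") % 2            # f(x) = s.x mod 2
    # Uniform superposition followed by the phase oracle (the single query).
    state = np.array([(-1) ** dot(x) for x in range(N)], dtype=complex) / np.sqrt(N)
    # Final Hadamard layer, written out by brute force for clarity:
    # amplitude of |z> = (1/sqrt(N)) * sum_x (-1)^(x.z) * state[x]
    amps = np.array([
        sum((-1) ** (bin(x & z).count("1") % 2) * state[x] for x in range(N))
        for z in range(N)
    ]) / np.sqrt(N)
    measured = int(np.argmax(np.abs(amps) ** 2))         # all amplitude sits on |s>
    return [int(b) for b in format(measured, f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))   # -> [1, 0, 1, 1]
```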
Simon's algorithm
Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms. This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation for Shor's algorithm for factoring.
Quantum phase estimation algorithm
The quantum phase estimation algorithm is used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms.
Shor's algorithm
Shor's algorithm solves the discrete logarithm problem and the integer factorization problem in polynomial time, whereas the best known classical algorithms take super-polynomial time. It is unknown whether these problems are in P or NP-complete. It is also one of the few quantum algorithms that solves a non-black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time.
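Shor's algorithm owes its power to a classical number-theoretic reduction: factoring N reduces to finding the multiplicative order r of a random a modulo N, and only the order-finding subroutine is quantum. The Python sketch below is illustrative only; the order is found here by classical brute force, which is exactly the exponentially expensive step that a quantum computer replaces, and the surrounding classical logic is what Shor's algorithm keeps.

```python
import math, random

def find_order_classically(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); stands in for the quantum subroutine."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_outline(N, attempts=20):
    for _ in range(attempts):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                       # lucky: a already shares a factor with N
        r = find_order_classically(a, N)   # a quantum computer does this part fast
        if r % 2:
            continue                       # need an even order
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                       # trivial square root of 1, try another a
        factor = math.gcd(y - 1, N)
        if 1 < factor < N:
            return factor
    return None

print(shor_classical_outline(15))   # e.g. 3 or 5
print(shor_classical_outline(21))   # e.g. 3 or 7
```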
Hidden subgroup problem
The abelian hidden subgroup problem is a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solving Pell's equation, testing the principal ideal of a ring R and factoring. There are efficient quantum algorithms known for the Abelian hidden subgroup problem. The more general hidden subgroup problem, where the group is not necessarily abelian, is a generalization of the previously mentioned problems, as well as graph isomorphism and certain lattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for the symmetric group, which would give an efficient algorithm for graph isomorphism and the dihedral group, which would solve certain lattice problems.
Estimating Gauss sums
A Gauss sum is a type of exponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely. However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time.
Fourier fishing and Fourier checking
Consider an oracle consisting of n random Boolean functions mapping n-bit strings to a Boolean value, with the goal of finding n n-bit strings z1, ..., zn such that, for the Hadamard-Fourier transform f̃, at least 3/4 of the strings satisfy |f̃(zi)| ≥ 1 and at least 1/4 satisfy |f̃(zi)| ≥ 2.
This can be done in bounded-error quantum polynomial time (BQP).
Algorithms based on amplitude amplification
Amplitude amplification is a technique that allows the amplification of a chosen subspace of a quantum state. Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered as a generalization of Grover's algorithm.
Grover's algorithm
Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using only O(√N) queries instead of the O(N) queries required classically. Classically, Ω(N) queries are required even when bounded-error probabilistic algorithms are allowed.
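The quadratic scaling can be seen directly in a small state-vector simulation. The Python/NumPy sketch below is illustrative only (the list size and marked index are arbitrary choices): repeating the Grover iteration (the oracle's phase flip on the marked item followed by inversion about the mean) drives the success probability close to 1 after roughly (π/4)√N iterations.

```python
import numpy as np

def grover_success_probability(n_qubits, marked, iterations):
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1                   # oracle: phase flip on the marked item
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return abs(state[marked]) ** 2

n, marked = 8, 42                             # N = 256 entries
optimal = int(round(np.pi / 4 * np.sqrt(2 ** n)))
for k in (0, optimal // 2, optimal):
    print(k, grover_success_probability(n, marked, k))
# The success probability rises from 1/256 to nearly 1 after about 13 iterations.
```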
Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables in Bohmian mechanics. (Such a computer is completely hypothetical and would not be a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at most O(N^(1/3)) steps. This is slightly faster than the O(√N) steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solve NP-complete problems in polynomial time.
Quantum counting
Quantum counting solves a generalization of the search problem. It solves the problem of counting the number of marked entries in an unordered list, instead of just detecting whether one exists. Specifically, it counts the number of marked entries in an N-element list with an error of at most ε by making only Θ((1/ε)√(N/k)) queries, where k is the number of marked elements in the list. More precisely, the algorithm outputs an estimate k′ for k, the number of marked entries, with accuracy |k − k′| ≤ εk.
Algorithms based on quantum walks
A quantum walk is the quantum analogue of a classical random walk. A classical random walk can be described by a probability distribution over some states, while a quantum walk can be described by a quantum superposition over states. Quantum walks are known to give exponential speedups for some black-box problems. They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool.
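The difference between the two kinds of walk is easy to see numerically. The sketch below (Python/NumPy, illustrative only) runs a discrete-time coined quantum walk on a line with a Hadamard coin and compares its spread to a classical random walk: after t steps the quantum walker's standard deviation grows linearly in t, versus √t classically.

```python
import numpy as np

def hadamard_walk_std(steps):
    """Standard deviation of position for a coined (Hadamard) quantum walk on a line."""
    size = 2 * steps + 1                     # positions -steps..steps
    # amp[coin, position]; start at the origin with coin state (|0> + i|1>)/sqrt(2)
    amp = np.zeros((2, size), dtype=complex)
    amp[0, steps] = 1 / np.sqrt(2)
    amp[1, steps] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = H @ amp                        # coin toss at every position
        new = np.zeros_like(amp)
        new[0, :-1] = amp[0, 1:]             # coin 0 moves one step left
        new[1, 1:] = amp[1, :-1]             # coin 1 moves one step right
        amp = new
    prob = (np.abs(amp) ** 2).sum(axis=0)
    x = np.arange(size) - steps
    mean = (prob * x).sum()
    return np.sqrt((prob * (x - mean) ** 2).sum())

t = 100
print(hadamard_walk_std(t))    # roughly 0.54 * t: ballistic spread
print(np.sqrt(t))              # classical random walk spreads only as sqrt(t) = 10
```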
Boson sampling problem
The Boson Sampling Problem in an experimental configuration assumes an input of bosons (e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a defined unitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk. The problem is then to produce a fair sample of the probability distribution of the output that depends on the input arrangement of bosons and the unitarity. Solving this problem with a classical computer algorithm requires computing the permanent of the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computable linear optical network and that sampling of the output probability distribution would be demonstrably superior using quantum algorithms. In 2015, investigation predicted the sampling problem had similar complexity for inputs other than Fock-state photons and identified a transition in computational complexity from classically simulable to just as hard as the Boson Sampling Problem, depending on the size of coherent amplitude inputs.
Element distinctness problem
The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically, Ω(N) queries are required for a list of size N; however, it can be solved in Θ(N^(2/3)) queries on a quantum computer. The optimal algorithm was put forth by Andris Ambainis, and Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended that work to obtain the lower bound for all functions.
Triangle-finding problem
The triangle-finding problem is the problem of determining whether a given graph contains a triangle (a clique of size 3). The best-known lower bound for quantum algorithms is Ω(N), but the best known algorithm requires O(N^1.297) queries, an improvement over the previous best of O(N^1.3) queries.
Formula evaluation
A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input.
A well-studied formula is the balanced binary tree with only NAND gates. This type of formula requires Θ(N^c) queries using randomness, where c = log2((1 + √33)/4) ≈ 0.754. With a quantum algorithm, however, it can be solved in Θ(N^0.5) queries. No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model. The same result for the standard setting soon followed.
Fast quantum algorithms for more complicated formulas are also known.
Group commutativity
The problem is to determine if a black-box group, given by k generators, is commutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, which is the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities are Θ(k²) and Θ(k), respectively. A quantum algorithm requires Ω(k^(2/3)) queries, while the best known quantum algorithm uses O(k^(2/3) log k) queries.
BQP-complete problems
The complexity class BQP (bounded-error quantum polynomial time) is the set of decision problems solvable by a quantum computer in polynomial time with error probability of at most 1/3 for all instances. It is the quantum analogue to the classical complexity class BPP.
A problem is BQP-complete if it is in BQP and any problem in BQP can be reduced to it in polynomial time. Informally, the class of BQP-complete problems are those that are as hard as the hardest problems in BQP and are themselves efficiently solvable by a quantum computer (with bounded error).
Computing knot invariants
Witten had shown that the Chern-Simons topological quantum field theory (TQFT) can be solved in terms of Jones polynomials. A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial, which, as far as is known, is hard to compute classically in the worst case.
Quantum simulation
The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves." Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both Bosonic and Fermionic systems, as well as the simulation of chemical reactions beyond the capabilities of current classical supercomputers using only a few hundred qubits. Quantum computers can also efficiently simulate topological quantum field theories. In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as Jones and HOMFLY polynomials, and the Turaev-Viro invariant of three-dimensional manifolds.
Solving a linear system of equations
In 2009, Aram Harrow, Avinatan Hassidim, and Seth Lloyd, formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.
Provided that the linear system is sparse and has a low condition number κ, and that the user is interested in the result of a scalar measurement on the solution vector (instead of the values of the solution vector itself), the algorithm has a runtime of O(log(N)κ²), where N is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in O(Nκ) (or O(N√κ) for positive semidefinite matrices).
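The quantity being estimated can be made concrete on a toy instance. The Python/NumPy sketch below is illustrative only, uses made-up numbers, and computes the answer classically rather than by the quantum algorithm: it shows the scalar ⟨x|M|x⟩ that the algorithm outputs for a Hermitian matrix A, a state |b⟩ and an observable M, which a quantum computer estimates without ever writing out the solution vector x.

```python
import numpy as np

# Toy instance (hypothetical numbers). HHL would output an estimate of the scalar
# computed at the end, without producing the solution vector x explicitly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # Hermitian and well-conditioned
b = np.array([1.0, 0.0])                  # encoded as the quantum state |b>
M = np.array([[1.0, 0.0], [0.0, -1.0]])   # observable measured on the solution state

x = np.linalg.solve(A, b)
x_state = x / np.linalg.norm(x)           # |x> is the normalized solution vector
print(x_state @ M @ x_state)              # the scalar <x|M|x> the algorithm estimates
print(np.linalg.cond(A))                  # condition number kappa, which enters the runtime
```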
Hybrid quantum/classical algorithms
Hybrid Quantum/Classical Algorithms combine quantum state preparation and measurement with classical optimization. These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator.
QAOA
The quantum approximate optimization algorithm takes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory. The algorithm makes use of classical optimization of quantum operations to maximize an "objective function."
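A depth-one example for MaxCut conveys the structure. The Python/NumPy sketch below is illustrative only (the graph and the grid search over angles are arbitrary choices, not from the source): the circuit alternates a phase layer generated by the cost function with a mixing layer of single-qubit X rotations, and a classical outer loop searches over the two angles to maximize the expected cut value.

```python
import numpy as np
from itertools import product

# Hypothetical example graph (a 4-cycle); depth p = 1 QAOA for MaxCut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
N = 2 ** n

# Cut value of every bitstring: the diagonal cost "Hamiltonian".
cut = np.zeros(N)
for x in range(N):
    bits = [(x >> q) & 1 for q in range(n)]
    cut[x] = sum(bits[i] != bits[j] for i, j in edges)

def expected_cut(gamma, beta):
    state = np.full(N, 1 / np.sqrt(N), dtype=complex)     # |+>^n
    state *= np.exp(-1j * gamma * cut)                     # phase layer e^{-i gamma C}
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])    # e^{-i beta X}
    for q in range(n):                                      # mixing layer on every qubit
        state = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
        state = np.einsum("ab,ibj->iaj", rx, state).reshape(N)
    return float((np.abs(state) ** 2) @ cut)

# Classical outer loop: a crude grid search stands in for a real optimizer.
grid = np.linspace(0, np.pi, 40)
best = max(product(grid, grid), key=lambda gb: expected_cut(*gb))
print(expected_cut(*best), "vs true optimum cut", cut.max())
```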
Variational quantum eigensolver
The variational quantum eigensolver (VQE) algorithm applies classical optimization to minimize the energy expectation value of an ansatz state to find the ground state of a Hermitian operator, such as a molecule's Hamiltonian. It can also be extended to find excited energies of molecular Hamiltonians.
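A minimal single-qubit version conveys the structure. The Python/NumPy sketch below is illustrative only and uses a made-up Hamiltonian: a parameterized ansatz state |ψ(θ)⟩ is prepared, the energy ⟨ψ(θ)|H|ψ(θ)⟩ is evaluated, and a classical optimizer (here a simple grid search) adjusts θ; the minimum found approaches the true ground-state energy of H.

```python
import numpy as np

# Hypothetical 1-qubit Hamiltonian H = 0.5*Z + 0.3*X (illustrative coefficients only).
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    """Simple ansatz: |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi                 # <psi|H|psi>, real for real psi and H

# Classical outer loop: grid search stands in for a real optimizer.
thetas = np.linspace(0, 2 * np.pi, 1000)
best = min(thetas, key=energy)
print(energy(best))                      # variational estimate of the ground-state energy
print(np.linalg.eigvalsh(H)[0])          # exact ground-state energy for comparison
```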
Contracted quantum eigensolver
The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and two-electron reduced density matrix of a molecule. It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation.
See also
Quantum machine learning
Quantum optimization algorithms
Quantum sort
Primality test
References
External links
The Quantum Algorithm Zoo: A comprehensive list of quantum algorithms that provide a speedup over the fastest known classical algorithms.
Andrew Childs' lecture notes on quantum algorithms
The Quantum search algorithm - brute force .
Surveys
Quantum computing
Theoretical computer science | Quantum algorithm | [
"Mathematics"
] | 3,241 | [
"Theoretical computer science",
"Applied mathematics"
] |
632,507 | https://en.wikipedia.org/wiki/CVA-01 | CVA-01 was a proposed United Kingdom aircraft carrier, designed during the 1960s. The ship was intended to be the first of a class that would replace all of the Royal Navy's carriers, most of which had been designed before or during the Second World War. CVA-01 and CVA-02 were intended to replace HMS Ark Royal and HMS Eagle, while CVA-03 and CVA-04 would have replaced HMS Victorious and HMS Hermes respectively.
The planned four-carrier class was soon reduced to three, then to two, and finally, following a government review in the form of the 1966 Defence White Paper, the project was cancelled, along with the proposed Type 82 destroyer class, which was intended primarily to escort carrier groups. Factors contributing to the cancellation of CVA-01 included inter-service rivalries, the huge financial cost of the proposed carrier against ongoing budgetary constraints, and the technical complexity and difficulties it would have presented in construction, operation, and maintenance. Some historians also cite the increased role played by land-based aircraft in providing a nuclear deterrent, and argue that the naval leadership of the time presented its case for the carriers poorly in government.
Had CVA-01 and CVA-02 been built, it is likely they would have been named HMS Queen Elizabeth and HMS Duke of Edinburgh respectively.
Origin
In the 1960s, the Royal Navy still operated one of the premier carrier fleets in the world, second only to the US Navy, which was in the process of building the 80,000-ton Kitty Hawk-class carriers.
The British fleet included the fleet carriers HMS Eagle and HMS Ark Royal, and two smaller carriers, the completely reconstructed HMS Victorious and the somewhat newer light carrier HMS Hermes, both with 3D Type 984 radar and C3, but limited to air groups of 25 aircraft: at the most 20 fighters and strike aircraft and five helicopters, or alternately 16 fighters and strike aircraft, four turboprop Fairey Gannet AEW aircraft, and five helicopters. A fifth carrier, HMS Centaur, was modernised to the minimum standard to operate second-generation Supermarine Scimitars and de Havilland Sea Vixens in 1959, but was never satisfactory or safe for operating nuclear strike aircraft and was a purely interim capability while Eagle was refitting.
While all four of the Navy's large carriers were capable of operating the S.2 version of the Blackburn Buccaneer strike aircraft, only Ark Royal and Eagle were realistically big enough to accommodate both a squadron of Buccaneers (up to 14 aircraft) and a squadron of redesigned McDonnell Douglas F-4 Phantoms, which the Royal Navy intended to procure as its new fleet air defence aircraft. With the remainder of the air group, this would give a total of approximately 40 aircraft, which compared poorly to the 90 available to a Kitty Hawk-class ship. The increasing weight and size of modern jet fighters meant that a larger deck area was required for takeoffs and landings. Although the Royal Navy had come up with increasingly innovative ways to allow ever-larger aircraft to operate from the small flight decks of their carriers, the limited physical life left in the existing ships (only Hermes was considered capable of reliable and efficient extension past 1975), and the inability of both Victorious and Hermes, the most effectively and expensively modernised of the carriers, to operate the F-4 or an effective and useful number of Buccaneers, made the order of at least two new large fleet carriers essential by the mid-1960s.
Design
Considerations
Once the Chiefs of Staff had given their approval to the idea of new carriers being necessary, in January 1962 in the strategic paper COS(621)1 British Strategy in the Sixties, the Admiralty Board had to sift through six possible designs. These ranged from 42,000 to 68,000 tons at full load. The largest design, based on the American , had space for four full-sized steam catapults but was rejected early on as being significantly too costly, particularly in terms of the dockyard upgrades that would be needed to service them.
The advantages of size were immediately apparent; a 42,000-ton carrier could only hold 27 aircraft, while a 55,000-ton carrier could carry 49 Buccaneers or Sea Vixens. This was an 80% increase in the size of the air group for a 30% increase in displacement. The Board of Admiralty decided in 1961 that the minimum would be 48,000 tons. The carriers would have two main roles: strike carrier (including attacks on airfields) and defence of the fleet. They would also operate early warning aircraft and - later - anti-submarine helicopters. Even with these smaller designs, the cost was a serious issue. The Treasury and the Air Ministry were pushing for a new set of long-range strike aircraft operating from a string of bases around the globe. For the former, this appeared a cost-effective solution for the East of Suez issue, and for the latter, it meant that the Royal Navy would not get a majority of the defence budget.
Four ships were planned but the additional construction of four Polaris missile nuclear submarines (ordered in April 1963) introduced delays of ten months in expected production. Considerations included the availability of berths at shipyards, sufficient trained welders for use of QT35 steel, drawing office capacity at the shipyards, and the number of electrical fitters. A new dry dock at Portsmouth was also needed.
By July 1963 it was announced that only one carrier would be built, though there was a possibility that one would be ordered by the Australian Navy.
Details
The "sketch" approved by the Admiralty in July 1963 was for a , at the waterline, vessel. Three shafts powered by a new steam plant design would give 27-28 knots and one shaft could be shut down at a time for maintenance. The electrical distribution system, using step-down transformers from 3.3kV, was also new to the Royal Navy.
The CVA-01 would have displaced no more than 54,500 tons, with a flight deck length (including the bridle arrester boom) of and wide. Overall width was . The size of the flight deck, combined with steam catapults and arrester gear would have enabled the carriers to operate the latest jets. The two long catapults, which could operate aircraft of maximum weight of were set at 4 degrees apart. There were four take-off positions to operate V/STOL aircraft.
Initially no armour was planned, but it was later added to the magazines, ship sides, and hangar, bringing displacement up to 54,500 tons.
The sketch included 30 Buccaneer strike and Sea Vixen fighter aircraft. The variable-geometry aircraft being designed to Operational Requirement OR.346 was expected to be carried later.
In an Intervention Study (IWP/65) against Indonesia made in 1965, the Royal Navy assumed two carriers, CVA-01 and HMS Hermes (R12) operating 400 nautical miles off the southern coast of Java, would have 31 Buccaneers (24 for CVA-01 and 7 for Hermes) and 24 Phantoms (12 for CVA-01 and 12 for Hermes) to take on the Indonesian air force of the mid-1970s, equipped with 20 Tu-16 Badger bombers and 20 Yak-28 Brewer bombers, deployed at six airfields.
The aircraft complement in the design approved on 27 January 1966 was a mix of 36 British specification McDonnell Douglas Phantom II fleet defence fighters (with secondary strike role) and Blackburn Buccaneer low-level strike aircraft, four early-warning aircraft, five anti-submarine helicopters, and two search-and-rescue helicopters.
Defences included an Ikara anti-submarine system and a Sea Dart anti-aircraft missile (then under development) on the quarterdeck. The Ikara was deleted from the design in February 1965.
The large 'Broomstick' radar dome above the central island on the carrier was planned to be a Type 988 Anglo-Dutch 3D radar, which would subsequently be fitted on the Royal Netherlands Navy Tromp-class frigates, although this would not have been fitted to the final carrier as Britain pulled out of the project.
Cancellation
In mid-1963 the Minister of Defence Peter Thorneycroft announced in Parliament that one new aircraft carrier would be built, at an estimated cost of £60 million, although the Treasury thought that the final cost was likely to be nearer £100 million. This was based on the carrier using the same aircraft as the Royal Air Force, the Hawker Siddeley P.1154 supersonic V/STOL aircraft (a larger version of what would become the Hawker Siddeley Harrier). The single new carrier would be part of a three carrier fleet with a refitted Eagle and Hermes until 1980. After the General Election of October 1964, however, the new Labour Government wanted to cut back defence spending, and the RAF attacked the Royal Navy's carrier in an attempt to safeguard first its BAC TSR-2 strike/reconnaissance aircraft and then its proposed replacement, the General Dynamics F-111, from the cuts.
The new government, and by extension the Treasury, were particularly concerned about the size issues involved, as these were fluctuating quite frequently. They therefore demanded that the Admiralty keep to 53,000 tons. With the navy unwilling to alter the size of the carrier and its air group accordingly, the difficulties spiralled, and the final tonnage was much more likely to be nearer 55,000 tons. The design issues also increased, including dramatically reduced top speed, deck space, armour, and radar equipment. When the Cabinet met in February 1966, the new Secretary of State for Defence, Denis Healey, strongly supported the RAF and their plan for long-range strike aircraft, by now the F-111, partially due to the cost issues of running fleet carriers, and partially due to opposition to a strong British military. This meeting resulted in the 1966 Defence White Paper. In this paper, the CVA-01 was finally cancelled, along with the remainder of the Type 82 destroyers that would have been built as escorts, of which only HMS Bristol was eventually completed. Instead, plans were made for the modernisation of Eagle and Ark Royal. The final chief designer of CVA-01 said that by the time the project was cancelled, so many design compromises had been made because of size and budget restrictions that the whole project had become risky. The following year, a supplement to the review marked the ending of a global presence with the withdrawal of British presence "East of Suez". The year after, the purchase of F-111s was cancelled.
One argument about the cancellation of CVA-01 states that the RAF moved Australia by 500 miles in its documents to support the air force's preferred strategy of land-based aircraft. Regardless of the story's veracity, the principal reason for the cancellation was that the Defence Review Board believed adequate cover could be better provided East of Suez by RAF strike aircraft flying from bases in Australia and uninhabited islands in the Indian Ocean, rather than by a small carrier fleet in the 1970s which would have still included Hermes. The Review asserted the carrier's only effective use was to project British power East of Suez, and that the RN carriers were too 'vulnerable' for the RN's other major theatre in the North Atlantic. When the British government later decided in 1967 that it would withdraw from east of Suez, the case for carriers weakened further. The 1966 Review stated that the ability of the RAF to cover 300 miles offshore was enough for the 1970s, regardless of the RAF's contested claim of being able to provide air cover out to 700 miles. The cancellation of 150 TSR2 aircraft by Labour in mid-1965 was the basis of the RAF's argument for the 'island hopping strategy'.
Subsequent Royal Navy carriers
Eagle and Ark Royal
The cancellation of CVA-01 was planned to be compensated for by the minimum updating of both Eagle and Ark Royal to enable them to operate the 52 Phantoms ordered. However, a decision was taken later to completely phase out fixed-wing flying in the Royal Navy by 1972 in line with withdrawal from "East of Suez". Victorious was withdrawn in 1969, and Hermes was converted to a "commando carrier" in 1971–73 to replace her sister Albion.
At the time of the announcement, Ark Royal was beginning a reconstruction with an austere refit of radar systems, communications, partial electrical rewiring, and fittings needed to allow operation of the Phantom (despite the fact that it was a worse base for such a conversion than Eagle), and it was deemed unacceptable either to cancel the much-needed work, or to spend such a large amount of money (approx. £32m) for less than three years continued use. A change of government consequently led to Ark Royal being retained following her 1967–1970 refit, but not to a refit of Eagle being carried out. Eagle was decommissioned in 1972, partly due to damage inflicted in a partial grounding a year before; repairs would probably have required a minimum 18-month refit in 1972–1973 at a cost of around £40 million to operate till 1977. Many of the second squadron of F-4 Phantoms intended for Eagle were immediately transferred to the RAF. Eagle remained officially in reserve as a source of spares to maintain Ark Royal until 1978, but could never have been brought back into service.
"Through Deck Cruiser"
The Royal Navy did not however completely surrender aircraft carrier capability, despite the eventual withdrawal of Ark Royal in 1978. The concept of the "through-deck command cruiser" was first raised in the late 1960s when it became clear that there was a good chance of the Fleet Air Arm losing fixed-wing capability. The "through-deck cruiser" name was chosen to avoid the stigma of great expense attached to full-size aircraft carriers, with these 20,000-ton ships having significantly less fixed-wing aviation capability than the planned CVA-01 carriers. However, they were to function as part of combined NATO fleets, with a primary mission of providing Cold War anti-submarine helicopter patrols in the north-east Atlantic Ocean, in support of the American carrier battle groups of NATO's "Forward Maritime Strategy". Three Invincible-class aircraft carriers were built.
In order to ensure the safety of the battle group around the "cruiser", the facility to carry the Sea Harrier was added at a late stage of development, the intention being that it could give the battle group the capability to intercept Soviet reconnaissance aircraft without having to rely either on land-based or US Navy interceptors. The ultimate result of this was the Royal Navy being able to deploy carrier-based aircraft during the Falklands War. One officer who worked on the CVA-01 believed, however, that had the United Kingdom "built two or three ships to this design, they would now [in 1999] be seen to have been the bargain of the century and they would have made the Falklands War a much less risky operation" due to greater functionality.
CVF
The United Kingdom returned to the fleet carrier idea with the construction of the Queen Elizabeth-class aircraft carriers, which are larger than the cancelled CVA-01s. The two new carriers, initially dubbed CVF (F for 'Future'), are named Queen Elizabeth and Prince of Wales. The contract for these vessels was announced on 25 July 2007 by the Secretary of State for Defence Des Browne. Following Queen Elizabeth's commissioning on 7 December 2017, Prince of Wales was commissioned on 10 December 2019.
Notes
References
Bibliography
Royal United Services Institute Journal – Aug 2006, Vol. 151, No. 4 By Simon Elliott – CVA-01 and CVF – What Lessons Can the Royal Navy Learn from the Cancelled 1960s Aircraft Carrier for its New Flat-top?
External links
A comprehensive essay on the history of the CVA-01 design and related issues
Island Stance
Aircraft carriers of the Royal Navy
Proposed aircraft carriers
Abandoned military projects of the United Kingdom
Proposed ships of the Royal Navy
Cancelled aircraft carriers | CVA-01 | [
"Engineering"
] | 3,210 | [
"Military projects",
"Proposed aircraft carriers"
] |
632,539 | https://en.wikipedia.org/wiki/Numerical%20Recipes | Numerical Recipes is the generic title of a series of books on algorithms and numerical analysis by William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery. In various editions, the books have been in print since 1986. The most recent edition was published in 2007.
Overview
The Numerical Recipes books cover a range of topics that include both classical numerical analysis (interpolation, integration, linear algebra, differential equations, and so on), signal processing (Fourier methods, filtering), statistical treatment of data, and a few topics in machine learning (hidden Markov model, support vector machines). The writing style is accessible and has an informal tone. The emphasis is on understanding the underlying basics of techniques, not on the refinements that may, in practice, be needed to achieve optimal performance and reliability. Few results are proved with any degree of rigor, although the ideas behind proofs are often sketched, and references are given. Importantly, virtually all methods that are discussed are also implemented in a programming language, with the code printed in the book. Each variant of the book is keyed to a specific language.
According to the publisher, Cambridge University Press, the Numerical Recipes books are historically the all-time best-selling books on scientific programming methods. In recent years, Numerical Recipes books have been cited in the scientific literature more than 3000 times per year according to ISI Web of Knowledge (e.g., 3962 times in the year 2008). And as of the end of 2017, the book had over 44000 citations on Google Scholar.
History
The first publication was in 1986 with the title "Numerical Recipes, The Art of Scientific Computing", containing code in both Fortran and Pascal; an accompanying book, "Numerical Recipes Example Book (Pascal)", was first published in 1985. (A preface note in "Examples" mentions that the main book was also published in 1985, but the official note in that book says 1986.) Supplemental editions followed with code in Pascal, BASIC, and C. Numerical Recipes took, from the start, an opinionated editorial position at odds with the conventional wisdom of the numerical analysis community.
However, as it turned out, the 1980s were fertile years for the "black box" side, yielding important libraries such as BLAS and LAPACK, and integrated environments like MATLAB and Mathematica. By the early 1990s, when Second Edition versions of Numerical Recipes (with code in C, Fortran-77, and Fortran-90) were published, it was clear that the constituency for Numerical Recipes was by no means the majority of scientists doing computation, but only that slice that lived between the more mathematical numerical analysts and the larger community using integrated environments. The Second Edition versions occupied a stable role in this niche environment.
By the mid-2000s, the practice of scientific computing had been radically altered by the mature Internet and Web. Recognizing that their Numerical Recipes books were increasingly valued more for their explanatory text than for their code examples, the authors significantly expanded the scope of the book, and significantly rewrote a large part of the text. They continued to include code, still printed in the book, now in C++, for every method discussed. The Third Edition was also released as an electronic book, eventually made available on the Web for free (with nags) or by paid or institutional subscription (with faster, full access and no nags).
In 2015 Numerical Recipes sold its historic two-letter domain name nr.com and became numerical.recipes instead.
Reception
Content
Numerical Recipes is a single volume that covers a very broad range of algorithms. Unfortunately that format skewed the choice of algorithms towards simpler and shorter early algorithms which were not as accurate, efficient or stable as later, more complex algorithms. The first edition also had some minor bugs, which were fixed in later editions; however, according to the authors, for years they encountered rumors on the internet that Numerical Recipes was "full of bugs". They attributed this to people using outdated versions of the code, bugs in other parts of the code, and misuse of routines which require some understanding to use correctly.
The rebuttal does not, however, cover criticisms regarding the lack of mention of code limitations, boundary conditions, and more modern algorithms, another theme in Snyder's comment compilation. A precision issue in Bessel functions has persisted to the third edition according to Pavel Holoborodko.
Despite criticism by numerical analysts, engineers and scientists generally find the book conveniently broad in scope. Norman Gray concurs in the following quote:
Numerical Recipes [nr] does not claim to be a numerical analysis textbook, and it makes a point of noting that its authors are (astro-)physicists and engineers rather than analysts, and so share the motivations and impatience of the book's intended audience. The declared premise of the NR authors is that you will come to grief one way or the other if you use numerical routines you do not understand. They attempt to give you enough mathematical detail that you understand the routines they present, in enough depth that you can diagnose problems when they occur, and make more sophisticated choices about replacements when the NR routines run out of steam. Problems will occur because [...]
License
The code listings are copyrighted and commercially licensed by the Numerical Recipes authors. A license to use the code is given with the purchase of a book, but the terms of use are highly restrictive. For example, programmers need to make sure NR code cannot be extracted from their finished programs and used – a difficult requirement with dubious enforceability.
However, Numerical Recipes does include the following statement regarding copyrights on computer programs:Copyright does not protect ideas, but only the expression of those ideas in a particular form. In the case of a computer program, the ideas consist of the program's methodology and algorithm, including the necessary sequence of steps adopted by the programmer. The expression of those ideas is the program source code... If you analyze the ideas contained in a program, and then express those ideas in your own completely different implementation, then that new program implementation belongs to you.
One early motivation for the GNU Scientific Library was that a free library was needed as a substitute for Numerical Recipes.
Style
Another line of criticism centers on the coding style of the books, which strike some modern readers as "Fortran-ish", though written in contemporary, object-oriented C++. The authors have defended their very terse coding style as necessary to the format of the book because of space limitations and for readability.
Titles in the series (partial list)
The books differ by edition (1st, 2nd, and 3rd) and by the computer language in which the code is given.
Numerical Recipes. The Art of Scientific Computing, 1st Edition, 1986, . (Fortran and Pascal)
Numerical Recipes in C. The Art of Scientific Computing, 1st Edition, 1988, .
Numerical Recipes in Pascal. The Art of Scientific Computing, 1st Edition, 1989, .
Numerical Recipes in Fortran. The Art of Scientific Computing, 1st Edition, 1989, .
Numerical Recipes in BASIC. The Art of Scientific Computing, 1st Edition, 1991, . (supplemental edition)
Numerical Recipes in Fortran 77. The Art of Scientific Computing, 2nd Edition, 1992, .
Numerical Recipes in C. The Art of Scientific Computing, 2nd Edition, 1992, .
Numerical Recipes in Fortran 90. The Art of Parallel Scientific Computing, 2nd Edition, 1996, .
Numerical Recipes in C++. The Art of Scientific Computing, 2nd Edition, 2002, .
Numerical Recipes. The Art of Scientific Computing, 3rd Edition, 2007, . (C++ code)
The books are published by Cambridge University Press.
References
External links
Current electronic edition of Numerical Recipes (limited free page views).
Older versions of Numerical Recipes available electronically (links to C, Fortran 77, and Fortran 90 versions in various formats, plus other hosted books)
W. Van Snyder, Why not use Numerical Recipes? , full four-page mirror by Lek-Heng Lim (includes discussion of alternatives)
Computer science books
Engineering textbooks
Mathematics books
Numerical software | Numerical Recipes | [
"Mathematics"
] | 1,659 | [
"Numerical software",
"Mathematical software"
] |
632,562 | https://en.wikipedia.org/wiki/Thunk | In computer programming, a thunk is a subroutine used to inject a calculation into another subroutine. Thunks are primarily used to delay a calculation until its result is needed, or to insert operations at the beginning or end of the other subroutine. They have many other applications in compiler code generation and modular programming.
The term originated as a whimsical irregular form of the verb think. It refers to the original use of thunks in ALGOL 60 compilers, which required special analysis (thought) to determine what type of routine to generate.
Background
The early years of compiler research saw broad experimentation with different evaluation strategies. A key question was how to compile a subroutine call if the arguments can be arbitrary mathematical expressions rather than constants. One approach, known as "call by value", calculates all of the arguments before the call and then passes the resulting values to the subroutine. In the rival "call by name" approach, the subroutine receives the unevaluated argument expression and must evaluate it.
A simple implementation of "call by name" might substitute the code of an argument expression for each appearance of the corresponding parameter in the subroutine, but this can produce multiple versions of the subroutine and multiple copies of the expression code. As an improvement, the compiler can generate a helper subroutine, called a thunk, that calculates the value of the argument. The address and environment of this helper subroutine are then passed to the original subroutine in place of the original argument, where it can be called as many times as needed. Peter Ingerman first described thunks in reference to the ALGOL 60 programming language, which supports call-by-name evaluation.
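For illustration only, the same mechanism can be sketched in modern C++ by passing a parameterless callable in place of an evaluated argument; the names Thunk and useArgumentTwice below are invented for this sketch rather than taken from ALGOL 60 practice.
#include <functional>
#include <iostream>
// A thunk: a parameterless callable that computes the argument on demand.
using Thunk = std::function<int()>;
// The callee receives a thunk instead of a value and evaluates it each
// time the parameter is used, mimicking ALGOL 60 call-by-name.
int useArgumentTwice(const Thunk &arg) {
    return arg() + arg(); // the argument expression is re-evaluated on every use
}
int main() {
    int x = 20;
    // The caller wraps the argument expression "x + 1" in a thunk
    // rather than evaluating it at the call site.
    std::cout << useArgumentTwice([&x] { return x + 1; }) << '\n'; // prints 42
}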
Applications
Functional programming
Although the software industry largely standardized on call-by-value and call-by-reference evaluation, active study of call-by-name continued in the functional programming community. This research produced a series of lazy evaluation programming languages in which some variant of call-by-name is the standard evaluation strategy. Compilers for these languages, such as the Glasgow Haskell Compiler, have relied heavily on thunks, with the added feature that the thunks save their initial result so that they can avoid recalculating it; this is known as memoization or call-by-need.
Functional programming languages have also allowed programmers to explicitly generate thunks. This is done in source code by wrapping an argument expression in an anonymous function that has no parameters of its own. This prevents the expression from being evaluated until a receiving function calls the anonymous function, thereby achieving the same effect as call-by-name. The adoption of anonymous functions into other programming languages has made this capability widely available.
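A memoizing ("call-by-need") thunk can be sketched along similar lines; the class name LazyInt is invented for this illustration.
#include <functional>
#include <iostream>
#include <optional>
// A lazily evaluated integer: the wrapped expression runs at most once,
// and the result is cached for later uses (memoization / call-by-need).
class LazyInt {
public:
    explicit LazyInt(std::function<int()> f) : compute_(std::move(f)) {}
    int get() {
        if (!value_) {      // first use: force the thunk
            value_ = compute_();
        }
        return *value_;     // later uses: return the cached result
    }
private:
    std::function<int()> compute_;
    std::optional<int> value_;
};
int main() {
    LazyInt answer([] {
        std::cout << "computing...\n"; // printed only once
        return 6 * 7;
    });
    std::cout << answer.get() << ' ' << answer.get() << '\n'; // prints "42 42"
}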
Object-oriented programming
Thunks are useful in object-oriented programming platforms that allow a class to inherit multiple interfaces, leading to situations where the same method might be called via any of several interfaces. The following code illustrates such a situation in C++.
class A {
public:
virtual int Access() const { return value_; }
private:
int value_;
};
class B {
public:
virtual int Access() const { return value_; }
private:
int value_;
};
class C : public A, public B {
public:
int Access() const override { return better_value_; }
private:
int better_value_;
};
int use(B *b) { return b->Access(); }
int main() {
// ...
B some_b;
use(&some_b);
C some_c;
use(&some_c);
}
In this example, the code generated for each of the classes A, B and C will include a dispatch table that can be used to call Access() on an object of that type, via a reference that has the same type. Class C will have an additional dispatch table, used to call Access() on an object of type C via a reference of type B. The expression b->Access() will use B's own dispatch table or the additional C table, depending on the type of object to which b refers. If it refers to an object of type C, the compiler must ensure that C's implementation of Access() receives an instance address for the entire C object, rather than the inherited B part of that object.
As a direct approach to this pointer adjustment problem, the compiler can include an integer offset in each dispatch table entry. This offset is the difference between the reference's address and the address required by the method implementation. The code generated for each call through these dispatch tables must then retrieve the offset and use it to adjust the instance address before calling the method.
The solution just described has problems similar to the naïve implementation of call-by-name described earlier: the compiler generates several copies of code to calculate an argument (the instance address), while also increasing the dispatch table sizes to hold the offsets. As an alternative, the compiler can generate an adjustor thunk along with C's implementation of Access() that adjusts the instance address by the required amount and then calls the method. The thunk can appear in C's dispatch table for B, thereby eliminating the need for callers to adjust the address themselves.
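Reusing the classes A, B and C from the listing above, a hand-written analogue of an adjustor thunk might look like the following; real adjustor thunks are emitted by the compiler rather than written by the programmer, so the function AdjustAndAccess is purely illustrative.
// Hand-written analogue of an adjustor thunk (illustrative only).
// Given a B* known to refer to the B subobject of a C, the thunk first
// adjusts the pointer to address the enclosing C object, then forwards
// the call to C's implementation of Access().
int AdjustAndAccess(B *b) {
    C *c = static_cast<C *>(b); // pointer adjustment: subtracts the B-subobject offset
    return c->Access();
}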
Interoperability
Thunks have been widely used to provide interoperability between software modules whose routines cannot call each other directly. This may occur because the routines have different calling conventions, run in different CPU modes or address spaces, or at least one runs in a virtual machine. A compiler (or other tool) can solve this problem by generating a thunk that automates the additional steps needed to call the target routine, whether that is transforming arguments, copying them to another location, or switching the CPU mode. A successful thunk minimizes the extra work the caller must do compared to a normal call.
Much of the literature on interoperability thunks relates to various Wintel platforms, including MS-DOS, OS/2, Windows and .NET, and to the transition from 16-bit to 32-bit memory addressing. As customers have migrated from one platform to another, thunks have been essential to support legacy software written for the older platforms.
The transition from 32-bit to 64-bit code on x86 also uses a form of thunking (WoW64). However, because the x86-64 address space is larger than the one available to 32-bit code, the old "generic thunk" mechanism could not be used to call 64-bit code from 32-bit code. The only case of 32-bit code calling 64-bit code is in the WoW64's thunking of Windows APIs to 32-bit.
Overlays and dynamic linking
On systems that lack automatic virtual memory hardware, thunks can implement a limited form of virtual memory known as overlays. With overlays, a developer divides a program's code into segments that can be loaded and unloaded independently, and identifies the entry points into each segment. A segment that calls into another segment must do so indirectly via a branch table. When a segment is in memory, its branch table entries jump into the segment. When a segment is unloaded, its entries are replaced with "reload thunks" that can reload it on demand.
Similarly, systems that dynamically link modules of a program together at run-time can use thunks to connect the modules. Each module can call the others through a table of thunks that the linker fills in when it loads the module. This way the modules can interact without prior knowledge of where they are located in memory.
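The idea can be sketched with a small C++ program in which an imported routine is reached through a table slot that initially holds a resolver thunk; the names thunkTable, resolverThunk and realRoutine are invented for this illustration and do not correspond to any particular linker.
#include <iostream>
// Each imported routine is called through a slot in a table. The slot
// initially holds a resolver thunk; on first use the thunk looks up the
// real routine, patches the slot, and forwards the call. Later calls go
// straight to the real routine (lazy binding).
using Fn = int (*)(int);
int realRoutine(int x) { return x * 2; } // stands in for code in another module
Fn thunkTable[1]; // one imported routine in this sketch
int resolverThunk(int x) {
    std::cout << "resolving on first call\n";
    thunkTable[0] = realRoutine;  // patch the slot with the resolved routine
    return thunkTable[0](x);      // forward the original call
}
int main() {
    thunkTable[0] = resolverThunk;          // the loader would normally set this up
    std::cout << thunkTable[0](10) << '\n'; // resolves, then prints 20
    std::cout << thunkTable[0](11) << '\n'; // direct call now, prints 22
}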
See also
Thunk technologies
DOS Protected Mode Interface (DPMI)
DOS Protected Mode Services (DPMS)
J/Direct
Microsoft Layer for Unicode
Platform Invocation Services
Win32s
Windows on Windows
Compatibility Support Module
WoW64
libffi
Related concepts
Anonymous function
Futures and promises
Remote procedure call
Shim (computing)
Trampoline (computing)
Reducible expression
Notes
References
Computing terminology
Functional programming | Thunk | [
"Technology"
] | 1,649 | [
"Computing terminology"
] |
632,632 | https://en.wikipedia.org/wiki/Shimon%20Even | Shimon Even (June 15, 1935 – May 1, 2004) was an Israeli computer science researcher. His main topics of interest included algorithms, graph theory and cryptography. He was a member of the Computer Science Department at the Technion from 1974. Shimon Even was the PhD advisor of Oded Goldreich, a prominent cryptographer.
Books
Algorithmic Combinatorics, Macmillan, 1973.
Graph Algorithms, Computer Science Press, 1979. .
See also
Oblivious transfer
External links
Memorial page
Bibliography on DBLP
Prof. Even's "genealogy" (PDF)
1935 births
2004 deaths
Modern cryptographers
Graph theorists
Israeli computer scientists
Israeli cryptographers
Harvard University alumni
Even Shimon
Burials at Yarkon Cemetery | Shimon Even | [
"Mathematics"
] | 147 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
632,685 | https://en.wikipedia.org/wiki/Coimage | In algebra, the coimage of a homomorphism f : A → B is the quotient coim f = A/ker f of the domain by the kernel.
The coimage is canonically isomorphic to the image by the first isomorphism theorem, when that theorem applies.
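Written out in LaTeX notation (an added illustration; the symbols X, Y and the induced map \bar{f} follow standard usage rather than the article), the canonical factorization of a homomorphism f : X → Y reads:
% f factors through its coimage; \bar{f} is the isomorphism onto the image
% supplied by the first isomorphism theorem.
f \colon X \twoheadrightarrow \operatorname{coim} f = X/\ker f
  \;\xrightarrow{\;\bar{f}\;}\; \operatorname{im} f \hookrightarrow Y,
\qquad \bar{f}(x + \ker f) = f(x).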
More generally, in category theory, the coimage of a morphism is the dual notion of the image of a morphism. If f : X → Y, then a coimage of f (if it exists) is an epimorphism c : X → C such that
there is a map fc : C → Y with f = fc ∘ c,
for any epimorphism z : X → Z for which there is a map fz : Z → Y with f = fz ∘ z, there is a unique map h : Z → C such that both c = h ∘ z and fz = fc ∘ h.
See also
Quotient object
Cokernel
References
Abstract algebra
Isomorphism theorems
Category theory
"Mathematics"
] | 171 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory",
"Abstract algebra",
"Algebra"
] |
632,732 | https://en.wikipedia.org/wiki/HEAnet | HEAnet is the national education and research network of Ireland.
HEAnet's e-infrastructure services support approximately 210,000 students and staff (third-level) in Ireland, and approximately 800,000 students and staff (first and second-level) relying on the HEAnet network. In total, the network supports approximately 1 million users.
Established in 1983 by a number of Irish universities, and supported by the Higher Education Authority, HEAnet provides e-infrastructure services to schools, colleges and universities within the Irish education system. Its network connects Irish universities, Institutes of technology in Ireland, the Irish Centre for High-End Computing (National Supercomputing Centre) and other higher education institutions (HEIs). It also provides internet services to primary and post-primary schools in Ireland and research organisations. Their clients also include various state-sponsored bodies, including hosting the online live conferencing service of the Oireachtas, the parliament of Ireland.
HEAnet previously hosted a mirror service, which acted as a mirror for projects such as SourceForge, Debian, and Ubuntu. This service was discontinued in 2024.
In 2014, HEAnet hosted the TERENA Conference in Dublin. It was held between 19 and 22 May 2014 in Dublin.
In 2017, HEAnet announced additional investment in "100 Gbps [services] to boost bandwidth accessed by [...] 216 academic locations around Ireland".
References
External links
HEAnet network map
Education in the Republic of Ireland
Internet in Ireland
Internet mirror services
National research and education networks | HEAnet | [
"Technology"
] | 324 | [
"Computing stubs",
"Computer network stubs"
] |
632,762 | https://en.wikipedia.org/wiki/Haboush%27s%20theorem | In mathematics Haboush's theorem, often still referred to as the Mumford conjecture, states that for any semisimple algebraic group G over a field K, and for any linear representation ρ of G on a K-vector space V, given v ≠ 0 in V that is fixed by the action of G, there is a G-invariant polynomial F on V, without constant term, such that
F(v) ≠ 0.
The polynomial can be taken to be homogeneous, in other words an element of a symmetric power of the dual of V, and if the characteristic is p>0 the degree of the polynomial can be taken to be a power of p.
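In characteristic p > 0 the conclusion can be written compactly in LaTeX notation as follows (an added restatement using standard symbols, not notation from the article): for some integer r ≥ 0,
% A G-invariant homogeneous polynomial of degree p^r that does not vanish at v.
\exists\, F \in \bigl(\operatorname{Sym}^{p^{r}} V^{*}\bigr)^{G}
\quad\text{with}\quad F(v) \neq 0 .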
When K has characteristic 0 this was well known; in fact Weyl's theorem on the complete reducibility of the representations of G implies that F can even be taken to be linear. Mumford's conjecture about the extension to prime characteristic p was proved by W. J. Haboush, about a decade after the problem had been posed by David Mumford, in the introduction to the first edition of his book Geometric Invariant Theory.
Applications
Haboush's theorem can be used to generalize results of geometric invariant theory from characteristic 0, where they were already known, to characteristic p>0. In particular Nagata's earlier results together with Haboush's theorem show that if a reductive group (over an algebraically closed field) acts on a finitely generated algebra then the fixed subalgebra is also finitely generated.
Haboush's theorem implies that if G is a reductive algebraic group acting regularly on an affine algebraic variety, then disjoint closed invariant sets X and Y can be separated by an invariant function f (this means that f is 0 on X and 1 on Y).
C.S. Seshadri (1977) extended Haboush's theorem to reductive groups over schemes.
It follows from the work of Nagata, Haboush, and Popov that the following conditions are equivalent for an affine algebraic group G over a field K:
G is reductive (its unipotent radical is trivial).
For any non-zero invariant vector in a rational representation of G, there is an invariant homogeneous polynomial that does not vanish on it.
For any finitely generated K-algebra on which G acts rationally, the algebra of fixed elements is finitely generated.
Proof
The theorem is proved in several steps as follows:
We can assume that the group is defined over an algebraically closed field K of characteristic p>0.
Finite groups are easy to deal with as one can just take a product over all elements, so one can reduce to the case of connected reductive groups (as the connected component has finite index). By taking a central extension which is harmless one can also assume the group G is simply connected.
Let A(G) be the coordinate ring of G. This is a representation of G with G acting by left translations. Pick an element φ of the dual of V that has value 1 on the invariant vector v. Map V to A(G) by sending w∈V to the element a∈A(G) with a(g) = φ(g(w)). This sends v to 1∈A(G), so we can assume that V⊂A(G) and v=1.
The structure of the representation A(G) is given as follows. Pick a maximal torus T of G, and let it act on A(G) by right translations (so that it commutes with the action of G). Then A(G) splits as a sum over characters λ of T of the subrepresentations A(G)λ of elements transforming according to λ. So we can assume that V is contained in the T-invariant subspace A(G)λ of A(G).
The representation A(G)λ is an increasing union of subrepresentations of the form Eλ+nρ⊗Enρ, where ρ is the Weyl vector for a choice of simple roots of T, n is a positive integer, and Eμ is the space of sections of the line bundle over G/B corresponding to a character μ of T, where B is a Borel subgroup containing T.
If n is sufficiently large then Enρ has dimension (n+1)^N where N is the number of positive roots. This is because in characteristic 0 the corresponding module has this dimension by the Weyl character formula, and for n large enough that the line bundle over G/B is very ample, Enρ has the same dimension as in characteristic 0.
If q = p^r for a positive integer r, and n = q−1, then Enρ contains the Steinberg representation of G(Fq) of dimension q^N. (Here Fq ⊂ K is the finite field of order q.) The Steinberg representation is an irreducible representation of G(Fq) and therefore of G(K), and for r large enough it has the same dimension as Enρ, so there are infinitely many values of n such that Enρ is irreducible.
If Enρ is irreducible it is isomorphic to its dual, so Enρ⊗Enρ is isomorphic to End(Enρ). Therefore, the T-invariant subspace A(G)λ of A(G) is an increasing union of subrepresentations of the form End(E) for representations E (of the form E(q−1)ρ)). However, for representations of the form End(E) an invariant polynomial that separates 0 and 1 is given by the determinant. This completes the sketch of the proof of Haboush's theorem.
References
Mumford, D.; Fogarty, J.; Kirwan, F. Geometric invariant theory. Third edition. Ergebnisse der Mathematik und ihrer Grenzgebiete (2) (Results in Mathematics and Related Areas (2)), 34. Springer-Verlag, Berlin, 1994. xiv+292 pp.
Representation theory of algebraic groups
Invariant theory
Theorems in representation theory
Conjectures that have been proved | Haboush's theorem | [
"Physics",
"Mathematics"
] | 1,279 | [
"Symmetry",
"Group actions",
"Conjectures that have been proved",
"Mathematical problems",
"Mathematical theorems",
"Invariant theory"
] |
632,782 | https://en.wikipedia.org/wiki/Lament | A lament or lamentation is a passionate expression of grief, often in music, poetry, or song form. The grief is most often born of regret, or mourning. Laments can also be expressed in a verbal manner in which participants lament about something that they regret or someone that they have lost, and they are usually accompanied by wailing, moaning and/or crying. Laments constitute some of the oldest forms of writing, and examples exist across human cultures.
History
Many of the oldest and most lasting poems in human history have been laments. The Lament for Sumer and Ur dates back at least 4000 years to ancient Sumer, the world's first urban civilization. Laments are present in both the Iliad and the Odyssey, and laments continued to be sung in elegiacs accompanied by the aulos in classical and Hellenistic Greece. Elements of laments appear in Beowulf, in the Hindu Vedas, and in ancient Near Eastern religious texts. They are included in the Mesopotamian City Laments such as the Lament for Ur and the Jewish Tanakh, or Christian Old Testament.
In many oral traditions, both early and modern, the lament has been a genre usually performed by women: Batya Weinbaum made a case for the spontaneous lament of women chanters in the creation of the oral tradition that resulted in the Iliad. The material of lament, the "sound of trauma", is as much an element in the Book of Job as in the genre of pastoral elegy, such as Shelley's "Adonais" or Matthew Arnold's "Thyrsis".
The Book of Lamentations or Lamentations of Jeremiah figures in the Old Testament. The Lamentation of Christ (under many closely variant terms) is a common subject from the Life of Christ in art, showing Jesus' dead body being mourned after the Crucifixion. Jesus himself lamented over the prospective fall of Jerusalem as he and his disciples entered the city ahead of his passion.
A lament in the Book of Lamentations or in the Psalms, in particular in the Lament/Complaint Psalms of the Tanakh, may be looked at as "a cry of need in a context of crisis when Israel lacks the resources to fend for itself". Another, more basic, way of looking at such laments is simply as "appeals for divine help in distress". These laments, too, often have a set format: an address to God, a description of the suffering or anguish from which one seeks relief, a petition for help and deliverance, a curse towards one's enemies, an expression of belief in one's innocence or a confession of the lack thereof, a vow corresponding to an expected divine response, and lastly, a song of thanksgiving. Examples of this general format, in individual and communal laments respectively, can be seen in Psalm 3 and Psalm 44.
The Lament of Edward II, if it is actually written by Edward II of England, is the sole surviving composition of his.
A heroine's lament is a conventional fixture of baroque opera seria, accompanied usually by strings alone, in descending tetrachords. Because of their plangent cantabile melodic lines, evocatively free, non-strophic construction and adagio pace, operatic laments have remained vividly memorable soprano or mezzo-soprano arias even when separated from the emotional pathos of their operatic contexts. An early example is Ariadne's "Lasciatemi morire", which is the only survivor of Claudio Monteverdi's lost Arianna. Francesco Cavalli's operas extended the lamento formula, in numerous exemplars, of which Ciro's "Negatemi respiri" from Ciro is notable.
Other examples include Dido's Lament ("When I am laid in earth") (Henry Purcell, Dido and Aeneas), "Lascia ch'io pianga" (George Frideric Handel, Rinaldo), "Caro mio ben" (Tomaso or Giuseppe Giordani). The lament continued to represent a musico-dramatic high point. In the context of opera buffa, the Countess's lament, "Dove sono", comes as a surprise to the audience of Wolfgang Amadeus Mozart's The Marriage of Figaro, and in Gioachino Rossini's Barber of Seville, Rosina's plaintive words at her apparent abandonment are followed, not by the expected lament aria, but by a vivid orchestral interlude of storm music. The heroine's lament remained a fixture in romantic opera, and the Marschallin's monologue in act 1 of Der Rosenkavalier can be understood as a penetrating psychological lament.
In modernity, discourses about melancholia and trauma take the functional place ritual laments hold in premodern societies. This entails a shift from a focus on community and convention to individuality and authenticity.
Scottish laments
The purely instrumental lament is a common form in piobaireachd music for the Scottish bagpipes. "MacCrimmon's Lament" dates to the Jacobite uprising of 1745. The tune is held to have been written by Donald Ban MacCrimmon, piper to the MacLeods of Dunvegan, who supported the Hanoverians. It is said that Donald Ban, who was killed at Moy in 1746, had an intimation that he would not return.
A well-known Gaelic lullaby is "Griogal Cridhe" ("Beloved Gregor"). It was composed in 1570 after the execution of Gregor MacGregor by the Campbells. The grief-stricken widow, Marion Campbell, describes what happened as she sings to her child.
"" ("Lament for the Children") is a pìobaireachd composed by Padruig Mór MacCrimmon in the early 1650s. It is generally held to be based on the loss of seven of MacCrimmon's eight sons within a year to smallpox, possibly brought to Skye by a Spanish trading vessel. Poet and writer Angus Peter Campbell, quoting poet Sorley MacLean, has called it "one of the great artistic glories of all Europe". Author Bridget MacKenzie, in Piping Traditions of Argyll, suggests that it refers to the slaughter of the MacLeod's fighting Cromwell's forces at the Battle of Worcester. It may have been inspired by both.
Other Scottish laments from outside of the piobaireachd tradition include "Lowlands Away", "MacPherson's Rant", and "Hector the Hero".
Musical form
There is a short, free musical form appearing in the Baroque and then again in the Romantic periods, called lament. It is typically a set of harmonic variations in homophonic texture, wherein the bass (Lament bass) descends through a tetrachord, usually one suggesting a minor mode.
See also
Dirge
Death poem
Death wail
Elegy
Endecha – Galician lament, subgenre of the planto
Keening
Kinah (plural: kinnot) – Kinnot are traditional Hebrew poems recited on Tisha B'Av lamenting the destruction of the First and Second Temples and other historical catastrophes. (The term "kinah" also appears in the Bible, referring to lamentation).
Kommós
Lament bass
Lithuanian laments
Mawwal, Middle Eastern variant
Threnody
Notes
Further reading
H. Munro Chadwick, Nora Kershaw Chadwick, The Growth of Literature (Cambridge: Cambridge University Press, 1932–40), e.g. vol. 2 p. 229.
Richard Church, The Lamendation of Military Campaigns. PDQ: Steve Ruling, 2000.
Andrew Dalby, Rediscovering Homer (New York: Norton, 2006. ) pp. 141–143.
Gail Holst-Warhaft, Dangerous Voices: Women's Laments and Greek Literature. London: Routledge, 1992. .
Nancy C. Lee, Lyrics of Lament: From Tragedy to Transformation. Minneapolis: Fortress, 2010.
Marcello Sorce Keller, "Expressing, Communicating, Sharing and Representing Grief and Sorrow with Organised Sound (Musings in Eight Short Segments)", in Stephen Wild, Di Roy, Aaron Corn and Ruth Lee Martin (eds), One Common Thread – The Musical World of Lament – Thematic Issue of Humanities Research. Canberra, ANU University Press, vol. XIX, no. 3. 2013, 3–14
Claus Westermann, Praise and Lament in the Psalms. Westminster: John Knox Press, 1981. .
External links
Greek laments (Thrênoi, Moirológia)
Andrea Fishman, "Thrênoi to Moirológia: Female Voices of Solitude, Resistance, and Solidarity" Oral Tradition, 23/2 (2008): 267–295
Roderick Beaton, Folk Poetry of Modern Greece, Cambridge University Press, 2004
Greek lament song (Mοιρολόϊ – Moiroloi) from Mani, performed in a funeral
Greek lament song (Mοιρολόϊ – Moiroloi) from Epirus, instrumental
Social philosophy
Traditions
Genres of poetry
Death customs
Melancholia
Oral communication
Behavior
Grief
Funeral orations
Death music | Lament | [
"Biology"
] | 1,934 | [
"Behavior"
] |
632,786 | https://en.wikipedia.org/wiki/Insulin-like%20growth%20factor%201 | Insulin-like growth factor 1 (IGF-1), also called somatomedin C, is a hormone similar in molecular structure to insulin which plays an important role in childhood growth, and has anabolic effects in adults. In the 1950s IGF-1 was called "sulfation factor" because it stimulated sulfation of cartilage in vitro, and in the 1970s due to its effects it was termed "nonsuppressible insulin-like activity" (NSILA).
IGF-1 is a protein that in humans is encoded by the IGF1 gene. IGF-1 consists of 70 amino acids in a single chain with three intramolecular disulfide bridges. IGF-1 has a molecular weight of 7,649 daltons. In dogs, an ancient mutation in IGF1 is the primary cause of the toy phenotype.
IGF-1 is produced primarily by the liver. Production is stimulated by growth hormone (GH). Most of IGF-1 is bound to one of 6 binding proteins (IGF-BP). IGFBP-1 is regulated by insulin. IGF-1 is produced throughout life; the highest rates of IGF-1 production occur during the pubertal growth spurt. The lowest levels occur in infancy and old age.
Low IGF-1 levels are associated with cardiovascular disease, while high IGF-1 levels are associated with cancer. Mid-range IGF-1 levels are associated with the lowest mortality.
A synthetic analog of IGF-1, mecasermin, is used for the treatment of growth failure in children with severe IGF-1 deficiency. Cyclic glycine-proline (cGP) is a metabolite of the hormone insulin-like growth factor 1 (IGF-1). It has a cyclic structure and lipophilic nature, and is enzymatically stable, which makes it a more favourable candidate for manipulating the binding-release process between IGF-1 and its binding protein, thereby normalising IGF-1 function.
Synthesis and circulation
The polypeptide hormone IGF-1 is synthesized primarily in the liver upon stimulation by growth hormone (GH). It is a key mediator of anabolic activities in numerous tissues and cells, such as growth hormone-stimulated growth, metabolism and protein translation. Due to its participation in the GH-IGF-1 axis it contributes among other things to the maintenance of muscle strength, muscle mass, development of the skeleton and is a key factor in brain, eye and lung development during fetal development.
Studies have shown the importance of the GH-IGF-1 axis in directing development and growth: mice with an IGF-1 deficiency had reduced body and tissue mass, while mice with excessive expression of IGF-1 had increased mass.
The levels of IGF-1 in the body vary throughout life, depending on age, with peaks of the hormone generally observed during puberty and the postnatal period. After puberty, when entering the third decade of life, there is a rapid decrease in IGF-1 levels due to the actions of GH. Between the third and eighth decade of life, IGF-1 levels decrease gradually, but unrelated to functional decline. However, protein intake has been shown to increase IGF-1 levels.
Mechanism of action
IGF-1 is a primary mediator of the effects of growth hormone (GH). Growth hormone is made in the anterior pituitary gland, released into the bloodstream, and then stimulates the liver to produce IGF-1. IGF-1 then stimulates systemic body growth, and has growth-promoting effects on almost every cell in the body, especially skeletal muscle, cartilage, bone, liver, kidney, nerve, skin, hematopoietic, and lung cells. In addition to the insulin-like effects, IGF-1 can also regulate cellular DNA synthesis.
IGF-1 binds to at least two cell surface receptor tyrosine kinases: the IGF-1 receptor (IGF1R), and the insulin receptor. Its primary action is mediated by binding to its specific receptor, IGF1R, which is present on the surface of many cell types in many tissues. Binding to the IGF1R initiates intracellular signaling. IGF-1 is one of the most potent natural activators of the Akt signaling pathway, a stimulator of cell growth and proliferation, and a potent inhibitor of programmed cell death. The IGF-1 receptor and insulin receptor are two closely related members of a transmembrane tetrameric tyrosine kinase receptor family. They control vital brain functions, such as survival, growth, energy metabolism, longevity, neuroprotection and neuroregeneration.
Metabolic effects
As a major growth factor, IGF-1 is responsible for stimulating growth of all cell types, and causing significant metabolic effects. One important metabolic effect of IGF-1 is signaling cells that sufficient nutrients are available for them to undergo hypertrophy and cell division. Its effects also include inhibiting cell apoptosis and increasing the production of cellular proteins. IGF-1 receptors are ubiquitous, which allows for metabolic changes caused by IGF-1 to occur in all cell types. IGF-1's metabolic effects are far-reaching and can coordinate protein, carbohydrate, and fat metabolism in a variety of different cell types. The regulation of IGF-1's metabolic effects on target tissues is also coordinated with other hormones such as growth hormone and insulin.
The IGF system
IGF-1 is part of the insulin-like growth factor (IGF) system. This system consists of three ligands (insulin, IGF-1 and IGF-2), two tyrosine kinase receptors (insulin receptor and IGF-1R receptor) and six ligand binding proteins (IGFBP 1–6). Together they play an essential role in proliferation, survival, regulation of cell growth and affect almost every organ system in the body.
Similarly to IGF-1, IGF-2 is mainly produced in the liver and after it is released into circulation, it stimulates growth and cell proliferation. IGF-2 is thought to be a fetal growth factor, as it is essential for a normal embryonic development and is highly expressed in embryonic and neonatal tissues.
Variants
A splice variant of IGF-1 sharing an identical mature region, but with a different E domain is known as mechano-growth factor (MGF).
Related disorders
Laron syndrome
Acromegaly
Acromegaly is a syndrome caused by the anterior pituitary gland producing excess growth hormone (GH). A number of disorders may increase the pituitary's GH output, although most commonly it involves a tumor called pituitary adenoma, derived from a distinct type of cell (somatotrophs). It leads to anatomical changes and metabolic dysfunction caused by elevated GH and IGF-1 levels.
High level of IGF-1 in acromegaly is related to an increased risk of some cancers, particularly colon cancer and thyroid cancer.
Use as a diagnostic test
Growth hormone deficiency
IGF-1 levels can be analyzed and used by physicians as a screening test for growth hormone deficiency (GHD), acromegaly and gigantism. However, IGF-1 has been shown to be a bad diagnostic screening test for growth hormone deficiency.
The ratio of IGF-1 and insulin-like growth factor-binding protein 3 has been shown to be a useful diagnostic test for GHD.
Liver fibrosis
Low serum IGF-1 levels have been suggested as a biomarker for predicting fibrosis, but not steatosis, in people with metabolic dysfunction–associated steatotic liver disease.
Causes of elevated IGF-1 levels
Medical conditions:
acromegaly (especially when GH is also high)
delayed puberty
pregnancy
hyperthyroidism
some rare tumors, such as carcinoids, secreting IGF-1
Diet:
High-protein diet
consumption of dairy products (except for cheese)
consumption of fish
IGF-1 assay problems
Calorie restriction has been found to have no effect on IGF-1 levels.
Causes of reduced IGF-1 levels
Metabolic dysfunction–associated steatotic liver disease, especially at advanced stages of steatohepatitis and fibrosis
Health effects
Mortality
Both high and low levels of IGF‐1 increase mortality risk, with the mid‐range (120–160 ng/ml) being associated with the lowest mortality.
Cancer
Higher levels of IGF-1 are associated with an increased risk of breast cancer, colon cancer and lung cancer.
Dairy consumption
It has been suggested that consumption of IGF-1 in dairy products could increase cancer risk, particularly prostate cancer. However, significant levels of intact IGF-1 from oral consumption are not absorbed as they are digested by gastric enzymes. IGF-1 present in food is not expected to be active within the body in the way that IGF-1 is produced by the body itself.
The Food and Drug Administration has stated that IGF-I concentrations in milk are not significant when evaluated against concentrations of IGF-I endogenously produced in humans.
A 2018 review by the Committee on Carcinogenicity of Chemicals in Food, Consumer Products and the Environment (COC) concluded that there is "insufficient evidence to draw any firm conclusions as to whether exposure to dietary IGF-1 is associated with an increased incidence of cancer in consumers". Certain dairy processes such as fermentation are known to significantly decrease IGF-1 concentrations. The British Dietetic Association has described the idea that milk promotes hormone related cancerous tumor growth as a myth, stating "no link between dairy containing diets and risk of cancer or promoting cancer growth as a result of hormones".
Cardiovascular disease
Increased IGF-1 levels are associated with a 16% lower risk of cardiovascular disease and a 28% reduction of cardiovascular events.
Diabetes
Low IGF-1 levels are shown to increase the risk of developing type 2 diabetes and insulin resistance. On the other hand, a high IGF-1 bioavailability in people with diabetes may delay or prevent diabetes-associated complications, as it improves impaired small blood vessel function.
IGF-1 has been characterized as an insulin sensitizer.
Low serum IGF‐1 levels can be considered an indicator of liver fibrosis in type 2 diabetes mellitus patients.
See also
Somatopause
References
External links
Peptide hormones
Hormones of the somatotropic axis
Insulin-like growth factor receptor agonists
Insulin receptor agonists
Aging-related proteins
Neurotrophic factors
Developmental neuroscience
de:IGF-1 | Insulin-like growth factor 1 | [
"Chemistry",
"Biology"
] | 2,234 | [
"Signal transduction",
"Senescence",
"Neurotrophic factors",
"Neurochemistry",
"Aging-related proteins"
] |
632,827 | https://en.wikipedia.org/wiki/Turnbuckle | A turnbuckle, stretching screw or bottlescrew is a device for adjusting the tension or length of ropes, cables, tie rods, and other tensioning systems. It normally consists of two threaded eye bolts, one screwed into each end of a small metal frame, one with a conventional right-hand thread and the other with a left-hand thread. The tension can be adjusted by rotating the frame, which causes both eye bolts to be screwed in or out simultaneously, without twisting the eye bolts or attached cables.
Uses
Turnbuckles are most commonly used in applications which require a great deal of tension; they can range in mass from about for thin cable used in a garden fence, to tonnes for structural elements in buildings and suspension bridges.
Aircraft
Turnbuckles have been used in aircraft construction, especially during the early years of aviation. Historically, biplanes might use turnbuckles to adjust the tension on structural wires bracing their wings. Turnbuckles are also widely used on flexible cables in flight control systems. In both cases they are secured with lockwire or specifically designed wire clips to prevent them from turning and losing tension due to vibration.
Shipping
Turnbuckles are used for tensioning a ship's rigging and lashings. A variant of the turnbuckle called a bottle screw features an enclosed tubular body.
Entertainment industry
Turnbuckles are used in nearly all rigging performed in the entertainment industry, including theatre, film, and live concert performances. In entertainment rigging, turnbuckles are more commonly used to make small adjustments in line lengths, generally to make a flown (hoisted) unit sit parallel to the stage. They are also helpful for making very minor height or angle adjustments.
Pipe systems
Turnbuckles are used in piping systems as a way to provide minor adjustments for field inconsistencies. This also allows for a minimum amount of resistance when transferring the load to the support components.
Orthopaedics
A type of splint for the upper limb uses a turnbuckle mechanism to produce gradual stretching across a contracted joint. It is used to treat stiff elbow and Volkmann's ischemic contracture.
Gallery
See also
Buffers and chain coupler
Guy-wire
Mechanical joint
References
External links
Hardware (mechanical)
Sailing rigs and rigging
Ring (martial arts) | Turnbuckle | [
"Physics",
"Technology",
"Engineering"
] | 470 | [
"Physical systems",
"Machines",
"Hardware (mechanical)",
"Construction"
] |
632,850 | https://en.wikipedia.org/wiki/Waste%20Isolation%20Pilot%20Plant | The Waste Isolation Pilot Plant, or WIPP, in New Mexico, US, is the world's third deep geological repository (after Germany's Repository for radioactive waste Morsleben and the Schacht Asse II salt mine) licensed to store transuranic radioactive waste for 10,000 years. The storage rooms at the WIPP are 2,150 feet (660 m) underground in a salt formation of the Delaware Basin. The waste is from the research and production of United States nuclear weapons only. The plant started operation in 1999, and the project is estimated to cost $19 billion in total.
It is located approximately east of Carlsbad, in eastern Eddy County, in an area known as the southeastern New Mexico nuclear corridor, which also includes the National Enrichment Facility near Eunice, New Mexico, the Waste Control Specialists low-level waste disposal facility just over the state line near Andrews, Texas, and the International Isotopes, Incorporated facility to be built near Eunice, New Mexico.
Various mishaps at the plant in 2014 brought focus to the problem of what to do with the growing backlog of waste and whether or not WIPP would be a safe repository. The 2014 incidents involved a waste explosion and airborne release of radiological material that exposed 21 plant workers to small doses of radiation that were within safety limits.
History
Geology and site selection
In 1970, the United States Atomic Energy Commission, later merged into the Department of Energy (DOE), proposed a site in Lyons, Kansas for the isolation and storage of radioactive waste. Ultimately the Lyons site was deemed unusable due to local and regional opposition, and in particular the discovery of unmapped oil and gas wells located in the area. These wells were believed to potentially compromise the ability of the planned facility to contain nuclear waste. In 1973, as a result of these concerns, and because of positive interest from the southern New Mexico community, the DOE relocated the site of the proposed nuclear waste repository, now called the Waste Isolation Pilot Plant (WIPP), to the Delaware Basin salt beds located near Carlsbad, New Mexico.
The Delaware Basin is a sedimentary basin formed largely during the Permian Period approximately 250 million years ago. It is one of three sub-basins of the Permian Basin in West Texas and Southeastern New Mexico. It contains a thick column of sedimentary rock that includes some of the most oil- and gas-rich rocks in the United States. An ancient shallow sea repeatedly filled the basin and evaporated while the basin slowly subsided, leaving behind a nearly impermeable layer of evaporites, primarily salt, in the Salado and Castile Formations, geologically similar to other basins created by evaporitic inland seas. Over time, the salt beds were covered by an additional of soil and rock. As drilling in the Salado Formation salt beds began in 1975, scientists discovered that at the edge of the basin there had been geological disturbances that had moved interbed layers into a nearly vertical position. In response, the site was moved toward the more stable center of the basin where the Salado Formation salt beds are the thickest and are perfectly horizontal.
Some observers suggested, early in the investigations, that the geological complexity of the basin was problematic, causing the hollowed-out caverns to be unstable. However, what is considered by some to be instability is considered by others to be a positive aspect of salt as a host rock. As early as 1957, the National Academy of Sciences recommended salt for radioactive waste disposal because at depth it would plastically deform, a motion called "salt creep" in the salt-mining industry. This would gradually fill in and seal any openings created by the mining and close in around the waste.
Exact placement of the construction site in the Delaware Basin changed multiple times due to safety concerns. Brine deposits located below the salt deposits in the Delaware Basin posed a potential safety problem. The brine was first discovered when a 1975 drilling released a pressurized deposit of the liquid from below the repository level. Constructing the plant near one of these deposits could, under specific circumstances, compromise the facility’s safety. The brine could leak into the repository and carry dissolved radioactive material or entrained radioactive particulate matter to the surface. The contaminated brine would then need to be cleaned and properly disposed of. There is no drinking water near the site, so possible water pollution is not a concern. After deep drilling multiple times, a final site was selected. The site is located approximately east of Carlsbad.
As of March 2022, the WIPP has received 40% of the authorized amount of waste set by the Land Withdrawal Act. More rooms and panels are to be added to accommodate more waste.
Addressing public concerns via the EEG
In order to address growing public unrest concerning construction of the WIPP, the New Mexico Environmental Evaluation Group (EEG) was created in 1978. This group, charged with overseeing the WIPP, verified statements, facts, and studies conducted and released by the DOE regarding the facility. The stewardship this group provided effectively lowered public fear and let the facility progress with little public opposition in comparison to similar facilities around the nation such as Yucca Mountain in Nevada.
The EEG, in addition to acting as a check for the government agencies overseeing the project, acted as a valuable advisor. In a 1981 drilling, pressurized brine was again discovered. The site was set to be abandoned when the EEG stepped in and suggested a series of tests on the brine and the surrounding area. These tests were conducted and the results showed that the brine deposit was relatively small and was isolated from other deposits. Drilling in the area was deemed safe due to these results. This saved the project valuable money and time by preventing a drastic relocation.
Early construction and testing complications
In 1979, the U.S. Congress authorized construction of the facility. In addition to formal authorization, Congress redefined the level of waste to be stored in the WIPP from high temperature to transuranic, or low level, waste. Transuranic waste often consists of materials which have come in contact with radioactive substances such as plutonium and uranium. This often includes gloves, tools, rags, and assorted machinery often used in the production of nuclear fuel and weapons. Although much less potent than nuclear reactor byproducts, this waste still remains radioactive for approximately 24,000 years. This change in classification led to a decrease in safety parameters for the proposed facility, allowing construction to continue at a faster pace.
The first extensive testing of the facility was due to begin in 1988. The proposed testing procedures involved interring samples of low level waste in the newly constructed caverns. Various structural and environmental tests would then be performed on the facility to verify its integrity and to prove its ability to safely contain nuclear waste. Opposition from various external organizations delayed actual testing into the early 1990s. Attempts at testing were resumed in October 1991 with US Secretary of Energy James Watkins announcing that he would begin transportation of waste to the WIPP.
Despite apparent progress on the facility, construction still remained costly and complicated. Originally conceptualized in the 1970s as a warehouse for waste, the repository now had regulations similar to those of nuclear reactors. As of December 1991, the plant had been under construction for 20 years and was estimated to have cost over one billion dollars. At the time, WIPP officials reported that over 28 different organizations claimed authority over operations of the facility.
Congressional approval
In November 1991, a federal judge ruled that Congress must approve WIPP before any waste, even for testing purposes, was sent to the facility. This indefinitely delayed testing until Congress gave its approval. The 102nd United States Congress passed legislation allowing use of the WIPP. The U.S. House of Representatives approved the facility on October 6, 1992 and the U.S. Senate passed a bill allowing the opening of the facility on October 8 of the same year. The bill was met with much opposition in the Senate. Senator Richard H. Bryan fought the bill based on safety issues that concerned a similar facility located in Nevada, the state for which he was serving as senator. His efforts almost prevented the bill from passing. New Mexico senators Pete V. Domenici and Jeff Bingaman effectively reassured Senator Bryan that these issues would be addressed in the 103rd Congress. The final legislation provided safety standards requested by the House of Representatives and an expedited timeline requested by the Senate.
The final legislation mandated that the Environmental Protection Agency (EPA) issue revised safety standards for the facility. It also required the EPA to approve testing plans for the facility within ten months. The legislation stated that the security standards mandated in the bill were only applicable to the WIPP in New Mexico and not to other facilities in the United States. This clause caused Senator Bryan to oppose the bill, as he wanted safety standards mandated by the bill to apply to the facility in Nevada as well.
Testing and final certification
In 1994, Congress ordered Sandia National Laboratories to begin an extensive evaluation of the facility against the standards set forth by the EPA. Evaluation of the facility continued for four years, resulting in a cumulative total of 25 years of evaluation. In May 1998, the EPA concluded that there was "reasonable expectation" that the facility would contain the vast majority of the waste interred there.
The first nuclear waste arrived at the plant on March 26, 1999. This waste shipment was from Los Alamos National Laboratory, a major nuclear weapons research and development facility located north of Albuquerque, New Mexico. Another shipment followed on April 6 of the same year. These shipments marked the beginning of plant operations. As of December 2010, the plant had received and stored 9,207 shipments of waste. The majority of this waste was transported to the facility via railroad or truck. The final facility contains a total of 56 storage rooms located approximately 2,150 feet (660 m) underground. Each room is in length. The plant is estimated to continue accepting waste for 25 to 35 years and to cost a total of about 19 billion dollars.
Incidents at the WIPP
On February 5, 2014 at around 11:00 a.m., a salt haul truck caught fire, prompting an evacuation of the underground facility. Six workers were taken to a local hospital with smoke inhalation and were released by the next day. Lab tests after the fire confirmed that there was zero release of radiological material during, or as a result of, the fire. Underground air-monitoring equipment was out of commission after the truck fire.
In 2020, a subcontractor at the WIPP opened a $32 million lawsuit claiming that "the company that runs the facility breached its contract to rebuild the nuclear waste repository's air system." Due to the 2014 incident, a Texas-based company named Critical Application Alliance LLC was hired to build a new ventilation system. The project was going to fix the flawed design in roof panels, the foundation of the WIPP, and a highly defective control system design.
2014 container explosion
On February 15, 2014, authorities ordered workers to shelter in place at the facility after air monitors had detected unusually high radiation levels at 11:30 p.m. the previous day. None of the facility's 139 workers were underground at the time of the incident. Later, trace amounts of airborne radiation consisting of americium and plutonium particles were discovered above ground, from the facility. In total, 22 workers were exposed to radioactive contaminants equaling that of a standard chest x-ray. The Carlsbad Current-Argus wrote: "the radiation leak occurred on the evening of February 14, according to new information made public at a news conference [on February 20]. Joe Franco, manager of the DOE Carlsbad Field Office, said an underground air monitor detected high levels of alpha and beta radiation activity consistent [with] the waste buried at WIPP." Regarding the elevated levels of plutonium and americium detected outside the nuclear waste repository, Ryan Flynn, New Mexico Environment Secretary stated during a news conference: "Events like this simply should never occur. From the state's perspective, one event is far too many."
On February 26, 2014, the DOE announced that 13 WIPP above-ground workers had tested positive for exposure to radioactive material. Other employees were in the process of being tested. On Thursday, February 27, DOE announced that it sent out "a letter to tell people in two counties what they do know so far. Officials said it is too early to know what that means for the workers' health." Additional testing would be done on employees who were working at the site the day after the leak. Above ground, 182 employees continued to work. A February 27 update included comments on plans to discover what occurred below ground first by using unmanned probes and then people.
The Southwest Research and Information Center released a report on April 15, 2014, stating that one or more of 258 contact-handled radioactive waste containers located in room 7, panel 7 of the underground repository released radioactive and toxic chemicals. The location of the leak was estimated to be approximately from the air monitor that detected the contamination and triggered the filtration system. The contaminants were spread through more than of tunnels and out through the exhaust shaft into the surrounding above-ground environment. Air-monitoring station #107, located away, detected the radiotoxins. The filter from station #107 was analyzed by the Carlsbad Environmental Monitoring and Research Center (CEMRC) and found to contain 0.64 becquerels (Bq) per cubic meter of air of americium-241 and 0.014 Bq of plutonium-239 and plutonium-240 per cubic meter of air (equivalent to 0.64 and 0.014 radioactive decay events per second per cubic meter of air). The DOE agreed that there was a release of radioactivity from the repository and confirmed that "The event took place starting at 14 February 2014 at 23:14 and continued to 15 February 2014 14:45." The DOE also confirmed that "A large shift in wind direction can be seen to occur around 8:30 AM on 2/15/14." The EPA reported on the radiological release on their WIPP News page.
After analysis by CEMRC, the station A filter was found on February 15, 2014 to be contaminated with 4,335.71 Bq of Am-241 per every , and 671.61 Bq of plutonium-239 and plutonium-240 per every . Bob Alvarez, former DOE official, stated that the long-term ramifications of the WIPP issue are grounded in the fact that the DOE has of transuranic waste that has not been disposed of because there are no long-term disposition plans for transuranic waste, including 5 tons of plutonium that are in-situ at the Savannah River Site, as well as water from the Hanford Nuclear Reservation in Washington State. In an article in the Bulletin of the Atomic Scientists, Alvarez wrote that "Wastes containing plutonium blew through the WIPP ventilation system, traveling 2,150 feet to the surface, contaminating at least 17 workers, and spreading small amounts of radioactive material into the environment." The URS Corporation, which oversees WIPP, removed and demoted the contracted manager of the repository. Alvarez questions the notion of "contact handling" of radioactive waste because it deploys conventional processing practices that do not take into consideration the tens of thousands of containers buried before 1970 at several DOE sites. Alvarez states that the quantity of this pre-1970 plutonium waste is 1,300 times more than the amount permitted to "leak" into the environment at WIPP; however, much of this waste is simply buried a few feet underground at DOE sites.
The source of contamination was later found to be a barrel that exploded on February 14 because contractors at Los Alamos National Laboratory packed it with organic cat litter instead of clay cat litter. Other barrels with the same problem were then sealed in larger containers. Anthropologist Vincent Ialenti has examined the political, social, and financial triggers to this organic cat litter error in detail, linking it to the accelerated pace of the Department of Energy's and State of New Mexico's 3706 nuclear waste cleanup campaign, which ran from 2011 to 2014. Ialenti's study was published in The Bulletin of the Atomic Scientists in July 2018.
The 2014 incidents raised the question of whether or not WIPP would be a safe replacement for the Yucca Mountain nuclear waste repository in Nevada, as a destination for all waste generated at U.S. commercial nuclear power plants. The cost of the 2014 accident was initially expected to exceed $2 billion and disrupted other programs in various nuclear-industry sites. On January 9, 2017, the plant was formally reopened after three years of cleanup costing $500 million, which is significantly less than forecasted. On April 10, the plant received its first shipment of waste since reopening.
Climate
The highest temperature ever recorded in New Mexico occurred at the Waste Isolation Pilot Plant during the summer of 1994.
Future
Following the completion of waste emplacement in the facility, estimated to occur sometime between 2025 and 2035, the storage caverns will be collapsed and sealed with 13 layers of concrete and soil. Salt will then seep into and fill the various fissures and cracks surrounding the casks of waste. After approximately 75 years, the waste will be completely isolated from the environment.
The Yucca Mountain Nuclear Waste Repository is an unfinished, currently defunct deep geological repository in Nye County, Nevada. In 1987, Congress selected Yucca Mountain to be researched as the potential first permanent repository of nuclear waste, and directed the Department of Energy (DOE) to disregard other proposed sites and study Yucca Mountain exclusively. However, federal funding for the site was terminated in 2011 by amendment to the Department of Defense and Full-Year Continuing Appropriations Act, passed on April 14, 2011.
Criteria
Waste that is to be disposed of at WIPP must meet certain "waste acceptance criteria". It accepts transuranic waste generated from DOE activities. The waste must have radioactivity exceeding per gram from TRUs that produce alpha radiation with a half life greater than 20 years. This criterion includes plutonium, uranium, americium, and neptunium among others. The WIPP must not act as a disposal site for any high-level radioactive waste or any nuclear fuel that has already been used. Mixed waste contains both radioactive and hazardous constituents, and WIPP first received mixed waste on September 9, 2000. Mixed waste is joint-regulated by the EPA and the New Mexico Environment Department.
The containers may contain only a limited amount of liquid. The energy released from radioactive materials will dissociate water into hydrogen and oxygen (radiolysis), which could create a potentially explosive environment inside the container. The containers must therefore be vented to prevent this from happening. All containers must pass a documented visual inspection to ensure that they are in good condition. "Good condition" is described as "not having significant rusting, is of sound and structural integrity, and does not show signs of leakage."
Principle
Waste is placed in rooms underground that have been excavated within a thick salt formation (Salado and Castile Formations) where salt tectonics have been stable for more than 250 million years. Because of plasticity effects, salt and water will flow to any cracks that develop, a major reason why the area was chosen as a host medium for the WIPP project. Because drilling or excavation in the area will be hazardous long after the area is actively used, there are plans to construct markers to deter inadvertent human intrusion for the next ten thousand years.
The Salado Formation is a massive bedded salt deposit (>99% NaCl) that has a simple hydrogeology. Because massive NaCl is somewhat plastic, holes close under pressure and the rock becomes effectively non-porous as pores and fractures close. This has a significant effect on the overall hydraulic conductivities (water permeabilities) and molecular diffusion coefficients, which are on the order of ≤10⁻¹⁴ m/s and ≤10⁻¹⁵ m²/s respectively.
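To put these transport parameters in perspective, a rough characteristic diffusion length can be estimated as L ≈ √(2Dt). The sketch below is only an order-of-magnitude illustration using the upper-bound diffusion coefficient quoted above and the 10,000-year horizon discussed elsewhere in this article; it is an assumption-laden back-of-envelope estimate, not a transport model from the WIPP performance assessment.

```python
import math

SECONDS_PER_YEAR = 3.156e7  # approximate number of seconds in a year

def diffusion_length_m(diffusion_coeff_m2_s: float, years: float) -> float:
    """Characteristic one-dimensional diffusion length L = sqrt(2 * D * t)."""
    t_seconds = years * SECONDS_PER_YEAR
    return math.sqrt(2.0 * diffusion_coeff_m2_s * t_seconds)

if __name__ == "__main__":
    # Upper-bound molecular diffusion coefficient quoted for the Salado salt (assumed here).
    D = 1e-15  # m^2/s
    print(f"~{diffusion_length_m(D, 10_000):.3f} m over 10,000 years")
    # prints roughly 0.025 m, i.e. a few centimetres of diffusive spread
```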
Awareness triggers
Since 1983, the DOE has been working with linguists, archaeologists, anthropologists, materials scientists, science fiction writers, and futurists to come up with a warning system. For the case of the WIPP, the markers, called "passive institutional controls", will include an outer perimeter of thirty-two granite pillars built in a four-mile (6 km) square. These pillars will surround an earthen wall, tall and wide. Enclosed within this wall will be another 16 granite pillars. At the center, directly above the waste site, will sit a roofless, granite room providing more information. The team intends to etch warnings and informational messages into the granite slabs and pillars.
This information will be recorded in the six official languages of the United Nations (English, Spanish, Russian, French, Chinese, Arabic) as well as the Native American Navajo language native to the region, with additional space for translation into future languages. Pictograms are also being considered, such as stick figure images and the iconic The Scream from Edvard Munch's painting. Complete details about the plant will not be stored on site; instead, they would be distributed to archives and libraries around the world. The team plans to submit their final plan to the U.S. Government by around 2028 and they will finalize the warning messages by 2033.
Underground laboratory
A portion of the site is used to house underground physics experiments which require shielding from cosmic rays. Although only moderately deep as such laboratories go (1585 meter water equivalent shielding), the site has several advantages. The salt is easy to excavate, dry (no water to pump out), and salt is much lower in naturally occurring radionuclides than rock.
The WIPP plant suffered an accident in February 2014 that forced all scientific activities to cease; for most experiments, it took one to two years to recover, and not all experiments resumed their activities at WIPP. In particular, it is unknown whether the Dark Matter Time Projection Chamber collaboration resumed its operations at WIPP after the February 2014 events.
Currently (2018) the WIPP houses the Enriched Xenon Observatory (EXO) searching for neutrinoless double beta decay. The dark matter experiment collaboration that operated in WIPP before 2014, Dark Matter Time Projection Chamber (DMTPC), is continuing its work and aims to deploy its next detector at SNOLAB. After the 2014 events at the WIPP, the DMTPC experiments were put on hold, but are expected to resume once construction is finished and waste emplacement in the facility is complete. The detector that the DMTPC collaboration operated at WIPP was the 10-L DMTPC prototype (with an active volume of 10 litres, hence the name 10-L or 10L), which started operations at WIPP in October 2010.
Also the EXO collaboration is continuing their activities. The planned end of the EXO operations in WIPP was December 2018, and the collaboration was planning to have the next-stage detector built in SNOLAB. This means that the two biggest experimental infrastructures (EXO and DMTPC) of WIPP intend to relocate to SNOLAB and cease their operations in WIPP before the end of 2019. This would leave the WIPP underground laboratory without any major scientific experiment.
Previous experiments at WIPP include the neutrinoless double beta decay searching MAJORANA Project detectors called Segmented Enriched Germanium Assembly (SEGA) and Multiple Element Germanium Array (MEGA); these were prototype detectors used to develop the measurement apparatus of the collaboration that was deployed in 2004 in WIPP. Since then (2014 onwards), the MAJORANA collaboration has constructed a detector, the MAJORANA Demonstrator, at Sanford Underground Research Facility (SURF) at Lead, South Dakota. The MAJORANA collaboration remained active (as of 2019) and aimed to construct a large neutrinoless double beta decay experiment LEGEND after the MAJORANA Demonstrator phase.
Some smaller neutrino and dark matter experiments that have been mostly technology development oriented have also taken place at WIPP. There have also been a number of biology experiments in WIPP; for example, these experiments have studied the biological conditions of the deep underground salt deposit. In one experiment, researchers were able to cultivate bacteria from 250 million year old spores found in WIPP. The Low Background Radiation Experiment studies the effects of a reduced-radiation environment on biological systems. It was stopped along with all other experiments in February 2014, but resumed at WIPP after summer 2016 and has been ongoing since.
The 2000 testing of actinide transport within the Culebra Dolomite from the surrounding area of Carlsbad, New Mexico, was one of many experiments at this location to address concerns for lab safety. Other geology/geophysics experiments have taken place at WIPP, as have some special experiments relating to the operations of the Plant as a repository of radioactive waste.
See also
References
Further reading
Weitzberg, Abraham, 1982, "Building on Existing Institutions to Perpetuate Knowledge of Waste Repositories", ONWI-379, available through the National Technical Information Service.
Kaplan, Maureen F., 1982, "Archeological Data as a Basis for Repository Marker Design", ONWI-354, available through the National Technical Information Service.
Berry, Warren E., 1983, "Durability of Marker Materials for Nuclear Waste Isolation Sites", ONWI-474, available through the National Technical Information Service.
Human Interference Task Force, 1984, "Reducing the Likelihood of Future Human Activities that could Affect Geologic High-level Waste Repositories", BMI/ONWI-537, available through the National Technical Information Service.
Sebeok, Thomas A., 1984, "Communication Measures to Bridge Ten Millennia", BMI/ONWI-532, available through the National Technical Information Service.
INTERA Technologies, 1985, "Preliminary Analyses of Scenarios for Potential Human Interference for Repositories in Three Salt Formations", BMI/ONWI-553, available through the National Technical Information Service.
van Wyck, Peter C. Signs of Danger: Waste, Trauma, and Nuclear Threat. Minneapolis, Minnesota: University of Minnesota Press, 2005.
External links
Annotated bibliography for WIPP from the Alsos Digital Library for Nuclear Issues
Buildings and structures in Eddy County, New Mexico
Radioactive waste repositories in the United States
Waste treatment technology
Underground laboratories
United States Department of Energy facilities
Laboratories in the United States
1999 establishments in New Mexico
Industrial accidents and incidents in the United States | Waste Isolation Pilot Plant | [
"Chemistry",
"Engineering"
] | 5,520 | [
"Water treatment",
"Waste treatment technology",
"Environmental engineering"
] |
632,855 | https://en.wikipedia.org/wiki/Reference%20dose | A reference dose is the United States Environmental Protection Agency's maximum acceptable oral dose of a toxic substance, "below which no adverse noncancer health effects should result from a lifetime of exposure". Reference doses have been most commonly determined for pesticides. The EPA defines an oral reference dose (abbreviated RfD) as:
[A]n estimate, with uncertainty spanning perhaps an order of magnitude, of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime.
Definition
The United States Environmental Protection Agency defines a reference dose (abbreviated RfD) as the maximum acceptable oral dose of a toxic substance, below which no adverse non-cancerous health effects should result from a lifetime of exposure. It is an estimate, with uncertainty spanning perhaps an order of magnitude, of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime.
Regulatory status
RfDs are not enforceable standards, unlike National Ambient Air Quality Standards. RfDs are risk assessment benchmarks, and the EPA tries to set other regulations so that people are not exposed to chemicals in amounts that exceed RfDs. According to the EPA in 2008, "[a]n aggregate daily exposure to a [chemical] at or below the RfD (expressed as 100 percent or less of the RfD) is generally considered acceptable by EPA." States can set their own RfDs.
For example, the EPA set an acute RfD for children of 0.0015 mg/kg/day for the organochlorine insecticide endosulfan, based on neurological effects observed in test animals. The EPA then looked at dietary exposure to endosulfan and found that the daily consumption of endosulfan by the most exposed 0.1% of children aged 1–6 exceeded this RfD. To remedy this, the EPA revoked the use of endosulfan on the crops that contributed the most to exposure of children: certain beans, peas, spinach, and grapes.
Types
Reference doses are chemical-specific, i.e. the EPA determines a unique reference dose for every substance it evaluates. Often separate acute (0–1 month) and chronic (more than one month) RfDs are determined for the same substance. Reference doses are specific to dietary exposure; when assessing inhalation exposure, the EPA uses "reference concentrations" (RfCs) instead of RfDs. RfDs apply only to non-cancer effects; when evaluating carcinogenic effects, the EPA uses the Q1* method.
Determination
RfDs are usually derived from animal studies. Animals (typically rats) are dosed with varying amounts of the substance in question, and the largest dose at which no effects are observed is identified. This dose level is called the no-observed-effect level, or NOEL. To account for the fact that humans may be more or less susceptible than the test animal, a 10-fold "uncertainty factor" is usually applied to the NOEL. This uncertainty factor is called the "interspecies uncertainty factor" or UFinter. An additional 10-fold uncertainty factor, the "intraspecies uncertainty factor" or UFintra, is usually applied to account for the fact that some humans may be substantially more sensitive to the effects of substances than others. Additional uncertainty factors may also be applied. In general, RfD = NOEL / (UFinter × UFintra × any additional uncertainty factors).
Frequently, a "lowest-observed-adverse-effect level" or LOAEL is used in place of a NOEL. If adverse effects are observed at all dose levels tested, then the smallest dose tested, the LOAEL, is used to calculate the RfD. An additional uncertainty factor is usually applied in these cases, since the NOAEL, by definition, would be lower than the LOAEL had it been observed. If studies using human subjects are used to determine an RfD, then the interspecies uncertainty factor can be reduced to 1, but generally the 10-fold intraspecies uncertainty factor is retained. Such studies are rare.
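As a sketch of the arithmetic described above (not an official EPA tool), the following shows a point of departure divided by the product of the applicable uncertainty factors; the function name and the generic 10-fold defaults passed in are illustrative assumptions based on this section.

```python
from math import prod

def reference_dose(point_of_departure_mg_kg_day: float, *uncertainty_factors: float) -> float:
    """RfD = point of departure (NOEL/NOAEL or LOAEL) / product of uncertainty factors.

    Typical factors are 10 for interspecies extrapolation (UF_inter), 10 for
    human variability (UF_intra), plus additional factors, e.g. when a LOAEL
    is used instead of a NOAEL.
    """
    return point_of_departure_mg_kg_day / prod(uncertainty_factors)

if __name__ == "__main__":
    # Generic example: a NOAEL of 1.0 mg/kg/day with the two default 10-fold factors.
    print(reference_dose(1.0, 10, 10))  # 0.01 mg/kg/day
```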
Example
As an example, consider the following determination of the RfD for the insecticide chlorpyrifos, adapted from the EPA's Interim Reregistration Eligibility Decision for chlorpyrifos.
The EPA determined the acute RfD to be 0.005 mg/kg/day based on a study in which male rats were administered a one-time dose of chlorpyrifos and blood cholinesterase activity was monitored. Cholinesterase inhibition was observed at all dose levels tested, the lowest of which was 1.5 mg/kg. This level was thus identified as the lowest observed adverse effect level (LOAEL). A NOAEL of 0.5 mg/kg was estimated by dividing the LOAEL by a three-fold uncertainty factor. The NOAEL was then divided by the standard 10-fold inter- and 10-fold intraspecies uncertainty factors to arrive at the RfD of 0.005 mg/kg/day. Other studies showed that fetuses and children are even more sensitive to chlorpyrifos than adults, so the EPA applies an additional ten-fold uncertainty factor to protect that subpopulation. An RfD that has been divided by an additional uncertainty factor that only applies to certain populations is called a "population adjusted dose" or PAD. For chlorpyrifos, the acute PAD (or "aPAD") is thus 5×10−4 mg/kg/day, and it applies to infants, children, and women who are breast feeding.
The EPA also determined a chronic RfD for chlorpyrifos exposure based on studies in which animals were administered low doses of the pesticide for two years. Cholinesterase inhibition was observed at all dose levels tested, and a NOAEL of 0.03 mg/kg/day was estimated by dividing a LOAEL of 0.3 mg/kg/day by an uncertainty factor of 10. As with the acute RfD, the chronic RfD of 3×10−4 mg/kg/day was determined by dividing this NOAEL by the inter- and intraspecies uncertainty factors. The chronic PAD ("cPAD") of 3×10−5 mg/kg/day was determined by applying an additional 10-fold uncertainty factor to account for the increased susceptibility of infants and children. Like the aPAD, this cPAD applies to infants, children, and breast feeding women.
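The chlorpyrifos figures in the two paragraphs above can be reproduced with the same division-by-uncertainty-factors arithmetic. The sketch below is only an illustration of that calculation (the helper function is hypothetical, not EPA software); the numeric inputs are taken from this section.

```python
def divide_by_factors(value: float, *factors: float) -> float:
    """Divide a point of departure by each uncertainty factor in turn."""
    result = value
    for f in factors:
        result /= f
    return result

# Acute: LOAEL 1.5 mg/kg -> NOAEL via 3x factor -> RfD via 10x inter- and 10x intraspecies.
acute_noael = divide_by_factors(1.5, 3)              # 0.5 mg/kg
acute_rfd   = divide_by_factors(acute_noael, 10, 10)  # 0.005 mg/kg/day
acute_pad   = divide_by_factors(acute_rfd, 10)        # 5e-4 mg/kg/day (extra 10x for children)

# Chronic: LOAEL 0.3 mg/kg/day -> NOAEL via 10x -> RfD via 10x inter- and 10x intraspecies.
chronic_noael = divide_by_factors(0.3, 10)                # 0.03 mg/kg/day
chronic_rfd   = divide_by_factors(chronic_noael, 10, 10)  # 3e-4 mg/kg/day
chronic_pad   = divide_by_factors(chronic_rfd, 10)        # 3e-5 mg/kg/day

print(acute_rfd, acute_pad, chronic_rfd, chronic_pad)
```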
Consensus
Because the RfD assumes "a dose below which no adverse noncarcinogenic health effects should result from a lifetime of exposure", the critical step in all chemical risk and regulatory threshold calculations is a properly derived no-observed-adverse-effect level (NOAEL), which is then divided by an uncertainty factor that considers inadequacies of the study, animal-to-human extrapolation, sensitive sub-populations, and inadequacies of the database. The RfD that is derived is not always agreed upon. Some may believe it to be overly protective while others may contend that it is not adequately protective of human health.
For example, in 2002 the EPA completed its draft toxicological review of perchlorate and proposed an RfD of 0.00003 milligrams per kilogram per day (mg/kg/day) based primarily on studies that identified neurodevelopmental deficits in rat pups. These deficits were linked to maternal exposure to perchlorate. Subsequently, the National Academy of Sciences (NAS) reviewed the health implications of perchlorate, and in 2005 proposed a much higher alternative reference dose of 0.0007 mg/kg/day based primarily on a 2002 study by Greer et al. During that study, 37 adult human subjects were split into four exposure groups exposed to 0.007 (7 subjects), 0.02 (10 subjects), 0.1 (10 subjects), and 0.5 (10 subjects) mg/kg/day. Significant decreases in iodide uptake were found in the three highest exposure groups. Iodide uptake was not significantly reduced in the lowest exposed group, but four of the seven subjects in this group experienced inhibited iodide uptake. In 2005, the RfD proposed by NAS was accepted by EPA and added to its integrated risk information system (IRIS).
In a 2005 article in the journal Environmental Health Perspectives (EHP), Gary Ginsberg and Deborah Rice argued that the 2005 NAS RfD was not protective of human health based on the following:
The NAS report described the lowest exposure level from Greer et al. as a NOEL. However, there was actually an effect at that level, although it was not statistically significant, largely due to the small size of the study population (four of seven subjects showed a slight decrease in iodide uptake).
Reduced iodide uptake was not considered to be an adverse effect even though it is a precursor to an adverse effect, hypothyroidism. Therefore, additional safety factors, they argued, are necessary when extrapolating from the point of departure to the RfD.
Consideration of data uncertainty was insufficient because the Greer et al. study reflected only a 14-day (acute) exposure in healthy adults, and no additional safety factors were considered to protect sensitive subpopulations such as breastfeeding newborns.
Although there has generally been consensus with the Greer et al. study, there is no consensus with regard to developing a perchlorate RfD. One of the key differences results from how the point of departure is viewed (i.e., NOEL or LOAEL), or whether a benchmark dose should be used to derive the RfD. Defining the point of departure as a NOEL or LOAEL has implications when it comes to applying appropriate safety factors to the point of departure to derive the RfD.
In 2010, the Massachusetts Department of Environmental Protection set a 10-fold lower RfD (0.07 μg/kg/day) using a much higher uncertainty factor of 100. It also calculated an infant drinking-water value, which neither the US EPA nor CalEPA has done.
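The competing perchlorate values discussed in this section differ mainly in the uncertainty factor applied to the same point of departure from Greer et al. (0.007 mg/kg/day). A minimal sketch of that comparison, with a unit conversion to μg/kg/day, is shown below; it is an illustration of the arithmetic, not a statement of either agency's full derivation.

```python
POINT_OF_DEPARTURE_MG = 0.007  # mg/kg/day, lowest exposure group in Greer et al. (2002)

def rfd_ug_per_kg_day(uncertainty_factor: float) -> float:
    """RfD in micrograms/kg/day derived from the Greer et al. point of departure."""
    rfd_mg = POINT_OF_DEPARTURE_MG / uncertainty_factor
    return rfd_mg * 1000.0  # convert mg to micrograms

print(rfd_ug_per_kg_day(10))    # 0.7 ug/kg/day  -> the 0.0007 mg/kg/day NAS/EPA 2005 value
print(rfd_ug_per_kg_day(100))   # 0.07 ug/kg/day -> the Massachusetts DEP 2010 value
```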
See also
Dietary Reference Intake
Acceptable daily intake
References
Concentration indicators
Toxicology | Reference dose | [
"Environmental_science"
] | 2,118 | [
"Toxicology"
] |
632,899 | https://en.wikipedia.org/wiki/Low-level%20waste | Low-level waste (LLW) or low-level radioactive waste (LLRW) is a category of nuclear waste. The definition of low-level waste is set by the nuclear regulators of individual countries, though the International Atomic Energy Agency (IAEA) provides recommendations.
LLW includes items that have become contaminated with radioactive material or have become radioactive through exposure to neutron radiation. This waste typically consists of contaminated protective shoe covers and clothing, wiping rags, mops, filters, reactor water treatment residues, equipment and tools, luminous dials, medical tubes, swabs, injection needles, syringes, and laboratory animal carcasses and tissues.
LLW in the United Kingdom
In the UK, LLW is defined as waste with a specific activity below 12 gigabecquerels per tonne (GBq/t) of beta/gamma-emitting nuclides and below 4 GBq/t of alpha-emitting nuclides. Waste with specific activities above these thresholds is categorised as either intermediate-level waste (ILW) or high heat generating waste, depending upon the heat output of the waste.
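A minimal sketch of the activity-threshold test described above follows. It assumes the two thresholds are applied independently to the beta/gamma and alpha totals, as the wording suggests; the real categorisation also depends on heat output and other criteria that this toy check does not model.

```python
def classify_uk_waste(beta_gamma_gbq_per_tonne: float, alpha_gbq_per_tonne: float) -> str:
    """Rough classifier for the UK LLW boundary described above."""
    # LLW requires specific activity below 12 GBq/t beta/gamma AND below 4 GBq/t alpha.
    if beta_gamma_gbq_per_tonne < 12 and alpha_gbq_per_tonne < 4:
        return "LLW"
    # Above either threshold the waste falls into the ILW / high-heat-generating
    # categories, which are distinguished by heat output (not modelled here).
    return "ILW or high heat generating waste (depends on heat output)"

print(classify_uk_waste(beta_gamma_gbq_per_tonne=5, alpha_gbq_per_tonne=1))    # LLW
print(classify_uk_waste(beta_gamma_gbq_per_tonne=20, alpha_gbq_per_tonne=1))   # ILW or ...
```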
Very Low Level Waste (VLLW) is a sub-category of LLW. VLLW is LLW that is suitable for disposal with regular household or industrial waste at specially permitted landfill facilities. The major components of VLLW from nuclear sites are building rubble, soil and steel items. These arise from the dismantling and demolition of nuclear reactors and facilities.
LLW in the United States
LLW in the United States is defined as nuclear waste that does not fit into the categorical definitions: high-level waste (HLW), spent nuclear fuel (SNF), transuranic waste (TRU), or certain byproduct materials known as 11e(2) wastes, such as uranium mill tailings. In essence, it is a definition by exclusion, and LLW is that category of radioactive wastes that do not fit into the other categories. If LLW is mixed with hazardous wastes as classified by RCRA, then it has a special status as mixed low-level waste (MLLW) and must satisfy treatment, storage, and disposal regulations both as LLW and as hazardous waste. While the bulk of LLW is not highly radioactive, the definition of LLW does not include references to its activity, and some LLW may be quite radioactive, as in the case of radioactive sources used in industry and medicine.
It is notable that U.S. regulations do not define the category intermediate-level waste, and thus many wastes which would fall into this category under other regulatory regimes are instead classified as LLW. This also means that the radioactivity of LLW in the US can range from just above natural background levels to very high in certain cases, such as parts from inside the reactor vessel of a nuclear power plant.
Disposal
Depending on who "owns" the waste, its handling and disposal is regulated differently. All nuclear facilities, whether they are a utility or a disposal site, have to comply with Nuclear Regulatory Commission (NRC) regulations. The four low-level waste facilities in the U.S. are Barnwell, South Carolina; Richland, Washington; Clive, Utah; and as of June 2013, Andrews County, Texas. The Barnwell and the Clive locations are operated by EnergySolutions, the Richland location is operated by U.S. Ecology, and the Andrews County location is operated by Waste Control Specialists. Barnwell, Richland, and Andrews County accept Classes A through C of low-level waste, whereas Clive only accepts Class A LLW. The DOE has dozens of LLW sites under management. The largest of these exist at DOE Reservations around the country (e.g. the Hanford Reservation, Savannah River Site, Nevada Test Site, Los Alamos National Laboratory, Oak Ridge National Laboratory, Idaho National Laboratory, to name the most significant).
Classes of wastes are detailed in 10 C.F.R. § 61.55 Waste Classification, enforced by the Nuclear Regulatory Commission, reproduced in the table below. These are not all the isotopes disposed of at these facilities, just the ones that are of most concern for the long-term monitoring of the sites. Waste is divided into three classes, A through C, where A is the least radioactive and C is the most radioactive. Class A LLW is able to be deposited near the surface, whereas Classes B and C LLW have to be buried progressively deeper.
In 10 C.F.R. § 20.2002, the NRC reserves the right to grant a free release of radioactive waste. The overall dose from such a disposal cannot exceed 1 mrem/yr, and the NRC considers requests on a case-by-case basis. Low-level waste passing such strict regulations is then disposed of in a landfill with other garbage. Items allowed to be disposed of in this way include glow-in-the-dark watches (radium) and smoke detectors (americium).
LLW should not be confused with high-level waste (HLW) or spent nuclear fuel (SNF). C Class low level waste has a limit of 100 nano-Curies per gram of alpha-emitting transuranic nuclides with a half life greater than 5 years; any more than 100 nCi, and it must be classified as transuranic waste (TRU). These require different disposal pathways. TRU wastes from the U.S. nuclear weapons complex is currently disposed at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, though other sites also are being considered for on-site disposal of particularly difficult to manage TRU wastes.
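A minimal sketch of the LLW/TRU boundary test described in this paragraph follows. Real waste characterisation sums contributions across nuclides and follows 10 CFR 61.55 in full; this toy check only applies the single threshold quoted above and uses the 5-year half-life cutoff as stated in the text.

```python
def is_transuranic_waste(tru_alpha_nci_per_gram: float, half_life_years: float) -> bool:
    """Boundary described above: alpha-emitting transuranic nuclides with half-lives
    longer than the stated cutoff push waste over 100 nCi/g into the TRU category."""
    return half_life_years > 5 and tru_alpha_nci_per_gram > 100

print(is_transuranic_waste(150, 24_000))  # True  -> TRU waste (e.g. plutonium-contaminated)
print(is_transuranic_waste(80, 24_000))   # False -> can remain Class C low-level waste
```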
See also
Low Level Waste Repository
Mixed waste (radioactive/hazardous)
Radioactive waste
Spent nuclear fuel
Transuranic waste
References
Notes
General references
Fentiman, Audeen W. and James H. Saling. Radioactive Waste Management. New York: Taylor & Francis, 2002. Second ed.
Jorge L. Contreras, "In the Village Square: Risk Misperception and Decisionmaking in the Regulation of Low-Level Radioactive Waste", 19 Ecology Law Quarterly 481 (1992) (SSRN)
External links
NRC description of low-level waste
Radioactive waste | Low-level waste | [
"Chemistry",
"Technology"
] | 1,287 | [
"Environmental impact of nuclear power",
"Radioactive waste",
"Hazardous waste",
"Radioactivity"
] |
632,922 | https://en.wikipedia.org/wiki/Sex-positive%20feminism | Sex-positive feminism, also known as pro-sex feminism, sex-radical feminism, or sexually liberal feminism, is a feminist movement centering on the idea that sexual freedom is an essential component of women's freedom. They oppose legal or social efforts to control sexual activities between consenting adults, whether they are initiated by the government, other feminists, opponents of feminism, or any other institution. They embrace sexual minority groups, endorsing the value of coalition-building with marginalized groups. Sex-positive feminism is connected with the sex-positive movement. Sex-positive feminism brings together anti-censorship activists, LGBT activists, feminist scholars, producers of pornography and erotica, among others. Sex-positive feminists believe that prostitution can be a positive experience if workers are treated with respect, and agree that sex work should not be criminalized.
Key ideas
Gayle Rubin summarizes the conflict over sex within feminism. She says that one feminist stream criticizes the sexual constraints and difficulties faced by sexually active women (e.g., access to abortion), while another stream views sexual liberalization as an extension of "male privilege".
Sex-positive feminists reject the vilification of male sexuality that many attribute to radical feminism, and instead embrace the entire range of human sexuality. They argue that the patriarchy limits sexual expression and are in favor of giving people of all genders more sexual opportunities, rather than restricting pornography. Sex-positive feminists generally reject sexual essentialism, defined by Rubin as "the idea that sex is a natural force that exists prior to social life and shapes institutions". Rather, they see sexual orientation and gender as social constructs that are heavily influenced by society.
Some radical feminists reject the dichotomy of "sex-positive" and "sex-negative" feminism, suggesting that instead, the real divide is between liberal feminism and radical feminism.
Sex-radical feminists in particular, come to a sex-positive stance from a deep distrust in the patriarchy's ability to secure women's best interest in sexually limiting laws. Other feminists identify women's sexual liberation as the real motive behind the women's movement. Naomi Wolf writes, "Orgasm is the body's natural call to feminist politics." Sharon Presley, the National Coordinator of the Association of Libertarian Feminists, writes that in the area of sexuality, government blatantly discriminates against women.
The social background in which sex-positive feminism operates must also be understood: Christian societies are often influenced by what is understood as 'traditional' sexual morality: according to the Christian doctrine, sexual activity must only take place in marriage, and must be vaginal intercourse; sexual acts outside marriage and 'unnatural sex' (i.e. oral and anal sex, termed "sodomy") are forbidden; yet forced sexual intercourse within marriage is not seen as immoral by some social and religious conservatives, owing to the existence of so-called 'conjugal rights' defined in the Bible at 1 Corinthians 7:3-5.
Such organization of sexuality has increasingly come under legal and social attack in recent decades.
In addition, in certain cultures, particularly in Mediterranean countries influenced by Roman Catholicism, traditional ideas of masculinity and female purity are still influential. This has led to what many interpret as a double standard between male and female sexuality; men are expected to be sexually assertive as a way of affirming their masculinity, but for a woman to be considered 'good', she must remain pure. Indeed, Cesare Lombroso claimed in his book, The Female Offender, that women could be categorized into three types: the Criminal Woman, the Prostitute, and the Normal Woman. As such, highly sexed women (prostitutes) were deemed as abnormal.
Feminists "ranging from Betty Friedan and Kate Millett to Karen DeCrow, Wendy Kaminer and Jamaica Kincaid" supported the right to consume pornography. Feminists who have advocated a sex-positive position include writer Kathy Acker, academic Camille Paglia, sex educator Megan Andelloux, Susie Bright, Rachel Kramer Bussel, Diana Cage, Avedon Carol, Patrick Califia, Betty Dodson, Nancy Friday, Jane Gallop, Laci Green, porn performer Nina Hartley, Josephine Ho, Amber L. Hollibaugh, Brenda Howard, Laura Kipnis, Wendy McElroy, Inga Muscio, Joan Nestle, Marcia Pally, Carol Queen, Candida Royalle, Gayle Rubin, Annie Sprinkle, Tristan Taormino, Ellen Willis, and Mireille Miller- Young.
Sex positivity
According to sexologist and author Carol Queen, in an interview with researcher and professor Lynn Comella, "[sex positivity] is the cultural philosophy that understands sexuality as a potentially positive force in one's life, and it can be [...] contrasted with sex-negativity, which sees sex as problematic, disruptive, dangerous. Sex-positivity allows for and [...] celebrates sexual diversity, differing desires and relationships structures, and individual choices based on consent... [negative sexual experiences caused by lack of information, support, and choices] are the cultural conditions that sex-positivity allows us to point out as curtailers of healthy, enjoyable sexual experience."
Queen also added, "This sense that many of us were being denied space and credentials to speak for ourselves and speak about issues within our community is what [...] led to the efflorescence of sex-positive feminism. And it is why there is a sex-positive feminism and not just sex-positivity."
Historical roots
Authors such as Gayle Rubin and Wendy McElroy see the roots of sex-positive feminism stemming from the work of sex reformers and workers for sex education and access to contraception, such as Havelock Ellis, Margaret Sanger, Mary Dennett and, later, Alfred Kinsey and Shere Hite. However, the contemporary incarnation of sex-positive feminism appeared more recently, following an increasing feminist focus on pornography as a source of women's oppression in the 1970s.
The rise of second-wave feminism was concurrent with the sexual revolution and rulings that loosened legal restrictions on access to pornography. In the 1970s, radical feminists became increasingly focused on issues around sexuality in a patriarchal society. Some feminist groups began to concern themselves with prescribing what proper feminist sexuality should look like. This was especially characteristic of lesbian separatist groups, but some heterosexual women's groups, such as Redstockings, became engaged with this issue as well. On the other hand, there were also feminists, such as Betty Dodson, who saw women's sexual pleasure and masturbation as central to women's liberation. Pornography was not a major issue during this era; radical feminists were generally opposed to pornography, but the issue was not treated as especially important until the mid-1970s.
There were, however, feminist prostitutes-rights advocates, such as COYOTE, which campaigned for the decriminalization of prostitution.
The late 1970s found American culture becoming increasingly concerned about the aftermath of a decade of greater sexual freedom, including concerns about explicit violent and sexual imagery in the media, the mainstreaming of pornography, increased sexual activity among teenagers, and issues such as the dissemination of child pornography and the purported rise of "snuff films". (Critics maintain that this atmosphere amounted to a moral panic, which reached its peak in the mid-1980s.). These concerns were reflected in the feminist movement, with radical feminist groups claiming that pornography was a central underpinning of patriarchy and a direct cause of violence against women. Robin Morgan summarized this idea in her statement, "Pornography is the theory; rape the practice."
Andrea Dworkin and Robin Morgan began articulating a vehemently anti-porn stance based in radical feminism beginning in 1974, and anti-porn feminist groups, such as Women Against Pornography and similar organizations, became highly active in various US cities during the late 1970s. As anti-porn feminists broadened their criticism and activism to include not only pornography, but prostitution and sadomasochism, other feminists became concerned about the direction the movement was taking and grew more critical of anti-porn feminism.
This included feminist BDSM practitioners (notably Samois), prostitutes-rights advocates, and many liberal and anti-authoritarian feminists for whom free speech, sexual freedom, and advocacy of women's agency were central concerns.
One of the earliest feminist arguments against this anti-pornography trend amongst feminists was Ellen Willis's essay "Feminism, Moralism, and Pornography" first published in October 1979 in the Village Voice. In response to the formation of Women Against Pornography in 1979, Willis wrote an article (the origin of the term, "pro-sex feminism"), expressing worries about anti-pornography feminists' attempts to make feminism into a single-issue movement, arguing that feminists should not issue a blanket condemnation against all pornography and that restrictions on pornography could just as easily be applied to speech that feminists found favorable to themselves.
Rubin calls for a new feminist theory of sex, saying that existing feminist thoughts on sex had frequently considered sexual liberalization as a trend that only increases male privilege. Rubin criticizes anti-pornography feminists who she claims "have condemned virtually every variant of sexual expression as anti-feminist," arguing that their view of sexuality is dangerously close to anti-feminist, conservative sexual morality. Rubin encourages feminists to consider the political aspects of sexuality without promoting sexual repression. She also argues that the blame for women's oppression should be put on targets who deserve it: "the family, religion, education, child-rearing practices, the media, the state, psychiatry, job discrimination, and unequal pay..." rather than on relatively un-influential sexual minorities.
McElroy (1995) argues that for feminists in the 1970s and 1980s, turning to matters of sexual expression was a result of frustration with feminism's apparent failure to achieve success through political channels: in the United States, the Equal Rights Amendment (ERA) had failed, and abortion rights came under attack during the Reagan administration.
Scholar Elaine Jeffreys observes that the 'anti-prostitute' position gained increased critical purchase in China during the establishment of the international movement for prostitutes in 1985, demanding recognition of prostitutes' rights as an emancipation and labor issue rather than of criminality, immorality or disease.
In her 1992 book, Sexual Reality: A Virtual Sex World Reader, sex-positive feminist Susie Bright dedicated a chapter to a salon gathering she co-hosted with fellow feminists Laura Miller, Amy Wallace, and Lisa Palac at Wallace's Berkeley Hills mansion, attended by 16 women writers and served by fully nude men they called "slaveboys". The hosts had advertised for "slaveboys" in the San Francisco Weekly, stating, "Genteel and Bohemian gathering of women writers requires comely slaveboys to serve at our tea party. You will serve nude and will not speak unless spoken to. [...]". The ad received about 100 responses, from which six were selected after "nude auditions". The "slaveboys" served tea and meals, provided foot massages, polished nails, brushed hair, tended the fire, and posed for photographs with the guests. Bright also addresses criticism from unattended friends who called the setup "reverse sexism", to which she responded unapologetically, adding a note of regret for not having sex with them.
By the 2000s, the sex-positive position had driven various international human rights NGOs to actively pressure the Chinese government to abandon its official policy of banning prostitution in post-reform China and to recognize voluntary prostitution as legitimate work.
Related major political issues
Pornography
The issue of pornography was perhaps the first issue to unite sex-positive feminists, though current sex-positive views on the subject are wide-ranging and complex. During the 1980s, Andrea Dworkin and Catharine MacKinnon, as well as activists inspired by their writings, worked in favor of anti-pornography ordinances in a number of U.S. cities, as well as in Canada. The first such ordinance was passed by the city council in Minneapolis in 1983. MacKinnon and Dworkin took the tactic of framing pornography as a civil rights issue, arguing that showing pornography constituted sex discrimination against women. The sex-positive movement response to this argument was that legislation against pornography violates women's right to free speech. Soon after, a coalition of anti-porn feminists and right-wing groups succeeded in passing a similar ordinance in Indianapolis. This ordinance was later declared unconstitutional by a Federal court in American Booksellers v. Hudnut.
Rubin writes that anti-pornography feminists exaggerate the dangers of pornography by showing the most shocking pornographic images (such as those associated with sadomasochism) out of context, in a way that implies that the women depicted are actually being raped, rather than emphasizing that these scenes depict fantasies and use actors who have consented to be shown in such a way. Sex-positive feminists argue that access to pornography is as important to women as to men and that there is nothing inherently degrading to women about pornography. However, anti-pornography feminists disagree, often arguing that the very depiction of such acts leads to the actual acts being encouraged and committed.
Feminist curators such as Jasmin Hagendorfer organize feminist and queer porn film festivals (e.g. PFFV in Vienna).
Prostitution and sex workers
Some sex-positive feminists believe that women and men can have positive experiences as sex workers and that where it is illegal, prostitution should be decriminalized. They argue that prostitution is not necessarily bad for women if prostitutes are treated with respect and if the professions within sex work are destigmatized.
Sex workers are adults who receive money (or other goods) in exchange for consensual sexual services. In the United States, prostitution is illegal except in certain counties of Nevada, although other forms of sex work, such as pornography and exotic dancing, are legal. The sex workers' rights movement started in the 1970s, and one of its founding groups was COYOTE. Sex worker activists campaign for better working environments and conditions, reduced stigma, and an end to prohibition.
Carol Leigh was an American artist, filmmaker, and sex workers' rights activist credited with coining the term "sex work". She sought to educate others about sex workers and the rights they should have. In an interview, she said that she saw her own sex work, and sex work in general, as having the potential to serve a higher, spiritual function in society.
BDSM
Sadomasochism (BDSM) has been criticized by anti-pornography feminists for eroticizing power and violence and for reinforcing misogyny (Rubin, 1984). They argue that women who choose to engage in BDSM are making a choice that is ultimately bad for women. Sex-positive feminists argue that consensual BDSM activities are enjoyed by many women and validate these women's sexual inclinations. They argue that feminists should not attack other women's sexual desires as being "anti-feminist" or as internalized oppression, and that there is no connection between consensual sexually kinky activities and sex crimes.
While some anti-porn feminists suggest connections between consensual BDSM scenes and rape and sexual assault, sex-positive feminists find this to be insulting to women. It is often mentioned that in BDSM, roles are not fixed to gender, but personal preferences. Furthermore, many argue that playing with power (such as rape scenes) through BDSM is a way of challenging and subverting that power, rather than reifying it.
While criticisms of BDSM receive considerable attention, sex-positive feminists also emphasize safety within the BDSM community, where consent is regarded as the most important rule.
Cara Dunkley and Lori Brotto discuss the importance of consent in their journal article: "Consent represents an ongoing interactive and dynamic process that entails several precautionary measures, including negotiations of play, open communication of desires and boundaries, mutually defining terms, the notion of responsibility and transparency, and ensuring protection from harm through competence and skill." Commentators stress that communication between sexual partners is essential.
Sexual orientation
McElroy argues that many feminists have been afraid of being associated with homosexuality. Betty Friedan, one of the founders of second-wave feminism, warned against lesbianism and called it "the lavender menace" (a view she later renounced). Sex-positive feminists believe that accepting the validity of all sexual orientations is necessary in order to allow women full sexual freedom. Rather than distancing themselves from homosexuality and bisexuality because they fear it will hurt mainstream acceptance of feminism, sex-positive feminists believe that women's liberation cannot be achieved without also promoting acceptance of homosexuality and bisexuality.
Gender identity
Some trans exclusionary radical feminists, such as Germaine Greer, have criticized transgender women (male-to-female) as men attempting to appropriate female identity while retaining male privilege, and transgender men (female-to-male) as women who reject solidarity with their gender. One of the main exponents of this point of view is Janice Raymond. In The Whole Woman, Greer went so far as to explicitly compare transgender women to rapists for forcing themselves into women's spaces.
Many transgender people see gender identity as an innate part of a person. Some feminists also criticize this belief, arguing instead that gender roles are societal constructs not related to any natural factor. Sex-positive feminists support the right of all individuals to determine their own gender and promote gender fluidity as one means of achieving gender equality. Patrick Califia has written extensively about feminism and transgender issues, especially in Sex Changes: Transgender Politics.
Debates
Like feminism itself, sex-positive feminism is difficult to define, and few within the movement (particularly the academic arm of the movement) agree on any one ideology or policy agenda.
An example of how feminists may disagree on whether a particular cultural work exemplifies sex-positivity is Betty Dodson's critique of Eve Ensler's The Vagina Monologues. Dodson argues that the play promotes a negative view of sexuality, emphasizing sexual violence against women rather than the redemptive value of female sexuality. Many other sex-positive feminists have embraced Ensler's work for its encouragement of openness about women's bodies and sexuality.
Statutory rape laws
There is debate among sex-positive feminists about whether statutory rape laws are a form of sexism. As illustrated by the controversy over "The Little Coochie Snorcher that Could" from the Vagina Monologues, some sex-positive feminists do not consider all consensual activity between young adolescents and older people as inherently harmful. There has been debate among feminists about whether statutory rape laws benefit or harm teenage girls and about whether the gender of participants should influence the law's treatment of sexual encounters. Some sex-positive feminists argue that statutory rape laws were made with non-gender neutral intentions and are presently enforced as such, with the assumption that teenage girls are naive, nonsexual, and in need of protection.
Sex-positive feminists with this view believe that "teen girls and boys are equally capable of making informed choices in regard to their sexuality" and that statutory rape laws are actually meant to protect "good girls" from sex. Other feminists are opposed or ambivalent about strengthening statutory rape statutes because these preclude young women from entering consensual sexual relationships, even if competent to consent.
These feminists view statutory rape laws as more controlling than protective, noting that part of the laws' historic role was to protect a female's chastity as valuable property. One writer also noted that, at that time, in some states, a teenager's previous sexual experience could be used as a defense by someone accused of statutory rape. She argued that this showed the laws were intended to protect chastity rather than consent.
Critiques
Works that critique sex-positive feminism include those of Germaine Greer and essays by Dorchen Leidholdt. According to Ann Ferguson, sex-positive feminists hold that the only restriction on sexual activity should be the requirement of consent, yet she argues that sex-positive feminism has provided inadequate definitions of consent. Sex-positive feminism has also been criticized for focusing on young women while ignoring middle-aged and elderly women who are unable or unwilling to direct most of their energy into sexuality.
In her 2005 book Female Chauvinist Pigs, Ariel Levy does not oppose sex-positive feminism per se, though she sees a popularized form of sex-positivity as constituting a kind of "raunch culture" in which women internalize objectifying male views of themselves and other women. Levy believes it is a mistake to see this as empowering and further holds that women should develop their own forms of sexual expression. The response by sex-positive feminists to Levy's book has been mixed; Susie Bright viewed the book quite favorably, stating that much of what can be seen as "raunch culture" represents a bastardization of the work of earlier sex-positive feminists such as herself. Rachel Kramer Bussel, however, sees Levy as largely ignoring much of the female-empowered sexual expression of the last 20 years, or as misinterpreting it as internalization of male fantasy.
Notable figures, organizations, and media
Authors and activists who have written important works about sex-positive feminism, and/or contributed to educating the public about it, include Kathy Acker, Megan Andelloux, Susie Bright, Rachel Kramer Bussel, Diana Cage, Avedon Carol, Patrick Califia, Betty Dodson, Nancy Friday, Jane Gallop, Nina Hartley, Josephine Ho, Amber L. Hollibaugh, Brenda Howard, Laura Kipnis, Wendy McElroy, Inga Muscio, Joan Nestle, Erika Lust, Carol Queen, Candida Royalle, Gayle Rubin, Annie Sprinkle, Tristan Taormino and Ellen Willis. Several of these have written from the perspective of feminist women working in the sex industry.
Information on formal organizations that endorse sex-positive feminism is scarce, but one major outpost of the movement is Good Vibrations, a former cooperative business founded by Joani Blank in 1977 to sell sex toys and publications about sex in an environment welcoming to women. Blank also founded Down There Press, which has published various educational publications inspired by sex-positivity. A number of other sex-positive feminist businesses thrive on a combination of sex-toy sales and distribution of educational materials. Good For Her, a woman-owned sex-toy shop in Toronto, Ontario, holds the annual Feminist Porn Awards.
Nonprofit groups supporting sex-positive feminism include the currently defunct Feminist Anti-Censorship Task Force associated with Carole Vance and Ann Snitow, Feminists for Free Expression, founded by Marcia Pally, and Feminists Against Censorship associated with anti-censorship and civil liberties campaigner Avedon Carol.
Feminist pornography is a small but growing segment of the pornography industry. A Feminist Porn Award was established in 2006; its European equivalent, the PorYes award for feminist porn, was established in 2009. The magazine On Our Backs was founded in 1986 to promote a more positive attitude toward erotica within the community of lesbian and bisexual women. It flourished until 1994, then struggled with financial problems and changing ownership; the final edition was published in 2006.
See also
Sex-positive literature
Girl Heroes
The Ethical Slut
Notes
References
Further reading
External links
Advocacy of sex-positive feminism
Articles
Archived at Susie Bright's Journal (website).
Archived at WendyMcElroy.com (website).
Organizations
Feminism and pornography
Feminism and prostitution
Feminism and sexuality
Third-wave feminism
Feminist movements and ideologies
Human sexuality
Liberal feminism
Second-wave feminism
Sexual revolution
Women and sexuality | Sex-positive feminism | [
"Biology"
] | 4,959 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
632,992 | https://en.wikipedia.org/wiki/Paley%E2%80%93Wiener%20theorem | In mathematics, a Paley–Wiener theorem is a theorem that relates decay properties of a function or distribution at infinity with analyticity of its Fourier transform. It is named after Raymond Paley (1907–1933) and Norbert Wiener (1894–1964) who, in 1934, introduced various versions of the theorem. The original theorems did not use the language of distributions, and instead applied to square-integrable functions. The first such theorem using distributions was due to Laurent Schwartz. These theorems heavily rely on the triangle inequality (to interchange the absolute value and integration).
The original work by Paley and Wiener is also used as a namesake in the fields of control theory and harmonic analysis; introducing the Paley–Wiener condition for spectral factorization and the Paley–Wiener criterion for non-harmonic Fourier series respectively. These are related mathematical concepts that place the decay properties of a function in context of stability problems.
Holomorphic Fourier transforms
The classical Paley–Wiener theorems make use of the holomorphic Fourier transform on classes of square-integrable functions supported on the real line. Formally, the idea is to take the integral defining the (inverse) Fourier transform

$$f(\zeta) = \int_{-\infty}^{\infty} F(x) e^{i x \zeta}\, dx$$

and allow $\zeta$ to be a complex number in the upper half-plane. One may then expect to differentiate under the integral in order to verify that the Cauchy–Riemann equations hold, and thus that $f$ defines an analytic function. However, this integral may not be well-defined, even for $F$ in $L^2(\mathbb{R})$; indeed, since $\zeta$ is in the upper half-plane, the modulus of $e^{i x \zeta}$ grows exponentially as $x \to -\infty$; so differentiation under the integral sign is out of the question. One must impose further restrictions on $F$ in order to ensure that this integral is well-defined.
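To see the growth concretely (a one-line check, writing $\zeta = \xi + i\eta$ in real and imaginary parts):

$$\left| e^{i x \zeta} \right| = \left| e^{i x (\xi + i \eta)} \right| = e^{-x \eta},$$

which, for fixed $\eta > 0$, is unbounded as $x \to -\infty$; this is why the integral over the whole real line need not converge without further assumptions on $F$.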
The first such restriction is that $F$ be supported on $\mathbb{R}_+$: that is, $F \in L^2(0, \infty)$. The Paley–Wiener theorem now asserts the following: The holomorphic Fourier transform of $F$, defined by

$$f(\zeta) = \int_0^{\infty} F(x) e^{i x \zeta}\, dx$$

for $\zeta$ in the upper half-plane is a holomorphic function. Moreover, by Plancherel's theorem, one has

$$\int_{-\infty}^{\infty} \left| f(\xi + i\eta) \right|^2\, d\xi \le \int_0^{\infty} |F(x)|^2\, dx,$$

and by dominated convergence,

$$\lim_{\eta \to 0^+} \int_{-\infty}^{\infty} \left| f(\xi + i\eta) - f(\xi) \right|^2\, d\xi = 0.$$

Conversely, if $f$ is a holomorphic function in the upper half-plane satisfying

$$\sup_{\eta > 0} \int_{-\infty}^{\infty} \left| f(\xi + i\eta) \right|^2\, d\xi = C < \infty,$$

then there exists $F \in L^2(0, \infty)$ such that $f$ is the holomorphic Fourier transform of $F$.
In abstract terms, this version of the theorem explicitly describes the Hardy space $H^2(\mathbb{R})$. The theorem states that

$$\mathcal{F} H^2(\mathbb{R}) = L^2(\mathbb{R}_+).$$

This is a very useful result as it enables one to pass to the Fourier transform of a function in the Hardy space and perform calculations in the easily understood space $L^2(\mathbb{R}_+)$ of square-integrable functions supported on the positive axis.
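For concreteness, the theorem can be checked on a simple illustrative choice (the particular function below is chosen purely as an example): take $F(x) = e^{-x}$ for $x > 0$ and $F(x) = 0$ otherwise, so that $F \in L^2(0, \infty)$. Then

$$f(\zeta) = \int_0^{\infty} e^{-x} e^{i x \zeta}\, dx = \frac{1}{1 - i \zeta},$$

which is holomorphic in the upper half-plane (its only pole is at $\zeta = -i$), and on each horizontal line $\zeta = \xi + i\eta$ one has

$$\int_{-\infty}^{\infty} |f(\xi + i\eta)|^2\, d\xi = \int_{-\infty}^{\infty} \frac{d\xi}{(1+\eta)^2 + \xi^2} = \frac{\pi}{1+\eta} \le \pi,$$

so the square-integrals over horizontal lines are uniformly bounded, as the theorem asserts.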
By imposing the alternative restriction that $F$ be compactly supported, one obtains another Paley–Wiener theorem. Suppose that $F$ is supported in $[-A, A]$, so that $F \in L^2(-A, A)$. Then the holomorphic Fourier transform

$$f(\zeta) = \int_{-A}^{A} F(x) e^{i x \zeta}\, dx$$

is an entire function of exponential type $A$, meaning that there is a constant $C$ such that

$$|f(\zeta)| \le C e^{A |\zeta|},$$

and moreover, $f$ is square-integrable over horizontal lines:

$$\int_{-\infty}^{\infty} |f(\xi + i\eta)|^2\, d\xi < \infty.$$

Conversely, any entire function of exponential type $A$ which is square-integrable over horizontal lines is the holomorphic Fourier transform of an $L^2$ function supported in $[-A, A]$.
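A standard illustration of the compactly supported case (again, the specific $F$ is chosen only as an example): taking $F$ to be the indicator function of $[-A, A]$ gives

$$f(\zeta) = \int_{-A}^{A} e^{i x \zeta}\, dx = \frac{2 \sin(A \zeta)}{\zeta}$$

(understood as $2A$ at $\zeta = 0$), which is entire, of exponential type $A$, and square-integrable along every horizontal line, in accordance with the theorem.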
Schwartz's Paley–Wiener theorem
Schwartz's Paley–Wiener theorem asserts that the Fourier transform of a distribution of compact support on $\mathbb{R}^n$ is an entire function on $\mathbb{C}^n$ and gives estimates on its growth at infinity. It was proven by Laurent Schwartz (1952). The formulation presented here is from .
Generally, the Fourier transform can be defined for any tempered distribution; moreover, any distribution of compact support is a tempered distribution. If $v$ is a distribution of compact support and $f$ is an infinitely differentiable function, the expression

$$v(f) = \langle v, f \rangle$$

is well defined. It can be shown that the Fourier transform of $v$ is a function (as opposed to a general tempered distribution) given at the value $s$ by

$$\hat{v}(s) = (2\pi)^{-n/2} \left\langle v, e^{-i \langle x, s \rangle} \right\rangle$$

and that this function can be extended to values of $s$ in the complex space $\mathbb{C}^n$. This extension of the Fourier transform to the complex domain is called the Fourier–Laplace transform.
Additional growth conditions on the entire function $\hat{v}$ impose regularity properties on the distribution $v$. For instance: an entire function $F$ on $\mathbb{C}^n$ is the Fourier–Laplace transform of a distribution $v$ of compact support if and only if, for all $z \in \mathbb{C}^n$,

$$|F(z)| \le C (1 + |z|)^N e^{B |\operatorname{Im} z|}$$

for some constants $C$, $N$, $B$; the distribution $v$ is then supported in the closed ball of center $0$ and radius $B$.
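As a quick sanity check of this bound (a standard example, not tied to any particular reference), consider the Dirac distribution $\delta$, which is supported at the origin. With the normalization above, its Fourier–Laplace transform is the constant function

$$\hat{\delta}(z) = (2\pi)^{-n/2} \left\langle \delta, e^{-i \langle x, z \rangle} \right\rangle = (2\pi)^{-n/2},$$

which is entire and bounded, i.e. it satisfies the estimate with $N = 0$ and $B = 0$, consistent with $\delta$ being supported in the ball of radius $0$.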
Sharper results giving good control over the singular support of $v$ have been formulated by . In particular, let $K$ be a convex compact set in $\mathbb{R}^n$ with supporting function $H$, defined by

$$H(x) = \sup_{y \in K} \langle x, y \rangle.$$

Then the singular support of $v$ is contained in $K$ if and only if there is a constant $N$ and a sequence of constants $C_m$ such that

$$|\hat{v}(\zeta)| \le C_m (1 + |\zeta|)^N e^{H(\operatorname{Im} \zeta)}$$

for $|\operatorname{Im} \zeta| \le m \log(|\zeta| + 1)$.
Notes
References
Theorems in Fourier analysis
Generalized functions
Theorems in complex analysis
Hardy spaces | Paley–Wiener theorem | [
"Mathematics"
] | 875 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |