| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
11,709,317 | https://en.wikipedia.org/wiki/BioWeb | The BioWeb is the term for a network of web-enabled biological devices (e.g. trees, plants, and flowers) which extends the internet of things to an Internet of Living Things of natural sensory devices. The BioWeb devices give insight into real-time ecological data and feedback on changes in the environment. The biodiversity of today is one giant ecological mesh network of information exchange, and a resource humanity should be able to access for a better understanding of the state of our global ecology.
Technology
The BioWeb information technologies emerge from the interdisciplinary fields of biotechnology and nanotechnology. The devices for reading individual ecological systems can be either wireless transmitters embedded in the organic structure of seeds or externally inserted network nodes with the ability to read information and wirelessly transmit it to the Internet (or network).
See also
Mesh networking, a way to route information between nodes
Biotechnology, technology based on biology
External links
Cellbiol.com: The Bio-Web
Mrs. King's BioWeb
The BioWeb and BioNews Search Engines
Botanicalls
Biotechnology | BioWeb | [
"Biology"
] | 230 | [
"nan",
"Biotechnology"
] |
11,710,342 | https://en.wikipedia.org/wiki/Perenniporia%20piceicola | Perenniporia piceicola is a species of poroid crust fungus that is found on fallen spruce in Yunnan province, China. Basidiocarps are corky in texture, or more across with a characteristic pale yellow pore surface. The basidiospores are ellipsoid and hyaline and very large for the genus, up to 13 μm in length.
References
Perenniporia
Fungi described in 2002
Fungi of China
Taxa named by Yu-Cheng Dai
Fungus species | Perenniporia piceicola | [
"Biology"
] | 103 | [
"Fungi",
"Fungus species"
] |
11,710,718 | https://en.wikipedia.org/wiki/Well%20drainage | Well drainage means drainage of agricultural lands by wells. Agricultural land is drained by pumped wells (vertical drainage) to improve the soils by controlling water table levels and soil salinity.
Introduction
Subsurface (groundwater) drainage for water table and soil salinity in agricultural land can be done by horizontal and vertical drainage systems.
Horizontal drainage systems are drainage systems using open ditches (trenches) or buried pipe drains.
Vertical drainage systems are drainage systems using pumped wells, either open dug wells or tube wells.
Both systems serve the same purposes, namely water table control and soil salinity control.
Both systems can facilitate the reuse of drainage water (e.g. for irrigation), but wells offer more flexibility.
Reuse is only feasible if the quality of the groundwater is acceptable and the salinity is low.
Design
Although one well may be sufficient to solve groundwater and soil salinity problems in a few hectares, one usually needs a number of wells, because the problems may be widely spread.
The wells may be arranged in a triangular, square or rectangular pattern.
The design of the well field concerns depth, capacity, discharge, and spacing of the wells.
The discharge is found from a water balance.
The depth is selected in accordance with aquifer properties. The well filter must be placed in a permeable soil layer.
The spacing can be calculated with a well spacing equation using discharge, aquifer properties, well depth and optimal depth of the water table.
The determination of the optimum depth of the water table is the realm of drainage research.
Flow to wells
The basic, steady-state equation for flow to fully penetrating wells (i.e. wells reaching the impermeable base) in a regularly spaced well field in a uniform unconfined (phreatic) aquifer with an isotropic hydraulic conductivity is:
Q = π K [(Db − Dm)² − (Db − Dw)²] / ln(Ri/Rw)
where Q = safe well discharge - i.e. the steady state discharge at which no overdraught or groundwater depletion occurs - (m3/day), K = uniform hydraulic conductivity of the soil (m/day), D = depth below the soil surface (m), Db = depth of the bottom of the well, equal to the depth of the impermeable base (m), Dm = depth of the watertable midway between the wells (m), Dw = depth of the water level inside the well (m), Ri = radius of influence of the well (m) and Rw = radius of the well (m).
The radius of influence of the wells depends on the pattern of the well field, which may be triangular, square, or rectangular. It can be found as:
Ri = √(At / (π N))
where At = total surface area of the well field (m2) and N = number of wells in the well field.
The safe well discharge (Q) can also be found from:
Q = q At / (N Fi)
where q is the safe yield or drainable surplus of the aquifer (m/day) and Fi is the operation intensity of the wells (hours/24 per day). Thus the basic equation can also be written as:
q At / (N Fi) = π K [(Db − Dm)² − (Db − Dw)²] / ln(Ri/Rw)
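To make the relations above concrete, the short Python sketch below evaluates the steady-state discharge formula and the radius of influence for an entirely hypothetical well field; every numerical value (area, conductivity, depths, pump hours, drainable surplus) is an assumption chosen only for illustration, not data from any real project.

```python
import math

# Hypothetical well-field data (illustrative assumptions only)
A_t = 200.0 * 10_000   # total well-field area (m^2), here 200 ha
N = 20                 # number of wells
K = 1.5                # hydraulic conductivity (m/day)
D_b = 30.0             # depth of the impermeable base = well bottom (m)
D_m = 1.5              # target depth of the water table midway between wells (m)
D_w = 10.0             # depth of the water level inside the well (m)
R_w = 0.1              # well radius (m)
F_i = 20 / 24          # operation intensity (pumps run 20 hours per day)
q = 0.002              # drainable surplus of the aquifer (m/day)

# Radius of influence: each well serves an area A_t / N, so pi * R_i^2 = A_t / N
R_i = math.sqrt(A_t / (math.pi * N))

# Steady-state discharge a fully penetrating well can sustain (Dupuit-type formula)
Q_sustainable = math.pi * K * ((D_b - D_m) ** 2 - (D_b - D_w) ** 2) / math.log(R_i / R_w)

# Discharge each well must deliver to remove the drainable surplus
Q_required = q * A_t / (N * F_i)

print(f"Radius of influence R_i = {R_i:.1f} m")
print(f"Sustainable discharge per well = {Q_sustainable:.0f} m3/day")
print(f"Required discharge per well    = {Q_required:.0f} m3/day")
```

Comparing the sustainable and required discharges for different numbers of wells is one way to screen design alternatives before the more detailed spacing calculations described below.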
Well spacing
With a well spacing equation one can calculate various design alternatives to arrive at the most attractive or economical solution for watertable control in agricultural land.
The basic flow equation cannot be used for determining the well spacing in a partially penetrating well-field in a non-uniform and anisotropic aquifer, but one needs a numerical solution of more complicated equations.
The costs of the most attractive solution can be compared with the costs of a horizontal drainage system - for which the drain spacing can be calculated with a drainage equation - serving the same purpose, to decide which system deserves preference.
The well design proper is described in
An illustration of the parameters involved is shown in the figure. The hydraulic conductivity can be found from an aquifer test.
Software
The numerical computer program WellDrain for well spacing calculations takes into account fully and partially penetrating wells, layered aquifers, anisotropy (different vertical and horizontal hydraulic conductivity or permeability) and entrance resistance.
Modelling
With a groundwater model that includes the possibility to introduce wells, one can study the impact of a well drainage system on the hydrology of the project area. There are also models that give the opportunity to evaluate the water quality.
SahysMod is such a polygonal groundwater model permitting to assess the use of well water for irrigation, the effects on soil salinity and on depth of the water table.
References
External links
Salinity Control and Reclamation Program (SCARP) using wells in the Indus valley of Pakistan.
Website on waterlogging and land reclamation by horizontal and vertical drainage systems :
Drainage
Hydrology
Hydrogeology
Hydraulic engineering
Land management
Land reclamation
Water and the environment
de:Schluckbrunnen | Well drainage | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 974 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering",
"Hydrogeology"
] |
11,710,987 | https://en.wikipedia.org/wiki/Perenniporia%20podocarpi | Perenniporia podocarpi is a species of resupinate (encrusting) polypore. It occurs widely but uncommonly on the New Zealand endemic podocarps Dacrydium cupressinum and Prumnopitys taxifolia. Basidiocarps are dimitic and grow up to 9 cm across, thick and cushion-like with a distinctive white or very pale cream spore surface with large pores. The basidiospores are extremely large for the genus, up to 27 μm in length.
As with other members of its genus, P. podocarpi causes a white rot in affected host plants.
References
Perenniporia
Fungi described in 1992
Fungi of New Zealand
Fungus species | Perenniporia podocarpi | [
"Biology"
] | 154 | [
"Fungi",
"Fungus species"
] |
11,711,216 | https://en.wikipedia.org/wiki/Tinsel%20wire | Tinsel wire is a type of electrical wire used for applications that require high mechanical flexibility but low current-carrying capacity. Tinsel wire is commonly used in cords of telephones, handsets, headphones, and small electrical appliances. It is far more resistant to metal fatigue failure than either stranded wire or solid wire.
Construction
Tinsel wire is produced by wrapping several strands of thin metal foil around a flexible nylon or textile core. Because the foil is very thin, the bend radius imposed on the foil is much greater than the thickness of the foil, leading to a low probability of metal fatigue. Meanwhile, the core provides high tensile strength without impairing flexibility.
Typically, multiple tinsel wires are jacketed with an insulating layer to form one conductor. A cord is formed from several conductors in either a round profile or a flat cable.
Connections
Tinsel wire is commonly connected to equipment with crimped terminal lugs that pierce the insulation to make contact with the metal ribbons, rather than stripping insulation. Separated from the core, the individual ribbons are relatively fragile, and the core can be damaged by high temperatures. These factors make it difficult or impractical to terminate tinsel wire by soldering during equipment manufacture, although soldering is possible, with some difficulty, to repair a failed connection. However, the conductors tend to break at their junction with the rigid solder.
Applications
Tinsel wires or cords are used for telephony and audio applications in which frequent bending of electric cords occurs, such as for headsets and telephone handsets. It is also used in power cords for small appliances such as electric shavers or clocks, where stranded cable conductors of adequate mechanical size would be too stiff. Tinsel cords are recognized as type TPT or TST in the US and Canadian electrical codes, and are rated at 0.5 amperes.
Manufacturers and suppliers
Maeden
Dacon Systems, Inc.
Gavitt Wire & Cable Co., Inc.
See also
Litz wire
References
Electrical wiring
Telephony equipment | Tinsel wire | [
"Physics",
"Engineering"
] | 408 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
11,711,579 | https://en.wikipedia.org/wiki/Salamandrella%20keyserlingii | Salamandrella keyserlingii, the Siberian salamander, is a species of salamander found in Northeast Asia. It lives in wet woods and riparian groves.
Distribution
It is found primarily in Siberia east of the Sosva River and the Urals, in the East Siberian Mountains, including the Verkhoyansk Range, northeast to the Anadyr Highlands, east to the Kamchatka Peninsula and south into Manchuria, with outlying populations also in northern Kazakhstan and Mongolia, northeastern China, and on the Korean Peninsula. It is believed to be extirpated from South Korea. An isolated population exists on Hokkaidō, Japan, in the Kushiro Shitsugen National Park. A breeding ground of Siberian salamanders in Paegam, South Hamgyong, is designated North Korean natural monument #360.
Description
Adults are from 9.0 to 12.5 cm in length. Their bodies are bluish-brown in color, with a purple stripe along the back. Thin, dark brown stripes occur between and around the eyes, and also sometimes on the tail. Four clawless toes are on each foot. The tail is longer than the body. Males are typically smaller than females.
The species is known for surviving deep freezes (as low as −45 °C). In some cases, individuals have been known to remain frozen in permafrost for years and, upon thawing, to walk away. They accomplish this by reducing to a fourth of their body weight through water loss and liver shrinkage, and by increasing the concentration of glycerol in their body.
Discovery
In 1870, Dybowski gave it the name Salamandrella keyserlingii. Boulenger renamed it in 1910, but the 1910 scientific name is hardly used.
General behavior
The Siberian salamander is fairly nocturnal, foraging above ground at night and staying under moist logs or woody debris during the day.
Habitat
Within its extensive range, the habitat of the Siberian newt is wet coniferous and mixed deciduous forests in the taiga and riparian groves in the tundra and forest steppe. They can be found near ephemeral or permanent pools, wetlands, sedge meadows, and oxbow lakes.
Reproduction
Their breeding season occurs during May or the beginning of June, in pools of water. A single egg sac contains 50–80 eggs on average, with a female typically laying up to 240 eggs in a season. The light-brown eggs hatch three to four weeks after being laid, releasing larval salamanders of 11–12 mm in length.
References
Further reading
External links
Distribution map
Malyarchuk B., Derenko M. et al. Phylogeography and molecular adaptation of Siberian salamander Salamandrella keyserlingii based on mitochondrial DNA variation, 2010
keyserlingii
Cryozoa
Amphibians of China
Amphibians of Japan
Amphibians of Korea
Amphibians of Mongolia
Amphibians of Russia
Amphibians described in 1870 | Salamandrella keyserlingii | [
"Chemistry"
] | 620 | [
"Cryozoa",
"Cryobiology"
] |
11,711,647 | https://en.wikipedia.org/wiki/Wildlife%20of%20Mauritania | Mauritania's wildlife has two main influences as the country lies in two biogeographic realms. The north sits in the Palearctic which extends south from the Sahara to roughly 19° north latitude and the south is in the Afrotropic realm. Additionally, Mauritania is an important wintering area for numerous birds which migrate from the Palearctic.
Faunal regions and habitats
Most of the north to about 19° north latitude is regarded as being in the Palearctic, and is largely made up of the Sahara desert and adjacent littoral habitats. South of this is regarded as being in the Afrotropical biogeographic realm, which means that species of a predominantly Afrotropical distribution dominate the fauna. South of the Sahara is the South Saharan steppe and woodlands ecoregion which integrates into the Sahelian acacia savanna ecoregion. The southernmost part of the country lies in the West Sudanian savanna ecoregion.
Wetlands are important and the two main protected areas are the Banc d'Arguin National Park which protects rich, shallow coastal and marine ecosystems which integrate with the arid Sahara desert and the Diawling National Park which forms the northern part of the delta of the Senegal River. Elsewhere in Mauritania wetlands are normally ephemeral and rely on the seasonal rainfall and may be very important for migratory birds.
Mammals
Most of the larger mammal species have been extirpated from Mauritania. Among the antelopes the scimitar-horned oryx, addax, korrigum and dama gazelle are extinct, the bohor reedbuck, Buffon's kob, dorcas gazelle and red-fronted gazelle are extinct and the bushbuck and slender-horned gazelle are of indeterminate status. In the area of Diawling National Park, the last lion was shot in 1970 and there have been no sightings of manatees or hippopotamus in recent years. The Mediterranean monk seal has one of its last strongholds in the world in the coves along the Cap Blanc Peninsula near Nouadhibou. Common extant mammals include fennec fox, African golden wolves, warthogs, African wildcats, Cape hares and patas monkeys.
The rich offshore waters of Mauritania are home to a diverse fauna of cetaceans. Upwellings off the coast create rich feeding grounds for baleen whales, including the blue whale, sei whale and Bryde's whale; although the North Atlantic right whale is now extinct in the eastern Atlantic, it was recorded off Mauritania. Other cetaceans found off Mauritania's coast include harbour porpoise, Atlantic spotted dolphin, bottlenose dolphin, sperm whale, short-finned pilot whale and orca.
Birds
Over 500 species of bird have been recorded in Mauritania. Specialities and spectacular species include scissor-tailed kite, Nubian bustard, Arabian bustard, houbara bustard, Egyptian plover, golden nightjar, chestnut-bellied starling, Kordofan lark and Sudan golden sparrow.
The coastal wetlands are of immense importance for over two million wintering Western Palearctic waders, from fifteen different species including dunlin, bar-tailed godwit, curlew sandpiper and common redshank each numbering over 100,000 birds. Other wintering species include more than 30,000 greater flamingos. Breeding birds include great white pelican, reed cormorant, gull-billed tern, Caspian tern, royal tern and common tern, together with two unique subspecies of grey heron Ardea cinerea monicae and Eurasian spoonbill Platalea leucorodia balsaci and an outpost of the western reef heron.
Herpetofauna
The West African crocodile still exists in small numbers in Mauritania. Other reptiles found include the African chameleon, Senegal chameleon, Nile monitor, and various geckos and other lizards; among the snakes are the Mali cobra, black-necked spitting cobra, African rock python, desert horned viper, Saharan sand viper and puff adder; there are also terrestrial, freshwater and marine turtles. In all, 86 species of reptile in 21 families have been recorded in Mauritania. Eleven species of amphibian have been confirmed as occurring in Mauritania but another 19 are expected to be recorded, mainly in the south of the country.
Fish
The marine fish found off Mauritania's coast are an important resource for commercial, subsistence and sport fishing. Estimates put the potential catch at between 400,000 and 700,000 tons. The rich waters off the Mauritanian coast are host to a variety of species more familiar in more northerly temperate waters such as European seabass, European hake, Norwegian skate and gilt-head bream, as well as species more typical of warmer waters including whale shark, Atlantic bluefin tuna, Atlantic sailfish, tarpon and Atlantic blue marlin. 56 species of freshwater fish have been reported from Mauritania of which 50 have been confirmed as occurring.
Flora
References
Biota of Mauritania
Mauritania | Wildlife of Mauritania | [
"Biology"
] | 1,062 | [
"Biota by country",
"Biota of Mauritania",
"Wildlife by country"
] |
11,711,952 | https://en.wikipedia.org/wiki/Sensor%20node | A sensor node (also known as a mote in North America) is an individual node of a sensor network that is capable of performing a desired action such as gathering or processing sensory information and communicating with other connected nodes in the network.
History
Although wireless sensor networks have existed for decades and been used for applications as diverse as earthquake measurement or warfare, the modern development of small sensor nodes dates back to the 1998 Smartdust project and the NASA Sensor Webs Project. One of the objectives of the Smartdust project was to create autonomous sensing and communication within a cubic millimeter of space. Though this project ended early on, it led to many more research projects and major research centres such as The Berkeley NEST and CENS. The researchers involved in these projects coined the term mote to refer to a sensor node. The equivalent term in the NASA Sensor Webs Project for a physical sensor node is pod, although the sensor node in a Sensor Web can be another Sensor Web itself. Physical sensor nodes have been able to increase their effectiveness and capability in conjunction with Moore's Law.
The chip footprint now contains more complex and lower-powered microcontrollers. Thus, for the same node footprint, more silicon capability can be packed into it. Nowadays, motes focus on providing the longest wireless range (dozens of km), the lowest energy consumption (a few µA) and the easiest development process for the user.
Components
The main components of a sensor node usually involve a microcontroller, transceiver, external memory, power source and one or more sensors.
Sensors
Sensors are used by wireless sensor nodes to capture data from their environment. They are hardware devices that produce a measurable response to a change in a physical condition like temperature or pressure. Sensors measure physical data of the parameter to be monitored and have specific characteristics such as accuracy, sensitivity, etc. The continuous analog signal produced by the sensors is digitized by an analog-to-digital converter and sent to controllers for further processing. Some sensors contain the necessary electronics to convert the raw signals into readings which can be retrieved via a digital link (e.g. I2C, SPI) and many convert to units such as °C. Most sensor nodes are designed to be small in size, consume little energy, operate in high volumetric densities, be autonomous, operate unattended, and be adaptive to the environment. As wireless sensor nodes are typically very small electronic devices, they can only be equipped with a limited power source of less than 0.5-2 ampere-hours and 1.2-3.7 volts.
Sensors are classified into three categories: passive, omnidirectional sensors; passive, narrow-beam sensors; and active sensors. Passive sensors sense the data without actually manipulating the environment by active probing. They are self powered; that is, energy is needed only to amplify their analog signal. Active sensors actively probe the environment, for example, a sonar or radar sensor, and they require continuous energy from a power source. Narrow-beam sensors have a well-defined notion of direction of measurement, similar to a camera. Omnidirectional sensors have no notion of direction involved in their measurements.
Most theoretical work on WSNs assumes the use of passive, omnidirectional sensors. Each sensor node has a certain area of coverage for which it can reliably and accurately report the particular quantity that it is observing. Several sources of power consumption in sensors are: signal sampling and conversion of physical signals to electrical ones, signal conditioning, and analog-to-digital conversion. Spatial density of sensor nodes in the field may be as high as 20 nodes per cubic meter.
Controller
The controller performs tasks, processes data and controls the functionality of other components in the sensor node. While the most common controller is a microcontroller, other alternatives that can be used as a controller are: a general purpose desktop microprocessor, digital signal processors, FPGAs and ASICs. A microcontroller is often used in many sensor nodes due to its low cost, flexibility to connect to other devices (or nodes in a network), ease of programming, and low power consumption. A general purpose microprocessor generally has a higher power consumption than a microcontroller, making it an undesirable choice for a sensor node. Digital Signal Processors may be chosen for broadband wireless communication applications, but in Wireless Sensor Networks the wireless communication is often modest: i.e., simpler, easier to process modulation and the signal processing tasks of actual sensing of data is less complicated. Therefore, the advantages of DSPs are not usually of much importance to wireless sensor nodes. FPGAs can be reprogrammed and reconfigured according to requirements, but this takes more time and energy than desired.
Transceiver
Sensor nodes often make use of the ISM band, which gives free radio spectrum allocation and global availability. The possible choices of wireless transmission media are radio frequency (RF), optical communication (laser) and infrared. Lasers require less energy, but need line-of-sight for communication and are sensitive to atmospheric conditions. Infrared, like lasers, needs no antenna but is limited in its broadcasting capacity. Radio frequency-based communication is the most relevant, as it fits most WSN applications. WSNs tend to use license-free communication frequencies: 173, 433, 868, and 915 MHz; and 2.4 GHz. The functionality of both transmitter and receiver is combined into a single device known as a transceiver. Transceivers often lack unique identifiers. The operational states are transmit, receive, idle, and sleep. Current generation transceivers have built-in state machines that perform some operations automatically.
Most transceivers operating in idle mode have a power consumption almost equal to the power consumed in receive mode. Thus, it is better to completely shut down the transceiver rather than leave it in the idle mode when it is not transmitting or receiving. A significant amount of power is consumed when switching from sleep mode to transmit mode in order to transmit a packet.
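To give a sense of why the idle-versus-sleep distinction matters for node lifetime, the Python sketch below compares the daily radio energy of a node that leaves the transceiver idle between packets with one that puts it to sleep; all voltage, current and timing figures are invented placeholders for illustration, not values for any particular transceiver.

```python
# Illustrative duty-cycle energy estimate for a sensor-node transceiver.
# All figures below are assumptions for demonstration, not datasheet values.
V = 3.0              # supply voltage (V)
I_TX = 0.020         # current while transmitting (A)
I_IDLE = 0.018       # current in idle mode (A), close to the receive current
I_SLEEP = 0.000002   # current in sleep mode (A)

TX_SECONDS_PER_DAY = 60          # radio actually transmits for 60 s per day
SECONDS_PER_DAY = 24 * 3600

def daily_energy_joules(i_active, i_rest, active_s):
    """Energy = V * (I_active * t_active + I_rest * t_rest)."""
    rest_s = SECONDS_PER_DAY - active_s
    return V * (i_active * active_s + i_rest * rest_s)

always_idle = daily_energy_joules(I_TX, I_IDLE, TX_SECONDS_PER_DAY)
duty_cycled = daily_energy_joules(I_TX, I_SLEEP, TX_SECONDS_PER_DAY)

print(f"Idle between packets : {always_idle:.0f} J/day")
print(f"Sleep between packets: {duty_cycled:.2f} J/day")
```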
External memory
From an energy perspective, the most relevant kinds of memory are the on-chip memory of a microcontroller and Flash memory—off-chip RAM is rarely, if ever, used. Flash memories are used due to their cost and storage capacity. Memory requirements are very much application dependent. Two categories of memory based on the purpose of storage are: user memory used for storing application related or personal data, and program memory used for programming the device. Program memory also contains identification data of the device if present.
Power source
A wireless sensor node is a popular solution when it is difficult or impossible to run a mains supply to the sensor node. However, since the wireless sensor node is often placed in a hard-to-reach location, changing the battery regularly can be costly and inconvenient. An important aspect in the development of a wireless sensor node is ensuring that there is always adequate energy available to power the system.
The sensor node consumes power for sensing, communicating and data processing. More energy is required for data communication than any other process. The energy cost of transmitting 1 Kb a distance of is approximately the same as that used for the execution of 3 million instructions by a 100 million instructions per second/W processor. Power is stored either in batteries or capacitors. Batteries, both rechargeable and non-rechargeable, are the main source of power supply for sensor nodes. They are also classified according to electrochemical material used for the electrodes such as NiCd (nickel-cadmium), NiZn (nickel-zinc), NiMH (nickel-metal hydride), and lithium-ion.
Current sensors are able to renew their energy from solar sources, Radio Frequency (RF), temperature differences, or vibration. Two power-saving policies used are Dynamic Power Management (DPM) and Dynamic Voltage Scaling (DVS). DPM conserves power by shutting down parts of the sensor node which are not currently used or active. A DVS scheme varies the power levels within the sensor node depending on the non-deterministic workload. By varying the voltage along with the frequency, it is possible to obtain a quadratic reduction in power consumption.
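A minimal sketch of the scaling behind DVS follows, using the common CMOS approximation that dynamic power is roughly proportional to capacitance times voltage squared times frequency; the capacitance, voltage and frequency numbers are placeholders, not measurements of any real microcontroller.

```python
def dynamic_power(c_eff, v, f):
    """Approximate CMOS switching power: P ~ C_eff * V^2 * f."""
    return c_eff * v**2 * f

C_EFF = 1e-9                              # effective switched capacitance (F), placeholder
full = dynamic_power(C_EFF, 3.0, 8e6)     # full speed: 3.0 V at 8 MHz (assumed)
scaled = dynamic_power(C_EFF, 1.8, 4e6)   # lighter load: 1.8 V at 4 MHz (assumed)

print(f"Full speed : {full * 1e3:.2f} mW")
print(f"Scaled down: {scaled * 1e3:.2f} mW ({scaled / full:.0%} of full power)")
```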
See also
Mesh networking
Mobile ad hoc network (MANETS)
List of wireless sensor nodes
Mobile Wireless Sensor Networks
References
Wireless sensor network
Computer networking
Embedded systems | Sensor node | [
"Technology",
"Engineering"
] | 1,724 | [
"Computer networking",
"Computer engineering",
"Embedded systems",
"Wireless networking",
"Wireless sensor network",
"Computer systems",
"Computer science"
] |
11,712,041 | https://en.wikipedia.org/wiki/Wildlife%20of%20Senegal | The wildlife of Senegal consists of the flora and fauna of this nation in West Africa. Senegal has a long Atlantic coastline and a range of habitat types, with a corresponding diversity of plants and animals. Senegal has 188 species of mammals and 674 species of bird.
Geography
Senegal is bounded by the Atlantic Ocean to the west, Mauritania to the north, Mali to the east, and Guinea and Guinea-Bissau to the south. It has a long internal border with The Gambia which lies on either side of the Gambia River but is otherwise surrounded by Senegal. The four major rivers, the Senegal River, the Saloum River, the Gambia River and the Casamance River, drain westwards into the Atlantic Ocean. The Lac de Guiers is a large freshwater lake in the north of the country while Lake Retba, near Dakar, is saline.
The northern half of the country has an arid or semi-arid climate and is largely desert while south of the Gambia River the rainfall is higher and the terrain consists of savannah grassland and forest. Much of the country is fairly flat and below the contour, but there are some low, rolling hills in the southeast, the foothills of the Fouta Djallon in Guinea. The northern half of the coast is sandy and flat, whereas south of Dakar it is muddy and swampy.
The northern part of the country has a semi-arid climate, with precipitation increasing substantially further south to exceed in some areas. Winds blow from the southwest during the rainy season from May to November, and from the northeast during the rest of the year, resulting in well-defined humid and dry seasons. Dakar's maximum temperatures averages in the wet and in the dry season.
Biodiversity
With four main ecosystems (forest, savanna grassland, freshwater, marine and coastal), Senegal has a wide diversity of plants and animals. However, increases in human activities and changes in weather patterns which include increased deficits in rainfall, are impacting and degrading the natural habitats. This is particularly noticeable with regard to forests, which in the five years to 2010, were being lost at the rate of per year.
Flora
About 5,213 species, subspecies and varieties of vascular plants had been recorded in Senegal by the end of 2018, of which 515 were trees or woody plants.
The Niokolo-Koba National Park is a World Heritage Site and large natural protected area in southeastern Senegal near the Guinea-Bissau border. The park is typical of the woodland savannah of the country. About thirty species of tree are found here, mainly from the families Fabaceae, Combretaceae and Anacardiaceae, and about one thousand species of vascular plant. The drier parts are dominated by the African kino tree and Combretum glutinosum, while the gallery forests beside rivers and streams (many of which dry up seasonally) are largely formed from Erythrophleum guineense and Pseudospondias microcarpa, interspersed with palms and bamboo clumps. Depressions in the ground fill with water in the rainy season and support a wide range of aquatic vegetation. In the coastal zone of Niayes, a coastal strip of land between Dakar and Saint Louis where a line of lakes lie behind the coastal sand dunes, the predominant vegetation is the African oil-palm, along with the African mesquite and Cape fig.
Mammals
Many of the larger animals of Senegal that used to have a widespread distribution have suffered from loss of habitat, persecution by farmers, and hunting for bushmeat, and are now largely restricted to the national park. The Guinea baboon is one of these, as are the Senegal hartebeest, the western hartebeest, the scimitar oryx, the roan antelope and several species of gazelle. Habitat degradation has caused populations of western red colobus, elephants, lions, and many other species to decrease heavily. The western subspecies of the giant eland is critically endangered, the only remaining known population being in the Niokolo-Koba National Park; the rapid decline in numbers of this antelope has been attributed to poaching.
Other mammals found in the country include the green monkey, the Guinean gerbil and the Senegal one-striped grass mouse.
Birds
Some 674 species of bird had been recorded in Senegal by April 2019. Some of the more spectacular include the red-billed tropicbird, the Arabian bustard, the Egyptian plover, the golden nightjar, the red-throated bee-eater, the chestnut-bellied starling, the cricket warbler, the Kordofan lark and the Sudan golden sparrow.
The Djoudj National Bird Sanctuary on the south side of the Senegal River Delta is an important site for migrating and overwintering waterfowl. About three million migratory birds spend the winter here. Some birds that nest and breed in the delta include the great white pelican, lesser flamingo, the marbled duck, African spoonbill, purple heron, black crowned crane, and others. Further south is the Saloum Delta National Park which lies on the East Atlantic Flyway, along which about 90 million birds migrate annually. Some birds that breed or winter in the park include the royal tern, the greater flamingo, the Eurasian spoonbill, the curlew sandpiper, the ruddy turnstone and the little stint. Another important wetland area is the Niayes, which is an important centre for waterbirds and raptors; large numbers of black kites have been recorded here.
Fish
Some 244 species of marine fish had been recorded off the coast of Senegal by April 2019. Some freshwater species of fish have been impacted by the creation of dams in the Senegal River Delta and the proliferation of some plants such as the southern cattail.
Molluscs
Insects
List of butterflies of Senegal
List of moths of Senegal
References
External links
Biota of Senegal
Senegal | Wildlife of Senegal | [
"Biology"
] | 1,208 | [
"Biota by country",
"Wildlife by country",
"Biota of Senegal"
] |
11,712,109 | https://en.wikipedia.org/wiki/Wildlife%20of%20Mozambique | The wildlife of Mozambique consists of the flora and fauna of this country in southeastern Africa.
Mozambique has a range of different habitat types and an ecologically rich and diverse wildlife. This includes 236 species of mammal, 740 species of bird and 5,692 species of vascular plant. The Maputaland-Pondoland-Albany hotspot, with significantly high levels of biodiversity, stretches from the southern tip of Mozambique into northeastern South Africa.
Geography
Mozambique is located on the southeast coast of Africa. It is bounded by Eswatini to the south, South Africa to the south and southwest, Zimbabwe to the west, Zambia and Malawi to the northwest, Tanzania to the north and the Indian Ocean to the east. Mozambique lies between latitudes 10° and 27°S, and longitudes 30° and 41°E.
The country is divided into two topographical regions by the Zambezi River. To the north of the Zambezi, the narrow coastal strip gives way to inland hills and low plateaus. Rugged highlands are further west; they include the Niassa highlands, Namuli or Shire highlands, Angonia highlands, Tete highlands and the Makonde plateau, covered with miombo woodlands. To the south of the Zambezi River, the lowlands are broader with the Mashonaland plateau and Lebombo Mountains located in the deep south. The country is drained by five principal rivers and several smaller ones with the largest and most important being the Zambezi. There are four large lakes, all in the north of the country; Lake Niassa (or Malawi), Lake Chiuta, Lake Cahora Bassa and Lake Shirwa.
Mozambique has a tropical climate with a wet season from October to March and a dry season from April to September. Climatic conditions, however, vary depending on altitude. Rainfall is heavy along the coast and decreases in the north and south of the country. Annual rainfall varies from depending on the region, with an average of . Average temperature ranges in Maputo are from in July and from in February. Much of the inland part of southern Mozambique is semi-desert.
Flora
A total of 5,692 species of vascular plant has been recorded in the country, of which 145 species are considered to be threatened.
Most of the terrain of Mozambique is covered by wooded savanna, grassland with a scattering of trees but an open canopy. In the northern part of the country this is miombo woodland, dominated by Brachystegia trees, covering about 70% of the country. In drier areas further south, mopane woodland predominates, with Acacia woodland in some areas in the south and in riverside locations in the north. Forest cover is mostly limited to the upper mountain slopes and some gallery forests, with some patches of dry lowland forests occurring in certain coastal areas, notably near Cape Delgado and around Dondo. The Maputaland-Pondoland-Albany Hotspot in southern Mozambique and northeastern South Africa, is a biogeographic region with significantly high levels of biodiversity and plant endemism.
The scenery on many parts of the coast consists of thickets, scrubland and palm groves. The floodplains of the main rivers consist of alluvial grassland and marshes. The Zambezi Delta is a vast marshy area covering with grasses and Borassus palms, extending for along the coast. A fringe of mangroves can be found along much of the coast, the trees growing to a height of , and there are extensive areas of mangroves in the Zambezi and Messalo River deltas where the height of the trees can reach . Ten different species of mangrove have been recorded here, including Lumnitzera racemosa and Xylocarpus granatum.
Fauna
Some 236 species of mammal have been recorded in Mozambique, of which 17 species are considered threatened. Ungulates found here include the common warthog, the hippopotamus and the South African giraffe and around twenty species of antelope including the common eland, the Lichtenstein's hartebeest, the greater kudu, the sable antelope, the nyala, the waterbuck, the blue wildebeest and the Cape bushbuck. There are around fifty species of rodent, a dozen of shrew, over sixty species of bat and a single hedgehog, the four-toed hedgehog. Primates are represented by bushbabies, vervet monkeys, blue monkeys, chacma baboons and yellow baboons. There are African bush elephants, lions, leopards, Southeast African cheetahs, genets, mongooses, hyaenas, jackals and various other species of carnivore.
Large numbers of birds are either resident in or migrate across Mozambique, 768 species having been recorded, including 34 globally threatened species. Some notable examples include the lesser jacana, the crab-plover, the mangrove kingfisher, the Böhm's bee-eater, the racket-tailed roller, the African pitta, the green-headed oriole, the collared palm thrush, the pale batis, the lowland tiny greenbul, the lesser seedcracker and the locust finch.
There is also a rich fauna of reptiles and amphibians, with 225 species of reptile recorded in the country (as compiled by the Reptile Database), and 90 species of amphibian (compiled by AmphibiaWeb). There are numerous species of snake, with venomous species including the puff adder, several species of cobra, the black mamba and the boomslang. Non-venomous snakes include the mole snake, and the egg-eating snake. The Nile crocodile is only likely to be found in protected areas. The savannah monitor is the largest lizard in the country, but more common are the much smaller skinks, agamas, chamaeleons and house geckos. The leopard tortoise occurs here as well as three species of freshwater terrapin.
With a long coastline, Mozambique boasts numerous marine vertebrates, including about twenty species of whale and ten of dolphin, the dugong, the brown fur seal and the southern elephant seal. The country has a large number of marine mollusc species, as well as plentiful numbers of terrestrial snails and slugs.
Protected areas
Mozambique has seven national parks, two of which are largely marine, and six national reserves. Additionally, there are several other protected areas, three community wildlife utilisation areas, various wildlife utilisation areas and forest reserves. In reality, many of these have little protection and many animals were severely depleted as a result of the Mozambican Civil War (1977–1992) and the increase in poaching which took place at that time. More recently, efforts are being made to restock some of the protected areas with animals brought in from elsewhere, and facilities for visitors have improved, particularly at Gorongosa National Park.
National parks
National parks in Mozambique include:
Banhine National Park, Parque Nacional de Banhine - Gaza (7,250 km2)
Bazaruto National Park, Parque Nacional do Arquipelago de Bazaruto - Inhambane (1,463 km2)
Gorongosa National Park, Parque Nacional da Gorongosa - Sofala (5,370 km2)
Limpopo National Park, Parque Nacional do Limpopo - Gaza (11,233 km2)
Magoe National Park, Parque Nacional do Magoe - Tete (3,558 km2)
Quirimbas National Park, Parque Nacional das Quirimbas - Cabo Delgado (9,130 km2)
Zinave National Park, Parque Nacional do Zinave - Inhambane (4,000 km2)
National reserves
National reserves in Mozambique include:
Chimanimani National Reserve, Reserva Nacional do Chimanimani - Manica (6550 km2)
Gilé National Reserve, Reserva Nacional do Gilé - Zambézia (4,436 km2)
Maputo Special Reserve, Reserva Especial de Maputo - Maputo (1,040 km2)
Marromeu Buffalo Reserve, Reserva de Búfalos de Marromeu - Sofala (1,500 km2)
Niassa National Reserve, Reserva Nacional do Niassa - Niassa (42,200 km2)
Pomene National Reserve, Reserva National de Pomene - Inhambane (50 km2)
References
External links
Biota of Mozambique
Mozambique | Wildlife of Mozambique | [
"Biology"
] | 1,762 | [
"Biota by country",
"Wildlife by country",
"Biota of Mozambique"
] |
11,712,329 | https://en.wikipedia.org/wiki/Onshore%20%28hydrocarbons%29 | Onshore, when used in relation to hydrocarbons, refers to an oil, natural gas or condensate field that is under the land or to activities or operations carried out in relation to such a field.
Onshore may also refer to processes that take place on land that are associated with oil, gas or condensate production that has taken place offshore. The offshore production facility delivers oil, gas and condensate by pipelines to the onshore terminal and processing facility. Alternatively oil may be delivered by ocean-going tanker to the onshore terminal.
Onshore oil terminals
Onshore oil terminals may include large crude oil tanks for the initial storage of oil prior to processing. Such tanks provide a buffer volume where oil is delivered by tanker. The oil tanker delivery rate is considerably greater than the processing capacity of the plant. Crude oil tanks also allow offshore production to continue if the export route becomes unavailable.
Onshore oil terminals generally have fired heaters to heat the oil to improve subsequent separation. Separator vessels and coalescers stabilise the crude, remove any sediments and produced water, and allow light hydrocarbons to flash off. Large separation vessels give the oil an appropriate residence time in the vessel to allow effective separation to occur. Onshore separators operate at near atmospheric pressure to release as much vapor as possible. The oil processing plant aims to achieve an appropriate vapor pressure specification for the oil. The associated gas is processed for export or used in the plant as fuel gas. Stabilised oil is routed to storage tanks prior to dispatch for international sales delivery by tanker, or to a local oil refinery for processing.
Onshore gas terminals
See main article Natural-gas Processing
Onshore gas terminals may have facilities for removal of liquids from the incoming gas stream. Liquids may include natural gas liquids (NGL), produced water, and glycol (MEG or TEG). Separation of liquid from gas is done in slug catchers, which either comprise an array of pipes or a large cylindrical vessel. A variety of treatment processes are used to condition the gas to a required specification. Such processes may include glycol dehydration, gas sweetening, hydrocarbon dew-point control, fractionation, natural gas liquids (NGL) recovery, gas compression before gas distribution to users.
The hydrocarbon dewpoint specification changes with the prevailing ambient temperature, so it varies seasonally.
See also
Petroleum refining processes
Oil refinery
Oil terminal
References
Petroleum industry glossary from Saipem Spa
Petroleum industry glossary from Anson Ltd
Petroleum geology
Oilfield terminology
Petroleum industry | Onshore (hydrocarbons) | [
"Chemistry"
] | 518 | [
"Petroleum stubs",
"Petroleum industry",
"Petroleum",
"Chemical process engineering",
"Petroleum geology"
] |
11,712,344 | https://en.wikipedia.org/wiki/Offshore%20%28hydrocarbons%29 | "Offshore", when used in relation to hydrocarbons, refers to operations undertaken at, or under, the sea in association with an oil, natural gas or condensate field that is under the seabed, or to activities carried out in relation to such a field. Offshore is part of the upstream sector of the oil and gas industry.
Offshore activities include searching for potential underground crude oil and natural gas reservoirs and accumulations, the drilling of exploratory wells, and subsequently drilling and operating the wells that recover and bring the crude oil and/or natural gas to the surface.
Offshore exploration is performed with floating drilling units, drill ships, semi-submersible installations and jack-up installations.
At the surface (either on the seabed or above water) offshore facilities are designed, constructed, commissioned and operated to process and treat the hydrocarbon oil and gas. Permanent oil and gas installations and plant include subsea wellheads and flowlines, offshore platforms and tethered floating installations. Other facilities include storage vessels, tanker ships, and pipelines to transport hydrocarbons onshore for further treatment and distribution. Further treatment and distribution comprise the midstream and downstream sectors of the industry.
There are various types of installation used in the development of offshore oil and gas fields and subsea facilities, these include: fixed platforms, compliant towers, semi-submersible platforms, jack-up installations, floating production systems, tension-leg platforms, gravity-based structure and spar platforms.
Production facilities on these installations include oil, gas and water separation systems; oil heating, cooling, pumping, metering and storage; gas cooling, treating and compression; and produced water clean-up. Other facilities may include reservoir gas injection and water injection; fuel gas systems; power generation; vents and flares; drains and sewage treatment; compressed air; helicopter fuel; heating, ventilation and air conditioning; and accommodation facilities for the crew.
The final phase of offshore operations is the abandonment of wells, the decommissioning and removal of offshore facilities to onshore disposal, and the flushing, cleaning and abandonment of pipelines.
See also
Deepwater drilling
Offshore drilling
Offshore oil and gas in the United States
Oil platform
Oil production plant
Semi-submersible platform
Submarine pipeline
Subsea (technology)
References
External links
Petroleum industry glossary from Saipem Spa.
Petroleum industry glossary from Anson Ltd
Petroleum geology
Oilfield terminology
Petroleum industry
Underwater mining | Offshore (hydrocarbons) | [
"Chemistry"
] | 494 | [
"Petroleum stubs",
"Petroleum industry",
"Petroleum",
"Chemical process engineering",
"Petroleum geology"
] |
11,713,215 | https://en.wikipedia.org/wiki/Vladimir%20Mazya | Vladimir Gilelevich Maz'ya (; born 31 December 1937) (the family name is sometimes transliterated as Mazya, Maz'ja or Mazja) is a Russian-born Swedish mathematician, hailed as "one of the most distinguished analysts of our time" and as "an outstanding mathematician of worldwide reputation", who strongly influenced the development of mathematical analysis and the theory of partial differential equations.
Mazya's early achievements include: his work on Sobolev spaces, in particular the discovery of the equivalence between Sobolev and isoperimetric/isocapacitary inequalities (1960), his counterexamples related to Hilbert's 19th and Hilbert's 20th problem (1968), his solution, together with Yuri Burago, of a problem in harmonic potential theory (1967) posed by , his extension of the Wiener regularity test to the p-Laplacian and the proof of its sufficiency for boundary regularity. Maz'ya solved Vladimir Arnol'd's problem for the oblique derivative boundary value problem (1970) and Fritz John's problem on the oscillations of a fluid in the presence of an immersed body (1977).
In recent years, he proved a Wiener's type criterion for higher order elliptic equations, together with Mikhail Shubin solved a problem in the spectral theory of the Schrödinger operator formulated by Israel Gelfand in 1953, found necessary and sufficient conditions for the validity of maximum principles for elliptic and parabolic systems of PDEs and introduced the so–called approximate approximations. He also contributed to the development of the theory of capacities, nonlinear potential theory, the asymptotic and qualitative theory of arbitrary order elliptic equations, the theory of ill-posed problems, the theory of boundary value problems in domains with piecewise smooth boundary.
Biography
Life and academic career
Vladimir Maz'ya was born on 31 December 1937 into a Jewish family. His father died in December 1941 at the World War II front, and all four grandparents died during the siege of Leningrad. His mother, a state accountant, chose not to remarry and dedicated her life to him: they lived on her meager salary in a 9-square-meter room in a big communal apartment shared with four other families. As a secondary school student, he repeatedly won the city's mathematics and physics olympiads and graduated with a gold medal.
In 1955, at the age of 18, Maz'ya entered the Mathematics and Mechanics Department of Leningrad University. Taking part in the traditional mathematical olympiad of the faculty, he solved the problems for both first-year and second-year students and, since he did not keep this a secret, the other participants did not submit their solutions; the jury therefore invalidated the contest and did not award the prize. However, he attracted the attention of Solomon Mikhlin, who invited him to his home, thus starting their lifelong friendship; this friendship had a great influence on him, helping him develop his mathematical style more than anyone else. According to , in the years to come, "Maz'ya was never a formal student of Mikhlin, but Mikhlin was more than a teacher for him. Maz'ya had found the topics of his dissertations by himself, while Mikhlin taught him mathematical ethics and rules of writing, referring and reviewing".
More details on the life of Vladimir Maz'ya, from his birth to the year 1968, can be found in his autobiography .
Maz'ya graduated from Leningrad University in 1960. The same year he gave two talks at Smirnov's seminar: their contents were published as a short report in the Proceedings of the USSR Academy of Sciences and later evolved into his "kandidat nauk" thesis, "Classes of sets and embedding theorems for function spaces", which was defended in 1962. In 1965 he earned the Doktor nauk degree, again from Leningrad University, defending the dissertation "Dirichlet and Neumann problems in domains with irregular boundaries", when he was only 27. Neither his first nor his second thesis was written under the guidance of an advisor: Vladimir Maz'ya never had a formal scientific adviser, choosing the research problems he worked on by himself.
From 1960 up to 1986, he worked as a "research fellow" at the Research Institute of Mathematics and Mechanics of Leningrad University (RIMM), being promoted from junior to senior research fellow in 1965. From 1968 to 1978 he taught at the , where he was awarded the title of "professor" in 1976. From 1986 to 1990 he worked at the Leningrad Section of the of the USSR Academy of Sciences, where he created and directed the Laboratory of Mathematical Models in Mechanics and the Consulting Center in Mathematics for Engineers.
In 1978 he married Tatyana Shaposhnikova, a former doctoral student of Solomon Mikhlin, and they have a son, Michael. In 1990, they left the USSR for Sweden, where Prof. Maz'ya obtained Swedish citizenship and started to work at Linköping University.
Currently, he is an honorary Senior Fellow of Liverpool University and Professor Emeritus at Linköping University. He is also a member of the editorial boards of several mathematical journals.
Honors
In 1962 Maz'ya was awarded the "Young Mathematician" prize by the Leningrad Mathematical Society, for his results on Sobolev spaces: he was the first winner of the prize. In 1990 he was awarded an honorary doctorate from Rostock University. In 1999, Maz'ya received the Humboldt Prize. He was elected member of the Royal Society of Edinburgh in 2000, and of the Swedish Academy of Science in 2002. In March 2003, he, jointly with Tatyana Shaposhnikova, was awarded the Verdaguer Prize by the French Academy of Sciences. On 31 August 2004 he was awarded the Celsius Gold Medal, the Royal Society of Sciences in Uppsala's top award, "for his outstanding research on partial differential equations and hydrodynamics". He was awarded the Senior Whitehead Prize by the London Mathematical Society on 20 November 2009. In 2012 he was elected fellow of the American Mathematical Society. On 30 October 2013 he was elected foreign member of the Georgian National Academy of Sciences.
Starting from 1993, several conferences have been held to honor him: the first one, held in that year at the University of Kyoto, was a conference on Sobolev spaces. On the occasion of his 60th birthday in 1998, two international conferences were held in his honor: the one at the University of Rostock was on Sobolev spaces, while the other, at the École Polytechnique in Paris, was on the boundary element method. He was an invited speaker at the International Congress of Mathematicians held in Beijing in 2002: his talk was an exposition of his work on Wiener-type criteria for higher order elliptic equations. Two other conferences were held on the occasion of his 70th birthday: "Analysis, PDEs and Applications on the occasion of the 70th birthday of Vladimir Maz'ya" was held in Rome, while the "Nordic – Russian Symposium in honour of Vladimir Maz'ya on the occasion of his 70th birthday" was held in Stockholm. On the same occasion, a volume of the Proceedings of Symposia in Pure Mathematics was also dedicated to him. On the occasion of his 80th birthday, a "Workshop on Sobolev Spaces and Partial Differential Equations" was held on 17–18 May 2018 at the Accademia Nazionale dei Lincei to honor him. On 26–31 May 2019, the international conference "Harmonic Analysis and PDE" was held in his honor at the Holon Institute of Technology.
Work
Research activity
Maz'ya has authored or coauthored more than 500 publications, including 20 research monographs. Several survey articles describing his work can be found in the book , and the paper by Dorina and Marius Mitrea (2008) also describes his research achievements extensively, so these references are the main ones in this section: in particular, the classification of the research work of Vladimir Maz'ya is the one proposed by the authors of these two references. He is also the author of Seventy (Five) Thousand Unsolved Problems in Analysis and Partial Differential Equations, which collects problems he considers to be important research directions in the field.
Theory of boundary value problems in nonsmooth domains
In one of his early papers, Maz'ya considers the Dirichlet problem for the following linear elliptic equation:
where
is a bounded region in the –dimensional euclidean space
is a matrix whose first eigenvalue is not less than a fixed positive constant and whose entries are functions sufficiently smooth defined on , the closure of .
, and are respectively a vector-valued function and two scalar functions sufficiently smooth on as their matrix counterpart .
He proves the following a priori estimate
for the weak solution of , where is a constant depending on , , and other parameters but not depending on the moduli of continuity of the coefficients. The integrability exponents of the norms in are subject to the relations
for ,
is an arbitrary positive number for ,
the first one of which answers positively to a conjecture proposed by .
Selected works
Papers
, translated as .
, translated as .
, translated in English as .
, translated in English as .
Books
, translated in English as .
. A definitive monograph, giving a detailed study of a priori estimates of constant coefficient matrix differential operators defined on , with : translated as .
(also available with ).
.
(also available as ).
.
.
. There are also two revised and expanded editions: the French translation , and the (further revised and expanded) Russian translation .
.
.
.
.
.
.
.
.
.
.
.
.
.
(also published with ). First Russian edition published as .
See also
Function space
Multiplication operator
Partial differential equation
Potential theory
Sobolev space
Notes
References
Biographical and general references
. A biographical paper written on the occasion of Maz'ya 65th birthday: a freely accessible version is available here from Prof. Maz'ya website.
. A biographical paper written on the occasion of Maz'ya 70th birthday (a freely accessible English translation is available here from Prof. Maz'ya web site), translated from the (freely accessible) Russian original .
.
.
.
. Another biographical paper written on the occasion of Maz'ya 70th birthday: a freely accessible version is available here from Prof. Maz'ya web site.
. Proceedings of the minisymposium held at the École Polytechnique, Palaiseau, 25–29 May 1998.
.
.
.
. A two–volume continuation of the opus "Mathematics in the USSR during its first forty years 1917–1957", describing the developments of Soviet mathematics during the period 1958–1967. Precisely it is meant as a continuation of the second volume of that work and, as such, is titled "Biobibliography" (evidently an acronym of biography and bibliography). It includes new biographies (when possible, brief and complete) and bibliographies of works published by new Soviet mathematicians during that period, and updates on the work and biographies of scientist included in the former volume, alphabetically ordered with respect to author's surname.
. A list of the winners of the Verdaguer Prize in PDF format, including short motivations for the awarding.
. The membership diploma awarded to Vladimir Maz'ya on the occasion of his election as foreign member of the Georgian National Academy of Sciences.
.
.
. The summary of the kandidat nauk thesis of Aben Khvoles, one of the doctoral students of Vladimir Maz'ya.
.
(e–).
.
.
. The "Presentation of prizes and awards" speech given by the Secretary of the Royal Society of Sciences in Uppsala, written in the "yearbook 2004", on the occasion of the awarding of the Society prizes to prof. V. Maz'ya and to other 2004 winners.
Scientific references
.
.
, translated in English as .
.
.
.
.
.
.
.
Publications and conferences and dedicated to Vladimir Maz'ya
.
(e–).
. Proceedings of the minisymposium held at the École Polytechnique, Palaiseau, 25–29 May 1998.
.
.
(also published with ; ; and ).
(also published with ; ; and ).
(also published with ; ; and ).
.
.
.
.
External links
Professor's Maz'ya's home page
1937 births
Living people
Mathematicians from Saint Petersburg
Russian Jews
Soviet emigrants to Sweden
Soviet mathematicians
Swedish people of Russian-Jewish descent
20th-century Russian mathematicians
21st-century Russian mathematicians
Academics of the University of Liverpool
Fellows of the Royal Society of Edinburgh
Fellows of the American Mathematical Society
Academic staff of Linköping University
Mathematical analysts
Members of the Royal Swedish Academy of Sciences
Partial differential equation theorists | Vladimir Mazya | [
"Mathematics"
] | 2,625 | [
"Mathematical analysis",
"Mathematical analysts"
] |
314,366 | https://en.wikipedia.org/wiki/H-infinity%20methods%20in%20control%20theory | H∞ (i.e. "H-infinity") methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. H∞ techniques have the advantage over classical control techniques in that H∞ techniques are readily applicable to problems involving multivariate systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well-handled. These methods were introduced into control theory in the late 1970s-early 1980s
by George Zames (sensitivity minimization), J. William Helton (broadband matching),
and Allen Tannenbaum (gain margin optimization).
The phrase H∞ control comes from the name of the mathematical space over which the optimization takes place: H∞ is the Hardy space of matrix-valued functions that are analytic and bounded in the open right-half of the complex plane defined by Re(s) > 0; the H∞ norm is the supremum singular value of the matrix over that space. In the case of a scalar-valued function, the elements of the Hardy space that extend continuously to the boundary and are continuous at infinity form the disk algebra. For a matrix-valued function, the norm can be interpreted as a maximum gain in any direction and at any frequency; for SISO systems, this is effectively the maximum magnitude of the frequency response.
H∞ techniques can be used to minimize the closed loop impact of a perturbation: depending on the problem formulation, the impact will either be measured in terms of stabilization or performance. Simultaneously optimizing robust performance and robust stabilization is difficult. One method that comes close to achieving this is H∞ loop-shaping, which allows the control designer to apply classical loop-shaping concepts to the multivariable frequency response to get good robust performance, and then optimizes the response near the system bandwidth to achieve good robust stabilization.
Commercial software is available to support H∞ controller synthesis.
Problem formulation
First, the process has to be represented according to the following standard configuration:
The plant P has two inputs, the exogenous input w, which includes the reference signal and disturbances, and the manipulated variables u. There are two outputs, the error signals z that we want to minimize, and the measured variables v, which we use to control the system. v is used in K to calculate the manipulated variables u. Note that all of these are generally vectors, whereas P and K are matrices.
In formulae, the system is:
It is therefore possible to express the dependency of z on w as:
This is called the lower linear fractional transformation ; it is defined as (the subscript comes from "lower"):
Therefore, the objective of control design is to find a controller such that is minimised according to the norm. The same definition applies to control design. The infinity norm of the transfer function matrix is defined as:
where is the maximum singular value of the matrix .
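The displayed formulas in this subsection did not survive extraction; in standard notation (a reconstruction consistent with the surrounding text rather than a quotation of the original) they read

    z = F_{\ell}(P, K)\, w, \qquad F_{\ell}(P, K) = P_{11} + P_{12} K \left(I - P_{22} K\right)^{-1} P_{21},

    \left\| F_{\ell}(P, K) \right\|_{\infty} = \sup_{\omega} \bar{\sigma}\!\left( F_{\ell}(P, K)(j\omega) \right),

where \bar{\sigma} denotes the largest singular value, and the design objective is to find an admissible controller K minimising this norm.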
The achievable H∞ norm of the closed loop system is mainly given through the matrix D11 (when the system P is given in the form (A, B1, B2, C1, C2, D11, D12, D22, D21)). There are several ways to come to an H∞ controller:
A Youla–Kucera parametrization of the closed loop often leads to a very high-order controller.
Riccati-based approaches solve two Riccati equations to find the controller, but require several simplifying assumptions.
An optimization-based reformulation of the Riccati equation uses linear matrix inequalities and requires fewer assumptions.
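As an illustration of the norm being minimized (not of any particular synthesis algorithm), the following sketch estimates the H∞ norm of a small stable transfer-function matrix by gridding the imaginary axis and taking the largest singular value at each frequency; the example system and the frequency grid are arbitrary choices made for this sketch, and a finite grid yields only a lower-bound estimate of the true supremum.

import numpy as np

def g(s):
    # Arbitrary stable 2x2 example transfer-function matrix G(s),
    # chosen only to illustrate the norm computation.
    return np.array([[1.0 / (s + 1.0), 1.0 / (s + 2.0)],
                     [0.0,             2.0 / (s + 3.0)]])

# Sample the frequency response and take the largest singular value over the grid.
omegas = np.logspace(-3, 3, 2000)
hinf_estimate = max(np.linalg.svd(g(1j * w), compute_uv=False)[0] for w in omegas)
print("estimated H-infinity norm:", hinf_estimate)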
See also
Blaschke product
Hardy space
H square
H-infinity loop-shaping
Linear-quadratic-Gaussian control (LQG)
Rosenbrock system matrix
References
Bibliography
.
.
.
.
.
.
Control theory
Hardy spaces | H-infinity methods in control theory | [
"Mathematics"
] | 886 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
314,383 | https://en.wikipedia.org/wiki/Supertask | In philosophy, a supertask is a countably infinite sequence of operations that occur sequentially within a finite interval of time. Supertasks are called hypertasks when the number of operations becomes uncountably infinite. A hypertask that includes one task for each ordinal number is called an ultratask. The term "supertask" was coined by the philosopher James F. Thomson, who devised Thomson's lamp. The term "hypertask" derives from Clark and Read in their paper of that name.
History
Zeno
Motion
The origin of the interest in supertasks is normally attributed to Zeno of Elea. Zeno claimed that motion was impossible. He argued as follows: suppose our burgeoning "mover", Achilles say, wishes to move from A to B. To achieve this he must traverse half the distance from A to B. To get from the midpoint of AB to B, Achilles must traverse half this distance, and so on and so forth. However many times he performs one of these "traversing" tasks, there is another one left for him to do before he arrives at B. Thus it follows, according to Zeno, that motion (travelling a non-zero distance in finite time) is a supertask. Zeno further argues that supertasks are not possible (how can this sequence be completed if for each traversing there is another one to come?). It follows that motion is impossible.
Zeno's argument takes the following form:
Motion is a supertask, because the completion of motion over any set distance involves an infinite number of steps
Supertasks are impossible
Therefore, motion is impossible
Most subsequent philosophers reject Zeno's bold conclusion in favor of common sense. Instead, they reverse the argument and take it as a proof by contradiction where the possibility of motion is taken for granted. They accept the possibility of motion and apply modus tollens (contrapositive) to Zeno's argument to reach the conclusion that either motion is not a supertask or not all supertasks are impossible.
Achilles and the tortoise
Zeno himself also discusses the notion of what he calls "Achilles and the tortoise". Suppose that Achilles is the fastest runner, and moves at a speed of 1 m/s. Achilles chases a tortoise, an animal renowned for being slow, that moves at 0.1 m/s. However, the tortoise starts 0.9 metres ahead. Common sense seems to decree that Achilles will catch up with the tortoise after exactly 1 second, but Zeno argues that this is not the case. He instead suggests that Achilles must inevitably come up to the point where the tortoise has started from, but by the time he has accomplished this, the tortoise will already have moved on to another point. This continues, and every time Achilles reaches the mark where the tortoise was, the tortoise will have reached a new point that Achilles will have to catch up with; while it begins with 0.9 metres, it becomes an additional 0.09 metres, then 0.009 metres, and so on, infinitely. While these distances will grow very small, they will remain finite, while Achilles' chasing of the tortoise will become an unending supertask. Much commentary has been made on this particular paradox; many assert that it finds a loophole in common sense.
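The arithmetic behind the chase is a convergent geometric series; the short sketch below (using the speeds and head start quoted above) adds up the successive gaps and shows that they total exactly 1 metre, which Achilles covers in 1 second.

# Achilles runs at 1 m/s, the tortoise at 0.1 m/s, with a 0.9 m head start.
# Each time Achilles closes the current gap, the tortoise has opened a new
# gap one tenth as large: 0.9 + 0.09 + 0.009 + ... = 1.
gap = 0.9
total = 0.0
for _ in range(60):   # enough terms to exhaust double precision
    total += gap
    gap *= 0.1
print(total)          # -> 1.0 metre, i.e. Achilles draws level after 1 second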
Thomson
James F. Thomson believed that motion was not a supertask, and he emphatically denied that supertasks are possible. He considered a lamp that may be either on or off. At time the lamp is off, and the switch is flipped on at ; after that, the switch is flipped again after waiting half as long as before each time. Thomson asks what the state is at , when the switch has been flipped infinitely many times. He reasons that it cannot be on, because there was never a time when it was not subsequently turned off, and vice versa, and reaches a contradiction. He concludes that supertasks are impossible.
Benacerraf
Paul Benacerraf believes that supertasks are at least logically possible despite Thomson's apparent contradiction. Benacerraf agrees with Thomson insofar as that the experiment he outlined does not determine the state of the lamp at t = 1. However he disagrees with Thomson that he can derive a contradiction from this, since the state of the lamp at t = 1 cannot be logically determined by the preceding states.
Modern literature
Most of the modern literature comes from the descendants of Benacerraf, those who tacitly accept the possibility of supertasks. Philosophers who reject their possibility tend not to reject them on grounds such as Thomson's but because they have qualms with the notion of infinity itself. Of course there are exceptions. For example, McLaughlin claims that Thomson's lamp is inconsistent if it is analyzed with internal set theory, a variant of real analysis.
Philosophy of mathematics
If supertasks are possible, then the truth or falsehood of unknown propositions of number theory, such as Goldbach's conjecture, or even undecidable propositions could be determined in a finite amount of time by a brute-force search of the set of all natural numbers. This would, however, be in contradiction with the Church–Turing thesis. Some have argued this poses a problem for intuitionism, since the intuitionist must distinguish between things that cannot in fact be proven (because they are too long or complicated; for example Boolos's "Curious Inference") but nonetheless are considered "provable", and those which are provable by infinite brute force in the above sense.
Physical possibility
Some have claimed that Thomson's lamp is physically impossible, since it must have parts moving at speeds faster than the speed of light (e.g., the lamp switch). Adolf Grünbaum suggests that the lamp could have a strip of wire which, when lifted, disrupts the circuit and turns off the lamp; this strip could then be lifted by a smaller distance each time the lamp is to be turned off, maintaining a constant velocity.
However, such a design would ultimately fail, as eventually the distance between the contacts would be so small as to allow electrons to jump the gap, preventing the circuit from being broken at all. Still, for either a human or any device, to perceive or act upon the state of the lamp some measurement has to be done, for example the light from the lamp would have to reach an eye or a sensor.
Any such measurement will take a fixed frame of time, no matter how small and, therefore, at some point measurement of the state will be impossible. Since the state at t=1 can not be determined even in principle, it is not meaningful to speak of the lamp being either on or off.
Other physically possible supertasks have been suggested. In one proposal, one person (or entity) counts upward from 1, taking an infinite amount of time, while another person observes this from a frame of reference where this occurs in a finite space of time. For the counter, this is not a supertask, but for the observer, it is. (This could theoretically occur due to time dilation, for example if the observer were falling into a black hole while observing a counter whose position is fixed relative to the singularity.)
Gustavo E. Romero in the paper 'The collapse of supertasks' maintains that any attempt to carry out a supertask will result in the formation of a black hole, making supertasks physically impossible.
Super Turing machines
The impact of supertasks on theoretical computer science has triggered some new and interesting work, for example Hamkins and Lewis "Infinite Time Turing Machine".
Prominent supertasks
Ross–Littlewood paradox
Suppose there is a jar capable of containing infinitely many marbles and an infinite collection of marbles labelled 1, 2, 3, and so on. At time t = 0, marbles 1 through 10 are placed in the jar and marble 1 is taken out. At t = 0.5, marbles 11 through 20 are placed in the jar and marble 2 is taken out; at t = 0.75, marbles 21 through 30 are put in the jar and marble 3 is taken out; and in general at time t = 1 − 0.5^n, marbles 10n + 1 through 10n + 10 are placed in the jar and marble n + 1 is taken out. How many marbles are in the jar at time t = 1?
One argument states that there should be infinitely many marbles in the jar, because at each step before t = 1 the number of marbles increases from the previous step and does so unboundedly. A second argument, however, shows that the jar is empty. Consider the following argument: if the jar is non-empty, then there must be a marble in the jar. Let us say that that marble is labeled with the number n. But at time t = 1 − 0.5^(n−1), the nth marble has been taken out, so marble n cannot be in the jar. This is a contradiction, so the jar must be empty. The Ross–Littlewood paradox is that here we have two seemingly perfectly good arguments with completely opposite conclusions.
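Both arguments can be made concrete with a short simulation: after n steps the jar holds 9n marbles (so the count grows without bound), yet any particular marble k has already been removed at step k (so no labelled marble survives all steps). The number of simulated steps below is an arbitrary choice.

# At step n (n = 1, 2, ...) marbles 10(n-1)+1 .. 10n go in and marble n comes out.
N = 1000
jar = set()
for n in range(1, N + 1):
    jar.update(range(10 * (n - 1) + 1, 10 * n + 1))
    jar.remove(n)
print(len(jar))    # -> 9000, i.e. 9 * N: the count keeps growing with N
print(42 in jar)   # -> False: marble 42 was removed at step 42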
Benardete's paradox
There has been considerable interest in J. A. Benardete’s “Paradox of the Gods”:
Grim Reaper paradox
Inspired by J. A. Benardete’s paradox regarding an infinite series of assassins, David Chalmers describes the paradox as follows:
It has gained significance in philosophy via its use in arguing for a finite past, thereby bearing relevance to the Kalam cosmological argument.
Davies' super-machine
Proposed by E. Brian Davies, this is a machine that can, in the space of half an hour, create an exact replica of itself that is half its size and capable of twice its replication speed. This replica will in turn create an even faster version of itself with the same specifications, resulting in a supertask that finishes after an hour. If, additionally, the machines create a communication link between parent and child machine that yields successively faster bandwidth and the machines are capable of simple arithmetic, the machines can be used to perform brute-force proofs of unknown conjectures. However, Davies also points out that due to fundamental properties of the real universe such as quantum mechanics, thermal noise and information theory his machine cannot actually be built.
See also
References
External links
Article on Supertasks in Stanford Encyclopedia of Philosophy
Supertasks - Vsauce (YouTube)
The Infinity Machine
Concepts in logic
Hypercomputation
Supertasks | Supertask | [
"Mathematics"
] | 2,182 | [
"Supertasks",
"Mathematical objects",
"Infinity"
] |
314,402 | https://en.wikipedia.org/wiki/Liquid%20oxygen | Liquid oxygen, sometimes abbreviated as LOX or LOXygen, is a clear cyan liquid form of dioxygen . It was used as the oxidizer in the first liquid-fueled rocket invented in 1926 by Robert H. Goddard, an application which is ongoing.
Physical properties
Liquid oxygen has a clear cyan color and is strongly paramagnetic: it can be suspended between the poles of a powerful horseshoe magnet. Liquid oxygen has a density of , slightly denser than liquid water, and is cryogenic with a freezing point of and a boiling point of at . Liquid oxygen has an expansion ratio of 1:861 and because of this, it is used in some commercial and military aircraft as a transportable source of breathing oxygen.
Because of its cryogenic nature, liquid oxygen can cause the materials it touches to become extremely brittle. Liquid oxygen is also a very powerful oxidizing agent: organic materials will burn rapidly and energetically in liquid oxygen. Further, if soaked in liquid oxygen, some materials such as coal briquettes, carbon black, etc., can detonate unpredictably from sources of ignition such as flames, sparks or impact from light blows. Petrochemicals, including asphalt, often exhibit this behavior.
The tetraoxygen molecule (O4) was predicted in 1924 by Gilbert N. Lewis, who proposed it to explain why liquid oxygen defied Curie's law. Modern computer simulations indicate that, although there are no stable O4 molecules in liquid oxygen, O2 molecules do tend to associate in pairs with antiparallel spins, forming transient O4 units.
Liquid nitrogen has a lower boiling point at −196 °C (77 K) than oxygen's −183 °C (90 K), and vessels containing liquid nitrogen can condense oxygen from air: when most of the nitrogen has evaporated from such a vessel, there is a risk that liquid oxygen remaining can react violently with organic material. Conversely, liquid nitrogen or liquid air can be oxygen-enriched by letting it stand in open air; atmospheric oxygen dissolves in it, while nitrogen evaporates preferentially.
The surface tension of liquid oxygen at its normal pressure boiling point is .
Uses
In commerce, liquid oxygen is classified as an industrial gas and is widely used for industrial and medical purposes. Liquid oxygen is obtained from the oxygen found naturally in air by fractional distillation in a cryogenic air separation plant.
Air forces have long recognized the strategic importance of liquid oxygen, both as an oxidizer and as a supply of gaseous oxygen for breathing in hospitals and high-altitude aircraft flights. In 1985, the USAF started a program of building its own oxygen-generation facilities at all major consumption bases.
In rocket propellant
Liquid oxygen is the most common cryogenic liquid oxidizer propellant for spacecraft rocket applications, usually in combination with liquid hydrogen, kerosene or methane.
Liquid oxygen was used in the first liquid fueled rocket. The World War II V-2 missile also used liquid oxygen under the name A-Stoff and Sauerstoff. In the 1950s, during the Cold War both the United States' Redstone and Atlas rockets, and the Soviet R-7 Semyorka used liquid oxygen. Later, in the 1960s and 1970s, the ascent stages of the Apollo Saturn rockets, and the Space Shuttle main engines used liquid oxygen.
As of 2024, many active rockets use liquid oxygen:
Chinese space program
CASC: Long March 5, Long March 6, Long March 7, Long March 8, Long March 12, Long March 9 (under development), Long March 10 (under development)
Galactic Energy: Pallas-1 (under development)
i-Space: Hyperbola-3 (under development)
LandSpace: Zhuque-2
Orienspace: Gravity-2 (under development)
Space Pioneer: Tianlong-2
European Space Agency: Ariane 6
Indian Space Research Organisation: GSLV
JAXA (Japan): H-IIA, H3
Korea Aerospace Research Institute: Naro-1, Nuri
Roscosmos (Russia): Soyuz-2, Angara
United States
Blue Origin: New Shepard, New Glenn (under development)
Firefly Aerospace: Firefly Alpha
NASA: Space Launch System
Northrop Grumman: Antares 300 (under development)
Rocket Lab: Electron, Neutron (under development)
SpaceX: Falcon 9, Falcon Heavy, Starship
United Launch Alliance: Atlas V, Vulcan
History
By 1845, Michael Faraday had managed to liquefy most gases then known to exist. Six gases, however, resisted every attempt at liquefaction and were known at the time as "permanent gases". They were oxygen, hydrogen, nitrogen, carbon monoxide, methane, and nitric oxide.
In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air.
In 1883, Polish professors Zygmunt Wróblewski and Karol Olszewski produced the first measurable quantity of liquid oxygen.
See also
Oxygen storage
Industrial gas
Cryogenics
Liquid hydrogen
Liquid helium
Liquid nitrogen
List of stoffs
Natterer compressor
Rocket fuel
Solid oxygen
Tetraoxygen
References
Further reading
Rocket oxidizers
Cryogenics
Oxygen
Industrial gases
Liquids
1883 in science | Liquid oxygen | [
"Physics",
"Chemistry"
] | 1,082 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Cryogenics",
"Oxidizing agents",
"Rocket oxidizers",
"Industrial gases",
"Chemical process engineering",
"Matter",
"Liquids"
] |
314,425 | https://en.wikipedia.org/wiki/Barrel%20shifter | A barrel shifter is a digital circuit that can shift a data word by a specified number of bits without the use of any sequential logic, only pure combinational logic, i.e. it inherently provides a binary operation. It can however in theory also be used to implement unary operations, such as logical shift left, in cases where limited by a fixed amount (e.g. for address generation unit). One way to implement a barrel shifter is as a sequence of multiplexers where the output of one multiplexer is connected to the input of the next multiplexer in a way that depends on the shift distance. A barrel shifter is often used to shift and rotate n-bits in modern microprocessors, typically within a single clock cycle.
For example, take a four-bit barrel shifter, with inputs A, B, C and D. The shifter can cycle the order of the bits ABCD as DABC, CDAB, or BCDA; in this case, no bits are lost. That is, it can shift all of the outputs up to three positions to the right (and thus make any cyclic combination of A, B, C and D). The barrel shifter has a variety of applications, including being a useful component in microprocessors (alongside the ALU).
Implementation
The very fastest shifters are implemented as full crossbars, in a manner similar to the 4-bit shifter depicted above, only larger. These incur the least delay, with the output always a single gate delay behind the input to be shifted (after allowing the small time needed for the shift count decoder to settle; this penalty, however, is only incurred when the shift count changes). These crossbar shifters, however, require n² gates for n-bit shifts. Because of this, the barrel shifter is often implemented as a cascade of parallel 2×1 multiplexers instead, which allows a large reduction in gate count, now growing only with n log n; the propagation delay, however, is larger, growing with log n (instead of being constant as with the crossbar shifter).
For an 8-bit barrel shifter, two intermediate signals are used which shift by four and two bits, or pass the same data, based on the values of S[2] and S[1]. This signal is then shifted by another multiplexer, which is controlled by S[0]:
int1 = IN , if S[2] == 0
= IN << 4, if S[2] == 1
int2 = int1 , if S[1] == 0
= int1 << 2, if S[1] == 1
OUT = int2 , if S[0] == 0
= int2 << 1, if S[0] == 1
Larger barrel shifters have additional stages.
The cascaded shifter has the further advantage over the full crossbar shifter of not requiring any decoding logic for the shift count.
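A behavioural software model of the cascaded design (a sketch, not a gate-level description) makes the structure explicit: each stage acts like a row of 2×1 multiplexers choosing between the unshifted intermediate value and a copy rotated by a power of two, selected by one bit of the shift count.

def barrel_rotate_left(value, shift, width=8):
    # Rotate an unsigned `width`-bit word left by `shift`,
    # using log2(width) conditional-rotate stages.
    mask = (1 << width) - 1
    value &= mask
    amount, bit = 1, 0
    while amount < width:
        if (shift >> bit) & 1:   # this stage's select line
            value = ((value << amount) | (value >> (width - amount))) & mask
        amount <<= 1
        bit += 1
    return value

# Rotating the 4-bit pattern ABCD = 1011 by one position gives BCDA = 0111.
print(format(barrel_rotate_left(0b1011, 1, width=4), "04b"))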
Cost
The number of multiplexers required for an n-bit word is n log2(n). Five common word sizes and the number of multiplexers needed are listed below:
128-bit — 128 × log2(128) = 896
64-bit — 64 × log2(64) = 384
32-bit — 32 × log2(32) = 160
16-bit — 16 × log2(16) = 64
8-bit — 8 × log2(8) = 24
Cost of critical path in FO4 (estimated, without wire delay):
32-bit: from 18 FO4 to 14 FO4
Uses
A common usage of a barrel shifter is in the hardware implementation of floating-point arithmetic. For a floating-point add or subtract operation, the significands of the two numbers must be aligned, which requires shifting the smaller number to the right, increasing its exponent, until it matches the exponent of the larger number. This is done by subtracting the exponents and using the barrel shifter to shift the smaller number to the right by the difference, in one cycle.
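A much-simplified sketch of that alignment step (ignoring rounding, normalisation, signs and special values; the representation is an invented toy format used only for illustration) is:

def align_and_add(sig_a, exp_a, sig_b, exp_b):
    # Add two positive numbers given as (integer significand, exponent) pairs.
    # The smaller operand's significand is shifted right by the exponent
    # difference -- the job the barrel shifter performs in one cycle.
    if exp_a < exp_b:
        sig_a, exp_a, sig_b, exp_b = sig_b, exp_b, sig_a, exp_a
    sig_b >>= (exp_a - exp_b)
    return sig_a + sig_b, exp_a

# 1.5 * 2**4 (= 24) plus 1.0 * 2**1 (= 2), with 8 fractional significand bits:
sig, exp = align_and_add(0b110000000, 4 - 8, 0b100000000, 1 - 8)
print(sig * 2.0 ** exp)   # -> 26.0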
See also
Circular shift
References
Further reading
External links
Barrel-shifter (8 bit), University of Hamburg
Implementing Barrel Shifters Using Multipliers (Paul Gigliotti, 2004-08-17)
Digital circuits
Binary arithmetic
Computer arithmetic
Unary operations
Binary operations | Barrel shifter | [
"Mathematics"
] | 843 | [
"Functions and mappings",
"Unary operations",
"Mathematical objects",
"Binary operations",
"Binary relations",
"Computer arithmetic",
"Arithmetic",
"Mathematical relations",
"Binary arithmetic"
] |
314,428 | https://en.wikipedia.org/wiki/Groupie | A groupie is a fan of a particular musical group who follows the band around while they are on tour or who attends as many of their public appearances as possible, with the hope of meeting them. The term is used mostly describing young women, and sometimes men, who follow these individuals aiming to gain fame of their own, or help with behind-the-scenes work, or to initiate a relationship of some kind, intimate or otherwise. The term is also used to describe similarly enthusiastic fans of athletes, writers, and other public figures.
Origin in music
The word groupie originated around 1965 to describe teen-aged girls or young women who began following a particular group or band of musicians on a regular basis. The phenomenon was much older; Mary McCarthy had earlier described it in her novel The Company She Keeps (1942). Some sources have attributed the coining of the word to The Rolling Stones bassist Bill Wyman during the group's 1965 Australian tour; but Wyman said he and his bandmates used other "code words" for women on tour.
A prominent explanation of the groupie concept came from Rolling Stone magazine, which published an issue devoted to the topic, Groupies: The Girls of Rock (February 1969), which emphasized the sexual behavior of rock musicians and groupies. Time magazine published an article, "Manners And Morals: The Groupies", later that month. Also that year, journalists Jenny Fabian and Johnny Byrne released a largely autobiographical novel called Groupie (1969). The following year, a documentary film titled Groupies (1970) was released.
Female groupies in particular have a long-standing reputation of being available to celebrities, pop stars, rock stars, and other public figures. Led Zeppelin singer Robert Plant is quoted as distinguishing between fans who wanted brief sexual encounters, and "groupies" who traveled with musicians for extended periods of time, acting as a surrogate girlfriend, and often taking care of the musician's wardrobe and social life. Women who adopt this role are sometimes referred to as "road wives". Cynthia Plaster Caster, Cleo Odzer, Barbara Cope (The Butter Queen) and The GTOs (Girls Together Outrageously), with Pamela Des Barres, in particular, as de facto spokeswoman, are probably the best-known groupies of this type.
A characteristic that may classify one as a groupie is a reputation for promiscuity. Connie Hamzy, also known as "Sweet Connie", a prominent groupie in the 1960s, argues in favor of the groupie movement and defends her chosen lifestyle by saying, "Look, we're not hookers, we loved the glamour". However, her openness regarding her sexual endeavors with various rock stars is exactly what has enhanced the negative connotations surrounding her type. For example, she stated in the Los Angeles Times article "Pop & Hiss" (December 15, 2010): "Hamzy, unlike the other groupies, was never looking to build relationships. She was after sex, and she unabashedly shared intimate moments with virtually every rock star—even their roadies—who came through Arkansas." However, some groupies also downplayed the sexual connotations of the term. Speaking about the "groupie" label, former baby groupie Lori Mattix stated, "I feel like it's been degraded somewhere along the way, and it was never meant to be negative. Groupies in the old days were girlfriends of the band. They were classy and sophisticated, but now you hear the word groupie and you think of hookers and strippers."
Des Barres, who wrote two books detailing her experiences as a groupie—I'm with the Band (1987) and Take Another Little Piece of My Heart: A Groupie Grows Up (1993)—as well as another non-fiction book, Rock Bottom: Dark Moments in Music Babylon, asserts that a groupie is to a rock band as Mary Magdalene was to Jesus. Her most recent book, Let's Spend the Night Together (2007), is a collection of wildly varied interviews with classic "old school" groupies including Catherine James, Connie Hamzy, Cherry Vanilla, DeeDee Keel, and Margaret Moser. Des Barres described Keel as: "One of the most intimidating dolls ... a slim strawberry blonde who won the highly prized job of Whisky office manager after her predecessor Gail Sloatman met Frank Zappa and became what we all wanted to be." Keel was one of the few who has stayed connected in Hollywood and with bands for nearly four decades. Des Barres, who married rock singer/actor Michael Des Barres, also persuaded cult actress Tura Satana, singer and model Bebe Buell, actress Patti D'Arbanville, and Cassandra Peterson, better known as "Elvira, Mistress of the Dark", to talk about their relationships with musicians.
Also according to Des Barres' book, there is at least one male groupie, Pleather, who followed female celebrities such as Courtney Love and members of the 1980s pop group The Pandoras.
The "groupie" label, as it was used in the music scene, has been criticized by some feminist scholars for diminishing the role that women played in supporting and creating music. Norma Coates, a scholar of media and cultural studies, notes that Rolling Stone's 1969 special report on groupies also included profiles of women who were not groupies at all but rather musicians in their own right. According to model and groupie Bebe Buell, groupies sometimes became music celebrities in their own right. Speaking about "baby" groupies Sable Starr and Lori Mattix, she stated, "Every rock star that came to L.A. wanted to meet them, it wasn't the other way around." Music critic Ralph J. Gleason noted that as the prominence of the most well-known groupies increased, they became the "people that others looked to when determining whether a band was 'cool.'
American space program
During the Mercury, Gemini, and Apollo American space programs in the 1960s, women would hang around the hotels of Clear Lake in Houston, home to many astronauts, and Cocoa Beach in Florida near the rocket launching site at Cape Canaveral, "collecting" astronauts. Joan Roosa, wife of Apollo 14 Command Module Pilot Stu Roosa, recalled: "I was at a party one night in Houston. A woman standing behind me, who had no idea who I was, said 'I've slept with every astronaut who has been to the Moon.' ... I said 'Pardon me, but I don't think so'."
Sports
Groupies also play a role in sports. A puck bunny is an ice hockey fan whose interest in the sport is primarily motivated by sexual attraction to the players rather than enjoyment of the game itself. Primarily a Canadian term, it gained popular currency in the 21st century, and in 2004 was added to the second edition of the Canadian Oxford Dictionary which defines it as follows:
Puck bunny: a young female hockey fan, especially one motivated more by a desire to meet the players than by an interest in hockey.
The term is somewhat analogous to the term "groupie" as it relates to rock and roll musicians. Sociological studies of the phenomenon in minor league hockey indicate that self-proclaimed "puck bunnies" are "'proud as punch' to have sex with the [players]", as it confers social status on them. However, these transitory relationships are often contrasted with those of girlfriends, with whom players have more stable, long-term relationships.
"Buckle bunnies" are a well-known part of the world of rodeo. The term comes from a slang term for women ("bunnies"), and from the prize belt buckles awarded to the winners in rodeo, which are highly sought by the bunnies. According to one report, bunnies "usually do not expect anything more than sex from the rodeo participants and vice versa".
In a 1994 Spin magazine feature, Elizabeth Gilbert characterized buckle bunnies as an essential element of the rodeo scene, and described a particularly dedicated group of bunnies who are known on the rodeo circuit for their supportive attitude and generosity, going beyond sex, to "some fascination with providing the most macho group of guys on Earth with the only brand of nurturing they will accept".
Recently, in Irish sport, particularly in Gaelic Athletic Association sports, the term "Jersey Puller" or "Jersey Tugger" has been used to describe females who are romantically interested in players. The term refers to the pulling of a player's top. It can range from women looking to be romantically linked with senior intercounty players to those interested in local players playing for their parish.
In popular culture
Film
Groupies (1970), documentary
200 Motels (1971), by Frank Zappa about life on the road.
Almost Famous (2000) depicts groupies who call themselves "band aids".
The Banger Sisters (2002) depicts two middle-aged women who used to be friends and groupies when they were young.
School of Rock (2003), referenced when Dewey Finn (Jack Black) (when creating a band and crew composed of prep school students) gives three schoolgirls the roles of groupies, until one of them—Summer Hathaway (Miranda Cosgrove)—learns what a groupie is and is appalled; Dewey subsequently gives her the more important role of band manager.
Secret Lives of Women: Groupies (2009), a reality television spot featured the Beatle Bandaids (a modern day vintage groupie troupe), Pamela Des Barres, and the Plastics (professional groupies).
In Woody Allen's movie Midnight in Paris (2011), Gil Pender (Owen Wilson) comments that Adriana is taking the word "art groupie" to a whole new level.
Evil Dead Rise (2023), the protagonist, Beth Bixler (Lily Sullivan), is constantly called a groupie by the deadite entities to mock her.
Literature
Music
Groupies
The GTOs (Girls Together Outrageously) was a band organized by Frank Zappa in the late 1960s, composed of seven groupies: Miss Pamela (Pamela Des Barres, de facto spokeswoman), Miss Sparky (Linda Sue Parker), Miss Lucy (Lucy McLaren), Miss Christine (Christine Frka), Miss Sandra (Sandra Leano), Miss Mercy (Mercy Fontentot), and Miss Cynderella (Cynthia Cale-Binion)
Songs
"Pick Me, I'm Clean" and "Road Ladies", both by Frank Zappa.
On December 16, 2014, KXNG Crooked, a.k.a. Crooked I of Slaughterhouse (Shady Records) released a song called "Groupie" featuring Shalé, produced by Jonathan Hay and Mike Smith from the album Sex, Money and Hip-Hop.
The song "La Groupie" featured by Reggaetón singers De La Ghetto, Ñejo, Lui-G 21 Plus, Nicky Jam and Ñengo Flow contains explicit vocabulary and expressions for women considered as groupies.
Michael Jackson's songs "Dirty Diana" and "Billie Jean" both describe sexual encounters with groupies.
The song "Look Away" by Iggy Pop was written for rock and roll groupie Sable Starr.
New Riders of the Purple Sage recorded a song titled "Groupie". The chorus goes "She really ain't no groupie/She said so in a movie/At least that's what she said to me."
Bonnie Bramlett and Leon Russell wrote a song they titled "Groupie", which was recorded by Delaney & Bonnie. The song was covered by The Carpenters under the title "Superstar" and it became one of their most popular hits. Besides the title change, the duo changed the lyric in the second verse from "I can hardly wait to sleep with you again" to the somewhat less suggestive "I can hardly wait to be with you again."
Grand Funk Railroad recorded their song "We're an American Band", which included the line "Sweet, sweet Connie was doing her act/She had the whole show and that's a natural fact." This lyric is referring to groupie Connie Hamzy.
Dr. Hook & the Medicine Show recorded the novelty song "Roland the Roadie and Gertrude the Groupie".
The song "Little Miss Honky Tonk" by Brooks & Dunn praises the singer's girlfriend stating "I wouldn't give her up for a thousand buckle bunnies."
The song "Star Star" by The Rolling Stones, originally titled "Starfucker", from their album Goats Head Soup (1973) is an infamous, profanity-laden song that speaks candidly of the groupie scene of the early 1970s.
The song "Groupie Love" by Lana Del Rey, featuring A$AP Rocky off her album Lust for Life (2017), connotes the relationship between an artist with a type of fan—usually a young woman which seeks for emotional or sexual intimacy, involved in obsessive adoration of entertainers such as musicians, actors, athletes, and even political figures.
The song "Famous Groupies" by the band Wings on the album London Town (1978) tells about a pair of groupies and the damage they leave behind.
The song "Sick Again" by the band Led Zeppelin on their album Physical Graffiti (1975) is about the L.A. groupie scene in the early 1970s.
The song "Summer '68" by the band Pink Floyd on their album Atom Heart Mother (1970) was written about keyboardist Richard Wright's encounter with a groupie.
Stan Rogers described his song "You Can't Stay Here" on his album Northwest Passage (1981) as "[a]n only slightly tongue-in-cheek look at the 'groupie' problem".
The song "Psycho" by the band System of a Down on their album Toxicity (2001) makes several references to groupies, such as the line "So you want to see the show? You really don't have to be a ho. From the time you were a Psycho, groupie, cocaine, crazy."
Television
In Sons of Anarchy, the groupies who hang around the fictional SOA motorcycle club are referred to as "Crow Eaters"; in season 6, Jax's ex-wife Wendy tells Tara, Margaret, and Lowen she was a "Crow Eater" for a year before marrying Jax.
References
External links
Led Zeppelin's Abuse of Groupies
Article about firefighter and police groupies after 9/11/01.
Human sexuality
Women and sexuality
Lifestyles
Bill Wyman | Groupie | [
"Biology"
] | 3,014 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
314,461 | https://en.wikipedia.org/wiki/Tabes%20dorsalis | Tabes dorsalis is a late consequence of neurosyphilis, characterized by the slow degeneration (specifically, demyelination) of the neural tracts primarily in the dorsal root ganglia of the spinal cord (nerve root). These patients have lancinating nerve root pain which is aggravated by coughing, and features of sensory ataxia with ocular involvement.
Signs and symptoms
Signs and symptoms may not appear for decades after the initial infection and include weakness, diminished reflexes, paresthesias (shooting and burning pains, pricking sensations, and formication), hypoesthesias (abnormally diminished sense of touch), tabetic gait (locomotor ataxia), progressive degeneration of the joints, loss of coordination, episodes of intense pain and disturbed sensation (including glossodynia), personality changes, urinary incontinence, dementia, deafness, visual impairment, positive Romberg's test, and impaired response to light (Argyll Robertson pupil). The skeletal musculature is hypotonic due to destruction of the sensory limb of the spindle reflex. The deep tendon reflexes are also diminished or absent; for example, the "knee jerk" or patellar reflex may be lacking (Westphal's sign). A complication of tabes dorsalis can be transient neuralgic paroxysmal pain affecting the eyes and the ophthalmic areas, previously called "Pel's crises" after Dutch physician P.K. Pel. Now more commonly called "tabetic ocular crises", an attack is characterized by sudden, intense eye pain, tearing of the eyes and sensitivity to light.
"Tabes dorsalgia" is a related lancinating back pain.
"Tabetic gait" is a characteristic ataxic gait of untreated syphilis where the person's feet slap the ground as they strike the floor due to loss of proprioception. In daylight the person can avoid some unsteadiness by watching their own feet.
Cause
Tabes dorsalis is caused by demyelination by advanced syphilis infection (tertiary syphilis) when the primary infection by the causative spirochete bacterium, Treponema pallidum, is left untreated for an extended period of time (past the point of blood infection by the organism). The spirochete invades large myelinated fibers, leading to the involvement of the dorsal column medial leminiscus pathway rather than the spinothalamic tract.
Diagnosis
Routine screening for syphilis.
Treponemal antibody tests are usually positive in both blood and CSF.
CSF examination shows lymphocytosis and elevated protein.
Serological tests are usually positive.
Treatment
Intravenously administered penicillin is the treatment of choice. Associated pain can be treated with opiates, valproate, or carbamazepine. Those with tabes dorsalis may also require physical therapy and occupational therapy to deal with muscle wasting and weakness. Preventive treatment for those who come into sexual contact with an individual with syphilis is important.
Prognosis
Left untreated, tabes dorsalis can lead to paralysis, dementia, and blindness. Existing nerve damage cannot be reversed.
Epidemiology
The disease is more frequent in males than in females. Onset is commonly during mid-life. The incidence of tabes dorsalis is rising, in part due to co-associated HIV infection.
History
Although there were earlier clinical accounts of this disease, and descriptions and illustrations of the posterior columns of the spinal cord, it was the Berlin neurologist Romberg whose account became the classical textbook description, first published in German and later translated into English.
Sir Arthur Conan Doyle, author of the Sherlock Holmes stories, completed his doctorate on tabes dorsalis in 1885.
Society and culture
Notable patients
German storywriter E.T.A. Hoffmann appears to have had tabes dorsalis and died from it in 1822.
Mary Todd Lincoln (December 12, 1818 – July 16, 1882), wife of U.S. President Abraham Lincoln and America's First Lady from 1861 to 1865, most probably suffered from tabes dorsalis as early as 1869, at age 51. She died of a stroke at age 63 in Springfield, Illinois.
The French novelist Alphonse Daudet kept a journal of the pain he experienced from this condition which was posthumously published as La Doulou (1930) and translated into English as In the Land of Pain (2002) by Julian Barnes.
Poet Charles Baudelaire contracted syphilis in 1839 and resorted to opium to help alleviate the pain of tabes dorsalis ascending his spine.
Painter Édouard Manet died of syphilis complications, including tabes dorsalis, in 1883, aged 51.
Boxer Charley Mitchell
Meyer Nudelman, the father of author and doctor Sherwin Nuland, who described his father's condition extensively in his book Lost in America; A Journey with my Father (2003).
See also
General paresis of the insane
References
External links
Pain
Histopathology
Neurodegenerative disorders
Syphilis | Tabes dorsalis | [
"Chemistry"
] | 1,058 | [
"Histopathology",
"Microscopy"
] |
314,493 | https://en.wikipedia.org/wiki/M%C3%B6bius%20transformation | In geometry and complex analysis, a Möbius transformation of the complex plane is a rational function of the form
of one complex variable ; here the coefficients , , , are complex numbers satisfying .
Geometrically, a Möbius transformation can be obtained by first applying the inverse stereographic projection from the plane to the unit sphere, moving and rotating the sphere to a new location and orientation in space, and then applying a stereographic projection to map from the sphere back to the plane. These transformations preserve angles, map every straight line to a line or circle, and map every circle to a line or circle.
The Möbius transformations are the projective transformations of the complex projective line. They form a group called the Möbius group, which is the projective linear group . Together with its subgroups, it has numerous applications in mathematics and physics.
Möbius geometries and their transformations generalize this case to any number of dimensions over other fields.
Möbius transformations are named in honor of August Ferdinand Möbius; they are an example of homographies, linear fractional transformations, bilinear transformations, and spin transformations (in relativity theory).
Overview
Möbius transformations are defined on the extended complex plane (i.e., the complex plane augmented by the point at infinity).
Stereographic projection identifies with a sphere, which is then called the Riemann sphere; alternatively, can be thought of as the complex projective line . The Möbius transformations are exactly the bijective conformal maps from the Riemann sphere to itself, i.e., the automorphisms of the Riemann sphere as a complex manifold; alternatively, they are the automorphisms of as an algebraic variety. Therefore, the set of all Möbius transformations forms a group under composition. This group is called the Möbius group, and is sometimes denoted .
The Möbius group is isomorphic to the group of orientation-preserving isometries of hyperbolic 3-space and therefore plays an important role when studying hyperbolic 3-manifolds.
In physics, the identity component of the Lorentz group acts on the celestial sphere in the same way that the Möbius group acts on the Riemann sphere. In fact, these two groups are isomorphic. An observer who accelerates to relativistic velocities will see the pattern of constellations as seen near the Earth continuously transform according to infinitesimal Möbius transformations. This observation is often taken as the starting point of twistor theory.
Certain subgroups of the Möbius group form the automorphism groups of the other simply-connected Riemann surfaces (the complex plane and the hyperbolic plane). As such, Möbius transformations play an important role in the theory of Riemann surfaces. The fundamental group of every Riemann surface is a discrete subgroup of the Möbius group (see Fuchsian group and Kleinian group). A particularly important discrete subgroup of the Möbius group is the modular group; it is central to the theory of many fractals, modular forms, elliptic curves and Pellian equations.
Möbius transformations can be more generally defined in spaces of dimension as the bijective conformal orientation-preserving maps from the to the -sphere. Such a transformation is the most general form of conformal mapping of a domain. According to Liouville's theorem a Möbius transformation can be expressed as a composition of translations, similarities, orthogonal transformations and inversions.
Definition
The general form of a Möbius transformation is given by
where , , , are any complex numbers that satisfy .
In case , this definition is extended to the whole Riemann sphere by defining
If , we define
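The displayed formulas of this definition were lost in extraction; in standard notation they read

    f(z) = \frac{a z + b}{c z + d}, \qquad a, b, c, d \in \mathbb{C}, \quad ad - bc \neq 0,

and, for c ≠ 0, the extension to the Riemann sphere is

    f\!\left(-\frac{d}{c}\right) = \infty, \qquad f(\infty) = \frac{a}{c},

while for c = 0 one sets f(\infty) = \infty.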
Thus a Möbius transformation is always a bijective holomorphic function from the Riemann sphere to the Riemann sphere.
The set of all Möbius transformations forms a group under composition. This group can be given the structure of a complex manifold in such a way that composition and inversion are holomorphic maps. The Möbius group is then a complex Lie group. The Möbius group is usually denoted as it is the automorphism group of the Riemann sphere.
If , the rational function defined above is a constant (unless , when it is undefined):
where a fraction with a zero denominator is ignored. A constant function is not bijective and is thus not considered a Möbius transformation.
An alternative definition is given as the kernel of the Schwarzian derivative.
Fixed points
Every non-identity Möbius transformation has two fixed points on the Riemann sphere. The fixed points are counted here with multiplicity; the parabolic transformations are those where the fixed points coincide. Either or both of these fixed points may be the point at infinity.
Determining the fixed points
The fixed points of the transformation
are obtained by solving the fixed point equation . For , this has two roots obtained by expanding this equation to
and applying the quadratic formula. The roots are
with discriminant
where the matrix
represents the transformation.
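Reconstructed in standard notation (consistent with the surrounding text), the computation reads

    f(\gamma) = \gamma \;\Longleftrightarrow\; c\gamma^{2} - (a - d)\gamma - b = 0,

so the quadratic formula gives the fixed points

    \gamma_{1,2} = \frac{(a - d) \pm \sqrt{\Delta}}{2c}, \qquad \Delta = (a - d)^{2} + 4bc = (a + d)^{2} - 4(ad - bc) = (\operatorname{tr} H)^{2} - 4 \det H,

where H = \begin{pmatrix} a & b \\ c & d \end{pmatrix} is the matrix representing the transformation.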
Parabolic transforms have coincidental fixed points due to zero discriminant. For c nonzero and nonzero discriminant the transform is elliptic or hyperbolic.
When , the quadratic equation degenerates into a linear equation and the transform is linear. This corresponds to the situation that one of the fixed points is the point at infinity. When the second fixed point is finite and is given by
In this case the transformation will be a simple transformation composed of translations, rotations, and dilations:
If and , then both fixed points are at infinity, and the Möbius transformation corresponds to a pure translation:
Topological proof
Topologically, the fact that (non-identity) Möbius transformations fix 2 points (with multiplicity) corresponds to the Euler characteristic of the sphere being 2:
Firstly, the projective linear group is sharply 3-transitive – for any two ordered triples of distinct points, there is a unique map that takes one triple to the other, just as for Möbius transforms, and by the same algebraic proof (essentially dimension counting, as the group is 3-dimensional). Thus any map that fixes at least 3 points is the identity.
Next, one can see by identifying the Möbius group with that any Möbius function is homotopic to the identity. Indeed, any member of the general linear group can be reduced to the identity map by Gauss–Jordan elimination; this shows that the projective linear group is path-connected as well, providing a homotopy to the identity map. The Lefschetz–Hopf theorem states that the sum of the indices (in this context, multiplicity) of the fixed points of a map with finitely many fixed points equals the Lefschetz number of the map, which in this case is the trace of the identity map on homology groups, which is simply the Euler characteristic.
By contrast, the projective linear group of the real projective line, need not fix any points – for example has no (real) fixed points: as a complex transformation it fixes ±i – while the map 2x fixes the two points of 0 and ∞. This corresponds to the fact that the Euler characteristic of the circle (real projective line) is 0, and thus the Lefschetz fixed-point theorem says only that it must fix at least 0 points, but possibly more.
Normal form
Möbius transformations are also sometimes written in terms of their fixed points in so-called normal form. We first treat the non-parabolic case, for which there are two distinct fixed points.
Non-parabolic case:
Every non-parabolic transformation is conjugate to a dilation/rotation, i.e., a transformation of the form
with fixed points at 0 and ∞. To see this define a map
which sends the points (γ1, γ2) to (0, ∞). Here we assume that γ1 and γ2 are distinct and finite. If one of them is already at infinity then g can be modified so as to fix infinity and send the other point to 0.
If f has distinct fixed points (γ1, γ2) then the transformation has fixed points at 0 and ∞ and is therefore a dilation: . The fixed point equation for the transformation f can then be written
Solving for f gives (in matrix form):
or, if one of the fixed points is at infinity:
From the above expressions one can calculate the derivatives of f at the fixed points:
and
Observe that, given an ordering of the fixed points, we can distinguish one of the multipliers (k) of f as the characteristic constant of f. Reversing the order of the fixed points is equivalent to taking the inverse multiplier for the characteristic constant:
For loxodromic transformations, whenever , one says that γ1 is the repulsive fixed point, and γ2 is the attractive fixed point. For , the roles are reversed.
Parabolic case:
In the parabolic case there is only one fixed point γ. The transformation sending that point to ∞ is
or the identity if γ is already at infinity. The transformation fixes infinity and is therefore a translation:
Here, β is called the translation length. The fixed point formula for a parabolic transformation is then
Solving for f (in matrix form) gives
Note that
If :
Note that β is not the characteristic constant of f, which is always 1 for a parabolic transformation. From the above expressions one can calculate:
Poles of the transformation
The point is called the pole of ; it is that point which is transformed to the point at infinity under .
The inverse pole is that point to which the point at infinity is transformed. The point midway between the two poles is always the same as the point midway between the two fixed points:
These four points are the vertices of a parallelogram which is sometimes called the characteristic parallelogram of the transformation.
A transform can be specified with two fixed points γ1, γ2 and the pole .
This allows us to derive a formula for conversion between k and given :
which reduces down to
The last expression coincides with one of the (mutually reciprocal) eigenvalue ratios of (compare the discussion in the preceding section about the characteristic constant of a transformation). Its characteristic polynomial is equal to
which has roots
Simple Möbius transformations and composition
A Möbius transformation can be composed as a sequence of simple transformations.
The following simple transformations are also Möbius transformations:
is a translation.
is a combination of a homothety (uniform scaling) and a rotation. If then it is a rotation, if then it is a homothety.
(inversion and reflection with respect to the real axis)
Composition of simple transformations
If , let:
(translation by d/c)
(inversion and reflection with respect to the real axis)
(homothety and rotation)
(translation by a/c)
Then these functions can be composed, showing that, if
one has
In other terms, one has
with
This decomposition makes many properties of the Möbius transformation obvious.
Elementary properties
A Möbius transformation is equivalent to a sequence of simpler transformations. The composition makes many properties of the Möbius transformation obvious.
Formula for the inverse transformation
The existence of the inverse Möbius transformation and its explicit formula are easily derived by the composition of the inverse functions of the simpler transformations. That is, define functions g1, g2, g3, g4 such that each gi is the inverse of fi. Then the composition
gives a formula for the inverse.
Preservation of angles and generalized circles
From this decomposition, we see that Möbius transformations carry over all non-trivial properties of circle inversion. For example, the preservation of angles is reduced to proving that circle inversion preserves angles since the other types of transformations are dilations and isometries (translation, reflection, rotation), which trivially preserve angles.
Furthermore, Möbius transformations map generalized circles to generalized circles since circle inversion has this property. A generalized circle is either a circle or a line, the latter being considered as a circle through the point at infinity. Note that a Möbius transformation does not necessarily map circles to circles and lines to lines: it can mix the two. Even if it maps a circle to another circle, it does not necessarily map the first circle's center to the second circle's center.
Cross-ratio preservation
Cross-ratios are invariant under Möbius transformations. That is, if a Möbius transformation maps four distinct points to four distinct points respectively, then
If one of the points is the point at infinity, then the cross-ratio has to be defined by taking the appropriate limit; e.g. the cross-ratio of is
The cross ratio of four different points is real if and only if there is a line or a circle passing through them. This is another way to show that Möbius transformations preserve generalized circles.
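A small numeric illustration (using one common convention for the cross-ratio and arbitrary sample points, neither of which is specified in the text):

```python
# Check that the cross-ratio is unchanged by a Möbius transformation.
def cross_ratio(z1, z2, z3, z4):
    # one common convention: (z1, z3; z2, z4)
    return ((z1 - z3) * (z2 - z4)) / ((z2 - z3) * (z1 - z4))

a, b, c, d = 1 + 2j, 0.5, -1j, 3           # ad - bc != 0
f = lambda z: (a * z + b) / (c * z + d)

pts = (0.0, 1.0, 2 + 1j, -1 - 1j)          # four distinct points avoiding the pole
assert abs(cross_ratio(*pts) - cross_ratio(*(f(z) for z in pts))) < 1e-10
```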
Conjugation
Two points z1 and z2 are conjugate with respect to a generalized circle C if, given a generalized circle D passing through z1 and z2 and cutting C in two points a and b, the points z1, z2, a, b are in harmonic cross-ratio (i.e. their cross-ratio is −1). This property does not depend on the choice of the circle D. This property is also sometimes referred to as being symmetric with respect to a line or circle.
Two points z, z∗ are conjugate with respect to a line, if they are symmetric with respect to the line. Two points are conjugate with respect to a circle if they are exchanged by the inversion with respect to this circle.
The point z∗ is conjugate to z when L is the line determined by the vector based upon e^(iθ), at the point z0. This can be explicitly given as
The point z∗ is conjugate to z when C is the circle of a radius r, centered about z0. This can be explicitly given as
Since Möbius transformations preserve generalized circles and cross-ratios, they also preserve the conjugation.
Projective matrix representations
Isomorphism between the Möbius group and PGL(2, C)
The natural action of on the complex projective line CP1 is exactly the natural action of the Möbius group on the Riemann sphere
Correspondence between the complex projective line and the Riemann sphere
Here, the projective line CP1 and the Riemann sphere are identified as follows:
Here [z1:z2] are homogeneous coordinates on CP1; the point [1:0] corresponds to the point ∞ of the Riemann sphere. By using homogeneous coordinates, many calculations involving Möbius transformations can be simplified, since no case distinctions dealing with ∞ are required.
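A short sketch (helper names are my own) of how homogeneous coordinates remove the special case at infinity:

```python
# Apply the matrix (a b; c d) to a point [z1 : z2] of the complex projective line.
def apply_homogeneous(a, b, c, d, z1, z2):
    return a * z1 + b * z2, c * z1 + d * z2

a, b, c, d = 2, 1, 1, 1                       # determinant ad - bc = 1

# The finite point z = 3 corresponds to [3 : 1].
w1, w2 = apply_homogeneous(a, b, c, d, 3, 1)
assert abs(w1 / w2 - (a * 3 + b) / (c * 3 + d)) < 1e-12

# The point at infinity is [1 : 0]; it maps to [a : c], i.e. to a/c = 2.
assert apply_homogeneous(a, b, c, d, 1, 0) == (a, c)
```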
Action of PGL(2, C) on the complex projective line
Every invertible complex 2×2 matrix
acts on the projective line as
where
The result is therefore
which, using the above identification, corresponds to the following point on the Riemann sphere:
Equivalence with a Möbius transformation on the Riemann sphere
Since the above matrix is invertible if and only if its determinant is not zero, this induces an identification of the action of the group of Möbius transformations with the action of PGL(2, C) on the complex projective line. In this identification, the above matrix corresponds to the Möbius transformation f(z) = (az + b)/(cz + d).
This identification is a group isomorphism, since multiplying the matrix by a nonzero scalar does not change its class in PGL(2, C), and, as this multiplication consists of multiplying all matrix entries by the same scalar, it does not change the corresponding Möbius transformation.
Other groups
For any field K, one can similarly identify the group of the projective linear automorphisms with the group of fractional linear transformations. This is widely used; for example in the study of homographies of the real line and its applications in optics.
If one divides the matrix by a square root of its determinant, one gets a matrix of determinant one. This induces a surjective group homomorphism from the special linear group SL(2, C) to the Möbius group PSL(2, C), with {±I} as its kernel.
This allows showing that the Möbius group is a 3-dimensional complex Lie group (or a 6-dimensional real Lie group), which is semisimple and non-compact, and that SL(2, C) is a double cover of PSL(2, C). Since SL(2, C) is simply connected, it is the universal cover of the Möbius group, and the fundamental group of the Möbius group is Z2.
Specifying a transformation by three points
Given a set of three distinct points z1, z2, z3 on the Riemann sphere and a second set of distinct points w1, w2, w3, there exists precisely one Möbius transformation f with f(zi) = wi for i = 1, 2, 3. (In other words: the action of the Möbius group on the Riemann sphere is sharply 3-transitive.) There are several ways to determine f from the given sets of points.
Mapping first to 0, 1,
It is easy to check that the Möbius transformation
with matrix
maps z1, z2, z3 to 0, 1, ∞, respectively. If one of the zj is ∞, then the proper formula for the matrix is obtained from the above one by first dividing all entries by zj and then taking the limit zj → ∞.
If a second matrix H2 is similarly defined to map w1, w2, w3 to 0, 1, ∞, then the matrix which maps z1, z2, z3 to w1, w2, w3 is the product of the inverse of H2 with the matrix above.
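The construction can be carried out numerically as in the following sketch (function names and sample points are illustrative, not taken from the text): map z1, z2, z3 to 0, 1, ∞, and compose with the inverse of the analogous map for w1, w2, w3.

```python
def to_zero_one_inf(p1, p2, p3):
    # matrix (a b; c d) sending p1 -> 0, p2 -> 1, p3 -> infinity
    return ((p2 - p3, -p1 * (p2 - p3)),
            (p2 - p1, -p3 * (p2 - p1)))

def matmul(m, n):
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def inverse(m):
    (a, b), (c, d) = m
    return ((d, -b), (-c, a))       # adjugate; an overall scalar factor is irrelevant

def apply(m, z):
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

zs = (1.0, 2.0, 3.0)
ws = (0.0, 1j, -1.0)
H = matmul(inverse(to_zero_one_inf(*ws)), to_zero_one_inf(*zs))
for z, w in zip(zs, ws):
    assert abs(apply(H, z) - w) < 1e-12
```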
The stabilizer of (as an unordered set) is a subgroup known as the anharmonic group.
Explicit determinant formula
The equation
is equivalent to the equation of a standard hyperbola
in the (z, w)-plane. The problem of constructing a Möbius transformation mapping a triple (z1, z2, z3) to another triple (w1, w2, w3) is thus equivalent to finding the coefficients of the hyperbola passing through the points (zi, wi). An explicit equation can be found by evaluating the determinant
by means of a Laplace expansion along the first row, resulting in explicit formulae,
for the coefficients a, b, c, d of the representing matrix. The constructed matrix has a determinant which does not vanish if the zi (respectively the wi) are pairwise different, so the Möbius transformation is well-defined. If one of the points zi or wi is ∞, then we first divide all four determinants by this variable and then take the limit as the variable approaches ∞.
Subgroups of the Möbius group
If we require the coefficients of a Möbius transformation to be real numbers with ad − bc = 1, we obtain a subgroup of the Möbius group denoted PSL(2, R). This is the group of those Möbius transformations that map the upper half-plane to itself, and is equal to the group of all biholomorphic (or equivalently: bijective, conformal and orientation-preserving) maps of the upper half-plane to itself. If a proper metric is introduced, the upper half-plane becomes a model of the hyperbolic plane H, the Poincaré half-plane model, and this group is the group of all orientation-preserving isometries of H in this model.
The subgroup of all Möbius transformations that map the open disk to itself consists of all transformations of the form
with and . This is equal to the group of all biholomorphic (or equivalently: bijective, angle-preserving and orientation-preserving) maps of the open disk to itself. By introducing a suitable metric, the open disk turns into another model of the hyperbolic plane, the Poincaré disk model, and this group is the group of all orientation-preserving isometries of H in this model.
Since both of the above subgroups serve as isometry groups of H, they are isomorphic. A concrete isomorphism is given by conjugation with the transformation
which bijectively maps the open unit disk to the upper half plane.
Alternatively, consider an open disk with radius r, centered at ri. The Poincaré disk model in this disk becomes identical to the upper-half-plane model as r approaches ∞.
A maximal compact subgroup of the Möbius group is given by
and corresponds under the isomorphism to the projective special unitary group PSU(2), which is isomorphic to the special orthogonal group SO(3) of rotations in three dimensions, and can be interpreted as rotations of the Riemann sphere. Every finite subgroup is conjugate into this maximal compact group, and thus these correspond exactly to the polyhedral groups, the point groups in three dimensions.
Icosahedral groups of Möbius transformations were used by Felix Klein to give an analytic solution to the quintic equation in ; a modern exposition is given in .
If we require the coefficients a, b, c, d of a Möbius transformation to be integers with ad − bc = 1, we obtain the modular group PSL(2, Z), a discrete subgroup of PSL(2, R) important in the study of lattices in the complex plane, elliptic functions and elliptic curves. The discrete subgroups of PSL(2, R) are known as Fuchsian groups; they are important in the study of Riemann surfaces.
Classification
In the following discussion we will always assume that the representing matrix H is normalized such that det H = ad − bc = 1.
Non-identity Möbius transformations are commonly classified into four types, parabolic, elliptic, hyperbolic and loxodromic, with the hyperbolic ones being a subclass of the loxodromic ones. The classification has both algebraic and geometric significance. Geometrically, the different types result in different transformations of the complex plane, as the figures below illustrate.
The four types can be distinguished by looking at the trace tr H = a + d. The trace is invariant under conjugation, that is, tr(G H G⁻¹) = tr H,
and so every member of a conjugacy class will have the same trace. Every Möbius transformation can be written such that its representing matrix has determinant one (by multiplying the entries with a suitable scalar). Two Möbius transformations (both not equal to the identity transform) with determinant one are conjugate if and only if they have the same squared trace.
Parabolic transforms
A non-identity Möbius transformation defined by a matrix H of determinant one is said to be parabolic if tr² H = (a + d)² = 4
(so the trace is plus or minus 2; either can occur for a given transformation since H is determined only up to sign). In fact one of the choices for H has the same characteristic polynomial as the identity matrix, and is therefore unipotent. A Möbius transform is parabolic if and only if it has exactly one fixed point in the extended complex plane, which happens if and only if it can be defined by a matrix conjugate to
which describes a translation in the complex plane.
The set of all parabolic Möbius transformations with a given fixed point in , together with the identity, forms a subgroup isomorphic to the group of matrices
this is an example of the unipotent radical of a Borel subgroup (of the Möbius group, or of for the matrix group; the notion is defined for any reductive Lie group).
Characteristic constant
All non-parabolic transformations have two fixed points and are defined by a matrix conjugate to
with the complex number λ not equal to 0, 1 or −1, corresponding to a dilation/rotation through multiplication by the complex number k = λ², called the characteristic constant or multiplier of the transformation.
Elliptic transforms
The transformation is said to be elliptic if it can be represented by a matrix of determinant 1 such that 0 ≤ tr² H < 4.
A transform is elliptic if and only if |λ| = 1 and λ ≠ ±1. Writing λ = e^(iα), an elliptic transform is conjugate to
with α real.
For any transformation H with characteristic constant k, the characteristic constant of Hⁿ is kⁿ. Thus, all Möbius transformations of finite order are elliptic transformations, namely exactly those where λ is a root of unity, or, equivalently, where α is a rational multiple of π. The simplest such fractional multiple, α = π/2, which is also the unique case of trace zero, is also referred to as a circular transform; this corresponds geometrically to rotation by 180° about two fixed points. This class is represented in matrix form as:
There are 3 representatives fixing {0, 1, ∞}, which are the three transpositions in the symmetry group of these 3 points: z ↦ 1/z, which fixes 1 and swaps 0 with ∞ (rotation by 180° about the points 1 and −1); z ↦ 1 − z, which fixes ∞ and swaps 0 with 1 (rotation by 180° about the points 1/2 and ∞); and z ↦ z/(z − 1), which fixes 0 and swaps 1 with ∞ (rotation by 180° about the points 0 and 2).
Hyperbolic transforms
The transform is said to be hyperbolic if it can be represented by a matrix whose trace is real with tr² H > 4.
A transform is hyperbolic if and only if λ is real and λ ≠ ±1.
Loxodromic transforms
The transform is said to be loxodromic if tr² H is not in the closed real interval [0, 4]. A transformation is loxodromic if and only if |λ| ≠ 1.
Historically, navigation by loxodrome or rhumb line refers to a path of constant bearing; the resulting path is a logarithmic spiral, similar in shape to the transformations of the complex plane that a loxodromic Möbius transformation makes. See the geometric figures below.
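The trace criteria above can be summarised in a small classifier; the sketch below (my own helper, assuming the matrix is already normalised to determinant one) illustrates them on a rotation, a translation and a real dilation:

```python
import cmath

def classify(a, b, c, d, tol=1e-9):
    """Classify a non-identity Möbius transformation from a determinant-1 matrix."""
    assert abs(a * d - b * c - 1) < tol
    t2 = complex(a + d) ** 2                    # square of the trace
    if abs(t2 - 4) < tol:
        return "parabolic"
    if abs(t2.imag) < tol and 0 <= t2.real < 4:
        return "elliptic"
    if abs(t2.imag) < tol and t2.real > 4:
        return "hyperbolic"
    return "loxodromic (non-hyperbolic)"

theta = 0.4
print(classify(cmath.cos(theta), -cmath.sin(theta),
               cmath.sin(theta),  cmath.cos(theta)))    # elliptic (conjugate to a rotation)
print(classify(1, 1, 0, 1))                             # parabolic (a translation)
print(classify(2, 0, 0, 0.5))                           # hyperbolic (a real dilation)
```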
General classification
The real case and a note on terminology
Over the real numbers (if the coefficients must be real), there are no non-hyperbolic loxodromic transformations, and the classification is into elliptic, parabolic, and hyperbolic, as for real conics. The terminology is due to considering half the absolute value of the trace, |tr|/2, as the eccentricity of the transformation – division by 2 corrects for the dimension, so the identity has eccentricity 1 (tr/n is sometimes used as an alternative for the trace for this reason), and absolute value corrects for the trace only being defined up to a factor of ±1 due to working in PSL. Alternatively one may use half the trace squared as a proxy for the eccentricity squared, as was done above; these classifications (but not the exact eccentricity values, since squaring and absolute values are different) agree for real traces but not complex traces. The same terminology is used for the classification of elements of (the 2-fold cover), and analogous classifications are used elsewhere. Loxodromic transformations are an essentially complex phenomenon, and correspond to complex eccentricities.
Geometric interpretation of the characteristic constant
The following picture depicts (after stereographic transformation from the sphere to the plane) the two fixed points of a Möbius transformation in the non-parabolic case:
The characteristic constant can be expressed in terms of its logarithm: k = e^(ρ + iα), with ρ and α real.
When expressed in this way, the real number ρ becomes an expansion factor. It indicates how repulsive the fixed point γ1 is, and how attractive γ2 is. The real number α is a rotation factor, indicating to what extent the transform rotates the plane anti-clockwise about γ1 and clockwise about γ2.
Elliptic transformations
If ρ = 0, then the fixed points are neither attractive nor repulsive but indifferent, and the transformation is said to be elliptic. These transformations tend to move all points in circles around the two fixed points. If one of the fixed points is at infinity, this is equivalent to doing an affine rotation around a point.
If we take the one-parameter subgroup generated by any elliptic Möbius transformation, we obtain a continuous transformation, such that every transformation in the subgroup fixes the same two points. All other points flow along a family of circles which is nested between the two fixed points on the Riemann sphere. In general, the two fixed points can be any two distinct points.
This has an important physical interpretation.
Imagine that some observer rotates with constant angular velocity about some axis. Then we can take the two fixed points to be the North and South poles of the celestial sphere. The appearance of the night sky is now transformed continuously in exactly the manner described by the one-parameter subgroup of elliptic transformations sharing the fixed points 0, ∞, and with the number α corresponding to the constant angular velocity of our observer.
Here are some figures illustrating the effect of an elliptic Möbius transformation on the Riemann sphere (after stereographic projection to the plane):
These pictures illustrate the effect of a single Möbius transformation. The one-parameter subgroup which it generates continuously moves points along the family of circular arcs suggested by the pictures.
Hyperbolic transformations
If α is zero (or a multiple of 2π), then the transformation is said to be hyperbolic. These transformations tend to move points along circular paths from one fixed point toward the other.
If we take the one-parameter subgroup generated by any hyperbolic Möbius transformation, we obtain a continuous transformation, such that every transformation in the subgroup fixes the same two points. All other points flow along a certain family of circular arcs away from the first fixed point and toward the second fixed point. In general, the two fixed points may be any two distinct points on the Riemann sphere.
This too has an important physical interpretation. Imagine that an observer accelerates (with constant magnitude of acceleration) in the direction of the North pole on his celestial sphere. Then the appearance of the night sky is transformed in exactly the manner described by the one-parameter subgroup of hyperbolic transformations sharing the fixed points 0, ∞, with the real number ρ corresponding to the magnitude of his acceleration vector. The stars seem to move along longitudes, away from the South pole toward the North pole. (The longitudes appear as circular arcs under stereographic projection from the sphere to the plane.)
Here are some figures illustrating the effect of a hyperbolic Möbius transformation on the Riemann sphere (after stereographic projection to the plane):
These pictures resemble the field lines of a positive and a negative electrical charge located at the fixed points, because the circular flow lines subtend a constant angle between the two fixed points.
Loxodromic transformations
If both ρ and α are nonzero, then the transformation is said to be loxodromic. These transformations tend to move all points in S-shaped paths from one fixed point to the other.
The word "loxodrome" is from the Greek: "λοξος (loxos), slanting + δρόμος (dromos), course". When sailing on a constant bearing – if you maintain a heading of (say) north-east, you will eventually wind up sailing around the north pole in a logarithmic spiral. On the mercator projection such a course is a straight line, as the north and south poles project to infinity. The angle that the loxodrome subtends relative to the lines of longitude (i.e. its slope, the "tightness" of the spiral) is the argument of k. Of course, Möbius transformations may have their two fixed points anywhere, not just at the north and south poles. But any loxodromic transformation will be conjugate to a transform that moves all points along such loxodromes.
If we take the one-parameter subgroup generated by any loxodromic Möbius transformation, we obtain a continuous transformation, such that every transformation in the subgroup fixes the same two points. All other points flow along a certain family of curves, away from the first fixed point and toward the second fixed point. Unlike the hyperbolic case, these curves are not circular arcs, but certain curves which under stereographic projection from the sphere to the plane appear as spiral curves which twist counterclockwise infinitely often around one fixed point and twist clockwise infinitely often around the other fixed point. In general, the two fixed points may be any two distinct points on the Riemann sphere.
You can probably guess the physical interpretation in the case when the two fixed points are 0, ∞: an observer who is both rotating (with constant angular velocity) about some axis and moving along the same axis, will see the appearance of the night sky transform according to the one-parameter subgroup of loxodromic transformations with fixed points 0, ∞, and with ρ, α determined respectively by the magnitude of the actual linear and angular velocities.
Stereographic projection
These images show Möbius transformations stereographically projected onto the Riemann sphere. Note in particular that when projected onto a sphere, the special case of a fixed point at infinity looks no different from having the fixed points in an arbitrary location.
Iterating a transformation
If a transformation f has fixed points γ1, γ2, and characteristic constant k, then its n-th iterate fⁿ will have the same fixed points γ1, γ2 and characteristic constant kⁿ.
This can be used to iterate a transformation, or to animate one by breaking it up into steps.
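As a small numeric illustration (values chosen arbitrarily), iterating a transformation amounts to powering its matrix, and for a map with fixed points 0 and ∞ the characteristic constant of the n-th iterate is visibly kⁿ:

```python
import numpy as np

M = np.array([[2.0, 0.0], [0.0, 0.5]])       # f(z) = 4z: fixed points 0 and infinity, k = 4
f = lambda A, z: (A[0, 0] * z + A[0, 1]) / (A[1, 0] * z + A[1, 1])

z = 1.0 + 1.0j
A3 = np.linalg.matrix_power(M, 3)            # represents f ∘ f ∘ f
assert abs(f(A3, z) - f(M, f(M, f(M, z)))) < 1e-12
assert abs(f(A3, z) - (4 ** 3) * z) < 1e-12  # characteristic constant of f^3 is k^3 = 64
```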
These images show three points (red, blue and black) continuously iterated under transformations with various characteristic constants.
And these images demonstrate what happens when you transform a circle under hyperbolic, elliptic, and loxodromic transforms. In the elliptic and loxodromic images, the value of α is 1/10.
Higher dimensions
In higher dimensions, a Möbius transformation is a homeomorphism of , the one-point compactification of , which is a finite composition of inversions in spheres and reflections in hyperplanes. Liouville's theorem in conformal geometry states that in dimension at least three, all conformal transformations are Möbius transformations. Every Möbius transformation can be put in the form
where , , is an orthogonal matrix, and is 0 or 2. The group of Möbius transformations is also called the Möbius group.
The orientation-preserving Möbius transformations form the connected component of the identity in the Möbius group. In dimension n = 2, the orientation-preserving Möbius transformations are exactly the maps of the Riemann sphere covered here. The orientation-reversing ones are obtained from these by complex conjugation.
The domain of Möbius transformations, i.e. the one-point compactification of Rn, is homeomorphic to the n-dimensional sphere Sn. The canonical isomorphism between these two spaces is the Cayley transform, which is itself a Möbius transformation of this compactification. This identification means that Möbius transformations can also be thought of as conformal isomorphisms of Sn. The n-sphere, together with action of the Möbius group, is a geometric structure (in the sense of Klein's Erlangen program) called Möbius geometry.
Applications
Lorentz transformation
An isomorphism of the Möbius group with the Lorentz group was noted by several authors: Based on previous work of Felix Klein (1893, 1897) on automorphic functions related to hyperbolic geometry and Möbius geometry, Gustav Herglotz (1909) showed that hyperbolic motions (i.e. isometric automorphisms of a hyperbolic space) transforming the unit sphere into itself correspond to Lorentz transformations, by which Herglotz was able to classify the one-parameter Lorentz transformations into loxodromic, elliptic, hyperbolic, and parabolic groups. Other authors include Emil Artin (1957), H. S. M. Coxeter (1965), and Roger Penrose, Wolfgang Rindler (1984), Tristan Needham (1997) and W. M. Olivia (2002).
Minkowski space consists of the four-dimensional real coordinate space R4 of ordered quadruples (x0, x1, x2, x3) of real numbers, together with the quadratic form
Q(x0, x1, x2, x3) = x0² − x1² − x2² − x3².
Borrowing terminology from special relativity, points with Q > 0 are considered timelike; in addition, if x0 > 0, then the point is called future-pointing. Points with Q < 0 are called spacelike. The null cone S consists of those points where Q = 0; the future null cone N+ are those points on the null cone with x0 > 0. The celestial sphere is then identified with the collection of rays in N+ whose initial point is the origin of R4. The collection of linear transformations on R4 with positive determinant preserving the quadratic form Q and preserving the time direction form the restricted Lorentz group SO+(1, 3).
In connection with the geometry of the celestial sphere, the group of transformations SO+(1, 3) is identified with the group of Möbius transformations of the sphere. To each point (x0, x1, x2, x3) in R4, associate the hermitian matrix
The determinant of the matrix X is equal to Q(x0, x1, x2, x3). The special linear group SL(2, C) acts on the space of such matrices via X ↦ A X A∗
for each A ∈ SL(2, C), and this action of SL(2, C) preserves the determinant of X because det A = 1. Since the determinant of X is identified with the quadratic form Q, SL(2, C) acts by Lorentz transformations. On dimensional grounds, SL(2, C) covers a neighborhood of the identity of SO+(1, 3). Since SL(2, C) is connected, it covers the entire restricted Lorentz group SO+(1, 3). Furthermore, since the kernel of the action is the subgroup {±I}, passing to the quotient group gives the group isomorphism PSL(2, C) ≅ SO+(1, 3).
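The following numeric sketch illustrates this correspondence; the particular Hermitian matrix used here is one common convention (the matrix given in the original text is not reproduced above), chosen so that its determinant equals the quadratic form Q:

```python
import numpy as np

def hermitian(x0, x1, x2, x3):
    # one common convention for the matrix X associated to (x0, x1, x2, x3)
    return np.array([[x0 + x3, x1 - 1j * x2],
                     [x1 + 1j * x2, x0 - x3]])

def Q(x0, x1, x2, x3):
    return x0**2 - x1**2 - x2**2 - x3**2

A = np.array([[1.0, 2.0 + 1.0j],
              [0.0, 1.0]])                    # an arbitrary element of SL(2, C): det A = 1
x = (2.0, 0.5, -1.0, 0.3)

X = hermitian(*x)
Y = A @ X @ A.conj().T                        # the action X -> A X A*
assert abs(np.linalg.det(X) - Q(*x)) < 1e-9   # det X = Q(x)
assert abs(np.linalg.det(Y) - Q(*x)) < 1e-9   # the action preserves Q, i.e. acts by Lorentz transformations
```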
Focusing attention now on the case when (x0, x1, x2, x3) is null, the matrix X has zero determinant, and therefore splits as the outer product of a complex two-vector ξ with its complex conjugate:
The two-component vector ξ is acted upon by SL(2, C) in a manner compatible with the action above. It is now clear that the kernel of the representation of SL(2, C) on hermitian matrices is {±I}.
The action of SL(2, C) on the celestial sphere may also be described geometrically using stereographic projection. Consider first the hyperplane in R4 given by x0 = 1. The celestial sphere may be identified with the sphere S+ of intersection of the hyperplane with the future null cone N+. The stereographic projection from the north pole of this sphere onto the plane takes a point with coordinates (1, x1, x2, x3), with x3 ≠ 1,
to the point
Introducing the complex coordinate
the inverse stereographic projection gives the following formula for a point on S+:
The action of SL(2, C) on the points of N+ does not preserve the hyperplane S+, but acting on points in S+ and then rescaling so that the result is again in S+ gives an action of SL(2, C) on the sphere which goes over to an action on the complex variable ζ. In fact, this action is by fractional linear transformations, although this is not easily seen from this representation of the celestial sphere. Conversely, any fractional linear transformation of the ζ variable goes over to a unique Lorentz transformation on N+, possibly after a suitable (uniquely determined) rescaling.
A more invariant description of the stereographic projection which allows the action to be more clearly seen is to consider the variable ζ as a ratio of a pair of homogeneous coordinates for the complex projective line CP1. The stereographic projection goes over to a transformation from C2 ∖ {0} to N+ which is homogeneous of degree two with respect to real scalings,
which agrees with the formula above upon restriction to appropriately normalized scales. The components of this map are precisely those obtained from the outer product
In summary, the action of the restricted Lorentz group SO+(1,3) agrees with that of the Möbius group . This motivates the following definition. In dimension , the Möbius group Möb(n) is the group of all orientation-preserving conformal isometries of the round sphere Sn to itself. By realizing the conformal sphere as the space of future-pointing rays of the null cone in the Minkowski space R1,n+1, there is an isomorphism of Möb(n) with the restricted Lorentz group SO+(1,n+1) of Lorentz transformations with positive determinant, preserving the direction of time.
Coxeter began instead with the equivalent quadratic form .
He identified the Lorentz group with transformations for which is stable. Then he interpreted the x's as homogeneous coordinates and , the null cone, as the Cayley absolute for a hyperbolic space of points . Next, Coxeter introduced the variables
so that the Lorentz-invariant quadric corresponds to the sphere . Coxeter notes that Felix Klein also wrote of this correspondence, applying stereographic projection from to the complex plane Coxeter used the fact that circles of the inversive plane represent planes of hyperbolic space, and the general homography is the product of inversions in two or four circles, corresponding to the general hyperbolic displacement which is the product of inversions in two or four planes.
Hyperbolic space
As seen above, the Möbius group acts on Minkowski space as the group of those isometries that preserve the origin, the orientation of space and the direction of time. Restricting to the points where in the positive light cone, which form a model of hyperbolic 3-space H, we see that the Möbius group acts on H as a group of orientation-preserving isometries. In fact, the Möbius group is equal to the group of orientation-preserving isometries of hyperbolic 3-space. If we use the Poincaré ball model, identifying the unit ball in R3 with H, then we can think of the Riemann sphere as the "conformal boundary" of H. Every orientation-preserving isometry of H gives rise to a Möbius transformation on the Riemann sphere and vice versa.
See also
Bilinear transform
Conformal geometry
Fuchsian group
Generalised circle
Hyperbolic geometry
Infinite compositions of analytic functions
Inversion transformation
Kleinian group
Lie sphere geometry
Linear fractional transformation
Liouville's theorem (conformal mappings)
Lorentz group
Modular group
Poincaré half-plane model
Projective geometry
Projective line over a ring
Representation theory of the Lorentz group
Schottky group
Smith chart
Notes
References
Specific
General
(See Chapter 6 for the classification, up to conjugacy, of the Lie subalgebras of the Lie algebra of the Lorentz group.)
See Chapter 2.
translated from
(See Chapters 3–5 of this classic book for a beautiful introduction to the Riemann sphere, stereographic projection, and Möbius transformations.)
(Aimed at non-mathematicians, provides an excellent exposition of theory and results, richly illustrated with diagrams.)
(See Chapter 3 for a beautifully illustrated introduction to Möbius transformations, including their classification up to conjugacy.)
(See Chapter 2 for an introduction to Möbius transformations.)
Further reading
External links
Conformal maps gallery
Projective geometry
Conformal geometry
Lie groups
Riemann surfaces
Functions and mappings
Kleinian groups
Conformal mappings | Möbius transformation | [
"Mathematics"
] | 8,401 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical structures",
"Lie groups",
"Mathematical objects",
"Algebraic structures",
"Mathematical relations"
] |
314,494 | https://en.wikipedia.org/wiki/Libration | In lunar astronomy, libration is the cyclic variation in the apparent position of the Moon that is perceived by observers on the Earth observers and caused by changes between the orbital and rotational planes of the moon. It causes an observer to see slightly different hemispheres of the surface at different times. It is similar in both cause and effect to the changes in the Moon's apparent size because of changes in distance. It is caused by three mechanisms detailed below, two of which cause a relatively tiny physical libration via tidal forces exerted by the Earth. Such true librations are known as well for other moons with locked rotation.
The quite different phenomenon of a trojan asteroid's movement has been called Trojan libration, and Trojan libration point means Lagrangian point.
Lunar libration
The Moon keeps one hemisphere of itself facing the Earth because of tidal locking. Therefore, the first view of the far side of the Moon was not possible until the Soviet probe Luna 3 reached the Moon on October 7, 1959; further views came from later lunar exploration by the United States and the Soviet Union. This simple picture is only approximately true: over time, slightly more than half (about 59% in total) of the Moon's surface is seen from Earth because of libration.
Lunar libration arises from three changes in perspective because of the non-circular and inclined orbit, the finite size of the Earth, and the orientation of the Moon in space. The first of these is called optical libration, the second parallax, and the third physical libration. Each of these can be divided into two contributions.
The following are the three types of lunar libration:
Optical libration, the combined libration of longitudinal and latitudinal libration produces a movement of the sub-Earth point and a wobbling view between the temporarily visible parts of the Moon, during a lunar orbit. This is not to be confused with the change of the Moon's apparent size because of the changing distance between the Moon and the Earth during the Moon's elliptic orbit, or with the change of positional angle because of the change in the position of the Moon's tilted axis, or with the observed swinging motion of the Moon because of the relative position of the Earth's tilted axis during an orbit of the Moon.
Libration in longitude results from the eccentricity of the orbit of the Moon around the Earth; the Moon's rotation sometimes leads and sometimes lags its orbital position. The lunar libration in longitude was discovered by Johannes Hevelius in 1648. It can reach 7°54′ in amplitude. Longitudinal libration allows an observer on Earth to view at times further into the Moon's west and east respectively at different phases of the Moon's orbit.
Libration in latitude results from the Moon's axial tilt (about 6.7°) between its rotation axis and orbital axis around Earth. This is analogous to how Earth's seasons arise from its axial tilt (about 23.4°) between its rotation axis and orbital axis about the Sun. Galileo Galilei is sometimes credited with the discovery of the lunar libration in latitude in 1632 although Thomas Harriot or William Gilbert might have done so before. Note Cassini's laws. It can reach 6°50′ in amplitude. The 6.7° depends on the orbit inclination of 5.15° and the negative equatorial tilt of 1.54°. Latitudinal libration allows an observer on Earth to view beyond the Moon's north pole and south pole at different phases of the Moon's orbit.
Parallax libration depends on both the longitude and latitude of the location on Earth from which the Moon is observed.
Diurnal libration is the small daily oscillation due to Earth's rotation, which carries an observer first to one side and then to the other side of the straight line joining Earth's and the Moon's centers, allowing the observer to look first around one side of the Moon and then around the other, since the observer is on Earth's surface, not at its center. It reaches less than 1° in amplitude (a rough numeric check follows this list).
Physical libration is the oscillation of orientation in space about uniform rotation and precession. There are physical librations about all three axes. The sizes are roughly 100 seconds of arc. As seen from the Earth, this amounts to less than 1 second of arc. Forced physical librations can be predicted given the orbit and shape of the Moon. The periods of free physical librations can also be predicted, but their amplitudes and phases cannot be predicted.
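As a rough order-of-magnitude check of the diurnal figure above (using a mean Earth radius of about 6,371 km and a mean Earth–Moon distance of about 384,400 km, values not given in the text), the maximum shift in viewing direction from this effect is about

arcsin(6,371 / 384,400) ≈ 0.95°,

consistent with the stated amplitude of just under 1°.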
Physical libration
Also called real libration, as opposed to the optical libration of longitudinal, latitudinal and diurnal types, the orientation of the Moon exhibits small oscillations of the pole direction in space and rotation about the pole.
This libration can be differentiated between forced and free libration. Forced libration is caused by the forces exerted during the Moon's orbit around the Earth and the Sun, and free libration represents oscillations that occur over longer time periods.
Forced physical libration
Cassini's laws state the following:
The Moon rotates uniformly about its polar axis keeping one side toward the Earth.
The Moon's equator plane is tilted with respect to the ecliptic plane and it precesses uniformly along the ecliptic plane.
The descending node of the equator on the ecliptic matches the ascending node of the orbit plane.
In addition to uniform rotation and uniform precession of the equator plane, the Moon has small oscillations of orientation in space about all three axes. These oscillations are called physical librations. Apart from the 1.5427° tilt between equator and ecliptic, the oscillations are approximately ±100 seconds of arc in size. These oscillations can be expressed with trigonometric series that depend on the lunar moments of inertia A < B < C. The sensitive combinations are β = (C − A)/B and γ = (B − A)/C. The oscillation about the polar axis is most sensitive to γ, and the 2-dimensional direction of the pole, including the 1.5427° tilt, is most sensitive to β. Consequently, accurate measurements of the physical librations provide accurate determinations of β and γ.
The placement of three retroreflectors on the Moon by the Lunar Laser Ranging experiment and two retroreflectors by Lunokhod rovers allowed accurate measurement of the physical librations by laser ranging to the Moon.
Free physical libration
A free physical libration is analogous to the solution of the homogeneous (reduced) equation of a linear differential equation. The periods of the free librations can be calculated, but their amplitudes must be measured. Lunar Laser Ranging provides the determinations. The two largest free librations were discovered by O. Calame. Modern values are:
1.3 seconds of arc with a 1056-day (2.9-year) period for rotation about the polar axis,
a 74.6-year elliptical wobble of the pole of size 8.18 × 3.31 arcseconds, and
an 81-year rotation of the pole in space that is 0.03 seconds of arc in size.
The fluid core can cause a fourth mode with a period around four centuries. The free librations are expected to damp out in times very short compared to the age of the Moon. Consequently, their existence implies that there must be one or more stimulating mechanisms.
See also
Parallactic angle
References
External links
Libration of the Moon from educational website From Stargazers to Starships
Astronomy Picture of the Day: 2005 November 13 – time-lapse animation of the Moon through one complete cycle, hosted by NASA
Libration: 2 years in 2 seconds – 24 full moon pictures taken over two years, compiled in an animation (linked on page) showing the Moon's libration and variations in apparent diameter
Observing the Lunar Libration Zones
Dynamics of the Solar System
Orbit of the Moon
Articles containing video clips | Libration | [
"Astronomy"
] | 1,698 | [
"Dynamics of the Solar System",
"Solar System"
] |
314,575 | https://en.wikipedia.org/wiki/Octagon | In geometry, an octagon () is an eight-sided polygon or 8-gon.
A regular octagon has Schläfli symbol {8} and can also be constructed as a quasiregular truncated square, t{4}, which alternates two types of edges. A truncated octagon, t{8}, is a hexadecagon, {16}. If the octagon is regarded as a truncated square, a 3D analog is the rhombicuboctahedron, in which triangular faces take the place of the truncated edges.
Properties
The sum of all the internal angles of any octagon is 1080°. As with all polygons, the external angles total 360°.
If squares are constructed all internally or all externally on the sides of an octagon, then the midpoints of the segments connecting the centers of opposite squares form a quadrilateral that is both equidiagonal and orthodiagonal (that is, whose diagonals are equal in length and at right angles to each other).
The midpoint octagon of a reference octagon has its eight vertices at the midpoints of the sides of the reference octagon. If squares are constructed all internally or all externally on the sides of the midpoint octagon, then the midpoints of the segments connecting the centers of opposite squares themselves form the vertices of a square.
Regularity
A regular octagon is a closed figure with sides of the same length and internal angles of the same size. It has eight lines of reflective symmetry and rotational symmetry of order 8. A regular octagon is represented by the Schläfli symbol {8}.
The internal angle at each vertex of a regular octagon is 135° ( radians). The central angle is 45° ( radians).
Area
The area of a regular octagon of side length a is given by
In terms of the circumradius R, the area is
In terms of the apothem r (see also inscribed figure), the area is
These last two coefficients bracket the value of pi, the area of the unit circle.
The area can also be expressed as
where S is the span of the octagon, or the second-shortest diagonal; and a is the length of one of the sides, or bases. This is easily proven if one takes an octagon, draws a square around the outside (making sure that four of the eight sides overlap with the four sides of the square) and then takes the corner triangles (these are 45–45–90 triangles) and places them with right angles pointed inward, forming a square. The edges of this square are each the length of the base.
Given the length of a side a, the span S is
The span, then, is equal to the silver ratio times the side, a.
The area is then as above:
Expressed in terms of the span, the area is
Another simple formula for the area is
More often the span S is known, and the length of the sides, a, is to be determined, as when cutting a square piece of material into a regular octagon. From the above,
The two end lengths e on each side (the leg lengths of the triangles (green in the image) truncated from the square), as well as being equal to a/√2, may be calculated as e = (S − a)/2.
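A quick numeric cross-check (side length chosen arbitrarily) of the area and span relations above:

```python
import math

a = 3.0                                   # side length
span = a * (1 + math.sqrt(2))             # S = a + 2*(a/sqrt(2)) = (1 + sqrt(2)) * a

area_formula = 2 * (1 + math.sqrt(2)) * a**2
area_from_span = span**2 - a**2           # square of side S minus the four corner triangles,
                                          # which together have area a^2
assert math.isclose(area_formula, area_from_span)

# area via the general regular-polygon formula (8 isosceles triangles of base a and height r)
area_triangles = 8 * 0.5 * a * (a / 2) / math.tan(math.pi / 8)
assert math.isclose(area_formula, area_triangles)
```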
Circumradius and inradius
The circumradius of the regular octagon in terms of the side length a is
and the inradius is
(that is one-half the silver ratio times the side, a, or one-half the span, S)
The inradius can be calculated from the circumradius as
Diagonality
The regular octagon, in terms of the side length a, has three different types of diagonals:
Short diagonal;
Medium diagonal (also called span or height), which is twice the length of the inradius;
Long diagonal, which is twice the length of the circumradius.
The formula for each of them follows from the basic principles of geometry. Here are the formulas for their length:
Short diagonal: a·√(2 + √2);
Medium diagonal: a·(1 + √2) (silver ratio times a);
Long diagonal: a·√(4 + 2√2).
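As a quick check of the first of these (a standard law-of-cosines computation across one vertex, not taken from the text): the short diagonal joins two vertices separated by one interior angle of 135°, so its length is

√(a² + a² − 2a²·cos 135°) = a·√(2 + √2) ≈ 1.848a.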
Construction
A regular octagon at a given circumcircle may be constructed as follows:
Draw a circle and a diameter AOE, where O is the center and A, E are points on the circumcircle.
Draw another diameter GOC, perpendicular to AOE.
(Note in passing that A,C,E,G are vertices of a square).
Draw the bisectors of the right angles GOA and EOG, making two more diameters HOD and FOB.
A,B,C,D,E,F,G,H are the vertices of the octagon.
A regular octagon can be constructed using a straightedge and a compass, as 8 = 23, a power of two:
The regular octagon can be constructed with Meccano bars. Twelve bars of size 4, three bars of size 5 and two bars of size 6 are required.
Each side of a regular octagon subtends half a right angle at the centre of the circle which connects its vertices. Its area can thus be computed as the sum of eight isosceles triangles, leading to the result
Area = 2(1 + √2)a² ≈ 4.828a²
for an octagon of side a.
Standard coordinates
The coordinates for the vertices of a regular octagon centered at the origin and with side length 2 are:
(±1, ±(1 + √2))
(±(1 + √2), ±1).
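A short sketch (the ordering of the vertices is my own) verifying that consecutive listed vertices are a distance 2 apart, i.e. that the side length really is 2:

```python
import math

s = 1 + math.sqrt(2)
vertices = [(1, s), (-1, s), (-s, 1), (-s, -1),
            (-1, -s), (1, -s), (s, -1), (s, 1)]

# consecutive vertices of the regular octagon should be a distance 2 apart
for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
    assert math.isclose(math.hypot(x2 - x1, y2 - y1), 2.0)
```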
Dissectibility
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms.
In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular octagon, m=4, and it can be divided into 6 rhombs, with one example shown below. This decomposition can be seen as 6 of 24 faces in a Petrie polygon projection plane of the tesseract. The list defines the number of solutions as eight, by the eight orientations of this one dissection. These squares and rhombs are used in the Ammann–Beenker tilings.
Skew
A skew octagon is a skew polygon with eight vertices and edges but not existing on the same plane. The interior of such an octagon is not generally defined. A skew zig-zag octagon has vertices alternating between two parallel planes.
A regular skew octagon is vertex-transitive with equal edge lengths. In three dimensions it is a zig-zag skew octagon and can be seen in the vertices and side edges of a square antiprism with the same D4d, [2+,8] symmetry, order 16.
Petrie polygons
The regular skew octagon is the Petrie polygon for these higher-dimensional regular and uniform polytopes, shown in these skew orthogonal projections of in A7, B4, and D5 Coxeter planes.
Symmetry
The regular octagon has Dih8 symmetry, order 16. There are three dihedral subgroups: Dih4, Dih2, and Dih1, and four cyclic subgroups: Z8, Z4, Z2, and Z1, the last implying no symmetry.
On the regular octagon, there are eleven distinct symmetries. John Conway labels full symmetry as r16. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars). Cyclic symmetries in the middle column are labeled as g for their central gyration orders. Full symmetry of the regular form is r16 and no symmetry is labeled a1.
The most common high symmetry octagons are p8, an isogonal octagon constructed by four mirrors can alternate long and short edges, and d8, an isotoxal octagon constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular octagon.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g8 subgroup has no degrees of freedom but can be seen as directed edges.
Use
The octagonal shape is used as a design element in architecture. The Dome of the Rock has a characteristic octagonal plan. The Tower of the Winds in Athens is another example of an octagonal structure. The octagonal plan has also been used in church architecture, such as St. George's Cathedral, Addis Ababa, the Basilica of San Vitale (in Ravenna, Italy), Castel del Monte (Apulia, Italy), the Florence Baptistery, Zum Friedefürsten Church (Germany) and a number of octagonal churches in Norway. The central space in the Aachen Cathedral, the Carolingian Palatine Chapel, has a regular octagonal floorplan. Uses of octagons in churches also include lesser design elements, such as the octagonal apse of Nidaros Cathedral.
Architects such as John Andrews have used octagonal floor layouts in buildings for functionally separating office areas from building services, such as in the Intelsat Headquarters of Washington or Callam Offices in Canberra.
Derived figures
Related polytopes
The octagon, as a truncated square, is first in a sequence of truncated hypercubes:
As an expanded square, it is also first in a sequence of expanded hypercubes:
See also
Bumper pool
Hansen's small octagon
Octagon house
Octagonal number
Octagram
Octahedron, 3D shape with eight faces.
Oktogon, a major intersection in Budapest, Hungary
Rub el Hizb (also known as Al Quds Star and as Octa Star), a common motif in Islamic architecture
Smoothed octagon
References
External links
Octagon Calculator
Definition and properties of an octagon With interactive animation
8 (number)
Constructible polygons
Polygons by the number of sides
Elementary shapes | Octagon | [
"Mathematics"
] | 2,125 | [
"Constructible polygons",
"Planes (geometry)",
"Euclidean plane geometry"
] |
314,610 | https://en.wikipedia.org/wiki/Pebble | A pebble is a clast of rock with a particle size of based on the Udden-Wentworth scale of sedimentology. Pebbles are generally considered larger than granules ( in diameter) and smaller than cobbles ( in diameter). A rock made predominantly of pebbles is termed a conglomerate. Pebble tools are among the earliest known man-made artifacts, dating from the Palaeolithic period of human history.
A beach composed chiefly of surface pebbles is commonly termed a shingle beach. This type of beach has armoring characteristics with respect to wave erosion, as well as ecological niches that provide habitat for animals and plants.
Inshore banks of shingle (large quantities of pebbles) exist in some locations, such as the entrance to the River Ore, England, where the moving banks of shingle give notable navigational challenges.
Pebbles come in various colors and textures and can have streaks, known as veins, of quartz or other minerals. Pebbles are mostly smooth but, dependent on how frequently they come in contact with the sea, they can have marks of contact with other rocks or other pebbles. Pebbles left above the high water mark may have growths of organisms such as lichen on them, signifying the lack of contact with seawater.
Location
Pebbles on Earth exist in two types of locations – on the beaches of various oceans and seas, and inland, where ancient seas once covered the land and left the rocks landlocked when they retreated. Inland pebbles also form in lakes, ponds and rivers, travelling downstream into estuaries, where the smoothing continues in the sea.
Beach pebbles and river pebbles (also known as river rock) are distinct in their geological formation and appearance.
Manufactured pebbles are made from natural stones such as marble, granite, and sandstone. They are produced in specified sizes and shapes, making them versatile for many purposes.
Beach
Beach pebbles form gradually over time as the ocean water washes over loose rock particles. The result is a smooth, rounded appearance. The typical size range is from 2 mm to 50 mm. The colors range from translucent white to black, and include shades of yellow, brown, red and green. Some of the more plentiful pebble beaches are along the coast of the Pacific Ocean, beginning in Canada and extending down to the tip of South America in Argentina. Other pebble beaches are in northern Europe (particularly on the beaches of the Norwegian Sea), along the coast of the U.K. and Ireland, on the shores of Australia, and around the islands of Indonesia and Japan.
Inland
Inland pebbles (river pebbles of river rock) are usually found along the shores of large rivers and lakes. These pebbles form as the flowing water washes over rock particles on the bottom and along the shores of the river. The smoothness and color of river pebbles depends on several factors, such as the composition of the soil of the river banks, the chemical characteristics of the water, and the speed of the current. Because river current is gentler than the ocean waves, river pebbles are usually not as smooth as beach pebbles. The most common colors of river rock are black, grey, green, brown and white.
Human use
Beach pebbles and river pebbles are used for a variety of purposes, both outdoors and indoors. They can be sorted by colour and size, and they can also be polished to improve the texture and colour. Outdoors, beach pebbles are often used for landscaping, construction and as decorative elements. Beach pebbles are often used to cover walkways and driveways, around pools, in and around plant containers, on patios and decks. Beach and river pebbles are also used to create water-smart gardens in areas where water is scarce. Small pebbles are also used to create living spaces and gardens on the rooftops of buildings. Indoors, pebbles can be used as bookends and paperweights. Large pebbles are also used to create "pet rocks" for children.
Mars
On Mars, slabs of pebbly conglomerate rock have been found and have been interpreted by scientists as having formed in an ancient streambed. The gravels, which were discovered by NASA's Mars rover Curiosity, range from the size of sand particles to the size of golf balls. Analysis has shown that the pebbles were deposited by a stream that flowed at walking pace and was ankle- to hip-deep.
Gallery
See also
Gravel
Japanese Garden of Peace
Rock
References
External links
Stone (material)
Sedimentology
Granularity of materials
Building stone
Natural materials | Pebble | [
"Physics",
"Chemistry"
] | 905 | [
"Natural materials",
"Materials",
"Particle technology",
"Granularity of materials",
"Matter"
] |
314,629 | https://en.wikipedia.org/wiki/Vitreous%20enamel | Vitreous enamel, also called porcelain enamel, is a material made by fusing powdered glass to a substrate by firing, usually between . The powder melts, flows, and then hardens to a smooth, durable vitreous coating. The word vitreous comes from the Latin , meaning "glassy".
Enamel can be used on metal, glass, ceramics, stone, or any material that will withstand the fusing temperature. In technical terms fired enamelware is an integrated layered composite of glass and another material (or more glass). The term "enamel" is most often restricted to work on metal, which is the subject of this article. Essentially the same technique used with other bases is known by different terms: on glass as enamelled glass, or "painted glass", and on pottery it is called overglaze decoration, "overglaze enamels" or "enamelling". The craft is called "enamelling", the artists "enamellers" and the objects produced can be called "enamels".
Enamelling is an old and widely adopted technology, for most of its history mainly used in jewellery and decorative art. Since the 18th century, enamels have also been applied to many metal consumer objects, such as some cooking vessels, steel sinks, and cast-iron bathtubs. It has also been used on some appliances, such as dishwashers, laundry machines, and refrigerators, and on marker boards and signage.
The term "enamel" has also sometimes been applied to industrial materials other than vitreous enamel, such as enamel paint and the polymers coating enameled wire; these actually are very different in materials science terms.
The word enamel comes from an Old High German word meaning "to smelt", via Old French, or from a Latin word first found in a 9th-century Life of Leo IV. Used as a noun, "an enamel" is usually a small decorative object coated with enamel. "Enamelled" and "enamelling" are the preferred spellings in British English, while "enameled" and "enameling" are preferred in American English.
History
Ancient
The earliest enamel all used the cloisonné technique, placing the enamel within small cells with gold walls. This had been used as a technique to hold pieces of stone and gems tightly in place since the 3rd millennium BC, for example in Mesopotamia, and then Egypt. Enamel seems likely to have developed as a cheaper method of achieving similar results.
The earliest undisputed objects known to use enamel are a group of Mycenaean rings from Cyprus, dated to the 13th century BC. Although Egyptian pieces, including jewellery from the Tomb of Tutankhamun of c. 1325 BC, are frequently described as using "enamel", many scholars doubt the glass paste was sufficiently melted to be properly so described, and use terms such as "glass-paste". It seems possible that in Egyptian conditions the melting point of the glass and gold were too close to make enamel a viable technique. Nonetheless, there appear to be a few actual examples of enamel, perhaps from the Third Intermediate Period of Egypt (beginning 1070 BC) on. But it remained rare in both Egypt and Greece.
The technique appears in the Koban culture of the northern and central Caucasus, and was perhaps carried by the Sarmatians to the ancient Celts. Red enamel is used in 26 places on the Battersea Shield (c.350–50 BC), probably as an imitation of the red Mediterranean coral, which is used on the Witham Shield (400–300 BC). Pliny the Elder mentions the Celts' use of the technique on metal, which the Romans in his day hardly knew. The Staffordshire Moorlands Pan is a 2nd-century AD souvenir of Hadrian's Wall, made for the Roman military market, which has swirling enamel decoration in a Celtic style. In Britain, probably through preserved Celtic craft skills, enamel survived until the hanging bowls of early Anglo-Saxon art.
A problem that adds to the uncertainty over early enamel is artefacts (typically excavated) that appear to have been prepared for enamel, but have now lost whatever filled the cloisons or backing to a champlevé piece. This occurs in several different regions, from ancient Egypt to Anglo-Saxon England. Once enamel becomes more common, as in medieval Europe after about 1000, the assumption that enamel was originally used becomes safer.
Medieval and Renaissance Europe
In European art history, enamel was at its most important in the Middle Ages, beginning with the Late Romans and then the Byzantine, who began to use cloisonné enamel in imitation of cloisonné inlays of precious stones. The Byzantine enamel style was widely adopted by the peoples of Migration Period northern Europe. The Byzantines then began to use cloisonné more freely to create images; this was also copied in Western Europe. In Kievan Rus a finift enamel technique was developed.
Mosan metalwork often included enamel plaques of the highest quality in reliquaries and other large works of goldsmithing. Limoges enamel was made in Limoges, France, the most famous centre of vitreous enamel production in Western Europe, though Spain also made a good deal. Limoges became famous for champlevé enamels from the 12th century onwards, producing on a large scale, and then (after a period of reduced production) from the 15th century retained its lead by switching to painted enamel on flat metal plaques. The champlevé technique was considerably easier and very widely practiced in the Romanesque period. In Gothic art the finest work is in basse-taille and ronde-bosse techniques, but cheaper champlevé works continued to be produced in large numbers for a wider market.
Painted enamel remained in fashion for over a century, and in France developed into sophisticated Renaissance and Mannerist styles, seen on objects such as large display dishes, ewers, inkwells and small portraits. After it fell from fashion it continued as a medium for portrait miniatures, spreading to England and other countries. This continued until the early 19th century.
A Russian school developed, which used the technique on other objects, as in the Renaissance, and for relatively cheap religious pieces such as crosses and small icons.
China
From either Byzantium or the Islamic world, the cloisonné technique reached China in the 13–14th centuries. The first written reference to cloisonné is in a book from 1388, where it is called "Dashi ('Muslim') ware". No Chinese pieces that are clearly from the 14th century are known; the earliest datable pieces are from the reign of the Xuande Emperor (1425–1435), which, since they show a full use of Chinese styles, suggest considerable experience in the technique.
Cloisonné remained very popular in China until the 19th century and is still produced today. The most elaborate and most highly valued Chinese pieces are from the early Ming dynasty, especially the reigns of the Xuande Emperor and Jingtai Emperor (1450–1457), although 19th century or modern pieces are far more common.
Japan
Japanese artists did not make three-dimensional enamelled objects until the 1830s but, once the technique took hold based on analysis of Chinese objects, it developed very rapidly, reaching a peak in the Meiji and Taishō eras (late 19th/early 20th century). Enamel had been used as decoration for metalwork since about 1600, and Japanese cloisonné was already exported to Europe before the start of the Meiji era in 1868. Cloisonné is known in Japan as shippō, literally "seven treasures". This refers to richly coloured substances mentioned in Buddhist texts. The term was initially used for colourful objects imported from China. According to legend, in the 1830s Kaji Tsunekichi broke open a Chinese enamel object to examine it, then trained many artists, starting off Japan's own enamel industry.
Early Japanese enamels were cloudy and opaque, with relatively clumsy shapes. This changed rapidly from 1870 onwards. The Nagoya cloisonné company existed from 1871 to 1884 to sell the output of many small workshops and help them improve their work. In 1874, the government created a company to sponsor the creation of a wide range of decorative arts at international exhibitions. This was part of a programme to promote Japan as a modern, industrial nation.
Gottfried Wagener was a German scientist brought in by the government to advise Japanese industry and improve production processes. Along with Namikawa Yasuyuki he developed a transparent black enamel which was used for backgrounds. Translucent enamels in various other colours followed during this period. Along with Tsukamoto Kaisuke, Wagener transformed the firing processes used by Japanese workshops, improving the quality of finishes and extending the variety of colours. Kawade Shibatarō introduced a variety of techniques, including a drip-glaze technique which produces a rainbow-coloured glaze and a repoussé technique, in which the metal foundation is hammered outwards to create a relief effect. Together with Hattori Tadasaburō he developed a "piling up" technique which places layers of enamel upon each other to create a three-dimensional effect. Namikawa Sōsuke developed a pictorial style that imitated paintings. He is known for minimised-wire and wireless cloisonné: techniques developed with Wagener in which the wires are minimised or burned away completely with acid. This contrasts with the Chinese style, which used thick metal wires. Ando Jubei introduced a technique which burns away the metal substrate to leave translucent enamel, producing an effect resembling stained glass. The Ando Cloisonné Company which he co-founded is one of the few makers from this era still active. Distinctively Japanese designs, in which flowers, birds and insects were used as themes, became popular. Designs also increasingly used areas of blank space. With the greater subtlety these techniques allowed, Japanese enamels were regarded as unequalled in the world and won many awards at national and international exhibitions.
India and Islamic world
Enamel was established in the Mughal Empire by around 1600 for decorating gold and silver objects, and became a distinctive feature of Mughal jewellery. The Mughal court was known to employ mīnākār (enamelers). These craftsmen reached a peak during the reign of Shah Jahan in the mid-17th century. Transparent enamels were popular during this time. Both cloisonné and champlevé were produced in Mughal India, with champlevé used for the finest pieces. Modern industrial production began in Calcutta in 1921, with the Bengal Enamel Works Limited.
Enamel was used in Iran for colouring and ornamenting the surface of metals by fusing over it brilliant colours that are decorated in an intricate design called Meenakari. The French traveller Jean Chardin, who toured Iran during the Safavid period, made a reference to an enamel work of Isfahan, which comprised a pattern of birds and animals on a floral background in light blue, green, yellow and red. Gold has been used traditionally for Meenakari jewellery as it holds the enamel better, lasts longer and its lustre brings out the colours of the enamels. Silver, a later introduction, is used for artifacts like boxes, bowls, spoons, and art pieces. Copper began to be used for handicraft products after the Gold Control Act was enforced in India, which compelled the Meenakars to look for an alternative material. Initially, the work of Meenakari often went unnoticed as this art was traditionally used on the back of pieces of kundan or gem-studded jewellery, allowing pieces to be reversible.
Modern
More recently, the bright, jewel-like colours have made enamel popular with jewellery designers, including the Art Nouveau jewellers, for designers of bibelots such as the eggs of Peter Carl Fabergé and the enameled copper boxes of the Battersea enamellers, and for artists such as George Stubbs and other painters of portrait miniatures.
Enamel was first applied commercially to sheet iron and steel in Austria and Germany in about 1850. Industrialization increased as the purity of raw materials increased and costs decreased. The wet application process started with the discovery of the use of clay to suspend frit in water. Developments that followed during the 20th century include enamelling-grade steel, cleaned-only surface preparation, automation, and ongoing improvements in efficiency, performance, and quality.
Between the World Wars, Cleveland in the United States became a center for enamel art, led by Kenneth F. Bates; H. Edward Winter, who had taught at the Cleveland School of Art, wrote three books on the topic, including Enamel Art on Metals. In Australia, abstract artist Bernard Hesling brought the style into prominence with his variously sized steel plates, starting in 1957. A resurgence in enamel-based art took place near the end of the 20th century in the Soviet Union, led by artists like Alexei Maximov and Leonid Efros.
Properties
Vitreous enamel can be applied to most metals. Most modern industrial enamel is applied to steel in which the carbon content is controlled to prevent unwanted reactions at the firing temperatures. Enamel can also be applied to gold, silver, copper, aluminium, stainless steel, and cast iron.
Vitreous enamel has many useful properties: it is smooth, hard, chemically resistant, durable, scratch resistant (5–6 on the Mohs scale), has long-lasting colour fastness, is easy to clean, and cannot burn. Enamel is glass, not paint, so it does not fade under ultraviolet light. A disadvantage of enamel is a tendency to crack or shatter when the substrate is stressed or bent, but modern enamels are relatively chip- and impact-resistant because of good thickness control and coefficients of thermal expansion well-matched to the metal.
The Buick automobile company was founded by David Dunbar Buick with wealth earned by his development of improved enamelling processes, c. 1887, for sheet steel and cast iron. Such enameled ferrous material had, and still has, many applications: early 20th century and some modern advertising signs, interior oven walls, cooking pots, housing and interior walls of major kitchen appliances, housing and drums of clothes washers and dryers, sinks and cast iron bathtubs, farm storage silos, and processing equipment such as chemical reactors and pharmaceutical process tanks. Structures such as filling stations, bus stations and Lustron Houses had walls, ceilings and structural elements made of enamelled steel.
One of the most widespread modern uses of enamel is in the production of quality chalk-boards and marker-boards (typically called 'blackboards' or 'whiteboards') where the resistance of enamel to wear and chemicals ensures that 'ghosting', or unerasable marks, do not occur, as happens with polymer boards. Since standard enamelling steel is magnetically attractive, it may also be used for magnet boards. Some new developments in the last ten years include enamel/non-stick hybrid coatings, sol-gel functional top-coats for enamels, enamels with a metallic appearance, and easy-to-clean enamels.
The key ingredient of vitreous enamel is finely ground glass called frit. Frit for enamelling steel is typically an alkali borosilicate glass with a thermal expansion and glass temperature suitable for coating steel. Raw materials are smelted together into a liquid glass that is directed out of the furnace and thermally shocked with either water or steel rollers into frit.
Colour in enamel is obtained by the addition of various minerals, often metal oxides of cobalt, praseodymium, iron, or neodymium. The latter creates delicate shades ranging from pure violet through wine-red and warm grey. Enamel can be transparent, opaque or opalescent (translucent). Different enamel colours can be mixed to make a new colour, in the manner of paint.
There are various types of frit, which may be applied in sequence. A ground coat is applied first; it usually contains smelted-in transition metal oxides such as cobalt, nickel, copper, manganese, and iron that facilitate adhesion to the metal. Next, clear and semi-opaque frits that contain material for producing colours are applied.
Techniques of artistic enameling
The three main historical techniques for enamelling metal are:
Cloisonné, French for "cell", where thin wires are applied to form raised barriers, which contain different areas of (subsequently applied) enamel. Widely practiced in Europe, the Middle East and East Asia.
Champlevé, French for "raised field", where the surface is carved out to form pits in which enamel is fired, leaving the original metal exposed; the Romanesque Stavelot Triptych is an example.
Painted enamel, where a design in enamel is painted onto a smooth metal surface. Limoges enamel is the best known type of painted enamel, using this technique from the 16th century onwards. Most traditional painting on glass, and some on ceramics, uses what is technically enamel, but is often described by terms such as "painted in enamels", reserving "painted enamel" and "enamel" as a term for the whole object for works with a metal base.
Variants, and less common techniques are:
Basse-taille, from the French word meaning "low-cut". The surface of the metal is decorated with a low relief design which can be seen through translucent and transparent enamels. The 14th century Royal Gold Cup is an outstanding example.
Plique-à-jour, French for "open to daylight" where the enamel is applied in cells, similar to cloisonné, but with no backing, so light can shine through the transparent or translucent enamel. It has a stained-glass like appearance; the Mérode Cup is the surviving medieval example.
Ronde bosse, French for "in the round", also known as "encrusted enamel". A 3D type of enamelling where a sculptural form or wire framework is completely or partly enamelled, as in the 15th century Holy Thorn Reliquary.
Grisaille, a version of painted enamel (the French term means "in grey"), where a dark, often blue or black background is applied, then an opalescent (translucent) enamel is painted on top, building up designs in a monochrome gradient, paler as the thickness of the layer of light colour increases.
En résille (French for 'enamel in a network on glass'), where enamelled metal is suspended in glass. The technique was briefly popular in seventeenth-century France and was re-discovered by Margret Craver in 1953. Craver spent 13 years re-creating the technique.
Other types:
Enamelled glass, in which a glass surface is enamelled, and fired to fuse the glasses.
Stenciling, where a stencil is placed over the work and the powdered enamel is sifted over the top. The stencil is removed before firing, the enamel staying in a pattern, slightly raised.
Sgraffito, where an unfired layer of enamel is applied over a previously fired layer of enamel of a contrasting colour, and then partly removed with a tool to create the design.
Serigraph, where a silkscreen with a 60–70 grade mesh is used.
Surrey enamel, a 17th-century type for brass objects such as candlesticks; effectively champlevé.
Counter-enamelling, not strictly a technique, but a necessary step in many techniques, especially painted enamel on thin plaques; introduced in 15th-century Europe. Enamel is applied to the back of a piece as well – sandwiching the metal – to equalize the rates of expansion under heat, and so create less tension on the glass so it does not crack.
Safed chalwan, where jewels are set in white enamel
See also Japanese shippō-yaki techniques.
Industrial enamel application
On sheet steel, a ground coat layer is applied to create adhesion. The only surface preparation required for modern ground coats is degreasing of the steel with a mildly alkaline solution. White and coloured second "cover" coats of enamel are applied over the fired ground coat. For electrostatic enamels, the coloured enamel powder can be applied directly over a thin unfired ground coat "base coat" layer that is co-fired with the cover coat in a very efficient two-coat/one-fire process.
The frit in the ground coat contains smelted-in cobalt and/or nickel oxide as well as other transition metal oxides to catalyse the enamel-steel bonding reactions. During firing of the enamel, iron oxide scale first forms on the steel. The molten enamel dissolves the iron oxide and precipitates cobalt and nickel. The iron acts as the anode in an electrogalvanic reaction in which the iron is again oxidised, dissolved by the glass, and oxidised again with the available cobalt and nickel limiting the reaction. Finally, the surface becomes roughened with the glass anchored into the holes.
Building cladding
Enamel coatings applied to steel panels offer protection to the core material whether cladding road tunnels, underground stations, building superstructures or other applications. It can also be specified as a curtain walling. Qualities of this structural material include:
Durable
Withstands extreme temperatures and is non-flammable
Long lasting UV, climate and corrosion resistance
Dirt-repellent and graffiti-proof
Resistant to abrasion and chemicals
Easy cleaning and maintenance
Gallery
See also
Fred Uhl Ball (1945–1985) – American enamellist who created the largest known enamel mural
Oskar Schindler
Rostov in Russia, with Moscow a centre of the Russian industry
Notes
References
Campbell, Marian. An Introduction to Medieval Enamels, 1983, HMSO for V&A Museum,
Lucie-Smith, Edward, The Thames & Hudson Dictionary of Art Terms, 2003 (2nd edn), Thames & Hudson, World of Art series,
Ogden, Jack, "Metal", in Ancient Egyptian Materials and Technology, eds. Paul T. Nicholson, Ian Shaw, 2000, Cambridge University Press, , 9780521452571, google books
Osborne, Harold (ed), The Oxford Companion to the Decorative Arts, 1975, OUP,
Further reading
"Collection Highlights: Art in the Islamic World". Beaker. Smithsonian Institution: 2013.
Dimand, M. S. "An Enameled-Glass Bottle of the Mamluk Period". Metropolitan Museum of Art.
Papadopoulous, Kiko. "Venetian Eastern Trade: 11th to 14th Centuries" 20 January 2012.
External links
Enamels on jewelry – historical
Enameling Articles and Tutorials at The Ganoksin Project
CIDAE Center of Information and Difusion of the Art of Enamelling (ES)
Society of Dutch Enamellers (NL)
The Enamelist Society (US)
Guild of Enamellers, UK
International Enamellers Institute
Vitreous Enamel Association (UK)
Pottery
Decorative arts
Coatings
Visual arts materials
Jewellery making
Glass applications
Glass compositions
Glass art
Ceramic art
Ceramic glazes | Vitreous enamel | [
"Chemistry"
] | 4,686 | [
"Glass chemistry",
"Glass compositions",
"Coatings",
"Vitreous enamel",
"Ceramic glazes"
] |
314,647 | https://en.wikipedia.org/wiki/GROMOS | GROningen MOlecular Simulation (GROMOS) is the name of a force field for molecular dynamics simulation, and a related computer software package. Both are developed at the University of Groningen, and at the Computer-Aided Chemistry Group at the Laboratory for Physical Chemistry at the Swiss Federal Institute of Technology (ETH Zurich). At Groningen, Herman Berendsen was involved in its development.
The united atom force field was optimized with respect to the condensed phase properties of alkanes.
Versions
GROMOS87
Aliphatic and aromatic hydrogen atoms were included implicitly by representing the carbon atom and attached hydrogen atoms as one group centered on the carbon atom, a united atom force field. The van der Waals force parameters were derived from calculations of the crystal structures of hydrocarbons, and on amino acids using short (0.8 nm) nonbonded cutoff radii.
GROMOS96
In 1996, a substantial rewrite of the software package was released. The force field was also improved, e.g., in the following way: aliphatic CHn groups were represented as united atoms with van der Waals interactions reparametrized on the basis of a series of molecular dynamics simulations of model liquid alkanes using long (1.4 nm) nonbonded cutoff radii. This version is continually being refined and several different parameter sets are available. GROMOS96 includes studies of molecular dynamics, stochastic dynamics, and energy minimization. The energy component was also part of the prior release, GROMOS87. GROMOS96 was planned and developed over a period of 20 months. The package consists of 40 different programs, each with a different essential function. Two important programs within GROMOS96 are PROGMT, which constructs the molecular topology, and PROPMT, which converts a classical molecular topology into a path-integral molecular topology.
GROMOS05
An updated version of the software package was introduced in 2005.
GROMOS11
The current GROMOS release is dated May 2011.
Parameter sets
Several force field parameter sets are based on the GROMOS force field; some are listed below. The A-version applies to aqueous or apolar solutions of proteins, nucleotides, and sugars. The B-version applies to isolated molecules (gas phase).
54
54A7 - 53A6 taken and adjusted torsional angle terms to better reproduce helical propensities, altered N–H, C=O repulsion, new CH3 charge group, parameterisation of Na+ and Cl− to improve free energy of hydration and new improper dihedrals.
54B7 - 53B6 in vacuo taken and changed in same manner as 53A6 to 54A7.
53
53A5 - optimised by first fitting to reproduce the thermodynamic properties of pure liquids of a range of small polar molecules and the solvation free enthalpies of amino acid analogs in cyclohexane, is an expansion and renumbering of 45A3.
53A6 - 53A5 taken and adjusted partial charges to reproduce hydration free enthalpies in water, recommended for simulations of biomolecules in explicit water.
45
45A3 - suitable to apply to lipid aggregates such as membranes and micelles, for mixed systems of aliphatics with or without water, for polymers, and other apolar systems that may interact with different biomolecules.
45A4 - 45A3 reparameterised to improve DNA representation.
43
43A1
43A2
See also
GROMACS
Ascalaph Designer
Comparison of software for molecular mechanics modeling
Comparison of force field implementations
References
External links
C++ software
Fortran software
Molecular dynamics software
Force fields (chemistry) | GROMOS | [
"Chemistry"
] | 794 | [
"Molecular dynamics software",
"Computational chemistry software",
"Molecular dynamics",
"Computational chemistry",
"Force fields (chemistry)"
] |
314,650 | https://en.wikipedia.org/wiki/Enamel%20paint | Enamel paint is paint that air-dries to a hard, usually glossy, finish, used for coating surfaces that are outdoors or otherwise subject to hard wear or variations in temperature; it should not be confused with decorated objects in "painted enamel", where vitreous enamel is applied with brushes and fired in a kiln. The name is something of a misnomer, as in reality most commercially available enamel paints are significantly softer than either vitreous enamel or stoved synthetic resins, and are totally different in composition; vitreous enamel is applied as a powder or paste and then fired at high temperature. There is no generally accepted definition or standard for use of the term "enamel paint", and not all enamel-type paints may use it.
Paint
Typically the term "enamel paint" is used to describe oil-based covering products, usually with a significant amount of gloss in them; however, recently many latex or water-based paints have adopted the term as well. The term today means "hard surfaced paint" and usually is in reference to paint brands of higher quality, floor coatings of a high gloss finish, or spray paints. Most enamel paints are alkyd resin based. Some enamel paints have been made by adding varnish to oil-based paint.
Although "enamels" and "painted enamel" in art normally refer to vitreous enamel, in the 20th century some artists used commercial enamel paints in art, including Pablo Picasso (mixing it with oil paint), Hermann-Paul, Jackson Pollock, and Sidney Nolan. The Trial (1947) is one of a number of works by Nolan to use enamel paint, usually Ripolin, a commercial paint not intended for art that was also Picasso's usual brand. Some "enamel paints" are now produced specifically for artists.
Enamel paints can also refer to nitrocellulose-based paints, one of the first modern commercial paints of the 20th century. They have since been superseded by newer synthetic coatings like alkyd, acrylic and vinyl, due to toxicity, safety, and conservation (tendency to yellow with age) concerns. In art, nitrocellulose enamel was also used by Pollock under the commercial name Duco; the artist experimented with many types of commercial or house paints during his career. Of other artists it has been written that "after discovering various types of industrial materials produced in the United States in the 1930s, Siqueiros produced most of his easel works with uncommon materials which include Duco paint, a DuPont brand name for pyroxyline paint, a tough and resilient type of nitro-cellulose paint manufactured for the automotive industry". Nitrocellulose enamels are also commonly known as modern lacquers. Enamel paint comes in a variety of hues and can be custom blended to produce a particular tint. It is also available in water-based and solvent-based formulations, with solvent-based enamel being more prevalent in industrial applications. For the best results, use a high-quality brush, roller, or spray gun when applying enamel paint. When dried, enamel paint forms a durable, hard-wearing surface that resists chipping, fading, and discoloration, making it a good choice for a wide range of surfaces and applications.
Uses and categories
Floor enamel – May be used for concrete, stairs, basements, porches, and patios.
Fast dry enamel – Can dry within 10–15 minutes of application. Ideal for refrigerators, counters, and other industrial finishes.
High-temp enamel – May be used for engines, brake calipers, exhaust pipe and BBQs.
Enamel paint is also used on wood to make it resistant to the elements via the waterproofing and rotproofing properties of enamel. Generally, treated surfaces last much longer and are much more resistant to wear than untreated surfaces.
Model building – Xtracolor and Humbrol are mainstream UK brands. Colourcoats model paint is a high quality brand with authentic accurate military colours. Testors, a US company, offers the Floquil, Pactra, Model Master and Testors brands.
Nail enamel – to color nails, it comes in many varieties for fast drying, color retention, gloss retention, etc.
Epoxy enamel, polyurethane enamel, etc. used in protective coating / industrial painting purpose in chemical and petrochemical industries for anti-corrosion purposes.
Notes
Coatings
Paints | Enamel paint | [
"Chemistry"
] | 904 | [
"Paints",
"Coatings"
] |
314,652 | https://en.wikipedia.org/wiki/Square%20wave | A square wave is a non-sinusoidal periodic waveform in which the amplitude alternates at a steady frequency between fixed minimum and maximum values, with the same duration at minimum and maximum. In an ideal square wave, the transitions between minimum and maximum are instantaneous.
The square wave is a special case of a pulse wave which allows arbitrary durations at minimum and maximum amplitudes. The ratio of the high period to the total period of a pulse wave is called the duty cycle. A true square wave has a 50% duty cycle (equal high and low periods).
Square waves are often encountered in electronics and signal processing, particularly digital electronics and digital signal processing. Its stochastic counterpart is a two-state trajectory.
Origin and uses
Square waves are universally encountered in digital switching circuits and are naturally generated by binary (two-level) logic devices. They are used as timing references or "clock signals", because their fast transitions are suitable for triggering synchronous logic circuits at precisely determined intervals. However, as the frequency-domain graph shows, square waves contain a wide range of harmonics; these can generate electromagnetic radiation or pulses of current that interfere with other nearby circuits, causing noise or errors. To avoid this problem in very sensitive circuits such as precision analog-to-digital converters, sine waves are used instead of square waves as timing references.
In musical terms, they are often described as sounding hollow, and are therefore used as the basis for wind instrument sounds created using subtractive synthesis. They also make up the "beeping" alerts used in many household, commercial, and industrial contexts. Additionally, the distortion effect used on electric guitars clips the outermost regions of the waveform, causing it to increasingly resemble a square wave as more distortion is applied.
Simple two-level Rademacher functions are square waves.
Definitions
The square wave in mathematics has many definitions, which are equivalent except at the discontinuities:
It can be defined as simply the sign function of a sinusoid:
x(t) = sgn(sin(2πft)) = sgn(sin(2πt/T)),
which will be 1 when the sinusoid is positive, −1 when the sinusoid is negative, and 0 at the discontinuities. Here, T is the period of the square wave and f is its frequency, which are related by the equation f = 1/T.
A square wave can also be defined with respect to the Heaviside step function u(t) or the rectangular function Π(t).
A square wave can also be generated using the floor function directly:
x(t) = 2(2⌊ft⌋ − ⌊2ft⌋) + 1,
and indirectly:
x(t) = (−1)^⌊2ft⌋.
Using the Fourier series (below), one can also express the floor function in trigonometric form.
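The constructions above are easy to check numerically. The following Python sketch is purely illustrative (NumPy usage and the function names are assumptions of this example, not part of the article's definitions); it builds one square wave from the sign of a sine and another from the floor function and confirms that they agree away from the discontinuities.

import numpy as np

def square_sign(t, f=1.0):
    # sgn(sin(2*pi*f*t)): +1 on the first half-period, -1 on the second,
    # and exactly 0 at the discontinuities.
    return np.sign(np.sin(2 * np.pi * f * t))

def square_floor(t, f=1.0):
    # floor(2*f*t) counts half-periods: even half-periods give +1, odd give -1.
    return 1.0 - 2.0 * (np.floor(2 * f * t) % 2)

# Sample two periods at f = 1, deliberately avoiding the jump instants.
t = np.arange(0.0625, 2.0, 0.125)
print(square_sign(t))
print(np.array_equal(square_sign(t), square_floor(t)))   # True away from the jumps

At the jump instants themselves the two forms differ, as noted above: the sign-based definition returns 0 there, while the floor-based one picks one of the two levels.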
Fourier analysis
Using Fourier expansion with cycle frequency f over time t, an ideal square wave with an amplitude of 1 can be represented as an infinite sum of sinusoidal waves:
x(t) = (4/π) Σ_{k=1}^{∞} sin(2π(2k − 1)ft) / (2k − 1) = (4/π) [ sin(2πft) + (1/3) sin(6πft) + (1/5) sin(10πft) + ⋯ ].
The ideal square wave contains only components of odd-integer harmonic frequencies (of the form (2k − 1)f, i.e. odd multiples of the fundamental frequency).
A curiosity of the convergence of the Fourier series representation of the square wave is the Gibbs phenomenon. Ringing artifacts in non-ideal square waves can be shown to be related to this phenomenon. The Gibbs phenomenon can be prevented by the use of σ-approximation, which uses the Lanczos sigma factors to help the sequence converge more smoothly.
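The effect of the Lanczos sigma factors is easy to see numerically. The Python sketch below is only an illustration (NumPy usage, the function name, and the exact choice of cutoff in the sigma factor are assumptions of this example): it sums the first twenty odd harmonics of a unit square wave with and without sigma weighting and prints the peak value, showing the roughly 9% Gibbs overshoot being suppressed.

import numpy as np

def square_partial_sum(t, f, n_terms, use_sigma=False):
    # Sum the first n_terms odd harmonics of the unit-amplitude square wave,
    # optionally weighting each term by a Lanczos sigma factor.
    x = np.zeros_like(t, dtype=float)
    ks = np.arange(1, 2 * n_terms, 2)          # odd harmonics 1, 3, 5, ...
    m = ks[-1] + 2                             # cutoff used in the sigma factor
    for k in ks:
        term = (4 / np.pi) * np.sin(2 * np.pi * k * f * t) / k
        if use_sigma:
            term *= np.sinc(k / m)             # np.sinc(x) = sin(pi*x)/(pi*x)
        x += term
    return x

t = np.linspace(0.0, 1.0, 5000)
print(square_partial_sum(t, 1.0, 20).max())                  # about 1.09 (Gibbs overshoot)
print(square_partial_sum(t, 1.0, 20, use_sigma=True).max())  # close to 1.0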
An ideal mathematical square wave changes between the high and the low state instantaneously, and without under- or over-shooting. This is impossible to achieve in physical systems, as it would require infinite bandwidth.
Square waves in physical systems have only finite bandwidth and often exhibit ringing effects similar to those of the Gibbs phenomenon or ripple effects similar to those of the σ-approximation.
For a reasonable approximation to the square-wave shape, at least the fundamental and third harmonic need to be present, with the fifth harmonic being desirable. These bandwidth requirements are important in digital electronics, where finite-bandwidth analog approximations to square-wave-like waveforms are used. (The ringing transients are an important electronic consideration here, as they may go beyond the electrical rating limits of a circuit or cause a badly positioned threshold to be crossed multiple times.)
Characteristics of imperfect square waves
As already mentioned, an ideal square wave has instantaneous transitions between the high and low levels. In practice, this is never achieved because of physical limitations of the system that generates the waveform. The times taken for the signal to rise from the low level to the high level and back again are called the rise time and the fall time respectively.
If the system is overdamped, then the waveform may never actually reach the theoretical high and low levels, and if the system is underdamped, it will oscillate about the high and low levels before settling down. In these cases, the rise and fall times are measured between specified intermediate levels, such as 5% and 95%, or 10% and 90%. The bandwidth of a system is related to the transition times of the waveform; there are formulas allowing one to be determined approximately from the other.
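One widely used relation of this kind is the rise-time-bandwidth product of a first-order (single-pole) system, for which the 10%-90% rise time t_r and the -3 dB bandwidth BW satisfy t_r · BW = ln(9)/(2π) ≈ 0.35, often quoted as BW ≈ 0.35 / t_r. The short Python check below is an illustration under that first-order (RC low-pass) assumption only; the component values are arbitrary, and higher-order or underdamped systems follow different relations.

import numpy as np

# First-order RC low-pass step response: v(t) = 1 - exp(-t / (R*C)).
R, C = 1e3, 1e-9                 # 1 kOhm, 1 nF (illustrative values)
tau = R * C
t10 = -tau * np.log(1 - 0.10)    # time to reach 10% of the final value
t90 = -tau * np.log(1 - 0.90)    # time to reach 90% of the final value
t_rise = t90 - t10               # equals tau * ln(9)
bandwidth = 1 / (2 * np.pi * tau)
print(t_rise * bandwidth)        # ~0.3497, independent of R and C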
See also
List of periodic functions
Rectangular function
Pulse wave
Sine wave
Triangle wave
Sawtooth wave
Waveform
Sound
Multivibrator
Ronchi ruling, a square-wave stripe target used in imaging.
Cross sea
Clarinet, a musical instrument that produces odd overtones approximating a square wave.
References
External links
Fourier decomposition of a square wave Interactive demo of square wave synthesis using sine waves, from GeoGebra site.
Square Wave Approximated by Sines Interactive demo of square wave synthesis using sine waves.
Flash applets Square wave.
Waveforms
Fourier series | Square wave | [
"Physics"
] | 1,120 | [
"Waves",
"Physical phenomena",
"Waveforms"
] |
314,653 | https://en.wikipedia.org/wiki/CHARMM | Chemistry at Harvard Macromolecular Mechanics (CHARMM) is the name of a widely used set of force fields for molecular dynamics, and the name for the molecular dynamics simulation and analysis computer software package associated with them. The CHARMM Development Project involves a worldwide network of developers working with Martin Karplus and his group at Harvard to develop and maintain the CHARMM program. Licenses for this software are available, for a fee, to people and groups working in academia.
Force fields
The CHARMM force fields for proteins include: united-atom (sometimes termed extended atom) CHARMM19, all-atom CHARMM22 and its dihedral potential corrected variant CHARMM22/CMAP, as well as later versions CHARMM27 and CHARMM36 and various modifications such as CHARMM36m and CHARMM36IDPSFF. In the CHARMM22 protein force field, the atomic partial charges were derived from quantum chemical calculations of the interactions between model compounds and water. Furthermore, CHARMM22 is parametrized for the TIP3P explicit water model. Nevertheless, it is often used with implicit solvents. In 2006, a special version of CHARMM22/CMAP was reparametrized for consistent use with implicit solvent GBSW.
The CHARMM22 force field has the following potential energy function:
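In the form usually quoted in the CHARMM literature (symbol conventions vary slightly between publications, so the original CHARMM22 papers should be consulted for the authoritative statement), the potential energy is:

V = \sum_{\mathrm{bonds}} K_b (b - b_0)^2
  + \sum_{\mathrm{angles}} K_\theta (\theta - \theta_0)^2
  + \sum_{\mathrm{Urey\text{-}Bradley}} K_{\mathrm{UB}} (S - S_0)^2
  + \sum_{\mathrm{dihedrals}} K_\phi \left(1 + \cos(n\phi - \delta)\right)
  + \sum_{\mathrm{impropers}} K_\omega (\omega - \omega_0)^2
  + \sum_{\mathrm{nonbonded}} \left[ \varepsilon_{ij} \left( \left(\frac{R_{\min,ij}}{r_{ij}}\right)^{12} - 2\left(\frac{R_{\min,ij}}{r_{ij}}\right)^{6} \right) + \frac{q_i q_j}{\epsilon_1 r_{ij}} \right]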
The bond, angle, dihedral, and nonbonded terms are similar to those found in other force fields such as AMBER. The CHARMM force field also includes an improper term accounting for out-of-plane bending (which applies to any set of four atoms that are not successively bonded) of the form K_ω(ω − ω_0)^2, where K_ω is the force constant and ω is the out-of-plane angle. The Urey-Bradley term is a cross-term that accounts for 1,3 nonbonded interactions not accounted for by the bond and angle terms; it has the form K_UB(S − S_0)^2, where K_UB is the force constant and S is the distance between the 1,3 atoms.
For DNA, RNA, and lipids, CHARMM27 is used. Some force fields may be combined, for example CHARMM22 and CHARMM27 for the simulation of protein-DNA binding. Also, parameters for NAD+, sugars, fluorinated compounds, etc., may be downloaded. These force field version numbers refer to the CHARMM version where they first appeared, but may of course be used with subsequent versions of the CHARMM executable program. Likewise, these force fields may be used within other molecular dynamics programs that support them.
In 2009, a general force field for drug-like molecules (CGenFF) was introduced. It "covers a wide range of chemical groups present in biomolecules and drug-like molecules, including a large number of heterocyclic scaffolds". The general force field is designed to cover any combination of chemical groups. This inevitably comes with a decrease in accuracy for representing any particular subclass of molecules. Users are repeatedly warned in Mackerell's website not to use the CGenFF parameters for molecules for which specialized force fields already exist (as mentioned above for proteins, nucleic acids, etc.).
CHARMM also includes polarizable force fields using two approaches. One is based on the fluctuating charge (FQ) model, also termed Charge Equilibration (CHEQ). The other is based on the Drude shell or dispersion oscillator model.
Parameters for all of these force fields may be downloaded from the Mackerell website for free.
Molecular dynamics program
The CHARMM program allows for generating and analysing a wide range of molecular simulations. The most basic kinds of simulation are minimizing a given structure and production runs of a molecular dynamics trajectory. More advanced features include free energy perturbation (FEP), quasi-harmonic entropy estimation, correlation analysis and combined quantum, and quantum mechanics–molecular mechanics (QM/MM) methods.
CHARMM is one of the oldest programs for molecular dynamics. It has accumulated many features, some of which are duplicated under several keywords with slight variants. This is an inevitable result of the many outlooks and groups working on CHARMM worldwide. The changelog file, and CHARMM's source code, are good places to look for the names and affiliations of the main developers. The involvement and coordination by Charles L. Brooks III's group at the University of Michigan is salient.
Software history
Around 1969, there was considerable interest in developing potential energy functions for small molecules. CHARMM originated at Martin Karplus's group at Harvard. Karplus and his then graduate student Bruce Gelin decided the time was ripe to develop a program that would make it possible to take a given amino acid sequence and a set of coordinates (e.g., from the X-ray structure) and to use this information to calculate the energy of the system as a function of the atomic positions. Karplus has acknowledged the importance of major inputs in the development of the (at the time nameless) program, including:
Schneior Lifson's group at the Weizmann Institute, especially from Arieh Warshel who went to Harvard and brought his consistent force field (CFF) program with him
Harold Scheraga's group at Cornell University
Awareness of Michael Levitt's pioneering energy calculations for proteins
In the 1980s, a paper finally appeared and CHARMM made its public début. Gelin's program had by then been considerably restructured. For the publication, Bob Bruccoleri came up with the name HARMM (HARvard Macromolecular Mechanics), but it seemed inappropriate. So they added a C for Chemistry. Karplus said: "I sometimes wonder if Bruccoleri's original suggestion would have served as a useful warning to inexperienced scientists working with the program." CHARMM has continued to grow and the latest release of the executable program was made in 2015 as CHARMM40b2.
Running CHARMM under Unix-Linux
The general syntax for using the program is:
charmm -i filename.inp -o filename.out
charmm – The name of the program (or script which runs the program) on the computer system being used.
filename.inp – A text file which contains the CHARMM commands. It starts by loading the molecular topologies (top) and force field (par). Then one loads the molecular structures' Cartesian coordinates (e.g. from PDB files). One can then modify the molecules (adding hydrogens, changing secondary structure). The calculation section can include energy minimization, dynamics production, and analysis tools such as motion and energy correlations.
filename.out – The log file for the CHARMM run, containing echoed commands, and various amounts of command output. The output print level may be increased or decreased in general, and procedures such as minimization and dynamics have printout frequency specifications. The values for temperature, energy pressure, etc. are output at that frequency.
Volunteer computing
Docking@Home, hosted by the University of Delaware and one of the projects using BOINC, an open-source platform for distributed computing, used CHARMM to analyze the atomic details of protein-ligand interactions in terms of molecular dynamics (MD) simulations and minimizations.
World Community Grid, sponsored by IBM, ran a project named The Clean Energy Project which also used CHARMM in its first phase, which has been completed.
See also
References
External links
, with documentation and helpful discussion forums
, BIOVIA
CHARMM tutorial;
MacKerell website, hosts package of force field parameters for CHARMM
C.Brooks website
CHARMM page at Harvard
Roux website;
Bernard R. Brooks Group website
Docking@Home
CHARMM-GUI project
CHARMMing (CHARMM Interface and Graphics);
CHARMM Tutorial
Force fields (chemistry)
Fortran software
Harvard University
Molecular dynamics software | CHARMM | [
"Chemistry"
] | 1,608 | [
"Molecular dynamics software",
"Computational chemistry software",
"Molecular dynamics",
"Computational chemistry",
"Force fields (chemistry)"
] |
314,692 | https://en.wikipedia.org/wiki/One-form%20%28differential%20geometry%29 | In differential geometry, a one-form (or covector field) on a differentiable manifold is a differential form of degree one, that is, a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold is a smooth mapping of the total space of the tangent bundle of to whose restriction to each fibre is a linear functional on the tangent space. Symbolically,
where is linear.
Often one-forms are described locally, particularly in local coordinates. In a local coordinate system, a one-form is a linear combination of the differentials of the coordinates:
α_x = f_1(x) dx_1 + f_2(x) dx_2 + ⋯ + f_n(x) dx_n,
where the f_i are smooth functions. From this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another. Thus a one-form is an order 1 covariant tensor field.
Examples
The most basic non-trivial differential one-form is the "change in angle" form dθ. This is defined as the derivative of the angle "function" θ(x, y) (which is only defined up to an additive constant), which can be explicitly defined in terms of the atan2 function. Taking the derivative yields the following formula for the total derivative:
dθ = ∂_x(atan2(y, x)) dx + ∂_y(atan2(y, x)) dy = (−y dx + x dy) / (x² + y²).
While the angle "function" cannot be continuously defined – the function atan2 is discontinuous along the negative x-axis – which reflects the fact that angle cannot be continuously defined, this derivative is continuously defined except at the origin, reflecting the fact that infinitesimal (and indeed local) changes in angle can be defined everywhere except the origin. Integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number times 2π.
In the language of differential geometry, this derivative is a one-form on the punctured plane. It is closed (its exterior derivative is zero) but not exact, meaning that it is not the derivative of a 0-form (that is, a function): the angle is not a globally defined smooth function on the entire punctured plane. In fact, this form generates the first de Rham cohomology of the punctured plane. This is the most basic example of such a form, and it is fundamental in differential geometry.
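Both claims (closed but not exact) can be verified symbolically. The SymPy sketch below is an illustration only; P and Q here are simply the coefficients of dx and dy in the formula for dθ given above, and the package usage is an assumption of this example rather than part of the article.

import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# Components of the "change in angle" one-form d(theta) = P dx + Q dy.
P = -y / (x**2 + y**2)
Q = x / (x**2 + y**2)

# Closed: the exterior derivative (dQ/dx - dP/dy) dx^dy vanishes on the punctured plane.
print(sp.simplify(sp.diff(Q, x) - sp.diff(P, y)))   # -> 0

# Not exact: integrating around the unit circle (x, y) = (cos t, sin t)
# gives 2*pi (winding number 1 times 2*pi) instead of 0.
cx, cy = sp.cos(t), sp.sin(t)
integrand = (P * sp.diff(cx, t) + Q * sp.diff(cy, t)).subs({x: cx, y: cy})
print(sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi)))   # -> 2*pi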
Differential of a function
Let U ⊆ ℝ be open (for example, an interval (a, b)), and consider a differentiable function f : U → ℝ with derivative f′. The differential df assigns to each point x ∈ U a linear map from the tangent space T_xU to the real numbers. In this case, each tangent space is naturally identifiable with the real number line, and the linear map in question is given by scaling by f′(x). This is the simplest example of a differential (one-)form.
See also
References
Differential forms
1 (number) | One-form (differential geometry) | [
"Engineering"
] | 538 | [
"Tensors",
"Differential forms"
] |
314,696 | https://en.wikipedia.org/wiki/Environmental%20health | Environmental health is the branch of public health concerned with all aspects of the natural and built environment affecting human health. To effectively control factors that may affect health, the requirements that must be met to create a healthy environment must be determined. The major sub-disciplines of environmental health are environmental science, toxicology, environmental epidemiology, and environmental and occupational medicine.
Definitions
WHO definitions
Environmental health was defined in a 1989 document by the World Health Organization (WHO) as:
Those aspects of human health and disease that are determined by factors in the environment. It is also referred to as the theory and practice of accessing and controlling factors in the environment that can potentially affect health.
A 1990 WHO document states that environmental health, as used by the WHO Regional Office for Europe, "includes both the direct pathological effects of chemicals, radiation and some biological agents, and the effects (often indirect) on health and well being of the broad physical, psychological, social and cultural environment, which includes housing, urban development, land use and transport."
, the WHO website on environmental health states that "Environmental health addresses all the physical, chemical, and biological factors external to a person, and all the related factors impacting behaviours. It encompasses the assessment and control of those environmental factors that can potentially affect health. It is targeted towards preventing disease and creating health-supportive environments. This definition excludes behaviour not related to environment, as well as behaviour related to the social and cultural environment, as well as genetics."
The WHO has also defined environmental health services as "those services which implement environmental health policies through monitoring and control activities. They also carry out that role by promoting the improvement of environmental parameters and by encouraging the use of environmentally friendly and healthy technologies and behaviors. They also have a leading role in developing and suggesting new policy areas."
Other considerations
The term environmental medicine may be seen as a medical specialty, or branch of the broader field of environmental health. Terminology is not fully established, and in many European countries they are used interchangeably.
Other terms referring to or concerning environmental health include environmental public health and health protection.
Pediatric environmental health
Children's environmental health is the academic discipline that studies how environmental exposures in early life—chemical, biological, nutritional, and social—influence health and development in childhood and across the entire human life span. Pediatric environmental health is based on the recognition that children are not “little adults.” Infants and children have unique patterns of exposure and vulnerabilities. Environmental risks of infants and children are qualitatively and quantitatively different from those of adults. Pediatric environmental health is highly interdisciplinary. It spans and brings together general pediatrics and numerous pediatric subspecialties as well as epidemiology, occupational and environmental medicine, medical toxicology, industrial hygiene, and exposure science.
Disciplines
Five basic disciplines generally contribute to the field of environmental health: environmental epidemiology, toxicology, exposure science, environmental engineering, and environmental law. Each of these five disciplines contributes different information to describe problems and solutions in environmental health. However, there is some overlap among them.
Environmental epidemiology studies the relationship between environmental exposures (including exposure to chemicals, radiation, microbiological agents, etc.) and human health. Observational studies, which simply observe exposures that people have already experienced, are common in environmental epidemiology because humans cannot ethically be exposed to agents that are known or suspected to cause disease. While the inability to use experimental study designs is a limitation of environmental epidemiology, this discipline directly observes effects on human health rather than estimating effects from animal studies. Environmental epidemiology is the study of the effect on human health of physical, biologic, and chemical factors in the external environment, broadly conceived. By examining specific populations or communities exposed to different ambient environments, it aims to clarify the relationships that exist between physical, biologic or chemical factors and human health.
Toxicology studies how environmental exposures lead to specific health outcomes, generally in animals, as a means to understand possible health outcomes in humans. Toxicology has the advantage of being able to conduct randomized controlled trials and other experimental studies because they can use animal subjects. However, there are many differences in animal and human biology, and there can be a lot of uncertainty when interpreting the results of animal studies for their implications for human health.
Exposure science studies human exposure to environmental contaminants by both identifying and quantifying exposures. Exposure science can be used to support environmental epidemiology by better describing environmental exposures that may lead to a particular health outcome, identify common exposures whose health outcomes may be better understood through a toxicology study, or can be used in a risk assessment to determine whether current levels of exposure might exceed recommended levels. Exposure science has the advantage of being able to very accurately quantify exposures to specific chemicals, but it does not generate any information about health outcomes like environmental epidemiology or toxicology.
Environmental engineering applies scientific and engineering principles for protection of human populations from the effects of adverse environmental factors; protection of environments from potentially deleterious effects of natural and human activities; and general improvement of environmental quality.
Environmental law includes the network of treaties, statutes, regulations, common and customary laws addressing the effects of human activity on the natural environment.
Information from epidemiology, toxicology, and exposure science can be combined to conduct a risk assessment for specific chemicals, mixtures of chemicals or other risk factors to determine whether an exposure poses significant risk to human health (exposure would likely result in the development of pollution-related diseases). This can in turn be used to develop and implement environmental health policy that, for example, regulates chemical emissions, or imposes standards for proper sanitation. Actions of engineering and law can be combined to provide risk management to minimize, monitor, and otherwise manage the impact of exposure to protect human health to achieve the objectives of environmental health policy.
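As a purely illustrative sketch of how an exposure estimate and a toxicological reference value are combined in such a screening-level risk assessment, the Python fragment below uses the common average-daily-dose and hazard-quotient formulation; every numerical value and variable name in it is a made-up assumption for the example, not data from this article.

def average_daily_dose(c, ir, ef, ed, bw, at):
    # Average daily dose (mg per kg body weight per day) from a contaminated medium:
    # c  - contaminant concentration in the medium (e.g. mg/L)
    # ir - intake rate of the medium (e.g. L/day)
    # ef - exposure frequency (days/year)
    # ed - exposure duration (years)
    # bw - body weight (kg)
    # at - averaging time (days)
    return (c * ir * ef * ed) / (bw * at)

def hazard_quotient(dose, reference_dose):
    # A hazard quotient above 1 flags a potential non-cancer concern for further review.
    return dose / reference_dose

# Illustrative (made-up) screening values for a contaminant in drinking water.
dose = average_daily_dose(c=0.05, ir=2.0, ef=350, ed=30, bw=70, at=30 * 365)
print(hazard_quotient(dose, reference_dose=0.003))   # ~0.46, below the screening threshold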
Concerns
Environmental health addresses all human-health-related aspects of the natural environment and the built environment. Environmental health concerns include:
Biosafety.
Disaster preparedness and response.
Food safety, including in agriculture, transportation, food processing, wholesale and retail distribution and sale.
Housing, including substandard housing abatement and the inspection of jails and prisons.
Childhood lead poisoning prevention.
Land use planning, including smart growth.
Liquid waste disposal, including city waste water treatment plants and on-site waste water disposal systems, such as septic tank systems and chemical toilets.
Medical waste management and disposal.
Occupational health and industrial hygiene.
Radiological health, including exposure to ionizing radiation from X-rays or radioactive isotopes.
Recreational water illness prevention, including from swimming pools, spas and ocean and freshwater bathing places.
Solid waste management, including landfills, recycling facilities, composting and solid waste transfer stations.
Toxic chemical exposure whether in consumer products, housing, workplaces, air, water or soil.
Toxins from molds and algal blooms.
Vector control, including the control of mosquitoes, rodents, flies, cockroaches and other animals that may transmit pathogens.
According to recent estimates, about 5 to 10% of disability-adjusted life years (DALYs) lost are due to environmental causes in Europe. By far the most important factor is fine particulate matter pollution in urban air. Similarly, environmental exposures have been estimated to contribute to 4.9 million (8.7%) deaths and 86 million (5.7%) DALYs globally. In the United States, Superfund sites created by various companies have been found to be hazardous to human and environmental health in nearby communities. It was this perceived threat, raising the specter of miscarriages, mutations, birth defects, and cancers that most frightened the public.
Air quality
Air quality includes ambient outdoor air quality and indoor air quality. Large concerns about air quality include environmental tobacco smoke, air pollution by forms of chemical waste, and other concerns.
Outdoor air quality
Air pollution is globally responsible for over 6.5 million deaths each year. Air pollution is the contamination of an atmosphere due to the presence of substances that are harmful to the health of living organisms, the environment or climate. These substances concern environmental health officials since air pollution is often a risk factor for diseases that are related to pollution, like lung cancer, respiratory infections, asthma, heart disease, and other forms of respiratory-related illnesses. Reducing air pollution, and thus improving air quality, has been found to decrease adult mortality.
Common sources of emissions include road traffic, energy production, household combustion, aviation and motor vehicles, and other polluting activities. These sources involve the burning of fuel, which can release harmful particles into the air that humans and other living organisms can inhale or ingest.
Air pollution is associated with adverse health effects like respiratory and cardiovascular diseases, cancer, related illnesses, and even death. The risk of air pollution is determined by the pollutant's hazard and the amount of exposure that affects a person. For example, a child who plays outdoor sports will have a higher likelihood of outdoor air pollution exposure than an adult who tends to spend more time indoors, whether at work or elsewhere. Environmental health officials work to detect individuals who are at higher risk from air pollution, work to decrease their exposure, and detect risk factors present in communities.
However, research by Ernesto Sánchez-Triana on Pakistan shows how such exposure can be reduced. After identifying the main sources of air pollution, such as mobile sources (heavy-duty vehicles and motorized 2–3 wheelers), stationary sources (power plants and burning of waste), and natural dust, the country implemented a clean air policy to reduce emissions from the road transport sector, which is responsible for 85% of total emissions of particulate matter of less than 2.5 microns (PM2.5) and 72% of particulate matter of less than 10 microns (PM10). The most successful policies were:
Improving fuel quality by reducing the sulfur content in diesel
Converting diesel minibuses and city delivery vans to compressed natural gas (CNG)
Installing diesel oxidation catalysts (DOCs) on existing large buses and trucks
Converting existing two-stroke rickshaws to four-stroke CNG engines
Introducing low-sulfur fuel oil (1% sulfur) to major users located in Karachi
Indoor air quality
Household air pollution contributes to diseases that kill almost 4.3 million people every year. Indoor air pollution contributes to risk factors for diseases like heart disease, pulmonary disease, stroke, pneumonia, and other associated illnesses. For vulnerable populations, such as children and the elderly, who spend large amounts of their time indoors, poor indoor air quality can be dangerous.
Burning fuels like coal or kerosene inside homes can cause dangerous chemicals to be released into the air. Dampness and mold in houses can cause diseases, but few studies have been performed on mold in schools and workplaces. Environmental tobacco smoke is considered to be a leading contributor to indoor air pollution since exposure to second and third-hand smoke is a common risk factor. Tobacco smoke contains over 60 carcinogens, where 18% are known human carcinogens. Exposure to these chemicals can lead to exacerbation of asthma, the development of cardiovascular diseases and cardiopulmonary diseases, and an increase in the likelihood of cancer development.
Climate change and its effects on health
Climate change makes extreme weather events more likely, including ozone smog events, dust storms, and elevated aerosol levels, all due to extreme heat, drought, winds, and rainfall. These extreme weather events can increase the likelihood of undernutrition, mortality, food insecurity, and climate-sensitive infectious diseases in vulnerable populations. The effects of climate change are felt by the whole world, but disproportionately affect disadvantaged populations who are subject to climate change vulnerability.
Climate impacts can affect exposure to water-borne pathogens through increased rates of runoff, frequent heavy rains, and the effects of severe storms. Extreme weather events and storm surges can also exceed the capacity of water infrastructure, which can increase the likelihood that populations will be exposed to these contaminants. Exposure to these contaminants are more likely in low-income communities, where they have inadequate infrastructure to respond to climate disasters and are less likely to recover from infrastructure damage as quickly.
Problems like the loss of homes, loved ones, and previous ways of life, are often what people face after a climate disaster occurs. These events can lead to vulnerability in the form of housing affordability stress, lower household income, lack of community attachment, grief, and anxiety around another disaster occurring.
Environmental racism
Certain groups of people can be put at a higher risk for environmental hazards like air, soil and water pollution. This often happens due to marginalization, economic and political processes, and racism. Environmental racism uniquely affects different groups globally, however generally the most marginalized groups of any region are affected. These marginalized groups are frequently put next to pollution sources like major roadways, toxic waste sites, landfills, and chemical plants. In a 2021 study, it was found that racial and ethnic minority groups in the United States are exposed to disproportionately high levels of particulate air pollution. Racial housing policies that exist in the United States continue to exacerbate racial minority exposure to air pollution at a disproportionate rate, even as overall pollution levels have declined. Likewise, in a 2022 study, it was shown that implementing policy changes that favor wealth redistribution could double as climate change mitigation measures. For populations who are not subject to wealth redistribution measures, this means more money will flow into their communities while climate effects are mitigated.
Noise pollution
Noise pollution is usually environmental, machine-created sound that can disrupt activities or communication between humans and other forms of life. Exposure to persistent noise pollution can cause numerous ailments like hearing impairment, sleep disturbances, cardiovascular problems, annoyance, problems with communication and other diseases. American minorities who live in neighborhoods of low socioeconomic status often experience higher levels of noise pollution than their higher socioeconomic counterparts.
Noise pollution can cause or exacerbate cardiovascular diseases, which can further contribute to a larger range of diseases, increase stress levels, and cause sleep disturbances. Noise pollution is also responsible for many reported cases of hearing loss, tinnitus, and other forms of hypersensitivity (stress/irritability) or lack thereof to sound (present or subconscious from continuous exposure). These conditions can be dangerous to children and young adults who consistently experience noise pollution, as many of these conditions can develop into long-term problems, including physical and mental health issues.
Children who attend school in noisy traffic zones have been shown to have 15% lower memory development compared to students who attended schools in quiet traffic zones, according to a Barcelona study. This is consistent with research that suggests that children who are exposed to regular aircraft noise "have inadequate performance on standardised achievement tests."
Exposure to persistent noise pollution can cause one to develop hearing impairments, like tinnitus or impaired speech discrimination. One of the largest factors in worsened mental health due to noise pollution is annoyance. Annoyance due to environmental factors has been found to increase stress reactions and overall feelings of stress among adults. The level of annoyance felt by an individual varies, but contributes to worsened mental health significantly.
Noise exposure also contributes to sleep disturbances, which can cause daytime sleepiness and an overall lack of sleep, further worsening health. Daytime sleepiness has been linked in several reports to declining mental health and other health issues, job insecurity, and further declines in social and environmental conditions.
Safe drinking water
Access to safe drinking water is considered a "basic human need for health and well-being" by the United Nations. According to their reports, over 2 billion people worldwide live without access to safe drinking water. In 2017, almost 22 million Americans drank from water systems that were in violation of public health standards. Globally, over 2 billion people drink feces-contaminated water, which poses the greatest threat to drinking water safety. Contaminated drinking water could transmit diseases like cholera, dysentery, typhoid, diarrhea and polio.
Harmful chemicals in drinking water can negatively affect health. Unsafe water management practices can increase the prevalence of water-borne diseases and sanitation-related illnesses. Inadequate disinfecting of wastewater in industrial and agricultural centers can also infect hundreds of millions of people with contaminated water. Chemicals like fluoride and arsenic can benefit humans when their levels are controlled; but other, more dangerous chemicals like lead and other metals can be harmful to humans.
In America, communities of color can be subject to poor-quality water. In American communities with large Hispanic and black populations, there is a correlated rise in Safe Drinking Water Act (SDWA) health violations. Populations who have experienced a lack of safe drinking water, like populations in Flint, Michigan, are more likely to distrust tap water in their communities. The populations that experience this are commonly low-income communities of color.
Hazardous materials management
Hazardous materials management includes hazardous waste management, contaminated site remediation, the prevention of leaks from underground storage tanks, the prevention of hazardous materials releases to the environment, and responses to emergency situations resulting from such releases. When hazardous materials are not managed properly, waste can pollute nearby water sources and reduce air quality.
According to a study done in Austria, people who live near industrial sites are "more often unemployed, have lower education levels, and are twice as likely to be immigrants." With the interest of environmental health in mind, the Resource Conservation and Recovery Act was passed in the United States in 1976, covering how to properly manage hazardous waste.
There are a variety of occupations that work with hazardous materials and help manage them so that everything is disposed of correctly. These professionals work in various sectors, including government agencies, private industry, consulting firms, and non-profit organizations, all with the common goal of ensuring the safe handling of hazardous materials and waste. These positions include, but are not limited to, environmental health and safety specialists, waste collectors, medical professionals, and emergency responders. Handling waste, especially hazardous material, is considered one of the most dangerous occupations in the world. Often, these workers may not have all of the information about the specific hazardous materials they encounter, making their jobs even more dangerous. Sudden exposure to materials they are not properly prepared to handle can lead to severe consequences. This emphasizes the importance of training, safety protocols, and the use of personal protective equipment for those working with hazardous waste.
Microplastic pollution
Soil pollution
Information and mapping
The Toxicology and Environmental Health Information Program (TEHIP) is a comprehensive toxicology and environmental health website that includes open access to resources produced by US government agencies and organizations, and is maintained under the umbrella of the Specialized Information Service at the United States National Library of Medicine. TEHIP includes links to technical databases, bibliographies, tutorials, and consumer-oriented resources. TEHIP is responsible for the Toxicology Data Network (TOXNET), an integrated system of toxicology and environmental health databases, including the Hazardous Substances Data Bank, that are open access, i.e. available free of charge. TOXNET was retired in 2019.
There are many environmental health mapping tools. TOXMAP is a geographic information system (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP is a resource funded by the US federal government. TOXMAP's chemical and environmental health information is taken from the NLM's Toxicology Data Network (TOXNET) and PubMed, and from other authoritative sources.
Environmental health profession
Environmental health professionals may be known as environmental health officers, public health inspectors, environmental health specialists or environmental health practitioners. Researchers and policy-makers also play important roles in how environmental health is practiced in the field. In many European countries, physicians and veterinarians are involved in environmental health. In the United Kingdom, practitioners must have a graduate degree in environmental health and be certified and registered with the Chartered Institute of Environmental Health or the Royal Environmental Health Institute of Scotland. In Canada, practitioners in environmental health are required to obtain an approved bachelor's degree in environmental health along with the national professional certificate, the Certificate in Public Health Inspection (Canada), CPHI(C). Many states in the United States also require that individuals have a bachelor's degree and professional licenses to practice environmental health. California state law defines the scope of practice of environmental health as follows:
"Scope of practice in environmental health" means the practice of environmental health by registered environmental health specialists in the public and private sector within the meaning of this article and includes, but is not limited to, organization, management, education, enforcement, consultation, and emergency response for the purpose of prevention of environmental health hazards and the promotion and protection of the public health and the environment in the following areas: food protection; housing; institutional environmental health; land use; community noise control; recreational swimming areas and waters; electromagnetic radiation control; solid, liquid, and hazardous materials management; underground storage tank control; onsite septic systems; vector control; drinking water quality; water sanitation; emergency preparedness; and milk and dairy sanitation pursuant to Section 33113 of the Food and Agricultural Code.
The environmental health profession had its modern-day roots in the sanitary and public health movement of the United Kingdom. This was epitomized by Sir Edwin Chadwick, who was instrumental in the repeal of the poor laws, and in 1884 was the founding president of the Association of Public Sanitary Inspectors, now called the Chartered Institute of Environmental Health.
See also
EcoHealth
Environmental disease
Environmental medicine
Environmental toxicology
Epigenetics
Exposure science
Healing environments
Health effects from noise
Heavy metals
Indoor air quality
Industrial and organizational psychology
NIEHS
Nightingale's environmental theory
One Health
Pollution
Volatile organic compound
Journals:
List of environmental health journals
References
Further reading
Patrick Sogno, Claudia Traidl-Hoffmann, Claudia Kuenzer: Earth Observation Data Supporting Non-Communicable Disease Research: A Review. Remote Sensing (12), 2020, P. 1-34. doi: 10.3390/rs12162541. ISSN 2072-4292.
External links
NIEHS
Environmental social science
Environmental science | Environmental health | [
"Environmental_science"
] | 4,585 | [
"Environmental social science",
"nan"
] |
314,703 | https://en.wikipedia.org/wiki/Erectile%20tissue | Erectile tissue is tissue in the body with numerous vascular spaces, or cavernous tissue, that may become engorged with blood. However, tissue that is devoid of or otherwise lacking erectile tissue (such as the labia minora, vestibule, vagina and urethra) may also be described as engorging with blood, often with regard to sexual arousal.
In sex organs
Erectile tissue exists in external genitals such as the corpora cavernosa of the penis and their homologs in the clitoris, also called the corpora cavernosa. During penile or clitoral erection, the corpora cavernosa will become engorged with arterial blood, a process called tumescence. This may result from any of various physiological stimuli which can be internal or external. This process of stimulation, due to internal or external stimuli, is also known as sexual arousal. The corpus spongiosum is a single tubular structure located just below the corpora cavernosa in males. This may also become slightly engorged with blood, but less so than the corpora cavernosa.
In the nose
Erectile tissue is present in the anterior part of the nasal septum and is attached to the turbinates of the nose. The nasal cycle occurs as the erectile tissue on one side of the nose congests and the other side decongests. This process is controlled by the autonomic nervous system with parasympathetic dominance being associated with congestion and sympathetic with decongestion. The time of one cycle may vary greatly between individuals, with Kahana-Zweig et al. finding a range between 15 minutes and 10.35 hours though the average was noted as 2.15 ± 1.84 hours.
Other types
Erectile tissue is also found in the urethral sponge and perineal sponge. The erection of nipples is not due to erectile tissue, but rather due to the contraction of smooth muscle under the control of the autonomic nervous system.
References
Sexual anatomy | Erectile tissue | [
"Biology"
] | 430 | [
"Sexual anatomy",
"Sex"
] |
314,709 | https://en.wikipedia.org/wiki/Synthetic%20division | In algebra, synthetic division is a method for manually performing Euclidean division of polynomials, with less writing and fewer calculations than long division.
It is mostly taught for division by linear monic polynomials (known as Ruffini's rule), but the method can be generalized to division by any polynomial.
The advantages of synthetic division are that it allows one to calculate without writing variables, it uses few calculations, and it takes significantly less space on paper than long division. Also, the subtractions in long division are converted to additions by switching the signs at the very beginning, helping to prevent sign errors.
Regular synthetic division
The first example is synthetic division with only a monic linear denominator .
The numerator can be written as .
The zero of the denominator is .
The coefficients of are arranged as follows, with the zero of on the left:
The after the bar is "dropped" to the last row.
The is multiplied by the before the bar and placed in the .
An addition is performed in the next column.
The previous two steps are repeated, and the following is obtained:
Here, the last term (-123) is the remainder while the rest correspond to the coefficients of the quotient.
The terms are written with increasing degree from right to left beginning with degree zero for the remainder and the result.
Hence the quotient and remainder are:
Evaluating polynomials by the remainder theorem
The above form of synthetic division is useful in the context of the polynomial remainder theorem for evaluating univariate polynomials. To summarize, the value of at is equal to the remainder of the division of by
The advantage of calculating the value this way is that it requires just over half as many multiplication steps as naive evaluation. An alternative evaluation strategy is Horner's method.
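As a brief illustration (the helper name and sample coefficients below are illustrative, not taken from the article), this evaluation strategy can be written as a short Python loop: the running value left after processing all coefficients is the remainder of division by (x − a), and hence, by the remainder theorem, the value of the polynomial at a.

def eval_by_synthetic_division(coefficients, a):
    """Evaluate a polynomial at x = a via the synthetic-division loop.

    Coefficients are listed from the highest degree down, e.g.
    x**2 + 3*x + 5 -> [1, 3, 5]. The final running value is the remainder
    of dividing by (x - a), which by the remainder theorem equals p(a).
    """
    value = 0
    for c in coefficients:
        value = value * a + c  # multiply by a, then add the next coefficient
    return value

# Example with arbitrary coefficients: p(x) = x**3 - 12*x**2 - 42 at x = 3
print(eval_by_synthetic_division([1, -12, 0, -42], 3))  # prints -123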
Expanded synthetic division
This method generalizes to division by any monic polynomial with only a slight modification with changes in bold. Note that while it may not be displayed in the following example, the divisor must also be written with verbose coefficients. (Such as with ) Using the same steps as before, perform the following division:
We concern ourselves only with the coefficients.
Write the coefficients of the polynomial to be divided at the top.
Negate the coefficients of the divisor.
Write in every coefficient but the first one on the left in an upward right diagonal (see next diagram).
Note the change of sign from 1 to −1 and from −3 to 3. "Drop" the first coefficient after the bar to the last row.
Multiply the dropped number by the diagonal before the bar and place the resulting entries diagonally to the right from the dropped entry.
Perform an addition in the next column.
Repeat the previous two steps until you would go past the entries at the top with the next diagonal.
Then simply add up any remaining columns.
Count the terms to the left of the bar. Since there are two, the remainder has degree one and this is the two right-most terms under the bar. Mark the separation with a vertical bar.
The terms are written with increasing degree from right to left beginning with degree zero for both the remainder and the result.
The result of our division is:
For non-monic divisors
With a little prodding, the expanded technique may be generalised even further to work for any polynomial, not just monics. The usual way of doing this would be to divide the divisor by its leading coefficient (call it a):
then using synthetic division with as the divisor, and then dividing the quotient by a to get the quotient of the original division (the remainder stays the same). But this often produces unsightly fractions which get removed later and is thus more prone to error. It is possible to do it without first reducing the coefficients of .
As can be observed by first performing long division with such a non-monic divisor, the coefficients of are divided by the leading coefficient of after "dropping", and before multiplying.
Let's illustrate by performing the following division:
A slightly modified table is used:
Note the extra row at the bottom. This is used to write values found by dividing the "dropped" values by the leading coefficient of (in this case, indicated by the /3; note that, unlike the rest of the coefficients of , the sign of this number is not changed).
Next, the first coefficient of is dropped as usual:
and then the dropped value is divided by 3 and placed in the row below:
Next, the new (divided) value is used to fill the top rows with multiples of 2 and 1, as in the expanded technique:
The 5 is dropped next, with the obligatory adding of the 4 below it, and the answer is divided again:
Then the 3 is used to fill the top rows:
At this point, if, after getting the third sum, we were to try and use it to fill the top rows, we would "fall off" the right side, thus the third sum is the first coefficient of the remainder, as in regular synthetic division. But the values of the remainder are not divided by the leading coefficient of the divisor:
Now we can read off the coefficients of the answer. As in expanded synthetic division, the last two values (2 is the degree of the divisor) are the coefficients of the remainder, and the remaining values are the coefficients of the quotient:
and the result is
Compact Expanded Synthetic Division
However, the diagonal format above becomes less space-efficient when the degree of the divisor exceeds half of the degree of the dividend. Consider the following division:
It is easy to see that we have complete freedom to write each product in any row as long as it is in the correct column, so the algorithm can be compactified by a greedy strategy, as illustrated in the division below:
The following describes how to perform the algorithm; this algorithm includes steps for dividing non-monic divisors:
Python implementation
The following snippet implements Expanded Synthetic Division in Python for arbitrary univariate polynomials:
def expanded_synthetic_division(dividend, divisor):
    """Fast polynomial division by using Expanded Synthetic Division.
    Also works with non-monic polynomials.

    Dividend and divisor are both polynomials, which are here simply lists of coefficients.
    E.g.: x**2 + 3*x + 5 will be represented as [1, 3, 5]
    """
    out = list(dividend)  # Copy the dividend
    normalizer = divisor[0]
    for i in range(len(dividend) - len(divisor) + 1):
        # For general polynomial division (when polynomials are non-monic),
        # we need to normalize by dividing the coefficient by the divisor's first coefficient
        out[i] /= normalizer

        coef = out[i]
        if coef != 0:  # Useless to multiply if coef is 0
            # In synthetic division, we always skip the first coefficient of the divisor,
            # because it is only used to normalize the dividend coefficients
            for j in range(1, len(divisor)):
                out[i + j] += -divisor[j] * coef

    # The resulting out contains both the quotient and the remainder:
    # the remainder occupies the last len(divisor) - 1 positions (its degree
    # is necessarily lower than the divisor's), so we compute the index
    # where this separation is, and return the quotient and remainder.
    separator = 1 - len(divisor)
    return out[:separator], out[separator:]  # Return quotient, remainder.
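A possible usage example (the polynomials here are chosen for illustration and are not prescribed by the snippet above): dividing x**3 - 12*x**2 - 42 by x - 3 yields the quotient x**2 - 9*x - 27 and the remainder -123, matching the remainder quoted in the first worked example.

quotient, remainder = expanded_synthetic_division([1, -12, 0, -42], [1, -3])
print(quotient, remainder)  # [1.0, -9.0, -27.0] [-123.0]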
See also
Euclidean domain
Greatest common divisor of two polynomials
Gröbner basis
Horner scheme
Polynomial remainder theorem
Ruffini's rule
References
External links
Computer algebra
Division (mathematics)
Polynomials | Synthetic division | [
"Mathematics",
"Technology"
] | 1,640 | [
"Polynomials",
"Computer algebra",
"Computational mathematics",
"Computer science",
"Algebra"
] |
314,730 | https://en.wikipedia.org/wiki/Monic%20polynomial | In algebra, a monic polynomial is a non-zero univariate polynomial (that is, a polynomial in a single variable) in which the leading coefficient (the nonzero coefficient of highest degree) is equal to 1. That is to say, a monic polynomial is one that can be written as
with
Uses
Monic polynomials are widely used in algebra and number theory, since they produce many simplifications and they avoid divisions and denominators. Here are some examples.
Every polynomial is associated to a unique monic polynomial. In particular, the unique factorization property of polynomials can be stated as: Every polynomial can be uniquely factorized as the product of its leading coefficient and a product of monic irreducible polynomials.
Vieta's formulas are simpler in the case of monic polynomials: The th elementary symmetric function of the roots of a monic polynomial of degree equals where is the coefficient of the th power of the indeterminate.
Euclidean division of a polynomial by a monic polynomial does not introduce divisions of coefficients. Therefore, it is defined for polynomials with coefficients in a commutative ring.
Algebraic integers are defined as the roots of monic polynomials with integer coefficients.
Properties
Every nonzero univariate polynomial (polynomial with a single indeterminate) can be written
where are the coefficients of the polynomial, and the leading coefficient is not zero. By definition, such a polynomial is monic if
A product of monic polynomials is monic. A product of polynomials is monic if and only if the product of the leading coefficients of the factors equals .
This implies that the monic polynomials in a univariate polynomial ring over a commutative ring form a monoid under polynomial multiplication.
Two monic polynomials are associated if and only if they are equal, since the multiplication of a polynomial by a nonzero constant produces a polynomial with this constant as its leading coefficient.
Divisibility induces a partial order on monic polynomials. This results almost immediately from the preceding properties.
Polynomial equations
Let be a polynomial equation, where is a univariate polynomial of degree . If one divides all coefficients of by its leading coefficient, one obtains a new polynomial equation that has the same solutions and that equates a monic polynomial to zero.
For example, the equation
is equivalent to the monic equation
When the coefficients are unspecified, or belong to a field where division does not result into fractions (such as or a finite field), this reduction to monic equations may provide simplification. On the other hand, as shown by the previous example, when the coefficients are explicit integers, the associated monic polynomial is generally more complicated. Therefore, primitive polynomials are often used instead of monic polynomials when dealing with integer coefficients.
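As a small illustration (the coefficients below are arbitrary and the helper is not part of the article's sources), the reduction to a monic equation amounts to dividing every coefficient by the leading one; using exact fractions shows how integer coefficients generally turn into rational ones.

from fractions import Fraction

def to_monic(coefficients):
    """Return the monic polynomial associated with the given one.

    Coefficients run from the highest degree down, e.g.
    3*x**2 + 12*x - 5 -> [3, 12, -5]; the leading coefficient must be nonzero.
    """
    leading = Fraction(coefficients[0])
    return [Fraction(c) / leading for c in coefficients]

# 3*x**2 + 12*x - 5 = 0 has the same solutions as x**2 + 4*x - 5/3 = 0
print(to_monic([3, 12, -5]))  # [Fraction(1, 1), Fraction(4, 1), Fraction(-5, 3)]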
Integral elements
Monic polynomial equations are at the basis of the theory of algebraic integers, and, more generally of integral elements.
Let be a subring of a field ; this implies that is an integral domain. An element of is integral over if it is a root of a monic polynomial with coefficients in .
A complex number that is integral over the integers is called an algebraic integer. This terminology is motivated by the fact that the integers are exactly the rational numbers that are also algebraic integers. This results from the rational root theorem, which asserts that, if the rational number is a root of a polynomial with integer coefficients, then is a divisor of the leading coefficient; so, if the polynomial is monic, then and the number is an integer. Conversely, an integer is a root of the monic polynomial
It can be proved that, if two elements of a field are integral over a subring of , then the sum and the product of these elements are also integral over . It follows that the elements of that are integral over form a ring, called the integral closure of in . An integral domain that equals its integral closure in its field of fractions is called an integrally closed domain.
These concepts are fundamental in algebraic number theory. For example, many of the numerous wrong proofs of Fermat's Last Theorem that have been written during more than three centuries were wrong because the authors wrongly assumed that the algebraic integers in an algebraic number field have unique factorization.
Multivariate polynomials
Ordinarily, the term monic is not employed for polynomials of several variables. However, a polynomial in several variables may be regarded as a polynomial in one variable with coefficients being polynomials in the other variables. Being monic depends thus on the choice of one "main" variable. For example, the polynomial
is monic, if considered as a polynomial in with coefficients that are polynomials in :
but it is not monic when considered as a polynomial in with coefficients polynomial in :
In the context of Gröbner bases, a monomial order is generally fixed. In this case, a polynomial may be said to be monic, if it has 1 as its leading coefficient (for the monomial order).
For every definition, a product of monic polynomials is monic, and, if the coefficients belong to a field, every polynomial is associated to exactly one monic polynomial.
Citations
References
Polynomials | Monic polynomial | [
"Mathematics"
] | 1,051 | [
"Polynomials",
"Algebra"
] |
314,743 | https://en.wikipedia.org/wiki/Hume%27s%20principle | Hume's principle or HP says that the number of Fs is equal to the number of Gs if and only if there is a one-to-one correspondence (a bijection) between the Fs and the Gs. HP can be stated formally in systems of second-order logic. Hume's principle is named for the Scottish philosopher David Hume and was coined by George Boolos.
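One common formalization (supplied here for illustration; the exact notation varies between authors and is not taken from the sources cited below) uses a cardinality operator '#' and reads:

\#F = \#G \;\leftrightarrow\; \exists R\,\bigl[\forall x\,(Fx \rightarrow \exists!\,y\,(Gy \wedge Rxy)) \;\wedge\; \forall y\,(Gy \rightarrow \exists!\,x\,(Fx \wedge Rxy))\bigr]

that is, the number of Fs equals the number of Gs just in case some relation R correlates the Fs one-to-one with the Gs.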
HP plays a central role in Gottlob Frege's philosophy of mathematics. Frege shows that HP and suitable definitions of arithmetical notions entail all axioms of what we now call second-order arithmetic. This result is known as Frege's theorem, which is the foundation for a philosophy of mathematics known as neo-logicism.
Origins
Hume's principle appears in Frege's Foundations of Arithmetic (§63), which quotes from Part III of Book I of David Hume's A Treatise of Human Nature (1740). Hume there sets out seven fundamental relations between ideas. Concerning one of these, proportion in quantity or number, Hume argues that our reasoning about proportion in quantity, as represented by geometry, can never achieve "perfect precision and exactness", since its principles are derived from sense-appearance. He contrasts this with reasoning about number or arithmetic, in which such a precision can be attained:
Algebra and arithmetic [are] the only sciences in which we can carry on a chain of reasoning to any degree of intricacy, and yet preserve a perfect exactness and certainty. We are possessed of a precise standard, by which we can judge of the equality and proportion of numbers; and according as they correspond or not to that standard, we determine their relations, without any possibility of error. When two numbers are so combined, as that the one has always a unit answering to every unit of the other, we pronounce them equal; and it is for want of such a standard of equality in [spatial] extension, that geometry can scarce be esteemed a perfect and infallible science. (I. III. I.)
Note Hume's use of the word number in the ancient sense, to mean a set or collection of things rather than the common modern notion of "positive integer". The ancient Greek notion of number (arithmos) is of a finite plurality composed of units. See Aristotle, Metaphysics, 1020a14 and Euclid, Elements, Book VII, Definition 1 and 2. The contrast between the old and modern conception of number is discussed in detail in Mayberry (2000).
Influence on set theory
The principle that cardinal number was to be characterized in terms of one-to-one correspondence had previously been used by Georg Cantor, whose writings Frege knew. The suggestion has therefore been made that Hume's principle ought better be called "Cantor's Principle" or "The Hume-Cantor Principle". But Frege criticized Cantor on the ground that Cantor defines cardinal numbers in terms of ordinal numbers, whereas Frege wanted to give a characterization of cardinals that was independent of the ordinals. Cantor's point of view, however, is the one embedded in contemporary theories of transfinite numbers, as developed in axiomatic set theory.
References
Citations
External links
Stanford Encyclopedia of Philosophy: "Frege's Logic, Theorem, and Foundations for Arithmetic" by Edward Zalta.
"The Logical and Metaphysical Foundations of Classical Mathematics."
Arche: The Centre for Philosophy of Logic, Language, Mathematics and Mind at St. Andrew's University.
Set theory
Philosophy of mathematics
Mathematical principles
Concepts in logic | Hume's principle | [
"Mathematics"
] | 729 | [
"Mathematical principles",
"Mathematical logic",
"nan",
"Set theory"
] |
314,788 | https://en.wikipedia.org/wiki/Clostridium | Clostridium is a genus of anaerobic, Gram-positive bacteria. Species of Clostridium inhabit soils and the intestinal tracts of animals, including humans. This genus includes several significant human pathogens, including the causative agents of botulism and tetanus. It also formerly included an important cause of diarrhea, Clostridioides difficile, which was reclassified into the Clostridioides genus in 2016.
History
In the late 1700s, Germany experienced several outbreaks of an illness connected to eating specific sausages. In 1817, the German neurologist Justinus Kerner detected rod-shaped cells in his investigations into this so-called sausage poisoning. In 1897, the Belgian biology professor Emile van Ermengem published his finding of an endospore-forming organism he isolated from spoiled ham. Biologists classified van Ermengem's discovery along with other known gram-positive spore formers in the genus Bacillus. This classification presented problems, however, because the isolate grew only in anaerobic conditions, but Bacillus grew well in oxygen.
Circa 1880, in the course of studying fermentation and butyric acid synthesis, a scientist surnamed Prazmowski first assigned a binomial name to Clostridium butyricum. The mechanisms of anaerobic respiration were not yet well elucidated at that time, so the taxonomy of anaerobes was still developing.
In 1924, Ida A. Bengtson separated van Ermengem's microorganisms from the Bacillus group and assigned them to the genus Clostridium. By Bengtson's classification scheme, Clostridium contained all of the anaerobic endospore-forming rod-shaped bacteria, except the genus Desulfotomaculum.
Taxonomy
As of October 2022, there are 164 validly published species in Clostridium.
The genus, as traditionally defined, contains many organisms not closely related to its type species. The issue was originally illustrated in full detail by an rRNA phylogeny from Collins 1994, which split the traditional genus (now corresponding to a large slice of Clostridia) into twenty clusters, with cluster I containing the type species Clostridium butyricum and its close relatives. Over the years, this has resulted in many new genera being split out, with the ultimate goal of constraining Clostridium to cluster I.
"Clostridium" cluster XIVa (now Lachnospiraceae) and "Clostridium" cluster IV (now Ruminococcaceae) efficiently ferment the plant polysaccharides composing dietary fiber, making them important and abundant taxa in the rumen and the human large intestine. As mentioned before, these clusters are not part of current Clostridium, and use of these terms should be avoided due to ambiguous or inconsistent usage.
Biochemistry
Species of Clostridium are obligate anaerobes capable of producing endospores. They generally stain gram-positive but, like Bacillus, are often described as Gram-variable, because they show an increasing number of gram-negative cells as the culture ages.
The normal, reproducing cells of Clostridium, called the vegetative form, are rod-shaped, which gives them their name, from the Greek κλωστήρ or spindle. Clostridium endospores have a distinct bowling pin or bottle shape, distinguishing them from other bacterial endospores, which are usually ovoid in shape. The Schaeffer–Fulton stain (0.5% malachite green in water) can be used to distinguish endospores of Bacillus and Clostridium from other microorganisms.
Clostridium can be differentiated from the likewise endospore-forming genus Bacillus by its obligate anaerobic growth, the shape of its endospores, and its lack of catalase. Species of Desulfotomaculum form similar endospores and can be distinguished by their requirement for sulfur.
Glycolysis and fermentation of pyruvic acid by Clostridia yield the end products butyric acid, butanol, acetone, isopropanol, and carbon dioxide.
There is a commercially available polymerase chain reaction (PCR) test kit (Bactotype) for the detection of C. perfringens and other pathogenic bacteria.
Biology and pathogenesis
Clostridium species are readily found inhabiting soils and intestinal tracts. Clostridium species are also normal inhabitants of the healthy lower reproductive tract of females.
The main species responsible for disease in humans are:
Clostridium botulinum can produce botulinum toxin in food or wounds and can cause botulism. This same toxin is known as Botox and is used in cosmetic surgery to paralyze facial muscles to reduce the signs of aging; it also has numerous other therapeutic uses.
Clostridium perfringens causes a wide range of symptoms, from food poisoning to cellulitis, fasciitis, necrotic enteritis and gas gangrene.
Clostridium tetani causes tetanus.
Several more pathogenic species, that were previously described in Clostridium, have been found to belong to other genera.
Clostridium difficile, now placed in Clostridioides.
Clostridium histolyticum, now placed in Hathewaya.
Clostridium sordellii, now placed in Paraclostridium, can cause a fatal infection in exceptionally rare cases after medical abortions.
Treatment
In general, the treatment of clostridial infection is high-dose penicillin G, to which the organism has remained susceptible. Clostridium welchii and Clostridium tetani respond to sulfonamides. Clostridia are also susceptible to tetracyclines, carbapenems (imipenem), metronidazole, vancomycin, and chloramphenicol.
The vegetative cells of clostridia are heat-labile and are killed by short heating at temperatures above . The thermal destruction of Clostridium spores requires higher temperatures (above , for example in an autoclave) and longer cooking times (20 min, with a few exceptional cases of more than 50 min recorded in the literature). Clostridia and Bacilli are quite radiation-resistant, requiring doses of about 30 kGy, which is a serious obstacle to the development of shelf-stable irradiated foods for general use in the retail market. The addition of lysozyme, nitrate, nitrite and propionic acid salts inhibits clostridia in various foods.
Fructooligosaccharides (fructans) such as inulin, occurring in relatively large amounts in a number of foods such as chicory, garlic, onion, leek, artichoke, and asparagus, have a prebiotic or bifidogenic effect, selectively promoting the growth and metabolism of beneficial bacteria in the colon, such as Bifidobacteria and Lactobacilli, while inhibiting harmful ones, such as clostridia, fusobacteria, and Bacteroides.
Use
Clostridium thermocellum can use lignocellulosic waste and generate ethanol, thus making it a possible candidate for use in production of ethanol fuel. It also has no oxygen requirement and is thermophilic, which reduces cooling cost.
Clostridium acetobutylicum was first used by Chaim Weizmann to produce acetone and biobutanol from starch in 1916 for the production of cordite (smokeless gunpowder).
Clostridium botulinum produces a potentially lethal neurotoxin used in a diluted form in the drug Botox, which is carefully injected to nerves in the face, which prevents the movement of the expressive muscles of the forehead, to delay the wrinkling effect of aging. It is also used to treat spasmodic torticollis and provides relief for around 12 to 16 weeks.
Clostridium butyricum MIYAIRI 588 strain is marketed in Japan, Korea, and China for Clostridium difficile prophylaxis due to its reported ability to interfere with the growth of the latter.
Clostridium histolyticum has been used as a source of the enzyme collagenase, which degrades animal tissue. Clostridium species excrete collagenase to eat through tissue and, thus, help the pathogen spread throughout the body. The medical profession uses collagenase for the same reason in the débridement of infected wounds. Hyaluronidase, deoxyribonuclease, lecithinase, leukocidin, protease, lipase, and hemolysin are also produced by some clostridia that cause gas gangrene.
Clostridium ljungdahlii, recently discovered in commercial chicken wastes, can produce ethanol from single-carbon sources including synthesis gas, a mixture of carbon monoxide and hydrogen, that can be generated from the partial combustion of either fossil fuels or biomass.
Clostridium butyricum converts glycerol to 1,3-propanediol.
Genes from Clostridium thermocellum have been inserted into transgenic mice to allow the production of endoglucanase. The experiment was intended to learn more about how the digestive capacity of monogastric animals could be improved.
Nonpathogenic strains of Clostridium may help in the treatment of diseases such as cancer. Research shows that Clostridium can selectively target cancer cells. Some strains can enter and replicate within solid tumors. Clostridium could, therefore, be used to deliver therapeutic proteins to tumours. This use of Clostridium has been demonstrated in a variety of preclinical models.
Mixtures of Clostridium species, such as Clostridium beijerinckii, Clostridium butyricum, and species from other genera have been shown to produce biohydrogen from yeast waste.
References
External links
Clostridium genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID
Todar's Online Textbook of Bacteriology
UK Clostridium difficile Support Group
Pathema-Clostridium Resource
Water analysis: Clostridium video
Gram-positive bacteria
Gut flora bacteria
Pathogenic bacteria
Bacteria genera
Taxa described in 1880 | Clostridium | [
"Biology"
] | 2,258 | [
"Gut flora bacteria",
"Bacteria"
] |
314,798 | https://en.wikipedia.org/wiki/In%20vitro%20toxicology | In vitro toxicity testing is the scientific analysis of the toxic effects of chemical substances on cultured bacteria or mammalian cells. In vitro (literally 'in glass') testing methods are employed primarily to identify potentially hazardous chemicals and/or to confirm the lack of certain toxic properties in the early stages of the development of potentially useful new substances such as therapeutic drugs, agricultural chemicals and food additives.
In vitro assays for xenobiotic toxicity have recently been carefully considered by key government agencies (e.g., EPA; NIEHS/NTP; FDA) to better assess human risks. There are substantial activities in using in vitro systems to advance mechanistic understanding of toxicant activities, and in the use of human cells and tissue to define human-specific toxic effects.
Improvement over animal testing
Most toxicologists believe that in vitro toxicity testing methods can be more useful and more time- and cost-effective than toxicology studies in living animals (which are termed in vivo or "in life" methods). However, the extrapolation from in vitro to in vivo requires some careful consideration and is an active research area.
Due to regulatory constraints and ethical considerations, the quest for alternatives to animal testing has gained new momentum. In many cases the in vitro tests are better than animal tests because they can be used to develop safer products.
The United States Environmental Protection Agency studied 1,065 chemical and drug substances in their ToxCast program (part of the CompTox Chemicals Dashboard) using in silico modelling and a human pluripotent stem cell-based assay to predict in vivo developmental intoxicants based on changes in cellular metabolism following chemical exposure. Major findings from the analysis of this ToxCast_STM dataset published in 2020 include: (1) 19% of 1065 chemicals yielded a prediction of developmental toxicity, (2) assay performance reached 79%–82% accuracy with high specificity (> 84%) but modest sensitivity (< 67%) when compared with in vivo animal models of human prenatal developmental toxicity, (3) sensitivity improved as more stringent weights of evidence requirements were applied to the animal studies, and (4) statistical analysis of the most potent chemical hits on specific biochemical targets in ToxCast revealed positive and negative associations with the STM response, providing insights into the mechanistic underpinnings of the targeted endpoint and its biological domain.
Examples of cell viability and other cytotoxicity assays used for in-vitro toxicology
Many methods of analysis exist for assaying test substances for cytotoxicity and other cellular responses.
Hemolysis assay
The hemolysis assay examines the propensity of chemicals, drugs or medication, or any blood-contacting medical device or material to lyse red blood cells (erythrocytes). The lysis is easily detected due to the release of hemoglobin.
MTT and MTS
The MTT assay is often used to determine cell viability and has been validated for use by international organisations. The MTT assay involves two steps: introducing the assay to the chemicals, followed by a solubilisation step.
The colorimetric MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) in vitro assay is an updated version of the validated MTT method; the MTS assay has the advantage that its reaction product is soluble, so no solubilisation step is required.
ATP
The ATP assay has the main advantage of providing results quickly (within 15 minutes) and of requiring fewer sample cells. The assay lyses the cells, and the subsequent chemical reaction between the assay reagent and the ATP content of the cells produces luminescence. The amount of luminescence is then measured by a photometer and can be translated into the number of cells alive, since
the ATP assay assumes living cells still contain ATP, and
the luminescence level recorded is proportional to the ATP content of the sample cells (a rough conversion sketch follows this list).
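A rough sketch of that conversion (the function and calibration numbers below are hypothetical placeholders, not part of any standard kit protocol), assuming a linear standard curve prepared from known cell numbers:

def cells_from_luminescence(signal, slope, intercept=0.0):
    """Estimate the number of viable cells from an ATP-assay luminescence
    reading, assuming luminescence is linear in ATP content and therefore
    in the number of live cells; slope and intercept come from a standard
    curve measured with known cell numbers (values here are placeholders)."""
    return (signal - intercept) / slope

# Hypothetical calibration of 2.0 relative light units per cell:
print(cells_from_luminescence(signal=50_000.0, slope=2.0))  # 25000.0 cells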
Neutral red
Another cell viability endpoint can be neutral red (NR) uptake. Neutral red, a weak cationic dye, penetrates cellular membranes by non-ionic passive diffusion and accumulates intracellularly in lysosomes. Viable cells take up the NR dye; damaged or dead cells do not.
Cytokine quantification via ELISA
ELISA kits can be used to examine the up- and down-regulation of proinflammatory mediators such as cytokines (IL-1, TNF alpha, PGE2).
Measurement of these types of cellular responses can be a window into the interaction of the test article with the test models (monolayer cell cultures, 3D tissue models, tissue explants).
Types of in vitro studies
Broadly speaking, there are two different types of in vitro studies, depending on the type of system developed to perform the experiment. The two types of systems generally used are: a) static well plate systems and b) multi-compartmental perfused systems.
Static well plate system
The static well plate or layer systems are the most traditional and simplest form of assay and are widely used for in vitro studies. These assays are beneficial in that they are simple and provide a very accessible testing environment for monitoring chemicals in the culture medium as well as in the cell. The disadvantage of these simple static well plate assays is that they cannot represent the cellular interactions and physiologic fluid flow conditions taking place inside the body.
Multi-compartmental perfused systems
New testing platforms are now being developed to solve problems related to cellular interactions. These new platforms are much more complex, being based on multi-compartmental perfused systems.
The main objective of these systems is to reproduce in vivo mechanisms more reliably by providing a cell culture environment close to the in vivo situation. Each compartment in the system represents a specific organ of the living organism, and thus each compartment has specific characteristics and criteria. The compartments are connected by tubes and pumps through which the fluid flows, thus mimicking blood flow in the in vivo situation. The drawback of these perfused systems is that the adverse effects (the influence of both the biological and non-biological components of the system on the fate of the chemical under study) are greater than in static systems. In order to reduce the effect of the non-biological components, all the compartments are made of glass and the connecting tubes are made of Teflon. A number of kinetic models have been proposed to account for the non-specific binding taking place in these in vitro systems.
To address the biological difficulties arising from the use of different in vitro culture conditions, the traditional models used in flasks or micro-well plates have to be modified. With parallel developments in micro-technologies and tissue engineering, these problems are being addressed using new tools called "micro-fluidic biochips".
References
Toxicology
Alternatives to animal testing | In vitro toxicology | [
"Chemistry",
"Environmental_science"
] | 1,455 | [
"Animal testing",
"Toxicology",
"Alternatives to animal testing"
] |
314,881 | https://en.wikipedia.org/wiki/International%20Style | The International Style is a major architectural style and movement that began in western Europe in the 1920s and dominated modern architecture until the 1970s. It is defined by strict adherence to functional and utilitarian designs and construction methods, typically expressed through minimalism. The style is characterized by modular and rectilinear forms, flat surfaces devoid of ornamentation and decoration, open and airy interiors that blend with the exterior, and the use of glass, steel, and concrete.
The International Style is sometimes called rationalist architecture and the modern movement, although the former is mostly used in English to refer specifically to either Italian rationalism or the style that developed in 1920s Europe more broadly. In continental Europe, this and related styles are variably called Functionalism, Neue Sachlichkeit ("New Objectivity"), De Stijl ("The Style"), and Rationalism, all of which are contemporaneous movements and styles that share similar principles, origins, and proponents.
Rooted in the modernism movement, the International Style is closely related to "Modern architecture" and likewise reflects several intersecting developments in culture, politics, and technology in the early 20th century. After being brought to the United States by European architects in the 1930s, it quickly became an "unofficial" North American style, particularly after World War II. The International Style reached its height in the 1950s and 1960s, when it was widely adopted worldwide for its practicality and as a symbol of industry, progress, and modernity. The style remained the prevailing design philosophy for urban development and reconstruction into the 1970s, especially in the Western world.
The International Style was one of the first architectural movements to receive critical renown and global popularity. Regarded as the high point of modernist architecture, it is sometimes described as the "architecture of the modern movement" and credited with "single-handedly transforming the skylines of every major city in the world with its simple cubic forms". The International Style's emphasis on transcending historical and cultural influences, while favoring utility and mass-production methods, made it uniquely versatile in its application; the style was ubiquitous in a wide range of purposes, ranging from social housing and governmental buildings to corporate parks and skyscrapers.
Nevertheless, these same qualities provoked negative reactions against the style as monotonous, austere, and incongruent with existing landscapes; these critiques are conveyed through various movements such as postmodernism, new classical architecture, and deconstructivism.
Postmodern architecture was developed in the 1960s in reaction to the International Style, becoming dominant in the 1980s and 1990s.
Concept and definition
The term "International Style" was first used in 1932 by the historian Henry-Russell Hitchcock and architect Philip Johnson to describe a movement among European architects in the 1920s that was distinguished by three key design principles: (1) "Architecture as volume – thin planes or surfaces create the building’s form, as opposed to a solid mass"; (2) "Regularity in the facade, as opposed to building symmetry"; and (3) "No applied ornament".
International Style is an ambiguous term; the apparent unity and integrity of the movement are deceptive. Its formal features were expressed differently in different countries, and despite their undeniable common ground, the International Style was never a single phenomenon. Nevertheless, International Style architecture demonstrates a unity of approach and general principles: lightweight structures, skeletal frames, new materials, a modular system, an open plan, and the use of simple geometric shapes.
A further difficulty with the term is that it is not obvious what material it should be applied to: it covers both key monuments of the 20th century (Le Corbusier's Villa Savoye; Wright's Fallingwater) and the mass-produced architectural output of the period. For the latter, it is more appropriate to speak of recognizable formal techniques and the creation of a standard architectural product than of iconic objects.
Hitchcock and Johnson's 1932 MoMA exhibition catalog identified three principles of the style: volume of internal space (as opposed to mass and solidity), flexibility and regularity (liberation from classical symmetry), and the expulsion of applied ornamentation ('artificial accents').
Common characteristics of the International Style include: a radical simplification of form, a rejection of superfluous ornamentation, bold repetition and embracement of sleek glass, steel and efficient concrete as preferred materials. Accents were found to be suitably derived from natural design irregularities, such as the position of doors and fire escapes, stair towers, ventilators and even electric signs.
Further, the transparency of buildings, construction (called the honest expression of structure), and acceptance of industrialized mass-production techniques contributed to the international style's design philosophy. Finally, the machine aesthetic, and logical design decisions leading to support building function were used by the International architect to create buildings reaching beyond historicism. The ideals of the style are commonly summed up in three slogans: ornament is a crime, truth to materials, form follows function; and Le Corbusier's description: "A house is a machine to live in".
International style is sometimes understood as a general term associated with such architectural phenomena as Brutalist architecture, constructivism, functionalism, and rationalism.
Phenomena similar in nature also existed in other artistic fields, for example in graphics, such as the International Typographic Style and Swiss Style.
The Getty Research Institute defines it as "the style of architecture that emerged in The Netherlands, France, and Germany after World War I and spread throughout the world, becoming the dominant architectural style until the 1970s. The style is characterized by an emphasis on volume over mass, the use of lightweight, mass-produced, industrial materials, rejection of all ornament and colour, repetitive modular forms, and the use of flat surfaces, typically alternating with areas of glass." Some researchers consider the International Style as one of the attempts to create an ideal and utilitarian form.
Background
Around the start of the 20th century, a number of architects around the world began developing new architectural solutions to integrate traditional precedents with new social demands and technological possibilities. The work of Victor Horta and Henry van de Velde in Brussels, Antoni Gaudí in Barcelona, Otto Wagner in Vienna and Charles Rennie Mackintosh in Glasgow, among many others, can be seen as a common struggle between old and new. These architects were not considered part of the International Style because they practiced in an "individualistic manner" and seen as the last representatives of Romanticism.
The International Style can be traced to buildings designed by a small group of modernists, the major figures of which include Ludwig Mies van der Rohe, Jacobus Oud, Le Corbusier, Richard Neutra and Philip Johnson.
The founder of the Bauhaus school, Walter Gropius, along with prominent Bauhaus instructor, Ludwig Mies van der Rohe, became known for steel frame structures employing glass curtain walls. One of the world's earliest modern buildings where this can be seen is a shoe factory designed by Gropius in 1911 in Alfeld, Germany, called the Fagus Works building. The first building built entirely on Bauhaus design principles was the concrete and steel Haus am Horn, built in 1923 in Weimar, Germany, designed by Georg Muche. The Gropius-designed Bauhaus school building in Dessau, built 1925–26 and the Harvard Graduate Center (Cambridge, Massachusetts; 1949–50) also known as the Gropius Complex, exhibit clean lines and a "concern for uncluttered interior spaces".
Marcel Breuer, a recognized leader in Béton Brut (Brutalist) architecture and notable alumnus of the Bauhaus, who also pioneered the use of plywood and tubular steel in furniture design, and who after leaving the Bauhaus would later teach alongside Gropius at Harvard, is as well an important contributor to Modernism and the International Style.
Prior to use of the term 'International Style', some American architects—such as Louis Sullivan, Frank Lloyd Wright, and Irving Gill—exemplified qualities of simplification, honesty and clarity.
Frank Lloyd Wright's Wasmuth Portfolio had been exhibited in Europe and influenced the work of European modernists, and his travels there probably influenced his own work, although he refused to be categorized with them. His buildings of the 1920s and 1930s clearly showed a change in the style of the architect, but in a different direction than the International Style.
In Europe the modern movement in architecture had been called Functionalism or Neue Sachlichkeit (New Objectivity), L'Esprit Nouveau, or simply Modernism and was very much concerned with the coming together of a new architectural form and social reform, creating a more open and transparent society.
The "International Style", as defined by Hitchcock and Johnson, had developed in 1920s Western Europe, shaped by the activities of the Dutch De Stijl movement, Le Corbusier, and the Deutscher Werkbund and the Bauhaus. Le Corbusier had embraced Taylorist and Fordist strategies adopted from American industrial models in order to reorganize society. He contributed to a new journal called L'Esprit Nouveau that advocated the use of modern industrial techniques and strategies to create a higher standard of living on all socio-economic levels. In 1927, one of the first and most defining manifestations of the International Style was the Weissenhof Estate in Stuttgart, overseen by Ludwig Mies van der Rohe. It was enormously popular, with thousands of daily visitors.
1932 MoMA exhibition
The exhibition Modern Architecture: International Exhibition ran from February 9 to March 23, 1932, at the Museum of Modern Art (MoMA), in the Heckscher Building at Fifth Avenue and 56th Street in New York. Beyond a foyer and office, the exhibition was divided into six rooms: the "Modern Architects" section began in the entrance room, featuring a model of William Lescaze's Chrystie-Forsyth Street Housing Development in New York. From there visitors moved to the centrally placed Room A, featuring a model of a mid-rise housing development for Evanston, Illinois, by Chicago architect brothers Monroe Bengt Bowman and Irving Bowman, as well as a model and photos of Walter Gropius's Bauhaus building in Dessau. In the largest exhibition space, Room C, were works by Le Corbusier, Ludwig Mies van der Rohe, J. J. P. Oud and Frank Lloyd Wright (including a project for a house on the Mesa in Denver, 1932). Room B was a section titled "Housing", presenting "the need for a new domestic environment" as it had been identified by historian and critic Lewis Mumford. In Room D were works by Raymond Hood (including "Apartment Tower in the Country" and the McGraw-Hill Building) and Richard Neutra. In Room E was a section titled "The extent of modern architecture", added at the last minute, which included the works of thirty-seven modern architects from fifteen countries who were said to be influenced by the works of Europeans of the 1920s. Among these works was shown Alvar Aalto's Turun Sanomat newspaper offices building in Turku, Finland.
After a six-week run in New York City, the exhibition then toured the US – the first such "traveling-exhibition" of architecture in the US – for six years.
Curators
MoMA director Alfred H. Barr hired architectural historian and critic Henry-Russell Hitchcock and Philip Johnson to curate the museum's first architectural exhibition. The three of them toured Europe together in 1929 and had also discussed Hitchcock's book about modern art. By December 1930, the first written proposal for an exhibition of the "new architecture" was set down, yet the first draft of the book was not complete until some months later.
Publications
The 1932 exhibition led to two publications by Hitchcock and Johnson:
The exhibition catalog, "Modern Architecture: International Exhibition"
The book, The International Style: Architecture Since 1922, published by W. W. Norton & Co. in 1932.
reprinted in 1997 by W. W. Norton & Company
Previous to the 1932 exhibition and book, Hitchcock had concerned himself with the themes of modern architecture in his 1929 book Modern Architecture: Romanticism and Reintegration.
According to Terence Riley: "Ironically the (exhibition) catalogue, and to some extent, the book The International Style, published at the same time of the exhibition, have supplanted the actual historical event."
Exemplary Uses of the International Style
The following architects and buildings were selected by Hitchcock and Johnson for display at the exhibition Modern Architecture: International Exhibition:
Notable omissions
The exhibition excluded other contemporary styles that were exploring the boundaries of architecture at the time, including: Art Deco; German Expressionism, for instance the works of Hermann Finsterlin; and the organicist movement, popularized in the work of Antoni Gaudí. As a result of the 1932 exhibition, the principles of the International Style were endorsed, while other styles were classed as less significant.
In 1922, the competition for the Tribune Tower and its famous second-place entry by Eliel Saarinen gave some indication of what was to come, though these works would not have been accepted by Hitchcock and Johnson as representing the "International Style". Similarly, Johnson, writing about Joseph Urban's recently completed New School for Social Research in New York, stated: "In the New School we have an anomaly of a building supposed to be in a style of architecture based on the development of the plan from function and facade from plan but which is as formally and pretentiously conceived as a Renaissance palace. Urban's admiration for the New Style is more complete than his understanding."
California architect Rudolph Schindler's work was not a part of the exhibit, though Schindler had pleaded with Hitchcock and Johnson to be included. Then, "[f]or more than 20 years, Schindler had intermittently launched a series of spirited, cantankerous exchanges with the museum."
Before 1932
1932–1944
The gradual rise of the Nazi regime in Weimar Germany in the 1930s, and the Nazis' rejection of modern architecture, meant that an entire generation of avant-gardist architects, many of them Jews, were forced out of continental Europe. Some, such as Mendelsohn, found shelter in England, while a considerable number of the Jewish architects made their way to Palestine, and others to the US. However, American anti-Communist politics after the war and Philip Johnson's influential rejection of functionalism have tended to mask the fact that many of the important architects, including contributors to the original Weissenhof project, fled to the Soviet Union. This group also tended to be far more concerned with functionalism and its social agenda. Bruno Taut, Mart Stam, the second Bauhaus director Hannes Meyer, Ernst May and other important figures of the International Style went to the Soviet Union in 1930 to undertake huge, ambitious, idealistic urban planning projects, building entire cities from scratch. In 1936, when Stalin ordered them out of the country, many of these architects became stateless and sought refuge elsewhere; for example, Ernst May moved to Kenya.
The White City of Tel Aviv is a collection of over 4,000 buildings built in the International Style in the 1930s. Many Jewish architects who had studied at the German Bauhaus school designed significant buildings here. A large proportion of the buildings built in the International Style can be found in the area planned by Patrick Geddes, north of Tel Aviv's main historical commercial center. In 2003, UNESCO proclaimed the White City a World Heritage Site, describing the city as "a synthesis of outstanding significance of the various trends of the Modern Movement in architecture and town planning in the early part of the 20th century". In 1996, Tel Aviv's White City was listed as a World Monuments Fund endangered site.
The residential area of Södra Ängby in western Stockholm, Sweden, blended an international or functionalist style with garden city ideals. Encompassing more than 500 buildings, most of them designed by Edvin Engström, it remains the largest coherent functionalist or "International Style" villa area in Sweden and possibly the world, still well-preserved more than a half-century after its construction in 1933–40 and protected as a national cultural heritage.
Zlín is a city in the Czech Republic which was completely reconstructed in the 1930s on the principles of functionalism. At that time the city was the headquarters of the Bata Shoes company, and Tomáš Baťa initiated a complex reconstruction of the city inspired by functionalism and the Garden city movement. The Tomáš Baťa Memorial is the most valuable monument of Zlín functionalism; it is a modern paraphrase of High Gothic construction, with a reinforced concrete skeleton and glass standing in for the Gothic supporting system and colourful stained glass.
With the rise of Nazism, a number of key European modern architects fled to the US. When Walter Gropius and Marcel Breuer fled Germany they both arrived at the Harvard Graduate School of Design, in an excellent position to extend their influence and promote the Bauhaus as the primary source of architectural modernism. When Mies fled in 1938, he went first to England, but on emigrating to the US he settled in Chicago, founded the Second School of Chicago at IIT and solidified his reputation as a prototypical modern architect.
1945–present
After World War II, the International Style matured; Hellmuth, Obata & Kassabaum (later renamed HOK) and Skidmore, Owings & Merrill (SOM) perfected the corporate practice, and it became the dominant approach for decades in the US and Canada. Beginning with the initial technical and formal inventions of 860-880 Lake Shore Drive Apartments in Chicago, its most famous examples include the United Nations headquarters, the Lever House, the Seagram Building in New York City, and the campus of the United States Air Force Academy in Colorado Springs, Colorado, as well as the Toronto-Dominion Centre in Toronto. Further examples can be found in mid-century institutional buildings throughout North America and the "corporate architecture" spread from there, especially to Europe.
In Canada, this period coincided with a major building boom and few restrictions on massive building projects. International Style skyscrapers came to dominate many of Canada's major cities, especially Ottawa, Montreal, Vancouver, Calgary, Edmonton, Hamilton, and Toronto. While these glass boxes were at first unique and interesting, the idea was soon repeated to the point of ubiquity. A typical example is the development of so-called Place de Ville, a conglomeration of three glass skyscrapers in downtown Ottawa, where the plans of the property developer Robert Campeau in the mid-1960s and early 1970s—in the words of historian Robert W. Collier, were "forceful and abrasive[;] he was not well-loved at City Hall"—had no regard for existing city plans, and "built with contempt for the existing city and for city responsibilities in the key areas of transportation and land use". Architects attempted to put new twists into such towers, such as the Toronto City Hall by Finnish architect Viljo Revell. By the late 1970s a backlash was under way against modernism—prominent anti-modernists such as Jane Jacobs and George Baird were partly based in Toronto.
The typical International Style or "corporate architecture" high-rise usually consists of the following:
Square or rectangular footprint
Simple cubic "extruded rectangle" form
Windows running in broken horizontal rows forming a grid
All facade angles are 90 degrees.
In 2000 UNESCO proclaimed University City of Caracas in Caracas, Venezuela, as a World Heritage Site, describing it as "a masterpiece of modern city planning, architecture and art, created by the Venezuelan architect Carlos Raúl Villanueva and a group of distinguished avant-garde artists".
In June 2007 UNESCO proclaimed Ciudad Universitaria of the Universidad Nacional Autónoma de México (UNAM), in Mexico City, a World Heritage Site due to its relevance and contribution in terms of the International Style movement. It was designed in the late 1940s and built in the mid-1950s based upon a masterplan created by architect Enrique del Moral. His original idea was enriched by other students, teachers, and diverse professionals of several disciplines. The university houses murals by Diego Rivera, Juan O'Gorman and others. The university also features Olympic Stadium (1968). In his first years of practice, Pritzker Prize winner and Mexican architect Luis Barragán designed buildings in the International Style, but later he evolved toward a more traditional local architecture. Other notable Mexican architects of the International Style or modern period are Carlos Obregón Santacilia, Augusto H. Alvarez, Mario Pani, Vladimir Kaspé, Enrique del Moral, Juan Sordo Madaleno, Max Cetto, among many others.
In Brazil Oscar Niemeyer proposed a more organic and sensual International Style. He designed the political landmarks (headquarters of the three state powers) of the new, planned capital Brasília. The masterplan for the city was proposed by Lúcio Costa.
Criticism
In 1930, Frank Lloyd Wright wrote: "Human houses should not be like boxes, blazing in the sun, nor should we outrage the Machine by trying to make dwelling-places too complementary to Machinery."
In Elizabeth Gordon's well-known 1953 essay, "The Threat to the Next America", she criticized the style as non-practical, citing many instances where "glass houses" are too hot in summer and too cold in winter, empty, take away private space, lack beauty and generally are not livable. Moreover, she accused this style's proponents of taking away a sense of beauty from people and thus covertly pushing for a totalitarian society.
In 1966, architect Robert Venturi published Complexity and Contradiction in Architecture, essentially a book-length critique of the International Style. Architectural historian Vincent Scully regarded Venturi's book as 'probably the most important writing on the making of architecture since Le Corbusier's Vers une Architecture'. It helped to define postmodernism.
Best-selling American author Tom Wolfe wrote a book-length critique, From Bauhaus to Our House, portraying the style as elitist.
One of the supposed strengths of the International Style has been said to be that the design solutions were indifferent to location, site, and climate; the solutions were supposed to be universally applicable; the style made no reference to local history or national vernacular. This was soon identified as one of the style's primary weaknesses.
In 2006, Hugh Pearman, the British architectural critic of The Times, observed that those using the style today are simply "another species of revivalist", noting the irony. The negative reaction to internationalist modernism has been linked to public antipathy to overall development.
In the preface to the fourth edition of his book Modern Architecture: A Critical History (2007), Kenneth Frampton argued that there had been a "disturbing Eurocentric bias" in histories of modern architecture. This "Eurocentrism" included the US.
Architects
Alvar Aalto
Max Abramovitz
Luis Barragán
Welton Becket
Pietro Belluschi
Geoffrey Bazeley
Max Bill
Marcel Breuer
Roberto Burle Marx
Gordon Bunshaft
Natalie de Blois
Henry N. Cobb
George Dahl
Sir Frederick Gibberd
Charles and Ray Eames
Otto Eisler
Joseph Emberton
Bohuslav Fuchs
Paul Furiet
Heydar Ghiai
Landis Gores
Bruce Graham
Eileen Gray
Walter Gropius
Otto Haesler
Arieh El-Hanani
Wallace Harrison
Hermann Henselmann
Raymond Hood
George Howe
Muzharul Islam
Arne Jacobsen
Marcel Janco
John M. Johansen
Philip Johnson
Roger Johnson
Louis Kahn
Dov Karmi
Oskar Kaufmann
Richard Kauffmann
Fazlur Khan
Frederick John Kiesler
Friedrich Silaban
Le Corbusier
William Lescaze
Charles Luckman
Yehuda Magidovitch
Michael Manser
Alfred Mansfeld
Erich Mendelsohn
John O. Merrill
Hannes Meyer
Ludwig Mies van der Rohe
Richard Neutra
Oscar Niemeyer
Eliot Noyes
Gyo Obata
Jacobus Oud
Nathaniel A. Owings
Mario Pani
I. M. Pei
Frits Peutz
Ernst Plischke
Ralph Rapson
Zeev Rechter
Viljo Revell
Gerrit Rietveld
Carl Rubin
Eero Saarinen
Rudolph Schindler
Michael Scott
Arieh Sharon
Louis Skidmore
Ben-Ami Shulman
Jerzy Sołtan
Raphael Soriano
Edward Durell Stone
Paul Thiry
Carlos Raúl Villanueva
Leendert van der Vlugt
Munio Weinraub
Lloyd Wright
Minoru Yamasaki
The Architects Collaborative
Toyo Ito
See also
Critical regionalism
Expressionist architecture
Functionalism (architecture)
High-tech architecture
Modern architecture
Northwest Regional style
Organic architecture
Swiss Style (design)
International Typographic Style
References
Further reading
Boness, Stefan. Tel Aviv: The White City, Jovis, Berlin 2012,
Elderfield, John (ed.). Philip Johnson and the Museum of Modern Art, Museum of Modern Art, New York, 1998
Gössel, Gabriel. Functional Architecture. Funktionale Architektur. Le Style International. 1925–1940, Taschen, Berlin, 1990
Riley, Terence. The International Style: Exhibition 15 and The Museum of Modern Art, Rizzoli, New York, 1992
Tabibi, Baharak. Exhibitions as the Medium of Architectural Reproduction – "Modern Architecture: International Exhibition", Department of Architecture, Middle East Technical University, 2005
Vasileva E. (2016) Ideal and utilitarian in the international style system: subject and object in the design concept of the 20th century // International Journal of Cultural Research, 4 (25), 72–80.
External links
"How Chicago Sparked the International Style of Architecture in America". Architectural Digest.
20th-century architectural styles
Architectural design
International Exhibition of Modern Architecture | International Style | [
"Engineering"
] | 5,327 | [
"Design",
"Architectural design",
"Architecture"
] |
314,905 | https://en.wikipedia.org/wiki/Reflective%20programming | In computer science, reflective programming or reflection is the ability of a process to examine, introspect, and modify its own structure and behavior.
Historical background
The earliest computers were programmed in their native assembly languages, which were inherently reflective, as these original architectures could be programmed by defining instructions as data and using self-modifying code. As the bulk of programming moved to higher-level compiled languages such as Algol, Cobol, Fortran, Pascal, and C, this reflective ability largely disappeared until new programming languages with reflection built into their type systems appeared.
Brian Cantwell Smith's 1982 doctoral dissertation introduced the notion of computational reflection in procedural programming languages and the notion of the meta-circular interpreter as a component of 3-Lisp.
Uses
Reflection helps programmers make generic software libraries to display data, process different formats of data, perform serialization and deserialization of data for communication, or do bundling and unbundling of data for containers or bursts of communication.
Effective use of reflection almost always requires a plan: A design framework, encoding description, object library, a map of a database or entity relations.
Reflection makes a language more suited to network-oriented code. For example, it assists languages such as Java to operate well in networks by enabling libraries for serialization, bundling and varying data formats. Languages without reflection such as C are required to use auxiliary compilers for tasks like Abstract Syntax Notation One (ASN.1) to produce code for serialization and bundling.
Reflection can be used for observing and modifying program execution at runtime. A reflection-oriented program component can monitor the execution of an enclosure of code and can modify itself according to a desired goal of that enclosure. This is typically accomplished by dynamically assigning program code at runtime.
In object-oriented programming languages such as Java, reflection allows inspection of classes, interfaces, fields and methods at runtime without knowing the names of the interfaces, fields, methods at compile time. It also allows instantiation of new objects and invocation of methods.
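For instance, a minimal Java sketch (using the same hypothetical class Foo as in the examples below; the member names printed depend on that class) that enumerates members whose names are not known at compile time:

import java.lang.reflect.Field;
import java.lang.reflect.Method;

try {
    // Look the class up by its string name at runtime and list its members.
    Class<?> c = Class.forName("Foo");
    for (Method m : c.getDeclaredMethods())
        System.out.println("method: " + m.getName());
    for (Field f : c.getDeclaredFields())
        System.out.println("field: " + f.getName());
} catch (ClassNotFoundException ignored) {}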
Reflection is often used as part of software testing, such as for the runtime creation/instantiation of mock objects.
Reflection is also a key strategy for metaprogramming.
In some object-oriented programming languages such as C# and Java, reflection can be used to bypass member accessibility rules. For C#-properties this can be achieved by writing directly onto the (usually invisible) backing field of a non-public property. It is also possible to find non-public methods of classes and types and manually invoke them. This works for project-internal files as well as external libraries such as .NET's assemblies and Java's archives.
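A minimal Java sketch of such a bypass (the class Secret and its private String field named value are hypothetical, introduced only for illustration):

import java.lang.reflect.Field;

try {
    Object obj = new Secret();                          // hypothetical class with a private String field "value"
    Field f = obj.getClass().getDeclaredField("value");
    f.setAccessible(true);                              // suppress the usual access check
    f.set(obj, "overwritten");                          // write the non-public field directly
    System.out.println(f.get(obj));                     // read it back
} catch (ReflectiveOperationException ignored) {}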
Implementation
A language that supports reflection provides a number of features available at runtime that would otherwise be difficult to accomplish in a lower-level language. Some of these features are the abilities to:
Discover and modify source-code constructions (such as code blocks, classes, methods, protocols, etc.) as first-class objects at runtime.
Convert a string matching the symbolic name of a class or function into a reference to or invocation of that class or function.
Evaluate a string as if it were a source-code statement at runtime.
Create a new interpreter for the language's bytecode to give a new meaning or purpose for a programming construct.
These features can be implemented in different ways. In MOO, reflection forms a natural part of everyday programming idiom. When verbs (methods) are called, various variables such as verb (the name of the verb being called) and this (the object on which the verb is called) are populated to give the context of the call. Security is typically managed by accessing the caller stack programmatically: Since callers() is a list of the methods by which the current verb was eventually called, performing tests on callers()[0] (the command invoked by the original user) allows the verb to protect itself against unauthorised use.
Compiled languages rely on their runtime system to provide information about the source code. A compiled Objective-C executable, for example, records the names of all methods in a block of the executable, providing a table to correspond these with the underlying methods (or selectors for these methods) compiled into the program. In a compiled language that supports runtime creation of functions, such as Common Lisp, the runtime environment must include a compiler or an interpreter.
Reflection can be implemented for languages without built-in reflection by using a program transformation system to define automated source-code changes.
Security considerations
Reflection may allow a user to create unexpected control flow paths through an application, potentially bypassing security measures. This may be exploited by attackers. Historical vulnerabilities in Java caused by unsafe reflection allowed code retrieved from potentially untrusted remote machines to break out of the Java sandbox security mechanism. A large scale study of 120 Java vulnerabilities in 2013 concluded that unsafe reflection is the most common vulnerability in Java, though not the most exploited.
Examples
The following code snippets create an instance of the class Foo and invoke its hello (in some languages PrintHello) method. For each programming language, normal and reflection-based call sequences are shown.
Common Lisp
The following is an example in Common Lisp using the Common Lisp Object System:
(defclass foo () ())
(defmethod print-hello ((f foo)) (format T "Hello from ~S~%" f))
;; Normal, without reflection
(let ((foo (make-instance 'foo)))
(print-hello foo))
;; With reflection to look up the class named "foo" and the method
;; named "print-hello" that specializes on "foo".
(let* ((foo-class (find-class (read-from-string "foo")))
(print-hello-method (find-method (symbol-function (read-from-string "print-hello"))
nil (list foo-class))))
(funcall (sb-mop:method-generic-function print-hello-method)
(make-instance foo-class)))
C#
The following is an example in C#:
// Without reflection
var foo = new Foo();
foo.PrintHello();
// With reflection
Object foo = Activator.CreateInstance("complete.classpath.and.Foo");
MethodInfo method = foo.GetType().GetMethod("PrintHello");
method.Invoke(foo, null);
Delphi, Object Pascal
This Delphi and Object Pascal example assumes that a TFoo class has been declared in a unit called Unit1:
uses RTTI, Unit1;
procedure WithoutReflection;
var
Foo: TFoo;
begin
Foo := TFoo.Create;
try
Foo.Hello;
finally
Foo.Free;
end;
end;
procedure WithReflection;
var
RttiContext: TRttiContext;
RttiType: TRttiInstanceType;
Foo: TObject;
begin
RttiType := RttiContext.FindType('Unit1.TFoo') as TRttiInstanceType;
Foo := RttiType.GetMethod('Create').Invoke(RttiType.MetaclassType, []).AsObject;
try
RttiType.GetMethod('Hello').Invoke(Foo, []);
finally
Foo.Free;
end;
end;
eC
The following is an example in eC:
// Without reflection
Foo foo { };
foo.hello();
// With reflection
Class fooClass = eSystem_FindClass(__thisModule, "Foo");
Instance foo = eInstance_New(fooClass);
Method m = eClass_FindMethod(fooClass, "hello", fooClass.module);
((void (*)())(void *)m.function)(foo);
Go
The following is an example in Go:
import "reflect"
// Without reflection
f := Foo{}
f.Hello()
// With reflection
fT := reflect.TypeOf(Foo{})
fV := reflect.New(fT)
m := fV.MethodByName("Hello")
if m.IsValid() {
m.Call(nil)
}
Java
The following is an example in Java:
import java.lang.reflect.Method;
// Without reflection
Foo foo = new Foo();
foo.hello();
// With reflection
try {
Object foo = Foo.class.getDeclaredConstructor().newInstance();
Method m = foo.getClass().getDeclaredMethod("hello", new Class<?>[0]);
m.invoke(foo);
} catch (ReflectiveOperationException ignored) {}
JavaScript
The following is an example in JavaScript:
// Without reflection
const foo = new Foo()
foo.hello()
// With reflection
const foo = Reflect.construct(Foo)
const hello = Reflect.get(foo, 'hello')
Reflect.apply(hello, foo, [])
// With eval
eval('new Foo().hello()')
Julia
The following is an example in Julia:
julia> struct Point
x::Int
y
end
# Inspection with reflection
julia> fieldnames(Point)
(:x, :y)
julia> fieldtypes(Point)
(Int64, Any)
julia> p = Point(3,4)
# Access with reflection
julia> getfield(p, :x)
3
Objective-C
The following is an example in Objective-C, implying either the OpenStep or Foundation Kit framework is used:
// Foo class.
@interface Foo : NSObject
- (void)hello;
@end
// Sending "hello" to a Foo instance without reflection.
Foo *obj = [[Foo alloc] init];
[obj hello];
// Sending "hello" to a Foo instance with reflection.
id obj = [[NSClassFromString(@"Foo") alloc] init];
[obj performSelector: @selector(hello)];
Perl
The following is an example in Perl:
# Without reflection
my $foo = Foo->new;
$foo->hello;
# or
Foo->new->hello;
# With reflection
my $class = "Foo"
my $constructor = "new";
my $method = "hello";
my $f = $class->$constructor;
$f->$method;
# or
$class->$constructor->$method;
# with eval
eval "new Foo->hello;";
PHP
The following is an example in PHP:
// Without reflection
$foo = new Foo();
$foo->hello();
// With reflection, using Reflections API
$reflector = new ReflectionClass("Foo");
$foo = $reflector->newInstance();
$hello = $reflector->getMethod("hello");
$hello->invoke($foo);
Python
The following is an example in Python:
# Without reflection
obj = Foo()
obj.hello()
# With reflection
obj = globals()["Foo"]()
getattr(obj, "hello")()
# With eval
eval("Foo().hello()")
R
The following is an example in R:
# Without reflection, assuming foo() returns an S3-type object that has method "hello"
obj <- foo()
hello(obj)
# With reflection
class_name <- "foo"
generic_having_foo_method <- "hello"
obj <- do.call(class_name, list())
do.call(generic_having_foo_method, alist(obj))
Ruby
The following is an example in Ruby:
# Without reflection
obj = Foo.new
obj.hello
# With reflection
obj = Object.const_get("Foo").new
obj.send :hello
# With eval
eval "Foo.new.hello"
Xojo
The following is an example using Xojo:
' Without reflection
Dim fooInstance As New Foo
fooInstance.PrintHello
' With reflection
Dim classInfo As Introspection.Typeinfo = GetTypeInfo(Foo)
Dim constructors() As Introspection.ConstructorInfo = classInfo.GetConstructors
Dim fooInstance As Foo = constructors(0).Invoke
Dim methods() As Introspection.MethodInfo = classInfo.GetMethods
For Each m As Introspection.MethodInfo In methods
If m.Name = "PrintHello" Then
m.Invoke(fooInstance)
End If
Next
See also
List of reflective programming languages and platforms
Mirror (programming)
Programming paradigms
Self-hosting (compilers)
Self-modifying code
Type introspection
typeof
References
Citations
Sources
Jonathan M. Sobel and Daniel P. Friedman. An Introduction to Reflection-Oriented Programming (1996), Indiana University.
Anti-Reflection technique using C# and C++/CLI wrapper to prevent code thief
Further reading
Ira R. Forman and Nate Forman, Java Reflection in Action (2005),
Ira R. Forman and Scott Danforth, Putting Metaclasses to Work (1999),
External links
Reflection in logic, functional and object-oriented programming: a short comparative study
An Introduction to Reflection-Oriented Programming
Brian Foote's pages on Reflection in Smalltalk
Java Reflection API Tutorial from Oracle
Programming constructs
Programming language comparisons
Articles with example BASIC code
Articles with example C code
Articles with example C Sharp code
Articles with example Java code
Articles with example JavaScript code
Articles with example Julia code
Articles with example Lisp (programming language) code
Articles with example Objective-C code
Articles with example Pascal code
Articles with example Perl code
Articles with example PHP code
Articles with example Python (programming language) code
Articles with example R code
Articles with example Ruby code | Reflective programming | [
"Technology"
] | 3,013 | [
"Programming language comparisons",
"Computing comparisons"
] |
314,919 | https://en.wikipedia.org/wiki/Boolean%20prime%20ideal%20theorem | In mathematics, the Boolean prime ideal theorem states that ideals in a Boolean algebra can be extended to prime ideals. A variation of this statement for filters on sets is known as the ultrafilter lemma. Other theorems are obtained by considering different mathematical structures with appropriate notions of ideals, for example, rings and prime ideals (of ring theory), or distributive lattices and maximal ideals (of order theory). This article focuses on prime ideal theorems from order theory.
Although the various prime ideal theorems may appear simple and intuitive, they cannot be deduced in general from the axioms of Zermelo–Fraenkel set theory without the axiom of choice (abbreviated ZF). Instead, some of the statements turn out to be equivalent to the axiom of choice (AC), while others—the Boolean prime ideal theorem, for instance—represent a property that is strictly weaker than AC. It is due to this intermediate status between ZF and ZF + AC (ZFC) that the Boolean prime ideal theorem is often taken as an axiom of set theory. The abbreviations BPI or PIT (for Boolean algebras) are sometimes used to refer to this additional axiom.
Prime ideal theorems
An order ideal is a (non-empty) directed lower set. If the considered partially ordered set (poset) has binary suprema (a.k.a. joins), as do the posets within this article, then this is equivalently characterized as a non-empty lower set I that is closed for binary suprema (that is, x, y ∈ I implies x ∨ y ∈ I). An ideal I is prime if its set-theoretic complement in the poset is a filter (that is, x ∧ y ∈ I implies x ∈ I or y ∈ I). Ideals are proper if they are not equal to the whole poset.
Historically, the first statement relating to later prime ideal theorems was in fact referring to filters—subsets that are ideals with respect to the dual order. The ultrafilter lemma states that every filter on a set is contained within some maximal (proper) filter—an ultrafilter. Recall that filters on a set are proper filters of the Boolean algebra of its powerset. In this special case, maximal filters (i.e. filters that are not strict subsets of any proper filter) and prime filters (i.e. filters that, whenever they contain a union of subsets X ∪ Y, also contain X or Y) coincide. The dual of this statement thus assures that every ideal of a powerset is contained in a prime ideal.
The above statement led to various generalized prime ideal theorems, each of which exists in a weak and in a strong form. Weak prime ideal theorems state that every non-trivial algebra of a certain class has at least one prime ideal. In contrast, strong prime ideal theorems require that every ideal that is disjoint from a given filter can be extended to a prime ideal that is still disjoint from that filter. In the case of algebras that are not posets, one uses different substructures instead of filters. Many forms of these theorems are actually known to be equivalent, so that the assertion that "PIT" holds is usually taken as the assertion that the corresponding statement for Boolean algebras (BPI) is valid.
Another variation of similar theorems is obtained by replacing each occurrence of prime ideal by maximal ideal. The corresponding maximal ideal theorems (MIT) are often—though not always—stronger than their PIT equivalents.
Boolean prime ideal theorem
The Boolean prime ideal theorem is the strong prime ideal theorem for Boolean algebras. Thus the formal statement is:
Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then I is contained in some prime ideal of B that is disjoint from F.
The weak prime ideal theorem for Boolean algebras simply states:
Every Boolean algebra contains a prime ideal.
We refer to these statements as the weak and strong BPI. The two are equivalent, as the strong BPI clearly implies the weak BPI, and the reverse implication can be achieved by using the weak BPI to find prime ideals in the appropriate quotient algebra.
The BPI can be expressed in various ways. For this purpose, recall the following theorem:
For any ideal I of a Boolean algebra B, the following are equivalent:
I is a prime ideal.
I is a maximal ideal, i.e. for any proper ideal J, if I is contained in J then I = J.
For every element a of B, I contains exactly one of {a, ¬a}.
This theorem is a well-known fact for Boolean algebras. Its dual establishes the equivalence of prime filters and ultrafilters. Note that the last property is in fact self-dual—only the prior assumption that I is an ideal gives the full characterization. All of the implications within this theorem can be proven in ZF.
Thus the following (strong) maximal ideal theorem (MIT) for Boolean algebras is equivalent to BPI:
Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then I is contained in some maximal ideal of B that is disjoint from F.
Note that one requires "global" maximality, not just maximality with respect to being disjoint from F. Yet, this variation yields another equivalent characterization of BPI:
Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then I is contained in some ideal of B that is maximal among all ideals disjoint from F.
The fact that this statement is equivalent to BPI is easily established by noting the following theorem: For any distributive lattice L, if an ideal I is maximal among all ideals of L that are disjoint to a given filter F, then I is a prime ideal. The proof for this statement (which can again be carried out in ZF set theory) is included in the article on ideals. Since any Boolean algebra is a distributive lattice, this shows the desired implication.
All of the above statements are now easily seen to be equivalent. Going even further, one can exploit the fact that the dual orders of Boolean algebras are exactly the Boolean algebras themselves. Hence, when taking the equivalent duals of all former statements, one ends up with a number of theorems that equally apply to Boolean algebras, but where every occurrence of ideal is replaced by filter. It is worth noting that for the special case where the Boolean algebra under consideration is a powerset with the subset ordering, the "maximal filter theorem" is called the ultrafilter lemma.
Summing up, for Boolean algebras, the weak and strong MIT, the weak and strong PIT, and these statements with filters in place of ideals are all equivalent. It is known that all of these statements are consequences of the Axiom of Choice, AC, (the easy proof makes use of Zorn's lemma), but cannot be proven in ZF (Zermelo-Fraenkel set theory without AC), if ZF is consistent. Yet, the BPI is strictly weaker than the axiom of choice, though the proof of this statement, due to J. D. Halpern and Azriel Lévy is rather non-trivial.
Further prime ideal theorems
The prototypical properties that were discussed for Boolean algebras in the above section can easily be modified to include more general lattices, such as distributive lattices or Heyting algebras. However, in these cases maximal ideals are different from prime ideals, and the relation between PITs and MITs is not obvious.
Indeed, it turns out that the MITs for distributive lattices and even for Heyting algebras are equivalent to the axiom of choice. On the other hand, it is known that the strong PIT for distributive lattices is equivalent to BPI (i.e. to the MIT and PIT for Boolean algebras). Hence this statement is strictly weaker than the axiom of choice. Furthermore, observe that Heyting algebras are not self dual, and thus using filters in place of ideals yields different theorems in this setting. Maybe surprisingly, the MIT for the duals of Heyting algebras is not stronger than BPI, which is in sharp contrast to the abovementioned MIT for Heyting algebras.
Finally, prime ideal theorems do also exist for other (not order-theoretical) abstract algebras. For example, the MIT for rings implies the axiom of choice. This situation requires replacing the order-theoretic term "filter" by other concepts—for rings a "multiplicatively closed subset" is appropriate.
The ultrafilter lemma
A filter on a set X is a nonempty collection of nonempty subsets of X that is closed under finite intersection and under superset. An ultrafilter is a maximal filter.
The ultrafilter lemma states that every filter on a set X is a subset of some ultrafilter on X.
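Written out symbolically (a restatement of the definitions above, with X the underlying set and P(X) its powerset): a family F ⊆ P(X) is a filter if F ≠ ∅, ∅ ∉ F, A ∩ B ∈ F whenever A, B ∈ F, and B ∈ F whenever A ∈ F and A ⊆ B ⊆ X. A filter U is an ultrafilter if no filter on X properly contains it; equivalently, for every A ⊆ X exactly one of A and X \ A belongs to U. The ultrafilter lemma then asserts that for every filter F on X there exists an ultrafilter U on X with F ⊆ U.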
An ultrafilter that does not contain finite sets is called "non-principal". The ultrafilter lemma, and in particular the existence of non-principal ultrafilters (consider the filter of all sets with finite complements), can be proven using Zorn's lemma.
The ultrafilter lemma is equivalent to the Boolean prime ideal theorem, with the equivalence provable in ZF set theory without the axiom of choice. The idea behind the proof is that the subsets of any set form a Boolean algebra partially ordered by inclusion, and any Boolean algebra is representable as an algebra of sets by Stone's representation theorem.
If the set is finite then the ultrafilter lemma can be proven from the axioms of ZF. This is no longer true for infinite sets; an additional axiom must be assumed. Zorn's lemma, the axiom of choice, and Tychonoff's theorem can all be used to prove the ultrafilter lemma. The ultrafilter lemma is strictly weaker than the axiom of choice.
The ultrafilter lemma has many applications in topology. The ultrafilter lemma can be used to prove the Hahn-Banach theorem and the Alexander subbase theorem.
Applications
Intuitively, the Boolean prime ideal theorem states that there are "enough" prime ideals in a Boolean algebra in the sense that we can extend every ideal to a maximal one. This is of practical importance for proving Stone's representation theorem for Boolean algebras, a special case of Stone duality, in which one equips the set of all prime ideals with a certain topology and can indeed regain the original Boolean algebra (up to isomorphism) from this data. Furthermore, it turns out that in applications one can freely choose either to work with prime ideals or with prime filters, because every ideal uniquely determines a filter: the set of all Boolean complements of its elements. Both approaches are found in the literature.
Many other theorems of general topology that are often said to rely on the axiom of choice are in fact equivalent to BPI. For example, the theorem that a product of compact Hausdorff spaces is compact is equivalent to it. If we leave out "Hausdorff" we get a theorem equivalent to the full axiom of choice.
In graph theory, the de Bruijn–Erdős theorem is another equivalent to BPI. It states that, if a given infinite graph requires at least some finite number k of colors in any graph coloring, then it has a finite subgraph that also requires k colors.
A not too well known application of the Boolean prime ideal theorem is the existence of a non-measurable set (the example usually given is the Vitali set, which requires the axiom of choice). From this and the fact that the BPI is strictly weaker than the axiom of choice, it follows that the existence of non-measurable sets is strictly weaker than the axiom of choice.
In linear algebra, the Boolean prime ideal theorem can be used to prove that any two bases of a given vector space have the same cardinality.
See also
List of Boolean algebra topics
Notes
References
An easy to read introduction, showing the equivalence of PIT for Boolean algebras and distributive lattices.
The theory in this book often requires choice principles. The notes on various chapters discuss the general relation of the theorems to PIT and MIT for various structures (though mostly lattices) and give pointers to further literature.
Discusses the status of the ultrafilter lemma.
Gives many equivalent statements for the BPI, including prime ideal theorems for other algebraic structures. PITs are considered as special instances of separation lemmas.
Axiom of choice
Boolean algebra
Order theory
Theorems in lattice theory | Boolean prime ideal theorem | [
"Mathematics"
] | 2,711 | [
"Boolean algebra",
"Mathematical logic",
"Mathematical axioms",
"Fields of abstract algebra",
"Axiom of choice",
"Axioms of set theory",
"Order theory"
] |
314,928 | https://en.wikipedia.org/wiki/Aquatic%20animal | An aquatic animal is any animal, whether vertebrate or invertebrate, that lives in a body of water for all or most of its lifetime. Aquatic animals generally conduct gas exchange in water by extracting dissolved oxygen via specialised respiratory organs called gills, through the skin or across enteral mucosae, although some are evolved from terrestrial ancestors that re-adapted to aquatic environments (e.g. marine reptiles and marine mammals), in which case they actually use lungs to breathe air and are essentially holding their breath when living in water. Some species of gastropod mollusc, such as the eastern emerald sea slug, are even capable of kleptoplastic photosynthesis via endosymbiosis with ingested yellow-green algae.
Almost all aquatic animals reproduce in water, either oviparously or viviparously, and many species routinely migrate between different water bodies during their life cycle. Some animals have fully aquatic life stages (typically as eggs and larvae), while as adults they become terrestrial or semi-aquatic after undergoing metamorphosis. Such examples include amphibians such as frogs, many flying insects such as mosquitoes, mayflies, dragonflies, damselflies and caddisflies, as well as some species of cephalopod molluscs such as the algae octopus (whose larvae are completely planktonic, but adults are highly terrestrial).
Aquatic animals are a diverse polyphyletic group based purely on the natural environments they inhabit, and many morphological and behavioral similarities among them are the result of convergent evolution. They are distinct from terrestrial and semi-aquatic animals, who can survive away from water bodies, while aquatic animals often die of dehydration or hypoxia after prolonged removal out of water due to either gill failure or compressive asphyxia by their own body weight (as in the case of whale beaching). Along with aquatic plants, algae and microbes, aquatic animals form the food webs of various marine, brackish and freshwater aquatic ecosystems.
Description
The term aquatic can be applied to animals that live in either fresh water or salt water. However, the adjective marine is most commonly used for animals that live in saltwater or sometimes brackish water, i.e. in oceans, shallow seas, estuaries, etc.
Aquatic animals can be separated into four main groups according to their positions within the water column.
Neustons ("floaters"), more specifically the zooneustons, inhabit the surface ecosystem and use buoyancy to stay at the water surface, sometimes with appendages hanging from the underside for foraging (e.g. Portuguese man o' war, chondrophores and the buoy barnacle). They only move around via passive locomotion, meaning they have vagility but no motility.
Planktons ("drifters"), more specifically the metazoan zooplanktons, are suspended within the water column with no motility (most aquatic larvae) or limited motility (e.g. jellyfish, salps, larvaceans, and escape responses of copepods), causing them to be mostly carried by the water currents.
Nektons ("swimmers") have active motility that are strong enough to propel and overcome the influence of water currents. These are the aquatic animals most familiar to the common knowledge, as their movements are obvious on the macroscopic scale and the cultivation and harvesting of their biomass is most important to humans as seafoods. Nektons often have powerful tails, paddle/fan-shaped appendages with large wetted surfaces (e.g. fins, flippers or webbed feet) and/or jet propulsion (in the case of cephalopods) to achieve aquatic locomotion.
Benthos ("bottom dwellers") inhabit the benthic zone at the floor of water bodies, which include both shallow sea (coastal, littoral and neritic) and deep sea communities. These animals include sessile organisms (e.g. sponges, sea anemones, corals, sea pens, sea lilies and sea squirts, some of which are reef-builders crucial to the biodiversity of marine ecosystems), sedentary filter feeders (e.g. bivalve molluscs) and ambush predators (e.g. flatfishes and bobbit worms, who often burrow or camouflage within the marine sediment), and more actively moving bottom feeders who swim (e.g. demersal fishes) and crawl around (e.g. decapod crustaceans, marine chelicerates, octopus, most non-bivalvian molluscs, echinoderms etc.). Many benthic animals are algivores, detrivores and scavengers who are important basal consumers and intermediate recyclers in the marine nitrogen cycle.
Aquatic animals (especially freshwater animals) are often of special concern to conservationists because of the fragility of their environments. Aquatic animals are subject to pressure from overfishing/hunting, destructive fishing, water pollution, acidification, climate change and competition from invasive species. Many aquatic ecosystems are at risk of habitat destruction/fragmentation, which puts aquatic animals at risk as well. Aquatic animals play an important role in the world. The biodiversity of aquatic animals provide food, energy, and even jobs.
Freshwater aquatic animals
Fresh water creates a hypotonic environment for aquatic organisms. This is problematic for organisms with pervious skins and gills, whose cell membranes may rupture if excess water is not excreted. Some protists accomplish this using contractile vacuoles, while freshwater fish excrete excess water via the kidney. Although most aquatic organisms have a limited ability to regulate their osmotic balance and therefore can only live within a narrow range of salinity, diadromous fish have the ability to migrate between fresh and saline water bodies. During these migrations they undergo changes to adapt to the surroundings of the changed salinities; these processes are hormonally controlled. The European eel (Anguilla anguilla) uses the hormone prolactin, while in salmon (Salmo salar) the hormone cortisol plays a key role during this process.
Freshwater molluscs include freshwater snails and freshwater bivalves. Freshwater crustaceans include freshwater shrimps, crabs, crayfish and copepods.
Air-breathing aquatic animals
In addition to water-breathing animals (e.g. fish, most molluscs, etc.), the term "aquatic animal" can be applied to air-breathing tetrapods that have evolved for aquatic life. The most proliferative extant group are the marine mammals, such as Cetacea (whales, dolphins and porpoises, with some freshwater species) and Sirenia (dugongs and manatees), which are so fully adapted to aquatic life that they cannot survive on land at all (where they die from beaching), as well as the highly aquatically adapted but land-dwelling pinnipeds (true seals, eared seals and the walrus). The term "aquatic mammal" is also applied to riparian mammals like the river otter (Lontra canadensis) and beavers (family Castoridae), although they are technically semiaquatic or amphibious. Unlike the more common gill-bearing aquatic animals, these air-breathing animals have lungs (which are homologous to the swim bladders in bony fish) and need to surface periodically to breathe, but their ranges are not restricted by oxygen saturation in water, although salinity changes can still affect their physiology to an extent.
There are also reptilian animals that are highly evolved for life in water, although most extant aquatic reptiles, including crocodilians, turtles, water snakes and the marine iguana, are technically semi-aquatic rather than fully aquatic, and most of them only inhabit freshwater ecosystems. Marine reptiles were once a dominant group of ocean predators that altered the marine fauna during the Mesozoic, although most of them died out during the Cretaceous-Paleogene extinction event and now only the sea turtles (the only remaining descendants of the Mesozoic marine reptiles) and sea snakes (which only evolved during the Cenozoic) remain fully aquatic in saltwater ecosystems.
Amphibians, while still requiring access to water to inhabit, are separated into their own ecological classification. The majority of amphibians — except the order Gymnophiona (caecilians), which are mainly terrestrial burrowers — have a fully aquatic larval form known as tadpoles, but those from the order Anura (frogs and toads) and some of the order Urodela (salamanders) metamorphose into lung-bearing and sometimes skin-breathing terrestrial adults, and most of them return to the water to breed. The axolotl, a Mexican salamander that retains its larval external gills into adulthood, is an example of an amphibian that remains fully aquatic throughout its entire life cycle.
Certain amphibious fish also evolved to breathe air to survive oxygen-deprived waters, such as lungfishes, mudskippers, labyrinth fishes, bichirs, arapaima and walking catfish. Their abilities to breathe atmospheric oxygen are achieved via skin-breathing, enteral respiration, or specialized gill organs such as the labyrinth organ and even primitive lungs (lungfish and bichirs).
Most molluscs have gills, while some freshwater gastropods (e.g. Planorbidae) have evolved pallial lungs and some amphibious species (e.g. Ampullariidae) have both. Many species of octopus have cutaneous respiration that allows them to survive out of water in intertidal zones, with at least one species (Abdopus aculeatus) routinely emerging onto land to hunt crabs among the tidal pools of rocky shores.
Importance
Environmental
Aquatic animals play an important role for the environment as indicator species, as they are particularly sensitive to deterioration in water quality and climate change. Biodiversity of aquatic animals is also an important factor for the sustainability of aquatic ecosystems as it reflects the food web status and the carrying capacity of the local habitats. Many migratory aquatic animals, predominantly forage fish (such as sardines) and euryhaline fish (such as salmon), are keystone species that accumulate and transfer biomass between marine, freshwater and even to terrestrial ecosystems.
Importance to humans
As a food source
Aquatic animals are important to humans as a source of food (i.e. seafood) and as raw material for fodders (e.g. feeder fish and fish meal), pharmaceuticals (e.g. fish oil, krill oil, cytarabine and bryostatin) and various industrial chemicals (e.g. chitin and bioplastics, formerly also whale oil). The harvesting of aquatic animals, especially finfish, shellfish and inkfish, provides direct and indirect employment to the livelihood of over 500 million people in developing countries, and both the fishing industry and aquaculture make up a major component of the primary sector of the economy.
The United Nations Food and Agriculture Organization estimates that global consumption of aquatic animals in 2022 was 185 million tonnes (live weight equivalent), an increase of 4 percent from 2020. The value of the 2022 global trade was estimated at USD 452 billion, comprising USD 157 billion for wild fisheries and USD 296 billion for aquaculture. Of the total 185 million tonnes of aquatic animals produced in 2022, about 164.6 million tonnes (89%) were destined for human consumption, equivalent to an estimated 20.7 kg per capita. The remaining 20.8 million tonnes were destined for non-food uses, to produce mainly fishmeal and fish oil. In 2022, China remained the major producer (36% of the total), followed by India (8%), Indonesia (7%), Vietnam (5%) and Peru (3%).
Total fish production in 2016 reached an all-time high of 171 million tonnes, of which 88% was utilized for direct human consumption, resulting in a record-high per capita consumption. Since 1961 the annual global growth in fish consumption has been twice as high as population growth. While annual growth of aquaculture has declined in recent years, significant double-digit growth is still recorded in some countries, particularly in Africa and Asia. Overfishing and destructive fishing practices fuelled by commercial incentives have reduced fish stocks beyond sustainable levels in many world regions, causing the fishery industry to maladaptively fish down the food web. It was estimated in 2014 that global fisheries were adding US$270 billion a year to global GDP, but by full implementation of sustainable fishing, that figure could rise by as much as US$50 billion. The UN Food and Agriculture Organization projects world production of aquatic animals to reach 205 million tonnes by 2032.
Where sex-disaggregated data are available, approximately 24 percent of the total workforce were women; of these, 53 percent were employed in the sector on a full-time basis, a great improvement since 1995, when only 32 percent of women were employed full time.
Aquatic animal are highly perishable and several chemical and biological changes take place immediately after death; this can result in spoilage and food safety risks if good handling and preservation practices are not applied all along the supply chain. These practices are based on temperature reduction (chilling and freezing), heat treatment (canning, boiling and smoking), reduction of available water (drying, salting and smoking) and changing of the storage environment (vacuum packing, modified atmosphere packaging and refrigeration). Aquatic animal products also require special facilities such as cold storage and refrigerated transport, and rapid delivery to consumers.
Recreational fishing
In addition to commercial and subsistence fishing, recreational fishing is a popular pastime in both developed and developing countries, and the manufacturing, retail and service sectors associated with recreational fishing have together conglomerated into a multibillion-dollar industry. In 2014 alone, around 11 million saltwater sportfishing participants in the United States generated USD$58 billion of retail revenue (comparatively, commercial fishing generated USD$141 billion that same year). In 2021, the total revenue of the recreational fishing industry in the United States overtook those of Lockheed Martin, Intel, Chrysler and Google; and together with personnel salary (about USD$39.5 billion) and various tolls and fees collected by fisheries management agencies (about USD$17 billion), contributed almost USD$129 billion to the GDP of the United States, roughly 1% of the national GDP and more than the economic sum of 17 U.S. states.
See also
Aquatic ecosystem
Aquatic locomotion
Aquatic mammal
Aquatic plant
Freshwater snail
Marine biology
Marine invertebrates
Marine mammal
Marine vertebrate
Terrestrial animal
Terrestrial ecosystem
Terrestrial locomotion
Terrestrial plant
Wetland indicator status
Zoology
Sources
References
Animals by adaptation | Aquatic animal | [
"Biology"
] | 3,073 | [
"Organisms by adaptation",
"Animals",
"Animals by adaptation"
] |
314,960 | https://en.wikipedia.org/wiki/Oxime | In organic chemistry, an oxime is an organic compound belonging to the imines, with the general formula , where R is an organic side-chain and R' may be hydrogen, forming an aldoxime, or another organic group, forming a ketoxime. O-substituted oximes form a closely related family of compounds. Amidoximes are oximes of amides () with general structure .
Oximes are usually generated by the reaction of hydroxylamine with aldehydes () or ketones (). The term oxime dates back to the 19th century, a combination of the words oxygen and imine.
Structure and properties
If the two side-chains on the central carbon are different from each other—either an aldoxime, or a ketoxime with two different "R" groups—the oxime can often have two different geometric stereoisomeric forms according to the E/Z configuration. An older terminology of syn and anti was used to identify especially aldoximes according to whether the R group was closer or further from the hydroxyl. Both forms are often stable enough to be separated from each other by standard techniques.
Oximes have three characteristic bands in the infrared spectrum, at positions corresponding to the stretching vibrations of their three types of bonds: 3600 cm−1 (O−H), 1665 cm−1 (C=N) and 945 cm−1 (N−O).
In aqueous solution, aliphatic oximes are 10²- to 10³-fold more resistant to hydrolysis than analogous hydrazones.
Preparation
Oximes can be synthesized by condensation of an aldehyde or a ketone with hydroxylamine. The condensation of aldehydes with hydroxylamine gives aldoximes, and ketoximes are produced from ketones and hydroxylamine. In general, oximes exist as colorless crystals or as thick liquids and are poorly soluble in water. Therefore, oxime formation can be used for the identification of ketone or aldehyde functional groups.
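In outline, the condensation can be written as the following general scheme (R and R′ stand for hydrogen or organic groups):

R(R′)C=O + NH2OH → R(R′)C=N−OH + H2O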
Oximes can also be obtained from the reaction of nitrites such as isoamyl nitrite with compounds containing an acidic hydrogen atom. Examples are the reaction of ethyl acetoacetate and sodium nitrite in acetic acid, the reaction of methyl ethyl ketone with ethyl nitrite in hydrochloric acid and a similar reaction with propiophenone, the reaction of phenacyl chloride, and the reaction of malononitrile with sodium nitrite in acetic acid.
A conceptually related reaction is the Japp–Klingemann reaction.
Reactions
The hydrolysis of oximes proceeds easily by heating in the presence of various inorganic acids, and the oximes decompose into the corresponding ketones or aldehydes, and hydroxylamines. The reduction of oximes by sodium metal, sodium amalgam, hydrogenation, or reaction with hydride reagents produces amines. Typically the reduction of aldoximes gives both primary amines and secondary amines; however, reaction conditions can be altered (such as the addition of potassium hydroxide in a 1/30 molar ratio) to yield solely primary amines.
In general, oximes can be changed to the corresponding amide derivatives by treatment with various acids. This reaction is called Beckmann rearrangement. In this reaction, a hydroxyl group is exchanged with the group that is in the anti position of the hydroxyl group. The amide derivatives that are obtained by Beckmann rearrangement can be transformed into a carboxylic acid by means of hydrolysis (base or acid catalyzed). Beckmann rearrangement is used for the industrial synthesis of caprolactam (see applications below).
The Ponzio reaction (1906) concerning the conversion of m-nitrobenzaldoxime to m-nitrophenyldinitromethane using dinitrogen tetroxide was the result of research into TNT analogues:
In the Neber rearrangement certain oximes are converted to the corresponding alpha-amino ketones.
Oximes can be dehydrated using acid anhydrides to yield corresponding nitriles.
Certain amidoximes react with benzenesulfonyl chloride to make substituted ureas in the Tiemann rearrangement:
Uses
In their largest application, oximes serve as an intermediate in the industrial production of caprolactam, a precursor to Nylon 6. About half of the world's supply of cyclohexanone, more than a million tonnes annually, is converted to the oxime. In the presence of a sulfuric acid catalyst, the oxime undergoes the Beckmann rearrangement to give the cyclic amide caprolactam:
Metal extractant
Oximes are commonly used as ligands and sequestering agents for metal ions. Dimethylglyoxime (dmgH2) is a reagent for the analysis of nickel and a popular ligand in its own right. In the typical reaction, a metal reacts with two equivalents of dmgH2 concomitant with ionization of one proton. Salicylaldoxime is a chelator in hydrometallurgy.
Amidoximes such as polyacrylamidoxime can be used to capture trace amounts of uranium from sea water. In 2017 researchers announced a configuration that absorbed up to nine times as much uranyl as previous fibers without saturating.
Other applications
Oxime compounds are used as antidotes for nerve agents. A nerve agent inactivates acetylcholinesterase by phosphorylation. Oxime compounds can reactivate acetylcholinesterase by attaching to phosphorus, forming an oxime-phosphonate, which then splits away from the acetylcholinesterase molecule. Oxime nerve-agent antidotes are pralidoxime (also known as 2-PAM), obidoxime, methoxime, HI-6, Hlo-7, and TMB-4. The effectiveness of the oxime treatment depends on the particular nerve agent used.
Perillartine, the oxime of perillaldehyde, is used as an artificial sweetener in Japan. It is 2000 times sweeter than sucrose.
Diaminoglyoxime is a key precursor to various compounds containing the highly reactive furazan ring.
Methyl ethyl ketoxime is a skin-preventing additive in many oil-based paints.
Buccoxime and 5-methyl-3-heptanone oxime ("Stemone") are perfume ingredients.
Fluvoxamine is used as an antidepressant.
See also
:Category:Oximes – specific chemicals containing this functional group
Nitrone – the N-oxide of an imine
References
Functional groups
Organic compounds
Chelating agents | Oxime | [
"Chemistry"
] | 1,448 | [
"Functional groups",
"Organic compounds",
"Oximes",
"Chelating agents",
"Process chemicals"
] |
314,978 | https://en.wikipedia.org/wiki/Expander%20cycle | The expander cycle is a power cycle of a bipropellant rocket engine. In this cycle, the fuel is used to cool the engine's combustion chamber, picking up heat and changing phase. The now heated and gaseous fuel then powers the turbine that drives the engine's fuel and oxidizer pumps before being injected into the combustion chamber and burned.
Because of the necessary phase change, the expander cycle is thrust limited by the square–cube law. When a bell-shaped nozzle is scaled, the nozzle surface area with which to heat the fuel increases as the square of the radius, but the volume of fuel to be heated increases as the cube of the radius. Thus, beyond approximately 300 kN (70,000 lbf) of thrust, there is no longer enough nozzle area to heat enough fuel to drive the turbines and hence the fuel pumps. Higher thrust levels can be achieved using a bypass expander cycle where a portion of the fuel bypasses the turbine and/or thrust chamber cooling passages and goes directly to the main chamber injector. Non-toroidal aerospike engines are not subject to the limitations from the square-cube law because the engine's linear shape does not scale isometrically: the fuel flow and nozzle area scale linearly with the engine's width. All expander cycle engines need to use a cryogenic fuel such as liquid hydrogen, liquid methane, or liquid propane that easily reaches its boiling point.
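A minimal numerical sketch of this square-cube limit (the heat-transfer coefficients below are made-up placeholders chosen only to show the scaling, not data for any real engine):

```python
# Square-cube scaling sketch for a bell-nozzle expander cycle.
# Heat picked up by the coolant grows with nozzle surface area (~ r^2),
# while the fuel flow that must be vaporized to drive the turbine grows
# with thrust, i.e. roughly with volume (~ r^3).

def heat_available(r, q_per_area=1.0):
    """Heat transferred to the coolant, proportional to nozzle area."""
    return q_per_area * r ** 2

def heat_required(r, q_per_volume=0.4):
    """Heat needed to vaporize the fuel flow, proportional to r^3."""
    return q_per_volume * r ** 3

for r in (0.5, 1.0, 2.0, 2.5, 3.0):
    ratio = heat_available(r) / heat_required(r)
    status = "enough heat" if ratio >= 1.0 else "thrust-limited"
    print(f"scaled radius {r:.1f}: available/required heat = {ratio:.2f} ({status})")
```

With these placeholder coefficients the ratio drops below 1 once the engine is scaled past a radius of 2.5, which is the same qualitative behavior that caps bell-nozzle expander cycle engines at modest thrust levels.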
Some expander cycle engines may use a gas generator of some kind to start the turbine and run the engine until the heat input from the thrust chamber and nozzle skirt increases as the chamber pressure builds up.
Some examples of expander cycle engines are the Aerojet Rocketdyne RL10 and the Vinci engine for Ariane 6.
Expander bleed cycle
This operational cycle is a modification of the traditional expander cycle. In the bleed (or open) cycle, instead of routing all of the heated propellant through the turbine and sending it back to be combusted, only a small portion of the heated propellant is used to drive the turbine and is then bled off, being vented overboard without going through the combustion chamber. The other portion is injected into the combustion chamber. Bleeding off the turbine exhaust allows for a higher turbopump efficiency by decreasing backpressure and maximizing the pressure drop through the turbine. Compared with a standard expander cycle, this allows higher engine thrust at the cost of efficiency by dumping the turbine exhaust.
The Mitsubishi LE-5A was the world's first expander bleed cycle engine to be put into operational service. The Mitsubishi LE-9 is the world's first expander bleed cycle engine to power a first stage.
Blue Origin chose the expander bleed cycle for the BE-3U engine used on the upper stage of its New Glenn launch vehicle.
Dual expander
Just as staged combustion can be implemented with separate oxidizer and fuel paths in the full-flow cycle, the expander cycle can be implemented on two separate paths as the dual expander cycle. The use of hot gases of the same chemistry as the liquid on the turbine and pump sides of the turbopumps eliminates the need for purges and some failure modes. Additionally, when the density of the fuel and oxidizer is significantly different, as it is in the H2/LOX case, the optimal turbopump speeds differ so much that a gearbox is needed between the fuel and oxidizer pumps. The use of the dual expander cycle, with separate turbines, eliminates this failure-prone piece of equipment.
Dual expander cycle can be implemented by either using separated sections on the regenerative cooling system for the fuel and the oxidizer, or by using a single fluid for cooling and a heat exchanger to boil the second fluid. In the first case, for example, you could use the fuel to cool the combustion chamber, and the oxidizer to cool the nozzle. In the second case, you could use the fuel to cool the whole engine and a heat exchanger to boil the oxidizer.
Advantages
The expander cycle has a number of advantages over other designs:
Low temperature: After they have turned gaseous, the propellants are usually near room temperature and do very little or no damage to the turbine, allowing the engine to be reusable. In contrast, gas-generator or staged combustion engines operate their turbines at high temperature.
Tolerance: During the development of the RL10, engineers were worried that insulation foam mounted on the inside of the tank might break off and damage the engine. They tested this by putting loose foam in a fuel tank and running it through the engine. The RL10 chewed it up without problems or noticeable degradation in performance. Conventional gas generators are in practice miniature rocket engines, with all the complexity that implies. Blocking even a small part of a gas generator can lead to a hot spot, which can cause violent loss of the engine. Using the engine bell as a 'gas generator' also makes it very tolerant of fuel contamination because of the wider fuel flow channels used.
Inherent safety: Because a bell-type expander-cycle engine is thrust limited, it can easily be designed to withstand its maximum thrust conditions. In other engine types, a stuck fuel valve or similar problem can lead to engine thrust spiraling out of control due to unintended feedback systems. Other engine types require complex mechanical or electronic controllers to ensure this does not happen. Expander cycles are by design incapable of malfunctioning that way.
Higher vacuum performance: Compared to a pressure-fed engine, pump-fed engines, and hence expander cycle engines, have higher combustion chamber pressures. An increased combustion chamber pressure allows for a reduced throat area Ath and therefore leads to a larger expansion ratio e = Ae/Ath for an identical nozzle exit area Ae, which ultimately leads to higher vacuum performance (a small worked example follows this list).
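A small worked example of the vacuum-performance point above; the pressures and areas are arbitrary illustrative numbers, and the only physics assumed is that, for choked flow at a fixed mass flow rate, the required throat area scales roughly as 1/Pc:

```python
# Higher chamber pressure -> smaller throat -> larger expansion ratio
# for the same nozzle exit area. Values are illustrative only.

A_exit = 1.0         # fixed nozzle exit area (arbitrary units)
A_throat_ref = 0.05  # throat area at the reference chamber pressure
P_ref = 2.0          # reference chamber pressure, MPa

for P_c in (2.0, 4.0, 8.0):
    A_throat = A_throat_ref * P_ref / P_c  # choked flow: A_t ~ 1/Pc at fixed mass flow
    print(f"Pc = {P_c:.0f} MPa -> expansion ratio Ae/At = {A_exit / A_throat:.0f}")
```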
Usage
Expander cycle engines include the following:
Aerojet Rocketdyne RL10
Pratt & Whitney RL60
ArianeGroup Vinci
CADB & Pratt & Whitney RD-0146
Chinese YF-75D
Mitsubishi Heavy Industries LE-5A / 5B
Mitsubishi Heavy Industries LE-9
Aerojet Rocketdyne & MHI MARC-60 (MB-60)
Blue Origin BE-3U and BE-7
Avio M10
Demonstration Rocket for Agile Cislunar Operations (DRACO) nuclear thermal engine
Comparison of upper-stage expander-cycle engines
See also
Gas-generator cycle
Combustion tap-off cycle
Staged combustion cycle
Pressure-fed engine
References
External links
Rocket power cycles
Rocket propulsion
Rocket engines
Spacecraft propulsion
Engineering thermodynamics | Expander cycle | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,355 | [
"Engines",
"Rocket engines",
"Engineering thermodynamics",
"Thermodynamics",
"Mechanical engineering"
] |
314,983 | https://en.wikipedia.org/wiki/Hydroxylamine | Hydroxylamine (also known as hydroxyammonia) is an inorganic compound with the chemical formula NH2OH. The pure compound is a white, hygroscopic crystalline solid. Hydroxylamine is almost always provided and used as an aqueous solution. It is consumed almost exclusively to produce Nylon-6. The oxidation of ammonia (NH3) to hydroxylamine is a step in biological nitrification.
History
Hydroxylamine was first prepared as hydroxylammonium chloride in 1865 by the German chemist Wilhelm Clemens Lossen (1838-1906); he reacted tin and hydrochloric acid in the presence of ethyl nitrate. It was first prepared in pure form in 1891 by the Dutch chemist Lobry de Bruyn and by the French chemist Léon Maurice Crismer (1858-1944). The coordination complex (zinc dichloride di(hydroxylamine)), known as Crismer's salt, releases hydroxylamine upon heating.
Production
Hydroxylamine or its salts (salts containing the hydroxylammonium cation, [NH3OH]+) can be produced via several routes, but only two are commercially viable. It is also produced naturally, as discussed in the section on biochemistry.
From nitric oxide
Hydroxylamine is mainly produced as its sulfuric acid salt, hydroxylammonium hydrogen sulfate ([NH3OH]HSO4), by the hydrogenation of nitric oxide over platinum catalysts in the presence of sulfuric acid.
Raschig process
Another route to hydroxylamine is the Raschig process: aqueous ammonium nitrite is reduced by bisulfite (HSO3−) and sulfur dioxide (SO2) at 0 °C to yield a hydroxylamido-N,N-disulfonate anion:
This anion is then hydrolyzed to give hydroxylammonium sulfate, (NH3OH)2SO4:
Solid hydroxylamine can be collected by treatment with liquid ammonia. Ammonium sulfate, (NH4)2SO4, a side-product insoluble in liquid ammonia, is removed by filtration; the liquid ammonia is evaporated to give the desired product.
The net reaction is:
A base then frees the hydroxylamine from the salt:
Other methods
Julius Tafel discovered that hydroxylamine hydrochloride or sulfate salts can be produced by electrolytic reduction of nitric acid in the presence of HCl or H2SO4, respectively:
Hydroxylamine can also be produced by the reduction of nitrous acid or potassium nitrite with bisulfite:
(100 °C, 1 h)
Hydrochloric acid disproportionates nitromethane to hydroxylamine hydrochloride and carbon monoxide via the hydroxamic acid.
A direct lab synthesis of hydroxylamine from molecular nitrogen in water plasma was demonstrated in 2024.
Reactions
Hydroxylamine reacts with electrophiles, such as alkylating agents, which can attach to either the oxygen or the nitrogen atoms:
The reaction of hydroxylamine with an aldehyde or ketone produces an oxime.
(in NaOH solution)
This reaction is useful in the purification of ketones and aldehydes: if hydroxylamine is added to an aldehyde or ketone in solution, an oxime forms, which generally precipitates from solution; heating the precipitate with an inorganic acid then restores the original aldehyde or ketone.
Oximes such as dimethylglyoxime are also employed as ligands.
Hydroxylamine reacts with chlorosulfonic acid to give hydroxylamine-O-sulfonic acid:
When heated, hydroxylamine explodes. A detonator can easily explode aqueous solutions concentrated above 80% by weight, and even a 50% solution might prove detonable if tested in bulk. In air, the combustion is rapid and complete: 4 NH2OH + O2 → 2 N2 + 6 H2O.
Absent air, pure hydroxylamine requires stronger heating and the detonation does not complete combustion:
Partial isomerisation to the amine oxide contributes to the high reactivity.
Functional group
Hydroxylamine derivatives substituted in place of the hydroxyl or amine hydrogen are (respectively) called O- or N-hydroxylamines. In general, N-hydroxylamines are more common. Examples are N-tert-butylhydroxylamine or the glycosidic bond in calicheamicin. N,O-Dimethylhydroxylamine is a precursor to Weinreb amides.
Similarly to amines, one can distinguish hydroxylamines by their degree of substitution: primary, secondary and tertiary. When stored exposed to air for weeks, secondary hydroxylamines degrade to nitrones.
N-organylhydroxylamines, RNHOH, where R is an organyl group, can be reduced to the corresponding amines, RNH2:
Synthesis
Amine oxidation with benzoyl peroxide is the most common method to synthesize hydroxylamines. Care must be taken to prevent over-oxidation to a nitrone. Other methods include:
Hydrogenation of an oxime
Alkylating a precursor hydroxylamine
Amine oxide pyrolysis (the Cope reaction)
Uses
Approximately 95% of hydroxylamine is used in the synthesis of cyclohexanone oxime, a precursor to Nylon 6. The treatment of this oxime with acid induces the Beckmann rearrangement to give caprolactam (3). The latter can then undergo a ring-opening polymerization to yield Nylon 6.
Laboratory uses
Hydroxylamine and its salts are commonly used as reducing agents in myriad organic and inorganic reactions. They can also act as antioxidants for fatty acids.
High concentrations of hydroxylamine are used by biologists to introduce mutations by acting as a DNA nucleobase amine-hydroxylating agent. It is thought to act mainly via hydroxylation of cytidine to hydroxyaminocytidine, which is misread as thymidine, thereby inducing C:G to T:A transition mutations. However, high concentrations or over-reaction of hydroxylamine in vitro can apparently modify other regions of the DNA and lead to other types of mutations. This may be due to the ability of hydroxylamine to undergo uncontrolled free radical chemistry in the presence of trace metals and oxygen; indeed, in the absence of these free-radical effects, Ernst Freese noted that hydroxylamine was unable to induce reversion of its C:G to T:A transition mutations and even considered hydroxylamine the most specific mutagen known. In practice, it has been largely surpassed by more potent mutagens such as EMS, ENU, or nitrosoguanidine, but being a very small mutagenic compound with high specificity, it has found some specialized uses, such as mutation of DNA packed within bacteriophage capsids and mutation of purified DNA in vitro.
An alternative industrial synthesis of paracetamol developed by Hoechst–Celanese involves the conversion of a ketone to a ketoxime with hydroxylamine.
Some non-chemical uses include removal of hair from animal hides and photographic developing solutions. In the semiconductor industry, hydroxylamine is often a component in the "resist stripper", which removes photoresist after lithography.
Hydroxylamine can also be used to better characterize the nature of a post-translational modification onto proteins. For example, poly(ADP-Ribose) chains are sensitive to hydroxylamine when attached to glutamic or aspartic acids but not sensitive when attached to serines. Similarly, Ubiquitin molecules bound to serines or threonines residues are sensitive to hydroxylamine, but those bound to lysine (isopeptide bond) are resistant.
Biochemistry
In biological nitrification, the oxidation of ammonia (NH3) to hydroxylamine is mediated by ammonia monooxygenase (AMO). Hydroxylamine oxidoreductase (HAO) further oxidizes hydroxylamine to nitrite.
Cytochrome P460, an enzyme found in the ammonia-oxidizing bacterium Nitrosomonas europaea, can convert hydroxylamine to nitrous oxide, a potent greenhouse gas.
Hydroxylamine can also be used to highly selectively cleave asparaginyl-glycine peptide bonds in peptides and proteins. It also bonds to and permanently disables (poisons) heme-containing enzymes. It is used as an irreversible inhibitor of the oxygen-evolving complex of photosynthesis on account of its similar structure to water.
Safety and environmental concerns
Hydroxylamine can be an explosive, with a theoretical decomposition energy of about 5 kJ/g, and aqueous solutions above 80% can be easily detonated by detonator or strong heating under confinement. At least two factories dealing in hydroxylamine have been destroyed since 1999 with loss of life. It is known, however, that ferrous and ferric iron salts accelerate the decomposition of 50% solutions. Hydroxylamine and its derivatives are more safely handled in the form of salts.
It is an irritant to the respiratory tract, skin, eyes, and other mucous membranes. It may be absorbed through the skin, is harmful if swallowed, and is a possible mutagen.
See also
Amine
Amino acid
References
Further reading
Hydroxylamine
Walters, Michael A. and Andrew B. Hoem. "Hydroxylamine." e-Encyclopedia of Reagents for Organic Synthesis. 2001.
Schupf Computational Chemistry Lab
M. W. Rathke A. A. Millard "Boranes in Functionalization of Olefins to Amines: 3-Pinanamine" Organic Syntheses, Coll. Vol. 6, p. 943; Vol. 58, p. 32. (preparation of hydroxylamine-O-sulfonic acid).
External links
Calorimetric studies of hydroxylamine decomposition
Chemical company BASF info
MSDS
Deadly detonation of hydroxylamine at Concept Sciences facility
Functional groups
Inorganic amines
Photographic chemicals
Rocket fuels
Reducing agents
Nitrogen oxoacids | Hydroxylamine | [
"Chemistry"
] | 2,090 | [
"Functional groups",
"Hydroxylamines",
"Redox",
"Reducing agents"
] |
315,008 | https://en.wikipedia.org/wiki/Magnetoresistive%20RAM | Magnetoresistive random-access memory (MRAM) is a type of non-volatile random-access memory which stores data in magnetic domains. Developed in the mid-1980s, proponents have argued that magnetoresistive RAM will eventually surpass competing technologies to become a dominant or even universal memory. Currently, memory technologies in use such as flash RAM and DRAM have practical advantages that have so far kept MRAM in a niche role in the market.
Description
Unlike conventional RAM chip technologies, data in MRAM is not stored as electric charge or current flows, but by magnetic storage elements. The elements are formed from two ferromagnetic plates, each of which can hold a magnetization, separated by a thin insulating layer. One of the two plates is a permanent magnet set to a particular polarity; the other plate's magnetization can be changed to match that of an external field to store memory. This configuration is known as a magnetic tunnel junction (MTJ) and is the simplest structure for an MRAM bit. A memory device is built from a grid of such "cells".
The simplest method of reading is accomplished by measuring the electrical resistance of the cell. A particular cell is (typically) selected by powering an associated transistor that switches current from a supply line through the cell to ground. Because of tunnel magnetoresistance, the electrical resistance of the cell changes with the relative orientation of the magnetization in the two plates. By measuring the resulting current, the resistance inside any particular cell can be determined, and from this the magnetization polarity of the writable plate. Typically if the two plates have the same magnetization alignment (low resistance state) this is considered to mean "1", while if the alignment is antiparallel the resistance will be higher (high resistance state) and this means "0".
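To make the read operation above concrete, here is a toy threshold decision; the parallel/antiparallel-to-bit mapping follows the convention just stated, while the resistance and voltage values are invented placeholders:

```python
# Toy MRAM read: infer the stored bit from the measured cell resistance.
# Parallel alignment -> low resistance -> "1"; antiparallel -> high resistance -> "0".

R_PARALLEL = 1000.0      # placeholder low-resistance state, ohms
R_ANTIPARALLEL = 2000.0  # placeholder high-resistance state, ohms
THRESHOLD = (R_PARALLEL + R_ANTIPARALLEL) / 2

def read_bit(v_supply, i_measured):
    """Select the cell, measure the current, and threshold the resistance."""
    resistance = v_supply / i_measured
    return 1 if resistance < THRESHOLD else 0

print(read_bit(1.2, 1.2 / 1050))  # near the low-resistance state -> 1
print(read_bit(1.2, 1.2 / 1900))  # near the high-resistance state -> 0
```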
Data is written to the cells using a variety of means. In the simplest "classic" design, each cell lies between a pair of write lines arranged at right angles to each other, parallel to the cell, one above and one below the cell. When current is passed through them, an induced magnetic field is created at the junction, which the writable plate picks up. This pattern of operation is similar to magnetic-core memory, a system commonly used in the 1960s.
However, due to process and material variations, an array of memory cells has a distribution of switching fields with a deviation σ. Therefore, to program all the bits in a large array with the same current, the applied field needs to be larger than the mean "selected" switching field by more than 6σ. In addition, the applied field must be kept below a maximum value. Thus, this "conventional" MRAM must keep these two distributions well separated. As a result, there is a narrow operating window for programming fields, and only inside this window can all the bits be programmed without errors or disturbs. In 2005, "Savtchenko switching", which relies on the unique behavior of a synthetic antiferromagnet (SAF) free layer, was applied to solve this problem. The SAF layer is formed from two ferromagnetic layers separated by a nonmagnetic coupling spacer layer. For a synthetic antiferromagnet having some net anisotropy Hk in each layer, there exists a critical spin-flop field Hsw at which the two antiparallel layer magnetizations will rotate (flop) to be orthogonal to the applied field H, with each layer scissoring slightly in the direction of H. Therefore, if only a single line current is applied (half-selected bits), the 45° field angle cannot switch the state. Below the toggling transition, there are no disturbs all the way up to the highest fields.
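The write-window arithmetic described above can be sketched in a few lines; the mean switching field, its deviation and the disturb limit below are invented numbers used only to illustrate the 6σ test:

```python
# Conventional (field-written) MRAM: the programming field must exceed the mean
# switching field of the selected cells by more than 6 sigma of the cell-to-cell
# distribution, yet stay below the field that would disturb half-selected cells.

mean_switching_field = 40.0  # placeholder, arbitrary field units
sigma = 3.0                  # cell-to-cell deviation of the switching field
max_allowed_field = 65.0     # placeholder disturb limit for half-selected cells

min_program_field = mean_switching_field + 6 * sigma

if min_program_field < max_allowed_field:
    print(f"operating window: {min_program_field:.0f} to {max_allowed_field:.0f}")
else:
    print("no error-free programming window: the distributions overlap")
```

Toggle-mode (Savtchenko) writing avoids having to thread this needle, although, as noted next, it still needs a substantial field-generating current.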
This approach still requires a fairly substantial current to generate the field, however, which makes it less interesting for low-power uses, one of MRAM's primary disadvantages. Additionally, as the device is scaled down in size, there comes a time when the induced field overlaps adjacent cells over a small area, leading to potential false writes. This problem, the half-select (or write disturb) problem, appears to set a fairly large minimal size for this type of cell. One experimental solution to this problem was to use circular domains written and read using the giant magnetoresistive effect, but it appears that this line of research is no longer active.
A newer technique, spin-transfer torque (STT) or spin-transfer switching, uses spin-aligned ("polarized") electrons to directly torque the domains. Specifically, if the electrons flowing into a layer have to change their spin, this will develop a torque that will be transferred to the nearby layer. This lowers the amount of current needed to write the cells, making it about the same as the read process. There are concerns that the "classic" type of MRAM cell will have difficulty at high densities because of the amount of current needed during writes, a problem that STT avoids. For this reason, the STT proponents expect the technique to be used for devices of 65 nm and smaller. The downside is the need to maintain the spin coherence. Overall, the STT requires much less write current than conventional or toggle MRAM. Research in this field indicates that STT current can be reduced up to 50 times by using a new composite structure. However, higher-speed operation still requires higher current.
Other potential arrangements include "vertical transport MRAM" (VMRAM), which uses current through a vertical column to change magnetic orientation, a geometric arrangement that reduces the write disturb problem and so can be used at higher density.
A review article provides the details of materials and challenges associated with MRAM in the perpendicular geometry. The authors describe a new term called "Pentalemma", which represents a conflict in five different requirements such as write current, stability of the bits, readability, read/write speed and the process integration with CMOS. The selection of materials and the design of MRAM to fulfill those requirements are discussed.
Comparison with other systems
Density
The main determinant of a memory system's cost is the density of the components used to make it up. Smaller components, and fewer of them, mean that more "cells" can be packed onto a single chip, which in turn means more can be produced at once from a single silicon wafer. This improves yield, which is directly related to cost.
DRAM uses a small capacitor as a memory element, wires to carry current to and from it, and a transistor to control it – referred to as a "1T1C" cell. This makes DRAM the highest-density RAM currently available, and thus the least expensive, which is why it is used for the majority of RAM found in computers.
MRAM is physically similar to DRAM in makeup, and often does require a transistor for the write operation (though not strictly necessary). The scaling of transistors to higher density necessarily leads to lower available current, which could limit MRAM performance at advanced nodes.
Power consumption
Since the capacitors used in DRAM lose their charge over time, memory assemblies that use DRAM must refresh all the cells in their chips several times a second, reading each one and re-writing its contents. As DRAM cells decrease in size it is necessary to refresh the cells more often, resulting in greater power consumption.
In contrast, MRAM never requires a refresh. This means that not only does it retain its memory with the power turned off but also there is no constant power-draw. While the read process in theory requires more power than the same process in a DRAM, in practice the difference appears to be very close to zero. However, the write process requires more power to overcome the existing field stored in the junction, varying from three to eight times the power required during reading. Although the exact amount of power savings depends on the nature of the work — more frequent writing will require more power – in general MRAM proponents expect much lower power consumption (up to 99% less) compared to DRAM. STT-based MRAMs eliminate the difference between reading and writing, further reducing power requirements.
It is also worth comparing MRAM with another common memory system — flash RAM. Like MRAM, flash does not lose its memory when power is removed, which makes it very common in applications requiring persistent storage. When used for reading, flash and MRAM are very similar in power requirements. However, flash is re-written using a large pulse of voltage (about 10 V) that is stored up over time in a charge pump, which is both power-hungry and time-consuming. In addition, the current pulse physically degrades the flash cells, which means flash can only be written to some finite number of times before it must be replaced.
In contrast, MRAM requires only slightly more power to write than read, and no change in the voltage, eliminating the need for a charge pump. This leads to much faster operation, lower power consumption, and an indefinitely long lifetime.
Data retention
MRAM is often touted as being a non-volatile memory. However, the current mainstream high-capacity MRAM, spin-transfer torque memory, provides improved retention at the cost of higher power consumption, i.e., higher write current. In particular, the critical (minimum) write current is directly proportional to the thermal stability factor Δ. The retention is in turn proportional to exp(Δ). The retention, therefore, degrades exponentially with reduced write current.
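A back-of-the-envelope sketch of this coupling, using the standard thermal-activation picture in which retention time grows as exp(Δ); the attempt time and the current-per-Δ constant are illustrative assumptions, not vendor data:

```python
import math

# STT-MRAM trade-off sketch: retention grows exponentially with the thermal
# stability factor Delta, while the critical write current grows linearly with it.

TAU_0_SECONDS = 1e-9        # attempt time, a commonly assumed ~1 ns constant
CURRENT_PER_DELTA = 1.5e-6  # assumed amperes of critical write current per unit Delta

for delta in (40, 60, 80):
    retention_years = TAU_0_SECONDS * math.exp(delta) / (3600 * 24 * 365)
    write_current_ua = CURRENT_PER_DELTA * delta * 1e6
    print(f"Delta = {delta}: retention ~ {retention_years:.1e} years, "
          f"critical write current ~ {write_current_ua:.0f} uA")
```

Cutting the write current therefore buys only a linear saving while costing retention exponentially, which is the trade-off described above.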
Speed
Dynamic random-access memory (DRAM) performance is limited by the rate at which the charge stored in the cells can be drained (for reading) or stored (for writing). MRAM operation is based on measuring voltages rather than charges or currents, so there is less "settling time" needed. IBM researchers have demonstrated MRAM devices with access times on the order of 2 ns, somewhat better than even the most advanced DRAMs built on much newer processes. A team at the German Physikalisch-Technische Bundesanstalt have demonstrated MRAM devices with 1 ns settling times, better than the currently accepted theoretical limits for DRAM, although the demonstration was a single cell. The differences compared to flash are far more significant, with write speeds as much as thousands of times faster. However, these speed comparisons are not for like-for-like current. High-density memory requires small transistors with reduced current, especially when built for low standby leakage. Under such conditions, write times shorter than 30 ns may not be reached so easily. In particular, to meet solder reflow stability of 260 °C over 90 seconds, 250 ns pulses have been required. This is related to the elevated thermal stability requirement driving up the write bit error rate. In order to avoid breakdown from higher current, longer pulses are needed.
For the perpendicular STT MRAM, the switching time is largely determined by the thermal stability Δ as well as the write current. A larger Δ (better for data retention) would require a larger write current or a longer pulse. A combination of high speed and adequate retention is only possible with a sufficiently high write current.
The only current memory technology that easily competes with MRAM in terms of performance at comparable density is static random-access memory (SRAM). SRAM consists of a series of transistors arranged in a flip-flop, which will hold one of two states as long as power is applied. Since the transistors have a very low power requirement, their switching time is very low. However, since an SRAM cell consists of several transistors, typically four or six, its density is much lower than DRAM. This makes it expensive, which is why it is used only for small amounts of high-performance memory, notably the CPU cache in almost all modern central processing unit designs.
Although MRAM is not quite as fast as SRAM, it is close enough to be interesting even in this role. Given its much higher density, a CPU designer may be inclined to use MRAM to offer a much larger but somewhat slower cache, rather than a smaller but faster one. It remains to be seen how this trade-off will play out in the future.
Endurance
The endurance of MRAM is affected by write current, just like retention and speed, as well as read current. When the write current is sufficiently large for speed and retention, the probability of MTJ breakdown needs to be considered. If the read current/write current ratio is not small enough, read disturb becomes more likely, i.e., a read error occurs during one of the many switching cycles. The read disturb error rate is given by
P = 1 − exp(−(tread/τ) · exp(−Δ(1 − Iread/Icrit))),
where tread is the duration of the read pulse, τ is the relaxation time (1 ns) and Icrit is the critical write current. Higher endurance requires a sufficiently low Iread/Icrit ratio. However, a lower Iread also reduces read speed.
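A short sketch evaluating that expression (the formula as reconstructed above is the standard thermal-activation form; the read pulse length, Δ and current ratios below are illustrative assumptions):

```python
import math

# Read-disturb probability per read pulse, thermal-activation form:
# P = 1 - exp(-(t_read / tau) * exp(-Delta * (1 - I_read / I_crit)))

TAU = 1e-9  # relaxation (attempt) time, ~1 ns as stated in the text

def read_disturb_probability(t_read, delta, i_read_over_i_crit):
    attempt_rate = math.exp(-delta * (1.0 - i_read_over_i_crit)) / TAU
    return 1.0 - math.exp(-attempt_rate * t_read)

for ratio in (0.2, 0.4, 0.6):
    p = read_disturb_probability(t_read=10e-9, delta=60, i_read_over_i_crit=ratio)
    print(f"I_read/I_crit = {ratio:.1f}: disturb probability per read ~ {p:.1e}")
```

Pushing the read current toward the write threshold raises the disturb probability by many orders of magnitude, which is why endurance demands a low Iread/Icrit ratio even though that slows reads.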
Endurance is mainly limited by the possible breakdown of the thin MgO layer.
Overall
MRAM has similar performance to SRAM, enabled by the use of sufficient write current. However, this dependence on write current also makes it a challenge to compete with the higher density comparable to mainstream DRAM and Flash. Nevertheless, some opportunities for MRAM exist where density need not be maximized. From a fundamental physics point of view, the spin-transfer torque approach to MRAM is bound to a "rectangle of death" formed by retention, endurance, speed, and power requirements, as covered above.
While the power-speed tradeoff is universal for electronic devices, the endurance-retention tradeoff at high current and the degradation of both at low Δ is problematic. Endurance is largely limited to 108 cycles.
Alternatives to MRAM
Flash and EEPROM's limited write-cycles are a serious problem for any real RAM-like role. In addition, the high power needed to write the cells is a problem in low-power nodes, where non-volatile RAM is often used. The power also needs time to be "built up" in a device known as a charge pump, which makes writing dramatically slower than reading, often as low as 1/1000 as fast. While MRAM was certainly designed to address some of these issues, a number of other new memory devices are in production or have been proposed to address these shortcomings.
To date, the only similar system to enter widespread production is ferroelectric RAM, or F-RAM (sometimes referred to as FeRAM).
Also seeing renewed interest are silicon-oxide-nitride-oxide-silicon (SONOS) memory and ReRAM. 3D XPoint has also been in development, but is known to have a higher power budget than DRAM.
History
1955 — Magnetic-core memory had the same read–write principle as MRAM
1984 — Arthur V. Pohm and James M. Daughton, while working for Honeywell, developed the first magnetoresistance memory devices.
1988 — European scientists (Albert Fert and Peter Grünberg) discovered the "giant magnetoresistive effect" in thin-film structures.
1989 — Pohm and Daughton left Honeywell to form Nonvolatile Electronics, Inc. (later renamed to NVE Corp.) sublicensing the MRAM technology they have created.
1995 — Motorola (later to become Freescale Semiconductor, and subsequently NXP Semiconductors) initiates work on MRAM development
1996 — Spin torque transfer is proposed
1997 — Sony published the first Japan Patent Application for S.P.I.N.O.R. (Spin Polarized Injection Non-Volatile Orthogonal Read/Write RAM), a forerunner of STT RAM.
1998 — Motorola develops 256Kb MRAM test chip.
2000 — IBM and Infineon established a joint MRAM development program.
2000 — Spintec laboratory's first Spin-Torque Transfer patent.
2002
NVE announces technology exchange with Cypress Semiconductor.
Toggle patent granted to Motorola
2003 — A 128 kbit MRAM chip was introduced, manufactured with a 180 nm lithographic process
2004
June — Infineon unveiled a 16-Mbit prototype, manufactured with a 180 nm lithographic process
September — MRAM becomes a standard product offering at Freescale.
October — Taiwan developers of MRAM tape out 1 Mbit parts at TSMC.
October — Micron drops MRAM, mulls other memories.
December — TSMC, NEC and Toshiba describe novel MRAM cells.
December — Renesas Technology promotes a high performance, high-reliability MRAM technology.
Spintech laboratory's first observation of Thermal Assisted Switching (TAS) as MRAM approach.
Crocus Technology is founded; the company is a developer of second-generation MRAM
2005
January — Cypress Semiconductor samples MRAM, using NVE IP.
March — Cypress to Sell MRAM Subsidiary.
June — Honeywell posts data sheet for 1-Mbit rad-hard MRAM using a 150 nm lithographic process.
August — MRAM record: memory cell runs at 2 GHz.
November — Renesas Technology and Grandis collaborate on development of 65 nm MRAM employing spin torque transfer (STT).
November — NVE receives an SBIR grant to research cryptographic tamper-responsive memory.
December — Sony announced Spin-RAM, the first lab-produced spin-torque-transfer MRAM, which utilizes a spin-polarized current through the tunneling magnetoresistance layer to write data. This method consumes less power and is more scalable than conventional MRAM. With further advances in materials, this process should allow for densities higher than those possible in DRAM.
December — Freescale Semiconductor Inc. demonstrates an MRAM that uses magnesium oxide, rather than an aluminum oxide, allowing for a thinner insulating tunnel barrier and improved bit resistance during the write cycle, thereby reducing the required write current.
Spintec laboratory gives Crocus Technology exclusive license on its patents.
2006
February — Toshiba and NEC announced a 16 Mbit MRAM chip with a new "power-forking" design. It achieves a transfer rate of 200 Mbit/s, with a 34 ns cycle time, the best performance of any MRAM chip. It also boasts the smallest physical size in its class — 78.5 square millimeters — and the low voltage requirement of 1.8 volts.
July — On July 10, Austin Texas — Freescale Semiconductor begins marketing a 4-Mbit MRAM chip, which sells for approximately $25.00 per chip.
2007
R&D moving to spin transfer torque RAM (SPRAM)
February — Tohoku University and Hitachi developed a prototype 2-Mbit non-volatile RAM chip employing spin-transfer torque switching.
August — "IBM, TDK Partner In Magnetic Memory Research on Spin Transfer Torque Switching" IBM and TDK to lower the cost and boost performance of MRAM to hopefully release a product to market.
November — Toshiba applied and proved the spin-transfer torque switching with perpendicular magnetic anisotropy MTJ device.
November — NEC develops world's fastest SRAM-compatible MRAM with operation speed of 250 MHz.
2008
Japanese satellite, SpriteSat, to use Freescale MRAM to replace SRAM and FLASH components
June — Samsung and Hynix become partner on STT-MRAM
June — Freescale spins off MRAM operations as new company Everspin
August — Scientists in Germany have developed next-generation MRAM that is said to operate as fast as fundamental performance limits allow, with write cycles under 1 nanosecond.
November — Everspin announces BGA packages, product family from 256 Kb to 4 Mb
2009
June — Hitachi and Tohoku University demonstrated a 32-Mbit spin-transfer torque RAM (SPRAM).
June — Crocus Technology and Tower Semiconductor announce deal to port Crocus' MRAM process technology to Tower's manufacturing environment
November — Everspin releases SPI MRAM product family and ships first embedded MRAM samples
2010
April — Everspin releases 16 Mb density
June — Hitachi and Tohoku Univ announce Multi-level SPRAM
2011
March — PTB, Germany, announces below 500 ps (2 Gbit/s) write cycle
2012
November — Chandler, Arizona, USA, Everspin debuts 64 Mb ST-MRAM on a 90 nm process.
December — A team from University of California, Los Angeles presents voltage-controlled MRAM at IEEE International Electron Devices Meeting.
2013
November — Buffalo Technology and Everspin announce a new industrial SATA III SSD that incorporates Everspin's Spin-Torque MRAM (ST-MRAM) as cache memory.
2014
January — Researchers announce the ability to control the magnetic properties of core/shell antiferromagnetic nanoparticles using only temperature and magnetic field changes.
October — Everspin partners with GlobalFoundries to produce ST-MRAM on 300 mm wafers.
2016
April — Samsung's semiconductor chief Kim Ki-nam says Samsung is developing an MRAM technology that "will be ready soon".
July — IBM and Samsung report an MRAM device capable of scaling down to 11 nm with a switching current of 7.5 microamps at 10 ns.
August — Everspin announced it was shipping samples of the industry's first 256 Mb ST-MRAM to customers.
October — Avalanche Technology partners with Sony Semiconductor Manufacturing to manufacture STT-MRAM on 300 mm wafers, based on "a variety of manufacturing nodes".
December — Inston and Toshiba independently present results on voltage-controlled MRAM at International Electron Devices Meeting.
2019
January — Everspin starts shipping samples of 28 nm 1 Gb STT-MRAM chips.
March — Samsung commence commercial production of its first embedded STT-MRAM based on a 28 nm process.
May — Avalanche partners with United Microelectronics Corporation to jointly develop and produce embedded MRAM based on the latter's 28 nm CMOS manufacturing process.
2020
December — IBM announces a 14 nm MRAM node.
2021
May — TSMC revealed a roadmap for developing the eMRAM technology at 12/14 nm node as an offering to replace eFLASH.
November — Taiwan Semiconductor Research Institute announced the development of a SOT-MRAM device.
Applications
Possible practical application of the MRAM includes virtually every device that has some type of memory inside such as aerospace and military systems, digital cameras, notebooks, smart cards, mobile telephones, cellular base stations, personal computers, battery-backed SRAM replacement, datalogging specialty memories (black box solutions), media players, and book readers etc.
See also
Magnetic bubble memory
EEPROM
Everspin Technologies
F-RAM
Ferromagnetism
Magnetoresistance
Memristor
MOSFET
NRAM
nvSRAM
Phase-change memory (PRAM)
Spin valve
Spin-transfer torque
Tunnel magnetoresistance
References
External links
Wired News article from February, 2006
NEC Press Release from February, 2006
BBC news article from July, 2006
Freescale MRAM – an in-depth examination from August 2006
MRAM – The Birth of the Super Memory – An article and an interview with Freescale about their MRAM technology
Spin torque applet – An applet illustrating the principles underlying spin-torque transfer MRAM
New Speed Record for Magnetic Memories – The Future of Things article
Types of RAM
Non-volatile memory
Spintronics | Magnetoresistive RAM | [
"Physics",
"Materials_science"
] | 4,805 | [
"Spintronics",
"Condensed matter physics"
] |
315,021 | https://en.wikipedia.org/wiki/Simultaneous%20multithreading | Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better use the resources provided by modern processor architectures.
Details
The term multithreading is ambiguous, because not only can multiple threads be executed simultaneously on one CPU core, but also multiple tasks (with different page tables, different task state segments, different protection rings, different I/O permissions, etc.). Although running on the same core, they are completely separated from each other.
Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors.
Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading (also known as super-threading). In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In simultaneous multithreading, instructions from more than one thread can be executed in any given pipeline stage at a time. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads is decided by the chip designers. Two concurrent threads per CPU core are common, but some processors support many more.
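A toy cycle-by-cycle model of the difference just described; the issue width, the two instruction streams and their stall patterns are invented purely for illustration:

```python
# Toy model of a 2-wide core running two hardware threads. Each list entry is
# how many instructions that thread has ready in a given cycle (0 = stalled,
# for example waiting on a cache miss).
thread_a = [2, 1, 0, 2, 0, 1, 2, 0]
thread_b = [1, 0, 2, 1, 2, 0, 1, 2]

ISSUE_WIDTH = 2

def smt_slots_filled(a, b):
    """SMT: in any cycle the issue slots may be shared between both threads."""
    return sum(min(ISSUE_WIDTH, ra + rb) for ra, rb in zip(a, b))

def temporal_slots_filled(a, b):
    """Temporal multithreading: only one thread (round-robin) may issue per cycle."""
    return sum(min(ISSUE_WIDTH, (a if cycle % 2 == 0 else b)[cycle])
               for cycle in range(len(a)))

print("issue slots filled with SMT:        ", smt_slots_filled(thread_a, thread_b))
print("issue slots filled with temporal MT:", temporal_slots_filled(thread_a, thread_b))
```

In this toy run the SMT policy fills twice as many slots, because whenever one thread stalls, the other thread's ready instructions can occupy the otherwise empty slots in the same cycle.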
Because SMT inevitably increases contention for shared resources, measuring or agreeing on its effectiveness can be difficult. However, measurements of the energy efficiency of SMT with parallel native and managed workloads on historical 130 nm to 32 nm Intel SMT (hyper-threading) implementations found that, in the 45 nm and 32 nm implementations, SMT is extremely energy efficient, even with in-order Atom processors. In modern systems, SMT effectively exploits concurrency with very little additional dynamic power. That is, even when performance gains are minimal, the power consumption savings can be considerable.
Some researchers have shown that the extra threads can be used proactively to seed a shared resource like a cache, to improve the performance of another single thread, and claim this shows that SMT does not only increase efficiency. Others use SMT to provide redundant computation, for some level of error detection and recovery.
However, in most current cases, SMT is about hiding memory latency, increasing efficiency, and increasing throughput of computations per amount of hardware used.
Taxonomy
In processor design, there are two ways to increase on-chip parallelism with fewer resource requirements: one is superscalar technique which tries to exploit instruction-level parallelism (ILP); the other is multithreading approach exploiting thread-level parallelism (TLP).
Superscalar means executing multiple instructions at the same time while thread-level parallelism (TLP) executes instructions from multiple threads within one processor chip at the same time. There are many ways to support more than one thread within a chip, namely:
Interleaved multithreading: Interleaved issue of multiple instructions from different threads, also referred to as temporal multithreading. It can be further divided into fine-grained multithreading or coarse-grained multithreading depending on the frequency of interleaved issues. Fine-grained multithreading (such as in a barrel processor) issues instructions for different threads after every cycle, while coarse-grained multithreading only switches to issue instructions from another thread when the currently executing thread causes a long-latency event (such as a page fault). Coarse-grained multithreading is more common because it requires fewer thread switches. For example, Intel's Montecito processor uses coarse-grained multithreading, while Sun's UltraSPARC T1 uses fine-grained multithreading. For those processors that have only one pipeline per core, interleaved multithreading is the only possible approach, because they can issue at most one instruction per cycle.
Simultaneous multithreading (SMT): Issue multiple instructions from multiple threads in one cycle. The processor must be superscalar to do so.
Chip-level multiprocessing (CMP or multicore): integrates two or more processors into one chip, each executing threads independently.
Any combination of multithreaded/SMT/CMP.
The key factor distinguishing them is how many instructions the processor can issue in one cycle and from how many threads those instructions come. For example, Sun Microsystems' UltraSPARC T1 is a multicore processor combined with a fine-grained multithreading technique rather than simultaneous multithreading, because each core can only issue one instruction at a time.
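The classification criterion in that paragraph can be written down directly; the function below is only a mnemonic sketch (it assumes the core in question is already hardware-multithreaded), with the UltraSPARC T1 example from the text:

```python
def classify_core(issue_width, threads_issuing_per_cycle):
    """Classify a hardware-multithreaded core by how many instructions it can
    issue per cycle and from how many threads those instructions may come."""
    if threads_issuing_per_cycle > 1 and issue_width > 1:
        return "simultaneous multithreading (SMT)"
    if threads_issuing_per_cycle == 1:
        return "interleaved (temporal) multithreading"
    return "unclassified"

# UltraSPARC T1: each core issues at most one instruction per cycle, from one
# thread at a time, so it is fine-grained interleaved multithreading rather
# than SMT, even though the chip as a whole is multicore (CMP).
print(classify_core(issue_width=1, threads_issuing_per_cycle=1))
```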
Historical implementations
While multithreading CPUs have been around since the 1950s, simultaneous multithreading was first researched by IBM in 1968 as part of the ACS-360 project. The first major commercial microprocessor developed with SMT was the Alpha 21464 (EV8). This microprocessor was developed by DEC in coordination with Dean Tullsen of the University of California, San Diego, and Susan Eggers and Henry Levy of the University of Washington. The microprocessor was never released, since the Alpha line of microprocessors was discontinued shortly before HP acquired Compaq which had in turn acquired DEC. Dean Tullsen's work was also used to develop the hyper-threaded versions of the Intel Pentium 4 microprocessors, such as the "Northwood" and "Prescott".
Modern commercial implementations
The Intel Pentium 4 was the first modern desktop processor to implement simultaneous multithreading, starting from the 3.06 GHz model released in 2002, and it has since been introduced into a number of their processors. Intel calls the functionality Hyper-Threading Technology and provides a basic two-thread SMT engine. Intel claims up to a 30% speed improvement compared with an otherwise identical, non-SMT Pentium 4. The performance improvement seen is very application-dependent; however, when running two programs that require full attention of the processor, it can actually seem like one or both of the programs slows down slightly when Hyper-Threading is turned on. This is due to the replay system of the Pentium 4 tying up valuable execution resources and increasing contention for resources such as bandwidth, caches, TLBs and re-order buffer entries, and to the equalizing of processor resources between the two programs, which adds a varying amount of execution time. The Pentium 4 Prescott core gained a replay queue, which reduces the execution time needed for the replay system. This was enough to completely overcome that performance hit.
The latest Imagination Technologies MIPS architecture designs include an SMT system known as "MIPS MT". MIPS MT provides for both heavyweight virtual processing elements and lighter-weight hardware microthreads. RMI, a Cupertino-based startup, is the first MIPS vendor to provide a processor SOC based on eight cores, each of which runs four threads. The threads can be run in fine-grain mode where a different thread can be executed each cycle. The threads can also be assigned priorities. Imagination Technologies MIPS CPUs have two SMT threads per core.
IBM's Blue Gene/Q has 4-way SMT.
The IBM POWER5, announced in May 2004, comes as either a dual core dual-chip module (DCM), or quad-core or oct-core multi-chip module (MCM), with each core including a two-thread SMT engine. IBM's implementation is more sophisticated than the previous ones, because it can assign a different priority to the various threads, is more fine-grained, and the SMT engine can be turned on and off dynamically, to better execute those workloads where an SMT processor would not increase performance. This is IBM's second implementation of generally available hardware multithreading. In 2010, IBM released systems based on the POWER7 processor with eight cores with each having four Simultaneous Intelligent Threads. This switches the threading mode between one thread, two threads or four threads depending on the number of process threads being scheduled at the time. This optimizes the use of the core for minimum response time or maximum throughput. IBM POWER8 has 8 intelligent simultaneous threads per core (SMT8).
IBM Z starting with the z13 processor in 2013 has two threads per core (SMT-2).
Although many people reported that Sun Microsystems' UltraSPARC T1 (known as "Niagara" until its 14 November 2005 release) and the now defunct processor codenamed "Rock" (originally announced in 2005, but after many delays cancelled in 2010) are implementations of SPARC focused almost entirely on exploiting SMT and CMP techniques, Niagara is not actually using SMT. Sun refers to these combined approaches as "CMT", and the overall concept as "Throughput Computing". The Niagara has eight cores, but each core has only one pipeline, so actually it uses fine-grained multithreading. Unlike SMT, where instructions from multiple threads share the issue window each cycle, the processor uses a round robin policy to issue instructions from the next active thread each cycle. This makes it more similar to a barrel processor. Sun Microsystems' Rock processor is different: it has more complex cores that have more than one pipeline.
The Oracle Corporation SPARC T3 has eight fine-grained threads per core; SPARC T4, SPARC T5, SPARC M5, M6 and M7 have eight fine-grained threads per core of which two can be executed simultaneously.
Fujitsu SPARC64 VI has coarse-grained Vertical Multithreading (VMT) SPARC VII and newer have 2-way SMT.
Intel Itanium Montecito uses coarse-grained multithreading and Tukwila and newer ones use 2-way SMT (with dual-domain multithreading).
Intel Xeon Phi has 4-way SMT (with time-multiplexed multithreading) with hardware-based threads which cannot be disabled, unlike regular Hyper-Threading. The Intel Atom, first released in 2008, is the first Intel product to feature 2-way SMT (marketed as Hyper-Threading) without supporting instruction reordering, speculative execution, or register renaming. Intel reintroduced Hyper-Threading with the Nehalem microarchitecture, after its absence on the Core microarchitecture.
AMD Bulldozer microarchitecture FlexFPU and Shared L2 cache are multithreaded but integer cores in module are single threaded, so it is only a partial SMT implementation.
AMD Zen microarchitecture has 2-way SMT.
VISC architecture uses the Virtual Software Layer (translation layer) to dispatch a single thread of instructions to the Global Front End, which splits instructions into virtual hardware threadlets that are then dispatched to separate virtual cores. These virtual cores can then send them to the available resources on any of the physical cores. Multiple virtual cores can push threadlets into the reorder buffer of a single physical core, which can split partial instructions and data from multiple threadlets through the execution ports at the same time. Each virtual core keeps track of the position of the relative output. This form of multithreading can increase single-threaded performance by allowing a single thread to use all the resources of the CPU. The allocation of resources is dynamic, on a near single-cycle latency level (1–4 cycles depending on the change in allocation), according to individual application needs. Therefore, if two virtual cores are competing for resources, there are appropriate algorithms in place to determine which resources are to be allocated where.
Disadvantages
Depending on the design and architecture of the processor, simultaneous multithreading can decrease performance if any of the shared resources are bottlenecks for performance. Critics argue that it is a considerable burden to put on software developers that they have to test whether simultaneous multithreading is good or bad for their application in various situations and insert extra logic to turn it off if it decreases performance. Current operating systems lack convenient API calls for this purpose and for preventing processes with different priority from taking resources from each other.
There is also a security concern with certain simultaneous multithreading implementations. Intel's hyperthreading in NetBurst-based processors has a vulnerability through which it is possible for one application to steal a cryptographic key from another application running in the same processor by monitoring its cache use. There are also sophisticated machine-learning exploits targeting the HT implementation that were explained at Black Hat 2018.
See also
Hardware scout
Speculative multithreading
Symmetric multiprocessing
References
General
External links
SMT news articles and academic papers
SMT research at the University of Washington
Central processing unit
Computer architecture
Flynn's taxonomy
Threads (computing) | Simultaneous multithreading | [
"Technology",
"Engineering"
] | 2,672 | [
"Computers",
"Computer engineering",
"Computer architecture"
] |
315,043 | https://en.wikipedia.org/wiki/Monoclonal%20antibody | A monoclonal antibody (mAb, more rarely called moAb) is an antibody produced from a cell lineage made by cloning a unique white blood cell. All subsequent antibodies derived this way trace back to a unique parent cell.
Monoclonal antibodies can have monovalent affinity, binding only to the same epitope (the part of an antigen that is recognized by the antibody). In contrast, polyclonal antibodies bind to multiple epitopes and are usually made by several different antibody-secreting plasma cell lineages. Bispecific monoclonal antibodies can also be engineered, by increasing the therapeutic targets of one monoclonal antibody to two epitopes.
It is possible to produce monoclonal antibodies that specifically bind to almost any suitable substance; they can then serve to detect or purify it. This capability has become an investigative tool in biochemistry, molecular biology, and medicine. Monoclonal antibodies are used in the diagnosis of illnesses such as cancer and infections and are used therapeutically in the treatment of e.g. cancer and inflammatory diseases.
History
In the early 1900s, immunologist Paul Ehrlich proposed the idea of a Zauberkugel – "magic bullet", conceived of as a compound which selectively targeted a disease-causing organism, and could deliver a toxin for that organism. This underpinned the concept of monoclonal antibodies and monoclonal drug conjugates. Ehrlich and Élie Metchnikoff received the 1908 Nobel Prize for Physiology or Medicine for providing the theoretical basis for immunology.
By the 1970s, lymphocytes producing a single antibody were known, in the form of multiple myeloma – a cancer affecting B-cells. These abnormal antibodies or paraproteins were used to study the structure of antibodies, but it was not yet possible to produce identical antibodies specific to a given antigen. In 1973, Jerrold Schwaber described the production of monoclonal antibodies using human–mouse hybrid cells. This work remains widely cited among those using human-derived hybridomas. In 1975, Georges Köhler and César Milstein succeeded in making fusions of myeloma cell lines with B cells to create hybridomas that could produce antibodies, specific to known antigens and that were immortalized. They and Niels Kaj Jerne shared the Nobel Prize in Physiology or Medicine in 1984 for the discovery.
In 1988, Gregory Winter and his team pioneered the techniques to humanize monoclonal antibodies, eliminating the reactions that many monoclonal antibodies caused in some patients. By the 1990s research was making progress in using monoclonal antibodies therapeutically, and in 2018, James P. Allison and Tasuku Honjo received the Nobel Prize in Physiology or Medicine for their discovery of cancer therapy by inhibition of negative immune regulation, using monoclonal antibodies that prevent inhibitory linkages.
The translational work needed to implement these ideas is credited to Lee Nadler. As explained in an NIH article, "He was the first to discover monoclonal antibodies directed against human B-cell–specific antigens and, in fact, all the known human B-cell–specific antigens were discovered in his laboratory. He is a true translational investigator, since he used these monoclonal antibodies to classify human B-cell leukemia and lymphomas as well as to create therapeutic agents for patients. . . More importantly, he was the first in the world to administer a monoclonal antibody to a human (a patient with B-cell lymphoma)."
Production
Hybridoma development
Much of the work behind production of monoclonal antibodies is rooted in the production of hybridomas, which involves identifying antigen-specific plasma/plasmablast cells that produce antibodies specific to an antigen of interest and fusing these cells with myeloma cells. Rabbit B-cells can be used to form a rabbit hybridoma. Polyethylene glycol is used to fuse adjacent plasma membranes, but the success rate is low, so a selective medium in which only fused cells can grow is used. This is possible because myeloma cells have lost the ability to synthesize hypoxanthine-guanine-phosphoribosyl transferase (HGPRT), an enzyme necessary for the salvage synthesis of nucleic acids. The absence of HGPRT is not a problem for these cells unless the de novo purine synthesis pathway is also disrupted. Exposing cells to aminopterin (a folic acid analogue which inhibits dihydrofolate reductase) makes them unable to use the de novo pathway and become fully auxotrophic for nucleic acids, thus requiring supplementation to survive.
The selective culture medium is called HAT medium because it contains hypoxanthine, aminopterin and thymidine. This medium is selective for fused (hybridoma) cells. Unfused myeloma cells cannot grow because they lack HGPRT and thus cannot replicate their DNA. Unfused spleen cells cannot grow indefinitely because of their limited life span. Only fused hybrid cells referred to as hybridomas, are able to grow indefinitely in the medium because the spleen cell partner supplies HGPRT and the myeloma partner has traits that make it immortal (similar to a cancer cell).
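The survival logic of HAT selection can be restated as a simple rule: a cell persists only if it both has HGPRT (so it can use the salvage pathway while aminopterin blocks de novo synthesis) and is capable of indefinite growth. The short Python sketch below merely re-expresses the reasoning above and is illustrative, not a laboratory procedure.

```python
# Selection logic of HAT medium, restated from the description above.
def survives_hat(has_hgprt, is_immortal):
    """A cell line persists in HAT medium only with both traits."""
    return has_hgprt and is_immortal

cells = {
    "unfused myeloma cell": dict(has_hgprt=False, is_immortal=True),
    "unfused spleen cell":  dict(has_hgprt=True,  is_immortal=False),
    "hybridoma (fused)":    dict(has_hgprt=True,  is_immortal=True),
}

for name, traits in cells.items():
    outcome = "grows indefinitely" if survives_hat(**traits) else "dies out"
    print(f"{name}: {outcome}")
```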
This mixture of cells is then diluted and clones are grown from single parent cells on microtitre wells. The antibodies secreted by the different clones are then assayed for their ability to bind to the antigen (with a test such as ELISA or antigen microarray assay) or immuno-dot blot. The most productive and stable clone is then selected for future use.
The hybridomas can be grown indefinitely in a suitable cell culture medium. They can also be injected into mice (in the peritoneal cavity, surrounding the gut). There, they produce tumors secreting an antibody-rich fluid called ascites fluid.
The medium must be enriched during in vitro selection to further favour hybridoma growth. This can be achieved by the use of a layer of feeder fibrocyte cells or supplement medium such as briclone. Culture-media conditioned by macrophages can be used. Production in cell culture is usually preferred as the ascites technique is painful to the animal. Where alternate techniques exist, ascites is considered unethical.
Novel mAb development technology
Several monoclonal antibody technologies have been developed recently, such as phage display, single B cell culture, single cell amplification from various B cell populations and single plasma cell interrogation technologies. Unlike traditional hybridoma technology, the newer technologies use molecular biology techniques to amplify the heavy and light chains of the antibody genes by PCR and to express them in either bacterial or mammalian systems with recombinant technology. One advantage of the newer technologies is that they are applicable to multiple animals, such as rabbit, llama, chicken and other common experimental animals in the laboratory.
Purification
After obtaining either a media sample of cultured hybridomas or a sample of ascites fluid, the desired antibodies must be extracted. Cell culture sample contaminants consist primarily of media components such as growth factors, hormones and transferrins. In contrast, the in vivo sample is likely to have host antibodies, proteases, nucleases, nucleic acids and viruses. In both cases, other secretions by the hybridomas such as cytokines may be present. There may also be bacterial contamination and, as a result, endotoxins that are secreted by the bacteria. Depending on the complexity of the media required in cell culture and thus the contaminants, one or the other method (in vivo or in vitro) may be preferable.
The sample is first conditioned, or prepared for purification. Cells, cell debris, lipids, and clotted material are first removed, typically by centrifugation followed by filtration with a 0.45 μm filter. These large particles can cause a phenomenon called membrane fouling in later purification steps. In addition, the concentration of product in the sample may not be sufficient, especially in cases where the desired antibody is produced by a low-secreting cell line. The sample is therefore concentrated by ultrafiltration or dialysis.
Most of the charged impurities are usually anions such as nucleic acids and endotoxins. These can be separated by ion exchange chromatography. Either cation exchange chromatography is used at a low enough pH that the desired antibody binds to the column while anions flow through, or anion exchange chromatography is used at a high enough pH that the desired antibody flows through the column while anions bind to it. Various proteins can also be separated along with the anions based on their isoelectric point (pI). In proteins, the isoelectric point (pI) is defined as the pH at which a protein has no net charge. When the pH > pI, a protein has a net negative charge, and when the pH < pI, a protein has a net positive charge. For example, albumin has a pI of 4.8, which is significantly lower than that of most monoclonal antibodies, which have a pI of 6.1. Thus, at a pH between 4.8 and 6.1, the average charge of albumin molecules is likely to be more negative, while mAb molecules are positively charged and hence it is possible to separate them. Transferrin, on the other hand, has a pI of 5.9, so it cannot be easily separated by this method. A difference in pI of at least 1 is necessary for a good separation.
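The charge reasoning above can be made concrete with a short sketch. The pI values are those quoted in the text, the buffer pH of 5.5 is an arbitrary illustrative choice, and the ≥1 pI-unit rule of thumb is applied directly; real separations depend on many additional factors such as buffer composition and resin chemistry.

```python
# Sketch of the pI/pH charge reasoning used in ion-exchange separation.
PI = {"albumin": 4.8, "transferrin": 5.9, "monoclonal antibody": 6.1}

def net_charge_sign(pi, ph):
    """Sign of a protein's net charge at a given pH."""
    if ph > pi:
        return "negative"
    if ph < pi:
        return "positive"
    return "neutral"

def separable_by_charge(pi_a, pi_b, ph, min_pi_gap=1.0):
    """Crude separability test: opposite charge signs and a pI gap of >= 1."""
    return (net_charge_sign(pi_a, ph) != net_charge_sign(pi_b, ph)
            and abs(pi_a - pi_b) >= min_pi_gap)

if __name__ == "__main__":
    ph = 5.5  # illustrative buffer pH between the albumin and mAb pI values
    for name, pi in PI.items():
        print(f"{name}: pI {pi}, net charge {net_charge_sign(pi, ph)} at pH {ph}")
    print("albumin vs mAb separable:",
          separable_by_charge(PI["albumin"], PI["monoclonal antibody"], ph))
    print("transferrin vs mAb separable:",
          separable_by_charge(PI["transferrin"], PI["monoclonal antibody"], ph))
```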
Transferrin can instead be removed by size exclusion chromatography. This method is one of the more reliable chromatography techniques: because separation depends on molecular size, it is not affected by the way that properties such as charge and affinity vary with pH as molecules are protonated and deprotonated, while size stays relatively constant. Nonetheless, it has drawbacks such as low resolution, low capacity and slow elution.
A much quicker, single-step method of separation is protein A/G affinity chromatography. The antibody selectively binds to protein A/G, so a high level of purity (generally >80%) is obtained. The generally harsh conditions of this method may damage fragile antibodies. A low pH can break the bonds to remove the antibody from the column. In addition to possibly affecting the product, low pH can cause protein A/G itself to leak off the column and appear in the eluted sample. Gentle elution buffer systems that employ high salt concentrations are available to avoid exposing sensitive antibodies to low pH. Cost is also an important consideration with this method because immobilized protein A/G is a more expensive resin.
To achieve maximum purity in a single step, affinity purification can be performed, using the antigen to provide specificity for the antibody. In this method, the antigen used to generate the antibody is covalently attached to an agarose support. If the antigen is a peptide, it is commonly synthesized with a terminal cysteine, which allows selective attachment to a carrier protein, such as KLH during development and to support purification. The antibody-containing medium is then incubated with the immobilized antigen, either in batch or as the antibody is passed through a column, where it selectively binds and can be retained while impurities are washed away. An elution with a low pH buffer or a more gentle, high salt elution buffer is then used to recover purified antibody from the support.
Antibody heterogeneity
Product heterogeneity is common in monoclonal antibodies and other recombinant biological products and is typically introduced either upstream during expression or downstream during manufacturing.
These variants are typically aggregates, deamidation products, glycosylation variants, oxidized amino acid side chains, as well as amino and carboxyl terminal amino acid additions. These seemingly minute structural changes can affect preclinical stability and process optimization as well as therapeutic product potency, bioavailability and immunogenicity. The generally accepted purification method of process streams for monoclonal antibodies includes capture of the product target with protein A, elution, acidification to inactivate potential mammalian viruses, followed by ion chromatography, first with anion beads and then with cation beads.
Displacement chromatography has been used to identify and characterize these often unseen variants in quantities that are suitable for subsequent preclinical evaluation regimens such as animal pharmacokinetic studies. Knowledge gained during the preclinical development phase is critical for enhanced product quality understanding and provides a basis for risk management and increased regulatory flexibility. The recent Food and Drug Administration's Quality by Design initiative attempts to provide guidance on development and to facilitate design of products and processes that maximizes efficacy and safety profile while enhancing product manufacturability.
Recombinant
The production of recombinant monoclonal antibodies involves repertoire cloning, CRISPR/Cas9, or phage display/yeast display technologies. Recombinant antibody engineering involves antibody production by the use of viruses or yeast, rather than mice. These techniques rely on rapid cloning of immunoglobulin gene segments to create libraries of antibodies with slightly different amino acid sequences from which antibodies with desired specificities can be selected. The phage antibody libraries are a variant of phage antigen libraries. These techniques can be used to enhance the specificity with which antibodies recognize antigens, their stability in various environmental conditions, their therapeutic efficacy and their detectability in diagnostic applications. Fermentation chambers have been used for large scale antibody production.
Chimeric antibodies
While mouse and human antibodies are structurally similar, the differences between them were sufficient to invoke an immune response when murine monoclonal antibodies were injected into humans, resulting in their rapid removal from the blood, as well as systemic inflammatory effects and the production of human anti-mouse antibodies (HAMA).
Recombinant DNA has been explored since the late 1980s to increase residence times. In one approach called "CDR grafting", mouse DNA encoding the binding portion of a monoclonal antibody was merged with human antibody-producing DNA in living cells. The expression of this "chimeric" or "humanised" DNA through cell culture yielded part-mouse, part-human antibodies.
Human antibodies
Ever since the discovery that monoclonal antibodies could be generated, scientists have targeted the creation of fully human products to reduce the side effects of humanised or chimeric antibodies. Several successful approaches have been proposed: transgenic mice, phage display and single B cell cloning.
Cost
Monoclonal antibodies are more expensive to manufacture than small molecules due to the complex processes involved and the general size of the molecules, all in addition to the enormous research and development costs involved in bringing a new chemical entity to patients. They are priced to enable manufacturers to recoup the typically large investment costs, and where there are no price controls, such as the United States, prices can be higher if they provide great value. Seven University of Pittsburgh researchers concluded, "The annual price of mAb therapies is about $100,000 higher in oncology and hematology than in other disease states", comparing them on a per patient basis, to those for cardiovascular or metabolic disorders, immunology, infectious diseases, allergy, and ophthalmology.
Applications
Diagnostic tests
Once monoclonal antibodies for a given substance have been produced, they can be used to detect the presence of this substance. Proteins can be detected using the Western blot and immuno dot blot tests. In immunohistochemistry, monoclonal antibodies can be used to detect antigens in fixed tissue sections, and similarly, immunofluorescence can be used to detect a substance in either frozen tissue section or live cells.
Analytic and chemical uses
Antibodies can also be used to purify their target compounds from mixtures, using the method of immunoprecipitation.
Therapeutic uses
Therapeutic monoclonal antibodies act through multiple mechanisms, such as blocking of targeted molecule functions, inducing apoptosis in cells which express the target, or by modulating signalling pathways.
Cancer treatment
One possible treatment for cancer involves monoclonal antibodies that bind only to cancer-cell-specific antigens and induce an immune response against the target cancer cell. Such mAbs can be modified for delivery of a toxin, radioisotope, cytokine or other active conjugate or to design bispecific antibodies that can bind with their Fab regions both to target antigen and to a conjugate or effector cell. Every intact antibody can bind to cell receptors or other proteins with its Fc region.
MAbs approved by the FDA for cancer include:
Alemtuzumab
Bevacizumab
Cetuximab
Dostarlimab
Gemtuzumab ozogamicin
Ipilimumab
Nivolumab
Ofatumumab
Panitumumab
Pembrolizumab
Ranibizumab
Rituximab
Trastuzumab
Autoimmune diseases
Monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab, which are effective in rheumatoid arthritis, Crohn's disease, ulcerative colitis and ankylosing spondylitis by their ability to bind to and inhibit TNF-α. Basiliximab and daclizumab inhibit IL-2 on activated T cells and thereby help prevent acute rejection of kidney transplants. Omalizumab inhibits human immunoglobulin E (IgE) and is useful in treating moderate-to-severe allergic asthma.
Examples of therapeutic monoclonal antibodies
Monoclonal antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine like CiteAb. Below are examples of clinically important monoclonal antibodies.
COVID-19
In 2020, the monoclonal antibody therapies bamlanivimab/etesevimab and casirivimab/imdevimab were given emergency use authorizations by the US Food and Drug Administration to reduce the number of hospitalizations, emergency room visits, and deaths because of COVID-19. In September 2021, the Biden administration purchased billions of dollars' worth of Regeneron monoclonal antibodies at $2,100 per dose to curb the shortage.
As of December 2021, in vitro neutralization tests indicate monoclonal antibody therapies (with the exception of sotrovimab and tixagevimab/cilgavimab) were not likely to be active against the Omicron variant.
Over 2021–22, two Cochrane reviews found insufficient evidence for using neutralizing monoclonal antibodies to treat COVID-19 infections. The reviews applied only to people who were unvaccinated against COVID‐19, and only to the COVID-19 variants existing during the studies, not to newer variants, such as Omicron.
In March 2024, pemivibart, a monoclonal antibody drug, received an emergency use authorization from the US FDA for use as pre-exposure prophylaxis to protect certain moderately to severely immunocompromised individuals against COVID-19.
Side effects
Several monoclonal antibodies, such as bevacizumab and cetuximab, can cause different kinds of side effects. These side effects can be categorized into common and serious side effects.
Some common side effects include:
Dizziness
Headaches
Allergies
Diarrhea
Cough
Fever
Itching
Back pain
General weakness
Loss of appetite
Insomnia
Constipation
Among the possible serious side effects are:
Anaphylaxis
Bleeding
Arterial and venous blood clots
Autoimmune thyroiditis
Hypothyroidism
Hepatitis
Heart failure
Cancer
Anemia
Decrease in white blood cells
Stomatitis
Enterocolitis
Gastrointestinal perforation
Mucositis
See also
List of monoclonal antibodies
References
Further reading
External links
Antibodypedia, open-access virtual repository publishing data and commentary on any antibodies available to the scientific community.
Antibody Purification Handbook
Biotechnology
Cancer treatments
Immune system
Immunology
Reagents for biochemistry | Monoclonal antibody | [
"Chemistry",
"Biology"
] | 4,185 | [
"Biochemistry methods",
"Immune system",
"Biotechnology",
"Immunology",
"Organ systems",
"nan",
"Biochemistry",
"Reagents for biochemistry"
] |
315,050 | https://en.wikipedia.org/wiki/Cytotoxicity | Cytotoxicity is the quality of being toxic to cells. Examples of toxic agents are toxic metals, toxic chemicals, microbe neurotoxins, radiation particles and even specific neurotransmitters when the system is out of balance. Also some types of drugs, e.g alcohol, and some venom, e.g. from the puff adder (Bitis arietans) or brown recluse spider (Loxosceles reclusa) are toxic to cells.
Cell physiology
Treating cells with a cytotoxic compound can result in a variety of outcomes. The cells may undergo necrosis, in which they lose membrane integrity and die rapidly as a result of cell lysis. The cells can stop actively growing and dividing (a decrease in cell viability), or the cells can activate a genetic program of controlled cell death (apoptosis).
Cells undergoing necrosis typically exhibit rapid swelling, lose membrane integrity, shut down metabolism, and release their contents into the environment. Cells that undergo rapid necrosis in vitro do not have sufficient time or energy to activate apoptotic machinery and will not express apoptotic markers. Apoptosis is characterized by well defined cytological and molecular events including a change in the refractive index of the cell, cytoplasmic shrinkage, nuclear condensation and cleavage of DNA into regularly sized fragments. Cells in culture that are undergoing apoptosis eventually undergo secondary necrosis. They will shut down metabolism, lose membrane integrity and lyse.
Measurement
Cytotoxicity assays are widely used by the pharmaceutical industry to screen for cytotoxicity in compound libraries. Researchers can either look for cytotoxic compounds, if they are interested in developing a therapeutic that targets rapidly dividing cancer cells, for instance; or they can screen "hits" from initial high-throughput drug screens for unwanted cytotoxic effects before investing in their development as a pharmaceutical.
Assessing cell membrane integrity is one of the most common ways to measure cell viability and cytotoxic effects. Compounds that have cytotoxic effects often compromise cell membrane integrity. Vital dyes, such as trypan blue or propidium iodide are normally excluded from the inside of healthy cells; however, if the cell membrane has been compromised, they freely cross the membrane and stain intracellular components. Alternatively, membrane integrity can be assessed by monitoring the passage of substances that are normally sequestered inside cells to the outside. One molecule, lactate dehydrogenase (LDH), is commonly measured using LDH assay. LDH reduces NAD to NADH which elicits a colour change by interaction with a specific probe. Protease biomarkers have been identified that allow researchers to measure relative numbers of live and dead cells within the same cell population. The live-cell protease is only active in cells that have a healthy cell membrane, and loses activity once the cell is compromised and the protease is exposed to the external environment. The dead-cell protease cannot cross the cell membrane, and can only be measured in culture media after cells have lost their membrane integrity.
Cytotoxicity can also be monitored using 3-(4, 5-Dimethyl-2-thiazolyl)-2, 5-diphenyl-2H-tetrazolium bromide (MTT) or with 2,3-bis-(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide (XTT), which yields a water-soluble product, or the MTS assay. This assay measures the reducing potential of the cell using a colorimetric reaction. Viable cells will reduce the MTS reagent to a colored formazan product. A similar redox-based assay has also been developed using the fluorescent dye, resazurin. In addition to using dyes to indicate the redox potential of cells in order to monitor their viability, researchers have developed assays that use ATP content as a marker of viability. Such ATP-based assays include bioluminescent assays in which ATP is the limiting reagent for the luciferase reaction.
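Raw readouts from these assays are normally expressed relative to controls. The sketch below uses normalisations that are common conventions for LDH-release and tetrazolium/resazurin-type assays; the control terms and the numerical values are illustrative assumptions rather than values taken from any particular kit.

```python
# Common normalisations for dye-based cytotoxicity/viability readouts.
def percent_cytotoxicity(sample, spontaneous_release, maximum_release):
    """LDH-release style assay: the signal rises as membranes are damaged."""
    return 100.0 * (sample - spontaneous_release) / (maximum_release - spontaneous_release)

def percent_viability(treated, untreated_control, blank=0.0):
    """MTT/XTT/resazurin style assay: the signal rises with viable cells."""
    return 100.0 * (treated - blank) / (untreated_control - blank)

if __name__ == "__main__":
    # Illustrative absorbance values only.
    print(f"cytotoxicity: {percent_cytotoxicity(0.62, 0.20, 1.10):.1f} %")
    print(f"viability:    {percent_viability(0.45, 0.90, blank=0.05):.1f} %")
```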
Cytotoxicity can also be measured by the sulforhodamine B (SRB) assay, WST assay and clonogenic assay.
Suitable assays can be combined and performed sequentially on the same cells in order to reduce assay-specific false positive or false negative results. A possible combination is LDH-XTT-NR (Neutral red assay)-SRB which is also available in a kit format.
A label-free approach to follow the cytotoxic response of adherent animal cells in real-time is based on electric impedance measurements when the cells are grown on gold-film electrodes. This technology is referred to as electric cell-substrate impedance sensing (ECIS). Label-free real-time techniques provide the kinetics of the cytotoxic response rather than just a snapshot like many colorimetric endpoint assays.
Material that has been determined as cytotoxic, typically biomedical waste, is often marked with a symbol that consists of a capital letter "C" inside a triangle.
Prediction
A highly important topic is the prediction of cytotoxicity of chemical compounds based on previous measurements, i.e. in-silico testing. For this purpose many QSAR and virtual screening methods have been suggested. An independent comparison of these methods has been done within the "Toxicology in the 21st century" project.
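As a schematic illustration of such in-silico screening, the sketch below fits a classifier to a handful of numeric molecular descriptors and scores a new compound; the descriptors, data points and model choice are invented for illustration and do not represent any validated QSAR model.

```python
# Toy QSAR-style prediction of cytotoxicity from molecular descriptors.
# All descriptors, data points and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: molecular weight, logP, number of aromatic rings (hypothetical)
X = np.array([[300, 2.1, 1], [450, 4.5, 3], [180, 0.3, 0],
              [520, 5.2, 4], [250, 1.0, 1], [400, 3.8, 2]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = cytotoxic in a previous assay

model = LogisticRegression(max_iter=1000).fit(X, y)
candidate = np.array([[350.0, 3.0, 2.0]])
print("predicted probability of cytotoxicity:",
      round(model.predict_proba(candidate)[0, 1], 2))
```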
Cancers
Some chemotherapies contain cytotoxic drugs whose purpose is to interfere with cell division. These drugs cannot distinguish between normal and malignant cells; they inhibit the overall process of cell division, with the aim of killing the cancer before the host.
Immune system
Antibody-dependent cell-mediated cytotoxicity (ADCC) describes the cell-killing ability of certain lymphocytes, which requires the target cell being marked by an antibody. Lymphocyte-mediated cytotoxicity, on the other hand, does not have to be mediated by antibodies; nor does complement-dependent cytotoxicity (CDC), which is mediated by the complement system.
Three groups of cytotoxic lymphocytes are distinguished:
Cytotoxic T cells
Natural killer cells
Natural killer T cells
See also
Antireticular Cytotoxic Serum
Host–pathogen interaction
Membrane vesicle trafficking
Snake toxins
References
External links
Toxicology
Immunology | Cytotoxicity | [
"Biology",
"Environmental_science"
] | 1,389 | [
"Immunology",
"Toxicology"
] |
315,051 | https://en.wikipedia.org/wiki/Newel | A newel, also called a central pole or support column, is the central supporting pillar of a staircase. It can also refer to an upright post that supports and/or terminates the handrail of a stair banister (the "newel post"). In stairs having straight flights it is the principal post at the foot of the staircase, but the term can also be used for the intermediate posts on landings and at the top of a staircase. Although its primary purpose is structural, newels have long been adorned with decorative trim and designed in different architectural styles.
Newel posts turned on a lathe are solid pieces that can be highly decorative, and they typically need to be fixed to a square newel base for installation. These are sometimes called solid newels in distinction from hollow newels due to varying techniques of construction. Hollow newels are known more accurately as box newel posts. In historic homes, folklore holds that the house plans were placed in the newel upon completion of the house before the newel was capped.
The most common means of fixing a newel post to the floor is to use a newel post fastener, which secures a newel post to a timber joist through either concrete or wooden flooring.
In popular culture
A loose ball cap finial on the newel post at the base of the stairway is a plot device in the 1946 classic It's a Wonderful Life. The same is used in jest in the 1989 film Christmas Vacation, in which Clark Griswold, in an emotional meltdown, cuts a loose finial off a newel post with a chainsaw. He casually exclaims, "Fixed the newel post!" and carries on.
In Family Guy Season 17, Episode 16, “You Can’t Handle the Booth”, Stewie and Brian argue over the semantics of Peter getting stuck in what Stewie calls “banister slats” and Brian corrects him by saying they are called “baluster slats”. Stewie then asks if the “baluster” is the big, round thing at the bottom of the stairs where the staircase begins, to which Brian laughs at him and corrects him by saying “I believe what you are referring to is a newel post”.
The comedy duo Nichols and May performed a routine about a frustrating attempt to obtain operator assistance from a trio of unhelpful telephone operators, the first of whom uses a bizarre phonetic alphabet to clarify the spelling of the requested name, where "newel post" represents the letter N: "'K' as in 'knife'; 'A' as in 'aardvark'; 'P' as in 'pneumonia'; 'L' as in 'luscious'; 'A' as in 'aardvark' again; 'N' as in 'newel post', Kaplan?"
Gallery
See also
Mortgage button
References
External links
Stairways
Architectural elements
Stairs | Newel | [
"Technology",
"Engineering"
] | 603 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
315,084 | https://en.wikipedia.org/wiki/Lip%20reading | Lip reading, also known as speechreading, is a technique of understanding a limited range of speech by visually interpreting the movements of the lips, face and tongue without sound. Estimates of the range of lip reading vary, with some figures as low as 30% because lip reading relies on context, language knowledge, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.
Process
Although speech perception is considered to be an auditory skill, it is intrinsically multimodal, since producing speech requires the speaker to make movements of the lips, teeth and tongue which are often visible in face-to-face communication. Information from the lips and face supports aural comprehension and most fluent listeners of a language are sensitive to seen speech actions (see McGurk effect). The extent to which people make use of seen speech actions varies with the visibility of the speech action and the knowledge and skill of the perceiver.
Phonemes and visemes
The phoneme is the smallest detectable unit of sound in a language that serves to distinguish words from one another. /pit/ and /pik/ differ by one phoneme and refer to different concepts. Spoken English has about 44 phonemes. For lip reading, the number of visually distinctive units - visemes - is much smaller, thus several phonemes map onto a few visemes. This is because many phonemes are produced within the mouth and throat, and are hard to see. These include glottal consonants and most gestures of the tongue. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z]; likewise for nasalisation (e.g. [m] vs. [b]). Homophenes are words that look similar when lip read, but which contain different phonemes. Because there are about three times as many phonemes as visemes in English, it is often claimed that only 30% of speech can be lip read. Homophenes are a crucial source of mis-lip reading.
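The many-to-one phoneme-to-viseme mapping can be illustrated with a small sketch. The grouping below covers only the confusable pairs mentioned in this section and treats one letter as one phoneme; it is a simplified, hypothetical inventory rather than any standard viseme set.

```python
# Simplified, hypothetical phoneme-to-viseme grouping based on the
# visually confusable pairs mentioned above (one letter = one phoneme).
VISEME_CLASS = {
    "p": "P", "b": "P", "m": "P",  # bilabials look alike on the lips
    "f": "F", "v": "F",            # labiodentals
    "t": "T", "d": "T",            # alveolar stops
    "k": "K", "g": "K",            # velars, articulated out of sight
    "s": "S", "z": "S",            # sibilants
}

def viseme_string(phonemes):
    """Collapse a phoneme sequence into its viseme sequence."""
    return "".join(VISEME_CLASS.get(p, p) for p in phonemes)

# Homophenes: different words that share an identical viseme string.
for word in ["pat", "bat", "mat"]:
    print(word, "->", viseme_string(word))
# all three collapse to the same string, so they cannot be told apart by sight
```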
Co-articulation
Visemes can be captured as still images, but speech unfolds in time. The smooth articulation of speech sounds in sequence can mean that mouth patterns may be 'shaped' by an adjacent phoneme: the 'th' sound in 'tooth' and in 'teeth' appears very different because of the vocalic context. This feature of dynamic speech-reading affects lip-reading 'beyond the viseme'.
How can it 'work' with so few visemes?
While visemes offer a useful starting point for understanding lipreading, distinctions between phonemes within a single viseme class can nevertheless be detected to some degree and can help support identification.
Moreover, the statistical distribution of phonemes within the lexicon of a language is uneven. While there are clusters of words which are phonemically similar to each other ('lexical neighbors', such as spit/sip/sit/stick...etc.), others are unlike all other words: they are 'unique' in terms of the distribution of their phonemes ('umbrella' may be an example). Skilled users of the language bring this knowledge to bear when interpreting speech, so it is generally harder to identify a heard word with many lexical neighbors than one with few neighbors. Applying this insight to seen speech, some words in the language can be unambiguously lip-read even when they contain few visemes - simply because no other words could possibly 'fit'.
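The lexical-statistics point can be illustrated by collapsing a toy lexicon to viseme strings, which shows that some words share their visual form with several 'neighbors' while others remain visually unique. The mini-lexicon and the classes below (the same simplified grouping as in the earlier sketch) are illustrative assumptions only.

```python
# Grouping a toy lexicon by viseme string: words in the same group are
# visually ambiguous, a singleton group is unambiguous by sight alone.
from collections import defaultdict

VISEME_CLASS = {"p": "P", "b": "P", "m": "P", "f": "F", "v": "F",
                "t": "T", "d": "T", "k": "K", "g": "K", "s": "S", "z": "S"}

LEXICON = ["pat", "bat", "mat", "mad", "sip", "zip", "fan"]  # toy examples

def viseme_string(word):
    return "".join(VISEME_CLASS.get(p, p) for p in word)

groups = defaultdict(list)
for word in LEXICON:
    groups[viseme_string(word)].append(word)

for viseme_form, words in groups.items():
    status = "visually unique" if len(words) == 1 else "ambiguous"
    print(f"{viseme_form}: {words} ({status})")
```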
Variation in readability and skill
Many factors affect the visibility of a speaking face, including illumination, movement of the head/camera, frame-rate of the moving image and distance from the viewer (see e.g.). Head movement that accompanies normal speech can also improve lip-reading, independently of oral actions. However, when lip-reading connected speech, the viewer's knowledge of the spoken language, familiarity with the speaker and style of speech, and the context of the lip-read material are as important as the visibility of the speaker. While most hearing people are sensitive to seen speech, there is great variability in individual speechreading skill. Good lipreaders are often more accurate than poor lipreaders at identifying phonemes from visual speech.
A simple visemic measure of 'lipreadability' has been questioned by some researchers. The 'phoneme equivalence class' measure takes into account the statistical structure of the lexicon and can also accommodate individual differences in lip-reading ability. In line with this, excellent lipreading is often associated with more broad-based cognitive skills including general language proficiency, executive function and working memory.
Lipreading and language learning in hearing infants and children
The first few months
Seeing the mouth plays a role in the very young infant's early sensitivity to speech, and prepares them to become speakers at 1 – 2 years. In order to imitate, a baby must learn to shape their lips in accordance with the sounds they hear; seeing the speaker may help them to do this. Newborns imitate adult mouth movements such as sticking out the tongue or opening the mouth, which could be a precursor to further imitation and later language learning. Infants are disturbed when audiovisual speech of a familiar speaker is desynchronized and tend to show different looking patterns for familiar than for unfamiliar faces when matched to (recorded) voices. Infants are sensitive to McGurk illusions months before they have learned to speak. These studies and many more point to a role for vision in the development of sensitivity to (auditory) speech in the first half-year of life.
The next six months; a role in learning a native language
Until around six months of age, most hearing infants are sensitive to a wide range of speech gestures - including ones that can be seen on the mouth - which may or may not later be part of the phonology of their native language. But in the second six months of life, the hearing infant shows perceptual narrowing for the phonetic structure of their own language - and may lose the early sensitivity to mouth patterns that are not useful. The speech sounds /v/ and /b/ which are visemically distinctive in English but not in Castilian Spanish are accurately distinguished in Spanish-exposed and English-exposed babies up to the age of around 6 months. However, older Spanish-exposed infants lose the ability to 'see' this distinction, while it is retained for English-exposed infants. Such studies suggest that rather than hearing and vision developing in independent ways in infancy, multimodal processing is the rule, not the exception, in (language) development of the infant brain.
Early language production: one to two years
Given the many studies indicating a role for vision in the development of language in the pre-lingual infant, the effects of congenital blindness on language development are surprisingly small. 18-month-olds learn new words more readily when they hear them, and do not learn them when they are shown the speech movements without hearing. However, children blind from birth can confuse /m/ and /n/ in their own early production of English words – a confusion rarely seen in sighted hearing children, since /m/ and /n/ are visibly distinctive, but auditorially confusable. The role of vision in children aged 1–2 years may be less critical to the production of their native language, since, by that age, they have attained the skills they need to identify and imitate speech sounds. However, hearing a non-native language can shift the child's attention to visual and auditory engagement by way of lipreading and listening in order to process, understand and produce speech.
In childhood
Studies with pre-lingual infants and children use indirect, non-verbal measures to indicate sensitivity to seen speech. Explicit lip-reading can be reliably tested in hearing preschoolers by asking them to 'say aloud what I say silently'. In school-age children, lipreading of familiar closed-set words such as number words can be readily elicited. Individual differences in lip-reading skill, as tested by asking the child to 'speak the word that you lip-read', or by matching a lip-read utterance to a picture, show a relationship between lip-reading skill and age.
In hearing adults: lifespan considerations
While lip-reading silent speech poses a challenge for most hearing people, adding sight of the speaker to heard speech improves speech processing under many conditions. The mechanisms for this, and the precise ways in which lip-reading helps, are topics of current research.
Seeing the speaker helps at all levels of speech processing from phonetic feature discrimination to interpretation of pragmatic utterances.
The positive effects of adding vision to heard speech are greater in noisy than quiet environments,
where by making speech perception easier, seeing the speaker can free up cognitive resources, enabling deeper processing of speech content.
As hearing becomes less reliable in old-age, people may tend to rely more on lip-reading, and are encouraged to do so. However, greater reliance on lip-reading may not always make good the effects of age-related hearing loss. Cognitive decline in aging may be preceded by and/or associated with measurable hearing loss. Thus lipreading may not always be able to fully compensate for the combined hearing and cognitive age-related decrements.
In specific (hearing) populations
A number of studies report anomalies of lipreading in populations with distinctive developmental disorders. Autism: People with autism may show reduced lipreading abilities and reduced reliance on vision in audiovisual speech perception. This may be associated with gaze-to-the-face anomalies in these people. Williams syndrome: People with Williams syndrome show some deficits in speechreading which may be independent of their visuo-spatial difficulties. Specific Language Impairment: Children with SLI are also reported to show reduced lipreading sensitivity, as are people with dyslexia.
Deafness
Debate has raged for hundreds of years over the role of lip-reading ('oralism') compared with other communication methods (most recently, total communication) in the education of deaf people. The extent to which one or other approach is beneficial depends on a range of factors, including level of hearing loss of the deaf person, age of hearing loss, parental involvement and parental language(s). Then there is a question concerning the aims of the deaf person and their community and carers. Is the aim of education to enhance communication generally, to develop sign language as a first language, or to develop skills in the spoken language of the hearing community? Researchers now focus on which aspects of language and communication may be best delivered by what means and in which contexts, given the hearing status of the child and her family, and their educational plans. Bimodal bilingualism (proficiency in both speech and sign language) is one dominant current approach in language education for the deaf child.
Deaf people are often better lip-readers than people with normal hearing. Some deaf people practice as professional lipreaders for instance in forensic lipreading. In deaf people who have a cochlear implant, pre-implant lip-reading skill can predict post-implant (auditory or audiovisual) speech processing. In adults, the later the age of implantation, the better the visual speechreading abilities of the deaf person. For many deaf people, access to spoken communication can be helped when a spoken message is relayed via a trained, professional lip-speaker.
In connection with lipreading and literacy development, children born deaf typically show delayed development of literacy skills which can reflect difficulties in acquiring elements of the spoken language. In particular, reliable phoneme-grapheme mapping may be more difficult for deaf children, who need to be skilled speech-readers in order to master this necessary step in literacy acquisition. Lip-reading skill is associated with literacy abilities in deaf adults and children and training in lipreading may help to develop literacy skills.
Cued Speech uses lipreading with accompanying hand shapes that disambiguate the visemic (consonant) lipshape. Cued speech is said to be easier for hearing parents to learn than a sign language, and studies, primarily from Belgium, show that a deaf child exposed to cued speech in infancy can make more efficient progress in learning a spoken language than from lipreading alone. The use of cued speech in cochlear implantation for deafness is likely to be positive. A similar approach, involving the use of handshapes accompanying seen speech, is Visual Phonics, which is used by some educators to support the learning of written and spoken language.
Teaching and training
The aim of teaching and training in lipreading is to develop awareness of the nature of lipreading, and to practice ways of improving the ability to perceive speech 'by eye'. While the value of lipreading training in improving 'hearing by eye' was not always clear, especially for people with acquired hearing loss, there is evidence that systematic training in alerting students to attend to seen speech actions can be beneficial. Lipreading classes, often called lipreading and managing hearing loss classes, are mainly aimed at adults who have hearing loss. The highest proportion of adults with hearing loss have an age-related, or noise-related loss; with both of these forms of hearing loss, the high-frequency sounds are lost first. Since many of the consonants in speech are high-frequency sounds, speech becomes distorted. Hearing aids help but may not cure this. Lipreading classes have been shown to be of benefit in UK studies commissioned by the Action on Hearing Loss charity (2012).
Trainers recognise that lipreading is an inexact art. Students are taught to watch the lips, tongue and jaw movements, to follow the stress and rhythm of language, to use their residual hearing, with or without hearing aids, to watch expression and body language, and to use their ability to reason and deduce. They are taught the lipreaders' alphabet, groups of sounds that look alike on the lips (visemes) like p, b, m, or f, v. The aim is to get the gist, so as to have the confidence to join in conversation and avoid the damaging social isolation that often accompanies hearing loss. Lipreading classes are recommended for anyone who struggles to hear in noise, and help to adjust to hearing loss.
Tests
Most tests of lipreading were devised to measure individual differences in performing specific speech-processing tasks and to detect changes in performance following training. Lipreading tests have been used with relatively small groups in experimental settings, or as clinical indicators with individual patients and clients. That is, most lipreading tests to date have limited validity as markers of lipreading skill in the general population.
Lipreading and lip-speaking by machine
Automated lip-reading has been a topic of interest in computational engineering, as well as in science fiction movies. The computational engineer Steve Omohundro, among others, pioneered its development. In facial animation, the aim is to generate realistic facial actions, especially mouth movements, that simulate human speech actions. Computer algorithms to deform or manipulate images of faces can be driven by heard or written language. Systems may be based on detailed models derived from facial movements (motion capture); on anatomical modelling of actions of the jaw, mouth and tongue; or on mapping of known viseme- phoneme properties. Facial animation has been used in speechreading training (demonstrating how different sounds 'look'). These systems are a subset of speech synthesis modelling which aim to deliver reliable 'text-to-(seen)-speech' outputs. A complementary aim—the reverse of making faces move in speech—is to develop computer algorithms that can deliver realistic interpretations of speech (i.e. a written transcript or audio record) from natural video data of a face in action: this is facial speech recognition. These models too can be sourced from a variety of data. Automatic visual speech recognition from video has been quite successful in distinguishing different languages (from a corpus of spoken language data). Demonstration models, using machine-learning algorithms, have had some success in lipreading speech elements, such as specific words, from video and for identifying hard-to-lipread phonemes from visemically similar seen mouth actions. Machine-based speechreading is now making successful use of neural-net based algorithms which use large databases of speakers and speech material (following the successful model for auditory automatic speech recognition).
Uses for machine lipreading could include automated lipreading of video-only records, automated lipreading of speakers with damaged vocal tracts, and speech processing in face-to-face video (i.e. from videophone data). Automated lipreading may help in processing noisy or unfamiliar speech. Automated lipreading may contribute to biometric person identification, replacing password-based identification.
The brain
Following the discovery that auditory brain regions, including Heschl's gyrus, were activated by seen speech, the neural circuitry for speechreading was shown to include supra-modal processing regions, especially superior temporal sulcus (all parts) as well as posterior inferior occipital-temporal regions including regions specialised for the processing of faces and biological motion. In some but not all studies, activation of Broca's area is reported for speechreading, suggesting that articulatory mechanisms can be activated in speechreading. Studies of the time course of audiovisual speech processing showed that sight of speech can prime auditory processing regions in advance of the acoustic signal. Better lipreading skill is associated with greater activation in (left) superior temporal sulcus and adjacent inferior temporal (visual) regions in hearing people. In deaf people, the circuitry devoted to speechreading appears to be very similar to that in hearing people, with similar associations of (left) superior temporal activation and lipreading skill.
References
Bibliography
D.Stork and M.Henneke (Eds) (1996) Speechreading by Humans and machines: Models Systems and Applications. Nato ASI series F Computer and Systems sciences Vol 150. Springer, Berlin Germany
E.Bailly, P.Perrier and E.Vatikiotis-Bateson (Eds)(2012) Audiovisual Speech processing, Cambridge University press, Cambridge UK
Hearing By Eye (1987), B.Dodd and R.Campbell (Eds), Erlbaum Associates, Hillsdale NJ, USA; Hearing by Eye II (1997), R.Campbell, B.Dodd and D.Burnham (Eds), Psychology Press, Hove UK
D. W. Massaro (1987, reprinted 2014) Speech perception by ear and by eye, Lawrence Erlbaum Associates, Hillsdale NJ
Further reading
Scottish Sensory Centre 2005: workshop on lipreading
Lipreading Classes in Scotland: the way forward. 2015 Report
AVISA; International Speech Communication Association special interest group focussed on lip-reading and audiovisual speech
Speechreading for information gathering: a survey of scientific sources
Successful Online Speechreading Training
See also
Automated Lip Reading (ALR)
Deaf culture
Human communication
Perception
Audiology
Education for the deaf | Lip reading | [
"Biology"
] | 3,976 | [
"Human communication",
"Behavior",
"Human behavior"
] |
315,139 | https://en.wikipedia.org/wiki/Metronidazole | Metronidazole, sold under the brand name Flagyl among others, is an antibiotic and antiprotozoal medication. It is used either alone or with other antibiotics to treat pelvic inflammatory disease, endocarditis, and bacterial vaginosis. It is effective for dracunculiasis, giardiasis, trichomoniasis, and amebiasis. It is an option for a first episode of mild-to-moderate Clostridioides difficile colitis if vancomycin or fidaxomicin is unavailable. Metronidazole is available orally (by mouth), as a cream or gel, and by slow intravenous infusion (injection into a vein).
Common side effects include nausea, a metallic taste, loss of appetite, and headaches. Occasionally seizures or allergies to the medication may occur. Some state that metronidazole should not be used in early pregnancy, while others state doses for trichomoniasis are safe. Metronidazole is generally considered compatible with breastfeeding.
Metronidazole began to be commercially used in 1960 in France. It is on the World Health Organization's List of Essential Medicines. It is available in most areas of the world. In 2022, it was the 133rd most commonly prescribed medication in the United States, with more than 4 million prescriptions.
Medical uses
Metronidazole has activity against some protozoans and most anaerobic bacteria (both Gram-negative and Gram-positive classes) but not the aerobic bacteria.
Metronidazole is primarily used to treat: bacterial vaginosis, pelvic inflammatory disease (along with other antibacterials like ceftriaxone), pseudomembranous colitis, aspiration pneumonia, rosacea (topical), fungating wounds (topical), intra-abdominal infections, lung abscess, periodontal disease, amoebiasis, oral infections, giardiasis, trichomoniasis, and infections caused by susceptible anaerobic organisms such as Bacteroides, Fusobacterium, Clostridium, Peptostreptococcus, and Prevotella species. It is also often used to eradicate Helicobacter pylori along with other drugs and to prevent infection in people recovering from surgery.
Metronidazole is bitter and so the liquid suspension contains metronidazole benzoate. This may require hydrolysis in the gastrointestinal tract and some sources speculate that it may be unsuitable in people with diarrhea or feeding-tubes in the duodenum or jejunum.
Bacterial vaginosis
Drugs of choice for the treatment of bacterial vaginosis include metronidazole and clindamycin.
An effective treatment option for mixed infectious vaginitis is a combination of clotrimazole and metronidazole.
Trichomoniasis
The 5-nitroimidazole drugs (metronidazole and tinidazole) are the mainstay of treatment for infection with Trichomonas vaginalis. Treatment for both the infected patient and the patient's sexual partner is recommended, even if asymptomatic. Therapy other than 5-nitroimidazole drugs is also an option, but cure rates are much lower.
Giardiasis
Oral metronidazole is a treatment option for giardiasis, however, the increasing incidence of nitroimidazole resistance is leading to the increased use of other compound classes.
Dracunculus
In the case of Dracunculus medinensis (Guinea worm), metronidazole merely facilitates worm extraction rather than killing the worm.
C. difficile colitis
Initial antibiotic therapy for less-severe Clostridioides difficile colitis (pseudomembranous colitis) consists of metronidazole, vancomycin, or fidaxomicin by mouth. In 2017, the IDSA generally recommended vancomycin and fidaxomicin over metronidazole. Vancomycin by mouth has been shown to be more effective in treating people with severe C. difficile colitis.
E. histolytica
Entamoeba histolytica invasive amebiasis is treated with metronidazole for eradication, in combination with diloxanide to prevent recurrence. Although it is generally a standard treatment, it is associated with some side effects.
Preterm births
Metronidazole has also been used in women to prevent preterm birth associated with bacterial vaginosis, amongst other risk factors including the presence of cervicovaginal fetal fibronectin (fFN). Metronidazole was ineffective in preventing preterm delivery in high-risk pregnant women (selected by history and a positive fFN test) and, conversely, the incidence of preterm delivery was found to be higher in women treated with metronidazole.
Hypoxic radiosensitizer
In addition to its antibiotic properties, attempts were also made to use a possible radiation-sensitizing effect of metronidazole in the context of radiation therapy against hypoxic tumors. However, the neurotoxic side effects occurring at the required dosages have prevented the widespread use of metronidazole as an adjuvant agent in radiation therapy. Other nitroimidazoles derived from metronidazole, such as nimorazole, with reduced electron affinity showed less serious neuronal side effects and have found their way into radio-oncological practice for head and neck tumors in some countries.
Perioral dermatitis
Canadian Family Physician has recommended topical metronidazole, with or without oral tetracycline or oral erythromycin (the first- and second-line treatments respectively), as a third-line treatment for perioral dermatitis.
Adverse effects
Common adverse drug reactions (≥1% of those treated with the drug) associated with systemic metronidazole therapy include: nausea, diarrhea, weight loss, abdominal pain, vomiting, headache, dizziness, and metallic taste in the mouth. Intravenous administration is commonly associated with thrombophlebitis. Infrequent adverse effects include: hypersensitivity reactions (rash, itch, flushing, fever), headache, dizziness, vomiting, glossitis, stomatitis, dark urine, and paraesthesia. High doses and long-term systemic treatment with metronidazole are associated with the development of leucopenia, neutropenia, increased risk of peripheral neuropathy, and central nervous system toxicity. Common adverse drug reactions associated with topical metronidazole therapy include local redness, dryness and skin irritation, and eye watering (if applied near the eyes). Metronidazole has been associated with cancer in animal studies. In rare cases, it can also cause temporary hearing loss that reverses after cessation of the treatment.
Some evidence from studies in rats indicates the possibility it may contribute to serotonin syndrome, although no case reports documenting this have been published to date.
Mutagenesis and carcinogenesis
In 2016 metronidazole was listed by the U.S. National Toxicology Program (NTP) as reasonably anticipated to be a human carcinogen. Although some of the testing methods have been questioned, oral exposure has been shown to cause cancer in experimental animals and has also demonstrated some mutagenic effects in bacterial cultures. The relationship between exposure to metronidazole and human cancer is unclear. One study found an excess in lung cancer among women (even after adjusting for smoking), while other studies found either no increased risk, or a statistically insignificant risk.
Metronidazole is listed as a possible carcinogen according to the World Health Organization (WHO) International Agency for Research on Cancer (IARC). A study in those with Crohn's disease also found chromosomal abnormalities in circulating lymphocytes in people treated with metronidazole.
Stevens–Johnson syndrome
Metronidazole alone rarely causes Stevens–Johnson syndrome, but is reported to occur at high rates when combined with mebendazole.
Neurotoxicity
Several studies in human and animal models have recorded the neurotoxicity of metronidazole. One possible mechanism underlying this toxicity is that metronidazole may interfere with postsynaptic central monoaminergic neurotransmission and with immunomodulation. Other research suggests that nitric oxide isoforms and inflammatory cytokines may also play a role.
Drug interactions
Alcohol
Consuming alcohol while taking metronidazole has been suspected in case reports to cause a disulfiram-like reaction with effects that can include nausea, vomiting, flushing of the skin, tachycardia, and shortness of breath. People are often advised not to drink alcohol during systemic metronidazole therapy and for at least 48 hours after completion of treatment. However, some studies call into question the mechanism of the interaction of alcohol and metronidazole,
and a central toxic serotonin reaction has instead been suggested as an explanation for the alcohol intolerance. Metronidazole is also generally thought to inhibit the liver metabolism of propylene glycol (found in some foods, medicines, and many electronic cigarette e-liquids), so propylene glycol may potentially interact with metronidazole in a similar way.
Other drug interactions
Metronidazole is a moderate inhibitor of the enzyme CYP2C9 belonging to the cytochrome P450 family. As a result, metronidazole may interact with medications metabolized by this enzyme. Examples of such medications are lomitapide and warfarin, to name a few.
Pharmacology
Mechanism of action
Metronidazole is of the nitroimidazole class. It is a prodrug that inhibits nucleic acid synthesis by forming nitroso radicals, which disrupt the DNA of microbial cells. Metronidazole activates by receiving an electron from the reduced ferredoxin produced by pyruvate synthase (PFOR) in anaerobic organisms, equivalent to pyruvate dehydrogenase in aerobic organisms, thus turning into a highly reactive radical anion. After the radical loses the electron to its target, it recycles back to the unactivated form of metronidazole, ready to be activated again.
This function only occurs when metronidazole is partially reduced, and because oxygen competes with metronidazole for the electron, this reduction requires a local environment with low oxygen concentration that usually happens only in anaerobic bacteria and protozoans. Therefore, it has relatively little effect upon human cells or aerobic bacteria. Elevation of oxygen level in the organism will decrease its rate of generating the activated metronidazole, but also increase the rate of recycling back to the unactivated metronidazole.
Pharmacokinetics
Oral metronidazole is approximately 80% bioavailable via the gut and peak blood plasma concentrations occur after one to two hours. Food may slow down absorption but does not diminish it. Of the circulating substance, about 20% is bound to plasma proteins. It penetrates well into tissues, the cerebrospinal fluid, the amniotic fluid and breast milk, as well as into abscess cavities.
About 60% of the metronidazole is metabolized by oxidation to the main metabolite hydroxymetronidazole and a carboxylic acid derivative, and by glucuronidation. The metabolites show antibiotic and antiprotozoal activity in vitro. Metronidazole and its metabolites are mainly excreted via the kidneys (77%) and to a lesser extent via the faeces (14%). The biological half-life of metronidazole in healthy adults is eight hours, in infants during the first two months of their lives about 23 hours, and in premature babies up to 100 hours.
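As a worked example of what these half-lives imply, under first-order elimination the remaining fraction of a dose halves once per half-life. The sketch below uses the figures quoted above and is purely illustrative, not dosing guidance.

```python
# Remaining fraction of drug under first-order (exponential) elimination,
# using the half-lives quoted above; illustrative only.
def fraction_remaining(hours, half_life_hours):
    return 0.5 ** (hours / half_life_hours)

if __name__ == "__main__":
    for group, t_half in [("healthy adult", 8.0), ("infant (first two months)", 23.0)]:
        print(f"{group}: {fraction_remaining(24, t_half):.1%} of a dose remains after 24 h")
```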
The biological activity of hydroxymetronidazole is 30% to 65%, and the elimination half-life is longer than that of the parent compound. The serum half-life of hydroxymetronidazole after suppository was 10 hours, 19 hours after intravenous infusion, and 11 hours after a tablet.
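As a rough illustration of what these half-lives imply, the sketch below (not taken from any cited source) assumes simple first-order elimination, a standard pharmacokinetic approximation, and estimates the fraction of a dose remaining after a given time; the function and variable names are invented for the example.

```python
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of drug remaining after t_hours, assuming first-order
    (exponential) elimination: 0.5 ** (t / half-life)."""
    return 0.5 ** (t_hours / half_life_hours)

# Half-lives quoted above: about 8 h in healthy adults, about 23 h in
# infants during their first two months of life.
for label, t_half in [("healthy adult", 8.0), ("young infant", 23.0)]:
    print(f"{label}: {fraction_remaining(24, t_half):.0%} of a dose remains after 24 h")
```

Under this approximation, an 8-hour half-life leaves roughly an eighth of a dose after 24 hours, whereas a 23-hour half-life leaves close to half.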
Resistance
Resistance in parasites is found in T. vaginalis and G. lamblia, but not E. histolytica, and two major mechanisms are observed. The first involves an impaired oxygen-scavenging capability that increases the local concentration of oxygen, leading to decreased activation and increased recycling of metronidazole. The second is associated with lowered levels of pyruvate synthase and ferredoxin, the latter due to lowered transcription of the ferredoxin gene. Strains employing the second mechanism will still respond to a higher dosage of metronidazole.
Resistance in bacteria is documented in Bacteroides spp. that are resistant to nitroimidazoles including metronidazole. In the resistant strains, 5-nitroimidazole reductase is identified as the culprit that actively reduces metronidazole to inactive forms. Eleven types have been identified so far, encoded by nimA through nimK respectively. The gene is carried either on the chromosome or on an episome.
Other mechanisms may include reduced drug activation, efflux pumps, altered redox potential and biofilm formation. In recent years, resistance to metronidazole has become increasingly common, complicating its clinical use.
History
The drug was initially developed by Rhône-Poulenc in the 1950s and licensed to G.D. Searle. Searle was acquired by Pfizer in 2003. The original patent expired in 1982, but evergreening reformulation occurred thereafter.
Brand name
In India, it is sold under the brand name Metrogyl and Flagyl. In Bangladesh, it is available as Amodis, Amotrex, Dirozyl, Filmet, Flagyl, Flamyd, Metra, Metrodol, Metryl, etc. In Pakistan, it is sold under the brand name of Flagyl and Metrozine. In the United States it is sold under the brand name Noritate.
Synthesis
2-Methylimidazole (1) may be prepared via the Debus-Radziszewski imidazole synthesis, or from ethylenediamine and acetic acid, followed by treatment with lime, then Raney nickel. 2-Methylimidazole is nitrated to give 2-methyl-4(5)-nitroimidazole (2), which is in turn alkylated with ethylene oxide or 2-chloroethanol to give metronidazole (3):
Research
Metronidazole is researched for its anti-inflammatory and immunomodulatory properties. Studies have shown that metronidazole can decrease the production of reactive oxygen species (ROS) and nitric oxide by activated immune cells, such as macrophages and neutrophils. Metronidazole's immunomodulatory properties are thought to be related to its ability to decrease the activation of nuclear factor-kappa B (NF-κB), a transcription factor that regulates the expression of pro-inflammatory cytokines, including chemokines, and adhesion molecules. Cytokines are small proteins that are secreted by immune cells and play a key role in the immune response. Chemokines are a type of cytokines that act as chemoattractants, meaning they attract and guide immune cells to specific sites in the body where they are needed. Cell adhesion molecules play an important role in the immune response by facilitating the interaction between immune cells and other cells in the body, such as endothelial cells, which form the lining of blood vessels. By inhibiting NF-κB activation, metronidazole can reduce the production of pro-inflammatory cytokines, such as TNF-alpha, IL-6, and IL-1β.
Metronidazole has been studied in various immunological disorders, including inflammatory bowel disease, periodontitis, and rosacea. In these conditions, metronidazole has been suspected to have anti-inflammatory and immunomodulatory effects that could be beneficial in treatment. Despite the success in treating rosacea with metronidazole, the exact mechanism by which metronidazole is effective in rosacea is not precisely known, i.e., whether its antibacterial properties, its immunomodulatory properties, both, or some other mechanism is responsible. Increased ROS production in rosacea is thought to contribute to the inflammatory process and skin damage, so metronidazole's ability to decrease ROS may explain its mechanism of action in this disease, but this remains speculation.
Metronidazole is also researched as a potential anti-inflammatory agent in periodontitis treatment.
Veterinary use
Metronidazole is used to treat infections of Giardia in dogs, cats, and other companion animals, but it does not reliably clear infection with this organism and is being supplanted by fenbendazole for this purpose in dogs and cats. It is also used for the management of chronic inflammatory bowel disease, gastrointestinal infections, periodontal disease, and systemic infections in cats and dogs. Another common usage is the treatment of systemic and/or gastrointestinal clostridial infections in horses. Metronidazole is used in the aquarium hobby to treat ornamental fish and as a broad-spectrum treatment for bacterial and protozoan infections in reptiles and amphibians. In general, the veterinary community may use metronidazole for any potentially susceptible anaerobic infection. The U.S. Food and Drug Administration (FDA) suggests it only be used when necessary because it has been shown to be carcinogenic in mice and rats, as well as to prevent antimicrobial resistance.
The appropriate dosage of metronidazole varies based on the animal species, the condition being treated and the specific formulation of the product.
References
External links
Drugs developed by AbbVie
Antiprotozoal agents
Disulfiram-like drugs
Fishkeeping
French inventions
IARC Group 2B carcinogens
Nitroimidazole antibiotics
Drugs developed by Pfizer
Wikipedia medicine articles ready to translate
Sanofi
World Health Organization essential medicines | Metronidazole | [
"Biology"
] | 3,971 | [
"Antiprotozoal agents",
"Biocides"
] |
315,203 | https://en.wikipedia.org/wiki/Bionomics | Bionomics (Greek: bio = life; nomos = law) has two different meanings:
the first is the comprehensive study of an organism and its relation to its environment. As translated from the French word Bionomie, its first use in English was in the period of 1885–1890. In this sense it corresponds to what is currently referred to as "ecology".
the other is an economic discipline which studies economy as a self-organized evolving ecosystem.
An example of studies of the first type is in Richard B. Selander's Bionomics, Systematics and Phylogeny of Lytta, a Genus of Blister Beetles (Coleoptera, Meloidae), Illinois Biological Monographs: number 28, 1960.
When applied to territory, Ingegnoli speaks of Landscape Bionomics, defining Landscape as the "level of biological organization integrating complex systems of plants, animals and humans in a living Entity recognizable in a territory as characterized by suitable emerging properties in a determined spatial configuration". (Ingegnoli, 2011, 2015; Ingegnoli, Bocchi, Giglio, 2017)
Bionomics as an economic discipline is used by Igor Flor of "Bionomica, the International Bionomics Institute".
References
Benthos - Bionomics
Ingegnoli V, Bocchi S, Giglio E (2017) Landscape Bionomics: a Systemic Approach to Understand and Govern Territorial Development. WSEAS Transactions on Environment and Development, Vol.13, pp. 189–195
Ingegnoli V (2015) Landscape Bionomics. Biological-Integrated Landscape Ecology. Springer, Heidelberg, Milan, New York. Pp. XXIV + 431
Ingegnoli, V. (2011). Bionomia del paesaggio. L’ecologia del paesaggio biologico-integrata per la formazione di un “medico” dei sistemi ecologici. Springer-Verlag, Milano, pp. XX+340.
External links
Website of "Bionomica"
Website of "The Planetary Society for Bionomic Renewal - terra.ngo"
Ecology
"Biology"
] | 450 | [
"Ecology"
] |
315,212 | https://en.wikipedia.org/wiki/Therac-25 | The Therac-25 is a computer-controlled radiation therapy machine produced by Atomic Energy of Canada Limited (AECL) in 1982 after the Therac-6 and Therac-20 units (the earlier units had been produced in partnership with of France).
The Therac-25 was involved in at least six accidents between 1985 and 1987, in which some patients were given massive overdoses of radiation. Because of concurrent programming errors (also known as race conditions), it sometimes gave its patients radiation doses that were hundreds of times greater than normal, resulting in death or serious injury. These accidents highlighted the dangers of software control of safety-critical systems.
The Therac-25 has become a standard case study in health informatics, software engineering, and computer ethics. It highlights the dangers of engineer overconfidence after the engineers dismissed end-user reports, leading to severe consequences.
History
The French company CGR manufactured the Neptune and Sagittaire linear accelerators.
In the early 1970s, CGR and the Canadian public company Atomic Energy of Canada Limited (AECL) collaborated on the construction of linear accelerators controlled by a DEC PDP-11 minicomputer: the Therac-6, which produced X-rays of up to 6 MeV, and the Therac-20, which could produce X-rays or electrons of up to 20 MeV. The computer added ease of use, although the accelerator was capable of operating without it. CGR developed the software for the Therac-6 and reused some subroutines for the Therac-20.
In 1981, the two companies ended their collaboration agreement. AECL developed a new double pass concept for electron acceleration in a more confined space, changing its energy source from klystron to magnetron. In certain techniques, the electrons produced are used directly, while in others they are made to collide against a tungsten anode to produce X-ray beams. This dual accelerator concept was applied to the Therac-20 and Therac-25, with the latter being much more compact, versatile, and easy to use. It was also more economical for a hospital to have a dual machine that could apply treatments of electrons and X-rays, instead of two machines.
The Therac-25 was designed as a machine controlled by a computer, with some safety mechanisms switched from hardware to software as a result. AECL decided not to duplicate some safety mechanisms, and reused modules and code routines from the Therac-20 for the Therac-25.
The first prototype of the Therac-25 was built in 1976 and was put on the market in late 1982.
The software for the Therac-25 was developed by one person over several years using PDP-11 assembly language. It was an evolution of the Therac-6 software. In 1986, the programmer left AECL. In a subsequent lawsuit, lawyers were unable to identify the programmer or learn about his qualification and experience.
Five machines were installed in the United States and six in Canada.
After the accidents, in 1988 AECL dissolved the AECL Medical section and the company Theratronics International Ltd took over the maintenance of the installed Therac-25 machines.
Design
The machine had three modes of operation, with a turntable moving some apparatus into position for each of those modes: either a light, some scan magnets, or a tungsten target and flattener.
A "field light" mode, which allowed the patient and collimator to be correctly positioned by illuminating the treatment area with visible light.
Direct electron-beam therapy, in which a narrow, low-current beam of high-energy () electrons was scanned over the treatment area by magnets;
Megavolt X-ray (or photon) therapy, which delivered a beam of 25 MeV X-ray photons. The X-ray photons are produced by colliding a high current, narrow beam of electrons with a tungsten target. The X-rays are then passed through a flattening filter, and then measured using an X-ray ion chamber. The flattening filter resembles an inverted ice-cream cone, and it shapes and attenuates the X-rays. The electron beam current required to produce the X-rays is about 100 times greater than that used for electron therapy.
The patient is placed on a fixed stretcher. Above them is a turntable to which the components that modify the electron beam are fixed. The turntable has a position for the X-ray mode (photons), another position for the electron mode and a third position for making adjustments using visible light. In this position an electron beam is not expected, and a light that is reflected in a stainless steel mirror simulates the beam. In this position there is no ion chamber acting as a radiation dosimeter because the radiation beam is not expected to function.
The turntable has some microswitches that indicate the position to the computer. When the plate is in one of the three allowed fixed positions a plunger locks it by interlocking. In this type of machine, electromechanical locks were traditionally used to ensure that the turntable was in the correct position before starting treatment. In the Therac-25, these were replaced by software checks.
Problem description
The six documented accidents occurred when the high-current electron beam generated in X-ray mode was delivered directly to patients. Two software faults were to blame. One was when the operator incorrectly selected X-ray mode before quickly changing to electron mode, which allowed the electron beam to be set for X-ray mode without the X-ray target being in place. A second fault allowed the electron beam to activate during field-light mode, during which no beam scanner was active or target was in place.
Previous models had hardware interlocks to prevent such faults, but the Therac-25 had removed them, depending instead on software checks for safety.
The high-current electron beam struck the patients with approximately 100 times the intended dose of radiation, and over a narrower area, delivering a potentially lethal dose of beta radiation. The feeling was described by patient Ray Cox as "an intense electric shock", causing him to scream and run out of the treatment room. Several days later, radiation burns appeared, and the patients showed the symptoms of radiation poisoning; in three cases, the injured patients later died as a result of the overdose.
Radiation overexposure incidents
Kennestone Regional Oncology Center, 1985
A Therac-25 had been in operation for six months at the Kennestone Regional Oncology Center in Marietta, Georgia when, on June 3, 1985, 61-year-old Katie Yarbrough was receiving radiation therapy following a lumpectomy. She was set to receive a 10-MeV dose of electron therapy to her clavicle. When therapy began, she stated she experienced a "tremendous force of heat...this red-hot sensation." The technician entered the room, to whom Katie stated, "you burned me." The technician assured her this was not possible. She returned home where, in the following days, she experienced reddening of the treatment area. Shortly after, her shoulder became locked in place and she experienced spasms. Within two weeks, the aforementioned redness spread from her chest to her back, indicating that the source of the burn had passed through her, which is the case with radiation burns. The staff at the treatment center did not believe it was possible for the Therac-25 to cause such an injury, and it was treated as a symptom of her cancer. Later, the hospital physicist consulted the AECL about the incident. He calculated that the applied dose was between 15,000 and 20,000 rad (radiation absorbed dose) when she should have been dosed with 200 rad. A dose of 1000 rad can be fatal. In October 1985, Katie sued the hospital and the manufacturer of the machine. In November 1985, the AECL was notified of the lawsuit. It was not until March 1986, after another incident involving the Therac-25, that the AECL informed the FDA that it had received a complaint from the patient.
Due to the radiation overdose, her breast had to be surgically removed, an arm and shoulder were immobilized, and she was in constant pain. The treatment printout function was not activated at the time of treatment and there was no record of the applied radiation data. An out-of-court settlement was reached to resolve the lawsuit.
Ontario Cancer Foundation, 1985
The Therac-25 had been in operation in the clinic for six months when, on July 26, 1985, a 40-year-old patient was receiving her 24th treatment for cervical cancer. The operator activated the treatment, but after five seconds the machine stopped with the error message "H-tilt", the treatment pause indication and the dosimeter indicating that no radiation had been applied. The operator pressed the proceed key to continue. The machine stopped again. The operator repeated the process five times until the machine stopped the treatment. A technician was called and found no problem. The machine was used to treat six other patients on the same day.
The patient complained of burning and swelling in the area and was hospitalized on July 30. She was suspected of a radiation overdose and the machine was taken out of service. On November 3, 1985, the patient died of cancer, although the autopsy mentioned that if she had not died then, she would have had to undergo a hip replacement due to damage from the radiation overdose. A technician estimated that she received between 13,000 and 17,000 rad.
The incident was reported to the FDA and the Canadian Radiation Protection Bureau.
The AECL suspected that there might be an error with three microswitches that reported the position of the turntable. The AECL was unable to replicate a failure of the microswitches and microswitch testing was inconclusive. They then changed the method to be tolerant of one failure and modified the software to check if the turntable was moving or in the treatment position.
Afterward, the AECL claimed that the modifications represented a five-order-of-magnitude increase in safety.
Yakima Valley Memorial Hospital, 1985
In December 1985 a woman developed an erythema with a parallel band pattern after receiving treatment from a Therac-25 unit. Hospital staff sent a letter on January 31, 1986, to the AECL about the incident. The AECL responded in two pages detailing the reasons why radiation overdose was impossible on the Therac-25, stating both machine failure and operator error were not possible.
Six months later, the patient developed chronic ulcers under the skin due to tissue necrosis. She had surgery and skin grafts were placed. The patient continued to live with minor sequelae.
East Texas Cancer Center, Tyler, March 1986
Over two years, this hospital treated more than 500 patients with the Therac-25 with no incident. On March 21, 1986, a patient presented for his ninth treatment session for a tumor on his back. The treatment was set to be 22-MeV of electrons with a dose of 180 rad in an area of 10x17 cm, with an accumulated radiation in 6 weeks of 6000 rad.
The experienced operator entered the session data and realized that she had written an "x" for 'x-ray' instead of an "e" for 'electron beam' as the type of treatment. With the cursor she went up and changed the "x" to an "e" and, since the rest of the parameters were correct, she moved the cursor down to the command box. All parameters were marked "Verified" and the message "Rays ready" was displayed. She hit the "Beam on" key. The machine stopped and displayed the message "Malfunction 54" (error 54). It also showed 'Treatment pause'. The manual said that the "Malfunction 54" message was a "dose input 2" error. A technician later testified that "dose input 2" meant that the radiation delivered was either too high or too low.
The radiation monitor (dosimeter) marked 6 units supplied when it had demanded 202 units. The operator pressed the proceed key to continue. The machine stopped again with the message "Malfunction 54" (error 54) and the dosimeter indicated that it had delivered fewer units than required. The surveillance camera in the radiation room was offline and the intercom had been broken that day.
With the first dose the patient felt an electric shock and heard a crackle from the machine. Since it was his ninth session, he realized that it was not normal. He started to get up from the table to ask for help. At that moment the operator pressed to continue the treatment. The patient felt a shock of electricity through his arm, as if his hand was torn off. He reached the door and began to bang on it until the operator opened it. A physician was immediately called to the scene, where they observed intense erythema in the area, suspecting that it had been a simple electric shock. He sent the patient home. The hospital physicist checked the machine and, because it was calibrated to the correct specification, it continued to treat patients throughout the day. The technicians were unaware that the patient had received a massive dose of radiation between 16,500 and 25,000 rads in less than a second over an area of one cm2. The crackling of the machine had been produced by saturation of the ionization chambers, which had the consequence that they indicated that the applied radiation dose had been very low.
Over the following weeks the patient experienced paralysis of the left arm, nausea, vomiting, and ended up being hospitalized for radiation-induced myelitis of the spinal cord. His legs, mid-diaphragm and vocal cords ended up paralyzed. He also had recurrent herpes simplex skin infections. He died five months after the overdose.
From the day after the accident, AECL technicians checked the machine and were unable to replicate error 54. They checked the grounding of the machine to rule out electric shock as the cause. The machine was back in operation on April 7, 1986.
East Texas Cancer Center, Tyler, April 1986
On April 11, 1986, a patient was to receive electron treatment for skin cancer on the face. The prescription was 10 MeV for an area of 7x10 cm. The operator was the same as the one in the March incident, three weeks earlier. After filling in all the treatment data she realized that she had to change the mode from X to E. She did so and moved the cursor down to the command box. As "Beam ready" was displayed, she pressed the proceed key. The machine produced a loud noise, which was heard through the intercom. Error 54 was displayed. The operator entered the room and the patient described a burning sensation on his face. The patient died on May 1, 1986, just shy of 3 weeks later. The autopsy showed severe radiation damage to the right temporal lobe and brain stem.
The hospital physicist stopped the machine treatments and notified the AECL. After strenuous work, the physicist and operator were able to reproduce the error 54 message. They determined that speed in editing the data entry was a key factor in producing error 54. After much practice, the physicist was able to reproduce error 54 at will. The AECL stated they could not reproduce the error, and only obtained it after following the physicist's instructions to enter the data very rapidly.
Yakima Valley Memorial Hospital, 1987
On January 17, 1987, a patient was to receive a treatment with two film-verification exposures of 4 and 3 rads, plus a 79-rad photon treatment for a total exposure of 86 rads. Film was placed under the patient and 4 rads were administered through a 22 cm × 18 cm opening. The machine was stopped, the aperture was opened to 35 cm × 35 cm and a dose of 3 rad was administered. The machine stopped. The operator entered the room to remove the film plates and adjust the patient's position. He used the hand control inside the room to adjust the turntable. He left the room, forgetting the film plates. In the control room, after seeing the "Beam ready" message, he pressed the key to fire the beams. After 5 seconds the machine stopped and displayed a message that quickly disappeared. Since the machine was paused, the operator pressed the proceed key to continue. The machine stopped, showing "Flatness" as the reason. The operator heard the patient on the intercom, but could not understand him, and entered the room. The patient had felt a severe burning sensation in his chest. The screen showed that he had only been given 7 rad. A few hours later, the patient showed burns on the skin in the area. Four days later the reddening of the area had a banded pattern similar to that produced in the incident the previous year, and for which they had not found the cause. The AECL began an investigation, but was unable to reproduce the event.
The hospital physicist conducted tests with film plates to see if he could recreate the incident, which involved two X-ray parameters with the turntable in field-light position. The film appeared to match the film that was left by mistake under the patient during the accident. It was found the patient was exposed to between 8,000 and 10,000 rad instead of the prescribed 86 rad. The patient died in April 1987 from complications due to radiation overdose. The relatives filed a lawsuit that ended with an out-of-court settlement.
Root causes
A commission attributed the primary cause to generally poor software design and development practices, rather than singling out specific coding errors. In particular, the software was designed so that it was realistically impossible to test it in a rigorous, automated way.
Researchers who investigated the accidents found several contributing causes. These included the following institutional causes:
AECL did not have the software code independently reviewed and chose to rely on in-house code, including the operating system.
AECL did not consider the design of the software during its assessment of how the machine might produce the desired results and what failure modes existed, focusing purely on hardware and asserting that the software was free of bugs.
Machine operators were reassured by AECL personnel that overdoses were impossible, leading them to dismiss the Therac-25 as the potential cause of many incidents.
AECL had never tested the Therac-25 with the combination of software and hardware until it was assembled at the hospital.
The researchers also found several engineering issues:
Several error messages merely displayed the word "MALFUNCTION" followed by a number from 1 to 64. The user manual did not explain or even address the error codes, nor give any indication that these errors could pose a threat to patient safety.
The system distinguished between errors that halted the machine, requiring a restart, and errors which merely paused the machine (which allowed operators to continue with the same settings using a keypress). However, some errors which endangered the patient merely paused the machine, and the frequent occurrence of minor errors caused operators to become accustomed to habitually unpausing the machine.
One failure occurred when a particular sequence of keystrokes was entered on the VT100 terminal that controlled the PDP-11 computer: If the operator were to press "X" to (erroneously) select 25 MeV photon mode, then use "cursor up" to edit the input to "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds of the first keypress and well within the capability of an experienced user of the machine, the edit would not be processed and an overdose could be administered. The edits went unnoticed because the machine took about eight seconds to complete its setup, and changes made during that window were not re-read, so it proceeded with the originally entered settings.
The design did not have any hardware interlocks to prevent the electron-beam from operating in its high-energy mode without the target in place.
The engineer had reused software from the Therac-6 and Therac-20, which used hardware interlocks that masked their software defects. Those hardware safeties had no way of reporting that they had been triggered, so preexisting errors were overlooked.
The hardware provided no way for the software to verify that sensors were working correctly. The table-position system was the first implicated in Therac-25's failures; the manufacturer revised it with redundant switches to cross-check their operation.
The software set a flag variable by incrementing it, rather than by setting it to a fixed non-zero value. Occasionally an arithmetic overflow occurred, causing the flag to return to zero and the software to bypass safety checks.
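The flag-overflow defect in the last point can be sketched abstractly as follows. This is a toy model in Python rather than the original PDP-11 assembly, and the names, the 8-bit width and the loop structure are illustrative assumptions, not taken from the actual Therac-25 code, which is not publicly available.

```python
FLAG_MODULUS = 256  # the shared flag occupied a single byte, so it wraps modulo 256

def count_skipped_checks(n_passes: int) -> int:
    """Simulate a housekeeping loop that increments a 'setup incomplete'
    flag instead of assigning it a fixed non-zero value."""
    flag = 0
    skipped = 0
    for _ in range(n_passes):
        flag = (flag + 1) % FLAG_MODULUS  # bug: increment rather than set
        if flag == 0:
            skipped += 1  # flag wrapped to zero, so the safety check is bypassed
        # otherwise the position/safety check would run here
    return skipped

print(count_skipped_checks(1024))  # prints 4: the check is skipped on every 256th pass
```

Assigning the flag a fixed non-zero value instead of incrementing it would remove the wraparound and with it the silent bypass.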
Leveson notes that a lesson to be drawn from the incident is to not assume that reused software is safe: "A naive assumption is often made that reusing software or using commercial off-the-shelf software will increase safety because the software will have been exercised extensively. Reusing software modules does not guarantee safety in the new system to which they are transferred ..." In response to incidents like those associated with Therac-25, the IEC 62304 standard was created, which introduces development life cycle standards for medical device software and specific guidance on using software of unknown pedigree.
See also
1962 Mexico City radiation accident
1990 Clinic of Zaragoza radiotherapy accident
Ciudad Juárez cobalt-60 contamination incident
Computer ethics
Goiânia accident
High integrity software
IEC 62304
Ionizing radiation
List of civilian radiation accidents
List of orphan source incidents
Nuclear and radiation accidents and incidents
Radiation protection
Radioactive scrap metal
Samut Prakan radiation accident
List of software bugs
References
Further reading
(short summary of the Therac-25 Accidents)
Software bugs
Health disasters
Nuclear medicine
Health disasters in Canada
Engineering failures
Radiation accidents and incidents | Therac-25 | [
"Technology",
"Engineering"
] | 4,461 | [
"Systems engineering",
"Reliability engineering",
"Technological failures",
"Engineering failures",
"Civil engineering"
] |
315,414 | https://en.wikipedia.org/wiki/Gelfand%E2%80%93Naimark%20theorem | In mathematics, the Gelfand–Naimark theorem states that an arbitrary C*-algebra A is isometrically *-isomorphic to a C*-subalgebra of bounded operators on a Hilbert space. This result was proven by Israel Gelfand and Mark Naimark in 1943 and was a significant point in the development of the theory of C*-algebras since it established the possibility of considering a C*-algebra as an abstract algebraic entity without reference to particular realizations as an operator algebra.
Details
The Gelfand–Naimark representation π is the Hilbert space analogue of the direct sum of representations πf of A where f ranges over the set of pure states of A and πf is the irreducible representation associated to f by the GNS construction. Thus the Gelfand–Naimark representation acts on the Hilbert direct sum of the Hilbert spaces Hf by
π(x) (⊕f ξf) = ⊕f πf(x) ξf.
π(x) is a bounded linear operator since it is the direct sum of a family of operators, each one having norm ≤ ||x||.
Theorem. The Gelfand–Naimark representation of a C*-algebra is an isometric *-representation.
It suffices to show the map π is injective, since for *-morphisms of C*-algebras injective implies isometric. Let x be a non-zero element of A. By the Krein extension theorem for positive linear functionals, there is a state f on A such that f(z) ≥ 0 for all non-negative z in A and f(−x* x) < 0. Consider the GNS representation πf with cyclic vector ξ. Since
‖πf(x) ξ‖² = ⟨πf(x* x) ξ, ξ⟩ = f(x* x) > 0,
it follows that πf (x) ≠ 0, so π (x) ≠ 0, so π is injective.
The construction of Gelfand–Naimark representation depends only on the GNS construction and therefore it is meaningful for any Banach *-algebra A having an approximate identity. In general (when A is not a C*-algebra) it will not be a faithful representation. The closure of the image of π(A) will be a C*-algebra of operators called the C*-enveloping algebra of A. Equivalently, we can define the
C*-enveloping algebra as follows: Define a real valued function on A by
‖x‖′ = sup √f(x* x)
as f ranges over pure states of A. This is a semi-norm, which we refer to as the C* semi-norm of A. The set I of elements of A whose semi-norm is 0 forms a two-sided ideal in A closed under involution. Thus the quotient vector space A / I is an involutive algebra and the semi-norm
factors through a norm on A / I, which, except for completeness, is a C* norm on A / I (these are sometimes called pre-C*-norms). Taking the completion of A / I relative to this pre-C*-norm produces a C*-algebra B.
By the Krein–Milman theorem one can show without too much difficulty that for x an element of the Banach *-algebra A having an approximate identity:
sup { f(x* x) : f a pure state of A } = sup { f(x* x) : f a state of A }.
It follows that an equivalent form for the C* norm on A is to take the above supremum over all states.
The universal construction is also used to define universal C*-algebras of isometries.
Remark. The Gelfand representation or Gelfand isomorphism for a commutative C*-algebra A with unit is an isometric *-isomorphism from A to the algebra of continuous complex-valued functions on the space of multiplicative linear functionals, which in the commutative case are precisely the pure states, of A with the weak* topology.
See also
GNS construction
Stinespring factorization theorem
Gelfand–Raikov theorem
Koopman operator
Tannaka–Krein duality
References
(also available from Google Books)
, also available in English from North Holland press, see in particular sections 2.6 and 2.7.
Operator theory
Theorems in functional analysis
C*-algebras | Gelfand–Naimark theorem | [
"Mathematics"
] | 853 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
315,459 | https://en.wikipedia.org/wiki/Endoderm | Endoderm is the innermost of the three primary germ layers in the very early embryo. The other two layers are the ectoderm (outside layer) and mesoderm (middle layer). Cells migrating inward along the archenteron form the inner layer of the gastrula, which develops into the endoderm.
The endoderm consists at first of flattened cells, which subsequently become columnar. It forms the epithelial lining of multiple systems.
In plant biology, endoderm corresponds to the innermost part of the cortex (bark) in young shoots and young roots often consisting of a single cell layer. As the plant becomes older, more endoderm will lignify.
Production
The following describes the tissues produced by the endoderm.
The embryonic endoderm develops into the interior linings of two tubes in the body, the digestive and respiratory tube.
Liver and pancreas cells are believed to derive from a common precursor.
In humans, the endoderm can differentiate into distinguishable organs after 5 weeks of embryonic development.
Additional images
See also
Hypoblast of primitive endoderm
Ectoderm
Germ layer
Histogenesis
Mesoderm
Organogenesis
Endodermal sinus tumor
Gastrulation
Cell differentiation
Triploblasty
List of human cell types derived from the germ layers
References
Germ layers
Developmental biology
Embryology
Gastrulation | Endoderm | [
"Biology"
] | 292 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
315,474 | https://en.wikipedia.org/wiki/List%20of%20named%20alloys | This is a list of named alloys grouped alphabetically by the metal with the highest percentage. Within these headings, the alloys are also grouped alphabetically. Some of the main alloying elements are optionally listed after the alloy names.
Alloys by base metal
Aluminium
AA-8000: used for electrical building wire in the U.S. per the National Electrical Code, replacing AA-1350.
Al–Li (2.45% lithium): aerospace applications, including the Space Shuttle
Alnico (nickel, cobalt): used for permanent magnets
Aluminium–Scandium (scandium)
Birmabright (magnesium, manganese): used in car bodies, mainly used by Land Rover cars.
Devarda's alloy (45% Al, 50% Cu, 5% Zn): chemical reducing agent.
Duralumin (copper)
Hiduminium or R.R. alloys (2% copper, iron, nickel): used in aircraft pistons
Hydronalium (up to 12% magnesium, 1% manganese): used in shipbuilding, resists seawater corrosion
Italma (3.5% magnesium, 0.3% manganese): formerly used to make coinage of the Italian lira
Magnalium (5-50% magnesium): used in airplane bodies, ladders, pyrotechnics, etc.
Ni-Ti-Al (titanium 40%, aluminum 10%), also called Nital
Y alloy (4% copper, nickel, magnesium)
Aluminium also forms complex metallic alloys, like β–Al–Mg, ξ'–Al–Pd–Mn, and T–Al3Mn.
Beryllium
Lockalloy (62% beryllium, 38% aluminum)
Bismuth
Bismanol (manganese); magnetic alloy from the 1950s using powder metallurgy
Cerrosafe (lead, tin, cadmium)
Rose metal (lead, tin)
Wood's metal (lead, tin, cadmium)
Chromium
Chromium hydride (hydrogen)
Nichrome (nickel)
Ferrochrome (iron)
Cobalt
Elgiloy (cobalt, chromium, nickel, iron, molybdenum, manganese, carbon)[Cr-Co-Ni]
Megallium (cobalt, chromium, molybdenum)
Stellite (chromium, tungsten, carbon)
Talonite (tungsten, molybdenum, carbon)
Ultimet (chromium, nickel, iron, molybdenum, tungsten)
Vitallium (chromium, molybdenum)
Copper
Arsenical copper (arsenic)
Beryllium copper (0.5-3% beryllium, 99.5%-97% copper)
Billon (silver)
Brass (zinc) see also Brass §Brass types for longer list
Calamine brass (zinc)
Chinese silver (zinc)
Dutch metal (zinc)
Gilding metal (zinc)
Muntz metal (zinc)
Pinchbeck (zinc)
Prince's metal (zinc)
Tombac (zinc)
Bronze (tin, aluminum or other element)
Aluminium bronze (aluminum)
Arsenical bronze (arsenic, tin)
Bell metal (tin)
Bismuth bronze (bismuth)
Brastil (alloy, bronze)
Florentine bronze (aluminium or tin)
Glucydur (beryllium, iron)
Guanín (gold, silver)
Gunmetal (tin, zinc)
Phosphor bronze (tin and phosphorus)
Ormolu (zinc)
Silicon bronze (tin, arsenic, silicon)
Speculum metal (tin)
White bronze (tin, zinc)
Constantan (nickel)
Copper hydride (hydrogen)
Copper–tungsten (tungsten)
Corinthian bronze (gold, silver)
Cunife (nickel, iron)
Cupronickel (nickel)
CuSil (silver)
Cymbal alloys (tin)
Devarda's alloy (aluminium, zinc)
Hepatizon (gold, silver)
Manganin (manganese, nickel)
Melchior (nickel); high corrosion resistance, used in marine applications in condenser tubes
Nickel silver (nickel)
Nordic gold (aluminium, zinc, tin)
Shakudo (gold)
Tellurium copper (tellurium)
Tumbaga (gold)
Gallium
AlGa (aluminium, gallium)
Galfenol (iron)
Galinstan (indium, tin)
Gold
See also notes below
Colored gold (silver, copper)
Crown gold (silver, copper)
Electrum (silver)
Purple gold (aluminum)
Rhodite (rhodium)
Rose gold (copper)
Tumbaga (copper)
White gold (nickel, palladium)
Indium
Field's metal (bismuth, tin)
Iron
Most iron alloys are steels, with carbon as a major alloying element.
Elinvar (nickel, chromium)
Fernico (nickel, cobalt)
Ferroalloys (:Category:Ferroalloys)
Ferroboron
Ferrocerium
Ferrochrome
Ferromagnesium
Ferromanganese
Ferromolybdenum
Ferronickel
Ferrophosphorus
Ferrosilicon
Ferrotitanium
Ferrouranium
Ferrovanadium
Invar (nickel)
Cast iron (carbon)
Pig iron (carbon)
Iron hydride (hydrogen)
Kanthal (20–30% chromium, 4–7.5% aluminium); used in heating elements, including e-cigarettes
Kovar (nickel, cobalt)
Spiegeleisen (manganese, carbon, silicon)
Staballoy (stainless steel) (manganese, chromium, carbon) - see also Uranium below
Steel (carbon) (:Category:Steels)
Bulat steel
Chromoly (chromium, molybdenum)
Crucible steel
Damascus steel
Ducol
Hadfield steel
High-speed steel
Mushet steel
HSLA steel
Maraging steel
Reynolds 531
Silicon steel (silicon)
Spring steel
Stainless steel (chromium, nickel)
AL-6XN
Alloy 20
Celestrium
Marine grade stainless
Martensitic stainless steel
Alloy 28 or Sanicro 28 (nickel, chromium)
Surgical stainless steel (chromium, molybdenum, nickel)
Zeron 100 (chromium, nickel, molybdenum)
Tool steel (tungsten or manganese)
Silver steel (US:Drill rod) (manganese, chromium, silicon)
Weathering steel ('Cor-ten') (silicon, manganese, chromium, copper, vanadium, nickel)
Wootz steel
Lead
Molybdochalkos (copper)
Solder (tin)
Terne (tin)
Type metal (tin, antimony)
Magnesium
Elektron
Magnox (0.8% aluminium, 0.004% beryllium); used in nuclear reactors
T-Mg–Al–Zn (Bergman phase) is a complex metallic alloy
Manganese
MN40, used in a foil for brazing
MN70, used in a foil for brazing
Ferromanganese
Spiegeleisen
Mercury
Amalgam
Ashtadhatu
Nickel
Alloy 230
Alnico (aluminium, cobalt); used in magnets
Alumel (manganese, aluminium, silicon)
Brightray (20% chromium, iron, rare earths); originally for hard-facing valve seats
Chromel (chromium)
Cupronickel (bronze, copper)
Ferronickel (iron)
German silver (copper, zinc)
Hastelloy (molybdenum, chromium, sometimes tungsten)
Inconel (chromium, iron)
Inconel 686 (chromium, molybdenum, tungsten)
Invar
Monel metal (copper, iron, manganese)
Nichrome (chromium)
Nickel-carbon (carbon)
Nicrosil (chromium, silicon, magnesium)
Nimonic (chromium, cobalt, titanium), used in jet engine turbine blades
Nisil (silicon)
Nitinol (titanium, shape memory alloy)
Magnetically "soft" alloys
Mu-metal (iron)
Permalloy (iron, molybdenum)
Supermalloy (molybdenum)
Brass (copper, zinc, manganese)
Nickel hydride (hydrogen)
Stainless steel (chromium, molybdenum, carbon, manganese, sulphur, phosphorus, silicon)
Coin silver (nickel)
Platinum
Platinum-iridium
Plutonium
Plutonium–aluminium
Plutonium–cerium
Plutonium–cerium–cobalt
Plutonium–gallium (gallium)
Plutonium–gallium–cobalt
Plutonium–zirconium
Potassium
NaK (sodium)
KLi (lithium)
Rare earths
Mischmetal (various rare earth elements)
Terfenol-D (terbium, dysprosium, and iron), a highly magnetostrictive alloy used in portable speakers such as the SoundBug device
Ferrocerium (cerium, iron)
Neodymium magnets, another strong permanent magnet
SmCo (cobalt); used for permanent magnets in guitar pickups, headphones, satellite transponders, etc.
Scandium hydride (hydrogen)
Lanthanum-nickel alloy (nickel)
Rhodium
Pseudo palladium (rhodium–silver alloy)
Silver
Argentium sterling silver (copper, germanium)
Billon
Britannia silver (copper)
Doré bullion (gold)
Dymalloy (copper, metal matrix composite with diamond)
Electrum (gold)
Goloid (copper, gold)
Platinum sterling (platinum)
Shibuichi (copper)
Sterling silver (copper)
Tibetan silver (copper)
Titanium
6al–4v (aluminium, vanadium)
Beta C (vanadium, chromium, others)
Gum metal (niobium, tantalum, zirconium, oxygen); used in spectacle frames, precision screws, etc.
Titanium hydride (hydrogen)
Titanium nitride (nitrogen)
Titanium gold (gold)
Titanium carbide [TiC]
Tin
Babbitt (copper, antimony, lead; used for bearing surfaces)
Britannium (copper, antimony)
Pewter (antimony, copper)
Queen's metal (antimony, lead, and bismuth)
Solder (lead, antimony)
Terne (lead)
White metal, (copper or lead); used as base metal for plating, in bearings, etc.
Uranium
Staballoy (depleted uranium with other metals, usually titanium or molybdenum). See also Iron above for Staballoy (stainless steel).
Uranium hydride (hydrogen)
Mulberry (alloy) (niobium, zirconium)
Zinc
Zamak (aluminium, magnesium, copper)
Electroplated zinc alloys
See also
Complex metallic alloys
Heusler alloy, a range of ferromagnetic alloys (66% copper, cobalt, iron, manganese, nickel or palladium)
High-entropy alloys
Intermetallic compounds
List of brazing alloys
Pot metal; inexpensive casting metal of non-specific composition
Notes
References
Alloys | List of named alloys | [
"Chemistry"
] | 2,337 | [
"Alloys",
"Chemical mixtures",
"nan"
] |
315,487 | https://en.wikipedia.org/wiki/Nickel%20silver | Nickel silver, maillechort, German silver, argentan, new silver, nickel brass, albata, or alpacca is a cupronickel (copper with nickel) alloy with the addition of zinc. The usual formulation is 60% copper, 20% nickel and 20% zinc. Nickel silver does not contain the element silver. It is named for its silvery appearance, which can make it attractive as a cheaper and more durable substitute. It is also well suited for being plated with silver.
A naturally occurring ore composition in China was smelted into the alloy known as báitóng or paktong (白銅, 'white copper' or cupronickel). The name German Silver refers to the artificial recreation of the natural ore composition by German metallurgists. All modern, commercially important, nickel silvers (such as those standardized under ASTM B122) contain zinc and are sometimes considered a subset of brass.
History
Nickel silver was first used in China, where it was smelted from readily available unprocessed ore. During the Qing dynasty, it was "smuggled into various parts of the East Indies", despite a government ban on the export of nickel silver. It became known in the West from imported wares called báitóng (Mandarin) or paktong (Cantonese) (白銅, literally "white copper"), for which the silvery metal colour was used to imitate sterling silver. According to Berthold Laufer, it was identical to khar sini, one of the seven metals recognized by Jābir ibn Hayyān.
In Europe, consequently, it was at first called paktong, which is about the way 白銅 is pronounced in the Cantonese dialect. The earliest European mention of paktong occurs in the year 1597. From then until the end of the eighteenth century there are references to it as having been exported from Canton to Europe.
German artificial recreation of the natural ore composition, however, began to appear from about 1750 onward. In 1770, the Suhl metalworks were able to produce a similar alloy. In 1823, a German competition was held to perfect the production process: the goal was to develop an alloy that possessed the closest visual similarity to silver. The brothers Henniger in Berlin and Ernst August Geitner in Schneeberg independently achieved this goal. The manufacturer Berndorf named the trademark brand Alpacca, which became widely known in northern Europe for nickel silver. In 1830, the German process of manufacture was introduced into England, while exports of paktong from China gradually stopped. In 1832, a form of German silver was also developed in Birmingham, England.
After the modern process for the production of electroplated nickel silver was patented in 1840 by George Richards Elkington and his cousin Henry Elkington in Birmingham, the development of electroplating caused nickel silver to become widely used. It formed an ideal, strong and bright substrate for the plating process. It was also used unplated in applications such as cutlery.
Uses
Nickel silver first became popular as a base metal for silver-plated cutlery and other silverware, notably the electroplated wares called EPNS (electroplated nickel silver). It is used in zippers, costume jewelry, for making musical instruments (e.g., flutes, clarinets), and is preferred for the track in electric model railway layouts, as its oxide is conductive. Better quality keys and lock cylinder pins are made of nickel silver for durability under heavy use. The alloy has been widely used in the production of coins (e.g. Portuguese escudo and the former GDR marks). Its industrial and technical uses include marine fittings and plumbing fixtures for its corrosion resistance, and heating coils for its high electrical resistance.
In the nineteenth century, particularly after 1868, North American Plains Indian metalsmiths were able to easily acquire sheets of German silver. They used them to cut, stamp, and cold hammer a wide range of accessories and also horse gear. Presently, Plains metalsmiths use German silver for pendants, pectorals, bracelets, armbands, hair plates, conchas (oval decorative plates for belts), earrings, belt buckles, necktie slides, stickpins, dush-tuhs, and tiaras. Nickel silver is the metal of choice among contemporary Kiowa and Pawnee in Oklahoma. Many of the metal fittings on modern higher-end equine harness and tack are of nickel silver.
Early in the twentieth century, German silver was used by automobile manufacturers before the advent of steel sheet metal. For example, the famous Rolls-Royce Silver Ghost of 1907 used German silver. After about 1920, it became widely used for pocketknife bolsters, due to its machinability and corrosion resistance. Prior to this, the most common metal was iron.
Musical instruments, including the flute, saxophone, trumpet, and French horn, string instrument frets, and electric guitar pickup parts, can be made of nickel silver. Many professional-level French horns are entirely made of nickel silver. Some saxophone manufacturers, such as Keilwerth, offer saxophones made of nickel silver (Shadow model); these are far rarer than traditional lacquered brass saxophones. Student-level flutes and piccolos are also made of silver-plated nickel silver, although upper-level models are likely to use sterling silver. Nickel silver produces a bright and powerful sound quality; an additional benefit is that the metal is harder and more corrosion resistant than brass. Because of its hardness, it is used for most clarinet, flute, oboe and similar wind instrument keys, normally silver-plated. It is used to produce the tubes (called staples) onto which oboe reeds are tied.
Many parts of brass instruments are made of nickel silver, such as tubes, braces or valve mechanism. Trombone slides of many manufacturers offer a lightweight nickel silver (LT slide) option for faster slide action and weight balance. The material was used in the construction of the National tricone resophonic guitar. The frets of guitar, mandolin, banjo, bass, and related string instruments are typically nickel silver. Nickel silver is sometimes used as ornamentation on the great highland bagpipe.
Nickel silver is also used in artworks. The Dutch sculptor Willem Lenssinck has made several pieces from German silver. Outdoors art made from this material easily withstands all kinds of weather.
See also
Argentium sterling silver
Britannia silver
Britannia metal
Cupronickel
Sheffield plate
Nickel Directive
List of named alloys
References
External links
Silver's Sterling Qualities
Chinese inventions
Copper alloys
Nickel alloys
Economy of the Qing dynasty
Silver | Nickel silver | [
"Chemistry"
] | 1,341 | [
"Nickel alloys",
"Alloys",
"Copper alloys"
] |
315,578 | https://en.wikipedia.org/wiki/Information%20processing%20%28psychology%29 | In cognitive psychology, information processing is an approach to the goal of understanding human thinking that treats cognition as essentially computational in nature, with the mind being the software and the brain being the hardware. It arose in the 1940s and 1950s, after World War II. The information processing approach in psychology is closely allied to the computational theory of mind in philosophy; it is also related to cognitivism in psychology and functionalism in philosophy.
Two types
Information processing may be vertical or horizontal, either of which may be centralized or decentralized (distributed). The horizontally distributed processing approach of the mid-1980s became popular under the name connectionism. The connectionist network is made up of different nodes, and it works by a "priming effect," and this happens when a "prime node activates a connected node". But "unlike in semantic networks, it is not a single node that has a specific meaning, but rather the knowledge is represented in a combination of differently activated nodes"(Goldstein, as cited in Sternberg, 2012).
Models and theories
There are several proposed models or theories that describe the way in which we process information.
Every individual has a different information overload point for the same information load, because individuals have different information-processing capacities.
Sternberg's triarchic theory of intelligence
Sternberg's theory of intelligence is made up of three different components: creative, analytical, and practical abilities. Creativeness is the ability to have new original ideas, and being analytical can help a person decide whether the idea is a good one or not. "Practical abilities are used to implement the ideas and persuade others of their value". At the core of Sternberg's theory is cognition, and with it information processing. In Sternberg's theory, information processing is made up of three different parts: metacomponents, performance components, and knowledge-acquisition components. These processes move from higher-order executive functions to lower-order functions. Metacomponents are used for planning and evaluating problems, while performance components follow the orders of the metacomponents, and the knowledge-acquisition component learns how to solve the problems. This theory in action can be explained by working on an art project. First is a decision about what to draw, then a plan and a sketch. During this process there is simultaneous monitoring of the process, and whether it is producing the desired accomplishment. All these steps fall under the metacomponent processing, and the performance component is the art. The knowledge-acquisition portion is the learning or improving of drawing skills.
Information processing model: the working memory
Information processing has been described as "the sciences concerned with gathering, manipulating, storing, retrieving, and classifying recorded information". According to the Atkinson-Shiffrin memory model or multi-store model, for information to be firmly implanted in memory it must pass through three stages of mental processing: sensory memory, short-term memory, and long-term memory.
An example of this is the working memory model. This includes the central executive, phonologic loop, episodic buffer, visuospatial sketchpad, verbal information, long-term memory, and visual information. The central executive is like the secretary of the brain. It decides what needs attention and how to respond. The central executive then leads to three different subsections. The first is phonological storage, subvocal rehearsal, and the phonological loop. These sections work together to understand words, put the information into memory, and then hold the memory. The result is verbal information storage. The next subsection is the visuospatial sketchpad which works to store visual images. The storage capacity is brief but leads to an understanding of visual stimuli. Finally, there is an episodic buffer. This section is capable of taking information and putting it into long-term memory. It is also able to take information from the phonological loop and visuospatial sketchpad, combining them with long-term memory to make "a unitary episodic representation.
In order for these to work, the sensory register takes in information via the five senses: visual, auditory, tactile, olfactory, and taste. These are all present from birth and are able to handle simultaneous processing (e.g., food – taste it, smell it, see it). In general, learning benefits occur when there is a developed process of pattern recognition. The sensory register has a large capacity and its behavioral response is very short (1–3 seconds).
Within this model, sensory store and short-term memory or working memory have limited capacity. Sensory store is able to hold very limited amounts of information for very limited amounts of time. This phenomenon is very similar to having a picture taken with a flash. For a few brief moments after the flash goes off, the flash seems to still be there. However, it is soon gone and there is no way to know it was there. Short-term memory holds information for slightly longer periods of time, but still has a limited capacity. According to Linden, "The capacity of STM had initially been estimated at "seven plus or minus two" items, which fits the observation from neuropsychological testing that the average digit span of healthy adults is about seven. However, it emerged that these numbers of items can only be retained if they are grouped into so-called chunks, using perceptual or conceptual associations between individual stimuli." Its duration is 5–20 seconds before it is out of the subject's mind. This often occurs with the names of people one has just been introduced to. Images or information based on meaning are stored here as well, but they decay without rehearsal or repetition of such information.
On the other hand, long-term memory has a potentially unlimited capacity and its duration is as good as indefinite. Although sometimes it is difficult to access, it encompasses everything learned until this point in time. One might become forgetful or feel as if the information is on the tip of the tongue.
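As a purely illustrative toy, not a claim about how memory is actually implemented, the capacity and duration figures above can be encoded in a few lines of Python; every class, method and parameter name here is invented for the example.

```python
from collections import deque

class MultiStoreMemory:
    """Toy rendering of the multi-store model's capacity limits."""

    def __init__(self, stm_capacity: int = 7):
        self.sensory = []                              # large capacity, decays in ~1-3 s
        self.short_term = deque(maxlen=stm_capacity)   # about 7 +/- 2 chunks, ~5-20 s
        self.long_term = set()                         # effectively unlimited

    def perceive(self, stimulus):
        self.sensory.append(stimulus)                  # enters the sensory register

    def attend(self, stimulus):
        # Attention moves an item into short-term memory; the deque silently
        # drops the oldest chunk once capacity is exceeded (displacement).
        if stimulus in self.sensory:
            self.short_term.append(stimulus)

    def rehearse(self, stimulus):
        # Rehearsal is what transfers a chunk from short-term into long-term memory.
        if stimulus in self.short_term:
            self.long_term.add(stimulus)

memory = MultiStoreMemory()
for digit in "629451837":          # nine digits: more than the toy STM can hold
    memory.perceive(digit)
    memory.attend(digit)
print(list(memory.short_term))     # only the most recent seven digits survive
```

Chunking, grouping items into meaningful units as in the Linden quotation above, is what lets real subjects exceed this limit; the fixed-length deque deliberately ignores it.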
Cognitive development theory
Another approach to viewing the ways in which information is processed in humans was suggested by Jean Piaget in what is called the Piaget's Cognitive Development Theory. Piaget developed his model based on development and growth. He identified four different stages between different age brackets characterized by the type of information and by a distinctive thought process. The four stages are: the sensorimotor (from birth to 2 years), preoperational (2–6 years), concrete operational (6–11 years), and formal operational periods (11 years and older). During the sensorimotor stage, newborns and toddlers rely on their senses for information processing to which they respond with reflexes. In the preoperational stage, children learn through imitation and remain unable to take other people's point of view. The concrete operational stage is characterized by the developing ability to use logic and to consider multiple factors to solve a problem. The last stage is the formal operational, in which preadolescents and adolescents begin to understand abstract concepts and to develop the ability to create arguments and counter arguments.
Furthermore, adolescence is characterized by a series of changes in the biological, cognitive, and social realms. In the cognitive area, the brain's prefrontal cortex as well as the limbic system undergoes important changes. The prefrontal cortex is the part of the brain that is active when engaged in complicated cognitive activities such as planning, generating goals and strategies, intuitive decision-making, and metacognition (thinking about thinking). This is consistent with Piaget's last stage of formal operations. The prefrontal cortex reaches full maturity between adolescence and early adulthood. The limbic system is the part of the brain that modulates reward sensitivity based on changes in the levels of neurotransmitters (e.g., dopamine) and emotions.
In short, cognitive abilities vary according to our development and stage in life. It is at the adult stage that we are better able to plan, to process and comprehend abstract concepts, and to evaluate risks and benefits more aptly than an adolescent or child could.
In computing, information processing broadly refers to the use of algorithms to transform data—the defining activity of computers; indeed, a broad computing professional organization is known as the International Federation for Information Processing (IFIP). It is essentially synonymous with the terms data processing or computation, although with a more general connotation.
See also
References
Bibliography
Cognitive psychology
Information | Information processing (psychology) | [
"Biology"
] | 1,733 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
315,659 | https://en.wikipedia.org/wiki/Golden%20angle | In geometry, the golden angle is the smaller of the two angles created by sectioning the circumference of a circle according to the golden ratio; that is, into two arcs such that the ratio of the length of the smaller arc to the length of the larger arc is the same as the ratio of the length of the larger arc to the full circumference of the circle.
Algebraically, let a+b be the circumference of a circle, divided into a longer arc of length a and a smaller arc of length b such that
a/b = (a + b)/a.
The golden angle is then the angle subtended by the smaller arc of length b. It measures approximately 137.508° or in radians approximately 2.39996.
The name comes from the golden angle's connection to the golden ratio φ; the exact value of the golden angle is
360°(1 − 1/φ) = 360°(2 − φ) = 360°/φ² ≈ 137.508°
or
2π(1 − 1/φ) = 2π(2 − φ) = 2π/φ² ≈ 2.39996 radians,
where the equivalences follow from well-known algebraic properties of the golden ratio.
As its sine and cosine are transcendental numbers, the golden angle cannot be constructed using a straightedge and compass.
Derivation
The golden ratio is equal to φ = a/b given the conditions above.
Let ƒ be the fraction of the circumference subtended by the golden angle, or equivalently, the golden angle divided by the angular measurement of the circle.
In terms of the arc lengths this gives f = b/(a + b). But since
a/b = (a + b)/a = φ, so that a + b = aφ and b = a/φ,
it follows that
f = (a/φ)/(aφ) = 1/φ².
This is equivalent to saying that φ² ≈ 2.618 golden angles can fit in a circle.
The fraction of a circle occupied by the golden angle is therefore
f = 1/φ² ≈ 0.381966.
The golden angle g can therefore be numerically approximated in degrees as:
g ≈ 360° × 0.381966 ≈ 137.508°,
or in radians as:
g ≈ 2π × 0.381966 ≈ 2.39996.
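As a quick numerical check of the approximation above, the angle can be computed directly from the golden ratio. The short Python snippet below is an illustrative addition, not part of the original article.

```python
import math

phi = (1 + math.sqrt(5)) / 2            # golden ratio φ ≈ 1.618
golden_angle_deg = 360 * (1 - 1 / phi)  # 360°(1 − 1/φ)
golden_angle_rad = 2 * math.pi * (1 - 1 / phi)

print(golden_angle_deg)  # 137.50776405003785
print(golden_angle_rad)  # 2.399963229728653
```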
Golden angle in nature
The golden angle plays a significant role in the theory of phyllotaxis; for example, the golden angle is the angle separating the florets on a sunflower. Analysis of the pattern shows that it is highly sensitive to the angle separating the individual primordia, with the Fibonacci angle giving the parastichy with optimal packing density.
Mathematical modelling of a plausible physical mechanism for floret development has shown the pattern arising spontaneously from the solution of a nonlinear partial differential equation on a plane.
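The floret arrangement described above is commonly illustrated with Vogel's model of a sunflower head, in which the n-th floret sits at azimuth n times the golden angle and at a radius proportional to √n. The Python sketch below illustrates that model; the function name and the scale constant are arbitrary choices for the example, not taken from the article.

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))   # ≈ 2.39996 rad ≈ 137.508°

def vogel_spiral(num_florets, scale=1.0):
    """Return (x, y) positions of florets in Vogel's sunflower model."""
    points = []
    for n in range(num_florets):
        theta = n * GOLDEN_ANGLE              # successive florets differ by the golden angle
        r = scale * math.sqrt(n)              # radius grows as the square root of the index
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

print(math.degrees(GOLDEN_ANGLE))             # 137.50776405003785
print(vogel_spiral(3))
```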
See also
137 (number)
138 (number)
Golden ratio
Fibonacci sequence
References
External links
Golden Angle at MathWorld
Golden ratio
Angle
Mathematical constants
Real transcendental numbers | Golden angle | [
"Physics",
"Mathematics"
] | 469 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Mathematical objects",
"Golden ratio",
"nan",
"Wikipedia categories named after physical quantities",
"Angle",
"Mathematical constants",
"Numbers"
] |
315,718 | https://en.wikipedia.org/wiki/CP-67 | CP-67 is a hypervisor, or Virtual Machine Monitor, from IBM for its System/360 Model 67 computer.
CP-67 is the control program portion of CP/CMS, a virtual machine operating system developed by IBM's Cambridge Scientific Center in Cambridge, Massachusetts. It was a reimplementation of their earlier research system CP-40, which ran on a one-off customized S/360-40. CP-67 was later reimplemented (again) as CP-370, which IBM released as VM/370 in 1972, when virtual memory was added to the System/370 series.
CP and CMS are usually grouped together as a unit, but the "components are independent of each other. CP-67 can be used on an appropriate configuration without CMS, and CMS can be run on a properly configured System/360 as a single-user system without CP-67."
Minimum hardware configuration
The minimum configuration for CP-67 is:
2067 CPU, model 1 or 2
2365 Processor Storage model 1—262,144 bytes of magnetic core memory with an access time of 750 ns (nanoseconds) per eight bytes.
IBM 1052 printer/keyboard
IBM 1403 printer
IBM 2540 card read/punch
Three IBM 2311 disk storage units, 7.5 MB each, 22.5 MB total
IBM 2400 magnetic tape data storage unit
IBM 270x Transmission Control unit
Installation
Disks to be used by CP have to be formatted by a standalone utility called FORMAT, loaded from tape or punched cards. CP disks are formatted with fixed-length 829 byte records.
Following formatting, a second stand-alone utility, DIRECT, partitions the disk space between permanent (system and user files) and temporary (paging and spooling) space. DIRECT also creates the user directory identifying the virtual machines (users) available in the system. For each user the directory contains identifying information, id and password, and lists the resources (core, devices, etc.) that this user can access. Although a user may be allowed access to physical devices, it is more common to specify virtual devices, such as a spooled card reader, card punch, and printer. A user can be allocated one or more virtual disk units, "mini disks" [sic], which resemble a real disk of the same device type, except that they occupy a subset of the space on the real device.
Family tree
See also
History of CP/CMS
References
Virtualization software
IBM mainframe operating systems
History of software
VM (operating system) | CP-67 | [
"Technology"
] | 529 | [
"History of software",
"History of computing"
] |
315,784 | https://en.wikipedia.org/wiki/CP/CMS | CP/CMS (Control Program/Cambridge Monitor System) is a discontinued time-sharing operating system of the late 1960s and early 1970s. It is known for its excellent performance and advanced features. Among its three versions, CP-40/CMS was an important 'one-off' research system that established the CP/CMS virtual machine architecture. It was followed by CP-67/CMS, a reimplementation of CP-40/CMS for the IBM System/360-67, and the primary focus of this article. Finally, CP-370/CMS was a reimplementation of CP-67/CMS for the System/370. While it was never released as such, it became the foundation of IBM's VM/370 operating system, announced in 1972.
Each implementation was a substantial redesign of its predecessor and an evolutionary step forward. CP-67/CMS was the first widely available virtual machine architecture. IBM pioneered this idea with its research systems M44/44X (which used partial virtualization) and CP-40 (which used full virtualization).
In addition to its role as the predecessor of the VM family, CP/CMS played an important role in the development of operating system (OS) theory, the design of IBM's System/370, the time-sharing industry, and the creation of a self-supporting user community that anticipated today's free software movement.
History
Fundamental CP/CMS architectural and strategic parameters were established in CP-40, which began production use at IBM's Cambridge Scientific Center in early 1967. This effort occurred in a complex political and technical milieu, discussed at some length and supported by first-hand quotes in the Wikipedia article History of CP/CMS.
In a nutshell:
In the early 1960s, IBM sought to maintain dominance over scientific computing, where time-sharing efforts such as CTSS and MIT's Project MAC gained focus. But IBM had committed to a huge project, the System/360, which took the company in a different direction.
The time-sharing community was disappointed with the S/360's lack of time-sharing capabilities. This led to key IBM sales losses at Project MAC and Bell Laboratories. IBM's Cambridge Scientific Center (CSC), originally established to support Project MAC, began an effort to regain IBM's credibility in time-sharing, by building a time-sharing operating system for the S/360. This system would eventually become CP/CMS. In the same spirit, IBM designed and released a S/360 model with time-sharing features, the IBM System/360-67, and a time-sharing operating system, TSS/360. TSS failed; but the 360-67 and CP/CMS succeeded, despite internal political battles over time-sharing, and concerted efforts at IBM to scrap the CP/CMS effort.
In 1967, CP/CMS production use began, first on CSC's CP-40, then later on CP-67 at Lincoln Laboratories and other sites. It was made available via the IBM Type-III Library in 1968. By 1972, CP/CMS had gone through several releases; it was a robust, stable system running on 44 systems; it could support 60 timesharing users on a S/360-67; and at least two commercial timesharing vendors (National CSS and IDC) were reselling S/360-67 time using CP/CMS technology.
In 1972, IBM announced the addition of virtual memory to the S/370 series, along with the VM/370 operating system, a reimplementation of CP/CMS for the S/370. This marked the end of CP/CMS releases, although the system continued its independent existence for some time. VM releases continued to include source code for some time and members of the VM community long remained active contributors.
Overview
CP/CMS was built by IBM's Cambridge Scientific Center (CSC), a research and development lab with ties to MIT, under the leadership of Robert Creasy. The system's goals, development process, release, and legacy of breakthrough technology, set this system apart from other operating systems of its day and from other large IBM projects. It was an open-source system, made available in source code form to all IBM customers at no charge – as part of the unsupported IBM Type-III Library. CP/CMS users supported themselves and each other. Unusual circumstances, described in the History section below, led to this situation.
CP/CMS consisted of two main components:
CP, the Control Program, created the virtual machine environment. The widely used version was CP-67, which ran on the S/360-67. (The research system CP-40 established the architecture. A third version, CP-370, became VM/370.) Instead of explicitly dividing up memory and other resources among users, which had been the traditional approach, CP provided each user with a simulated stand-alone System/360 computer. Each system was able to run any S/360 software that ran on the bare machine, and in effect gave each user a private computer system.
CMS, the Cambridge Monitor System (and also Console Monitor System – but renamed Conversational Monitor System in VM) was a lightweight single-user operating system, for interactive time-sharing use. By running many copies of CMS in CP's virtual machines – instead of multiple copies of large, traditional multi-tasking OS – the overhead per user was less. This allowed a great number of simultaneous users to share a single S/360.
The CP/CMS virtual machine concept was an important step forward in operating system design.
By isolating users from each other, CP/CMS greatly improved system reliability and security.
By simulating a full, stand-alone computer for each user, CP/CMS could run any S/360 software in a time-sharing environment, not just applications specifically designed for time-sharing.
By using lightweight CMS as the primary user interface, CP/CMS achieved unprecedented time-sharing performance. In addition, the simplicity of CMS made it easier to implement user interface enhancements than in traditional OS.
IBM reimplemented CP/CMS as its VM/370 product line, released in 1972 when virtual memory was added to the S/370 series. VM/370's successors (such as z/VM) remain in wide use today. (IBM reimplemented CP-67, as it had CP-40, and did not simply rename and repackage it. VM coexisted with CP/CMS and its successors for many years. It is thus appropriate to view CP/CMS as an independent OS, distinct from the VM family.)
CP/CMS as free software
CP/CMS was distributed in source code form, and many CP/CMS users were actively involved in studying and modifying that source code. Such direct user involvement with a vendor-supplied operating system was unusual.
In the CP/CMS era, many vendors distributed operating systems in machine-readable source code. IBM provided optional source code for, e.g., OS/360, DOS/360, and several later mainstream IBM operating systems. With all these systems, some awareness of system source code was also involved in the SYSGEN process, comparable to a kernel build in modern systems, and in installing a Starter Set. (Forty years later, the Hercules emulator can be used to run fossilized versions of these systems, based on source code that is now treated as part of the public domain.)
The importance of operating system source code has changed over time. Before IBM unbundled software from hardware in 1969, the OS (and most other software) was included in the cost of the hardware. Each vendor had complete responsibility for the entire system, hardware and software. This made the distribution medium relatively unimportant. After IBM's unbundling, OS software was delivered as IBM System Control Program (SCP) software, eventually in object code only (OCO) form.
For complicated reasons, CP/CMS was not released in the normal way. It was not supported by IBM, but was made part of the unsupported IBM Type-III Library, a collection of software contributions from IBM staff members (similarly software contributed by customers formed the Type-IV Library). IBM distributed this library to its customers for use 'as is'. The lack of direct IBM support for such products forced active users to support themselves and encouraged modifications and mutual support. CP/CMS and other Type-III products were early forms of free software.
Source code distribution of other IBM operating systems may have continued for some time (e.g. OS/360, DOS/360, DOS/VSE, MVS, and even TSS/370, which all today are generally considered to be in the public domain) since they were arguably published without a copyright notice before 1978. However, the unsupported status of CP/CMS placed different pressures on its user community and created the need for source code distribution.
CP/CMS was contributed to the Type-III Library by MIT's Lincoln Laboratory and not by IBM, despite the fact that the system was built by IBM's Cambridge Scientific Center. This decision has been described as a form of collusion to outmaneuver the IBM political forces opposed to time-sharing. It is thought that it may also reflect the amount of formal and informal input from MIT and Union Carbide that contributed to the design and implementation of CP-40, the S/360-67, CP-67, and CMS. See History of CP/CMS (historical notes) for further insights and references on this topic.
Many CP/CMS users made extensive modifications to their own copies of the source code. Much of this work was shared among sites, and important changes found their way back into the core system. Other users, such as National CSS and some academic sites, continued independent development of CP/CMS, rather than switching to VM/370 when it became available. These efforts diverged from the community, in what today would be termed a software fork.
After IBM released VM/370, source code distribution of VM continued for several releases. (The VM project did not adopt the use of PL/S, an internal systems programming language mandated for use within IBM on many comparable projects. The use of PL/S would have made source code distribution impossible. IBM attempted to turn away from assembly language to higher level languages as early as 1965, and was making substantial use of PL/S by 1969, e.g. in MVS. PL/S was considered a trade secret at the time and was not available to customers. IBM apparently made exceptions to this policy much later.) The VM user community continued to make important contributions to the software, as it had during the CP/CMS Type-III period. Few OS or DOS sites exhibited active user involvement in deep operating system internals, but this was found at many VM sites. This reverse support helped CP/CMS concepts survive and evolve, despite VM's second class citizen status at IBM.
Architecture
The CP/CMS architecture was revolutionary for its time. The system consisted of a virtualizing control program (CP) which created multiple independent virtual machines (VMs). Platform virtualization was possible because of two elements of the IBM System/360-67:
Segregation of privileged "supervisor state" instructions from normal "problem state" instructions
Address translation hardware
When a program was running in "problem state," using a privileged instruction or an invalid memory address would cause the hardware to raise an exception condition. By trapping these conditions, CP could simulate the appropriate behavior, e.g. performing I/O or paging operations. A guest operating system, which would run in "supervisor state" on a bare machine, was run in "problem state" under CP.
The result was a fully virtualized environment. Each virtual machine had its own set of virtual devices, mapped from the system's real hardware environment. Thus a given dial-up teletype was presented to its VM instance as its virtual console.
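The trap-and-simulate mechanism CP relied on can be sketched, very loosely, in modern code. The following self-contained Python example only illustrates the general technique of letting unprivileged work run directly while intercepting privileged operations; all names and details are invented for the example and are not CP-67's actual structures.

```python
# Didactic sketch of trap-and-emulate dispatching; not actual CP-67 code.

class VirtualMachine:
    def __init__(self, name, program):
        self.name = name
        self.program = list(program)   # mix of problem-state and privileged ops
        self.console = []              # stands in for a virtual console device

def is_privileged(op):
    # On a real S/360 the hardware raises an exception for these in problem state.
    return op.startswith("SIO") or op.startswith("LPSW")

def simulate(vm, op):
    # The control program's role: carry out the privileged operation
    # against the virtual machine's virtual devices.
    vm.console.append(f"simulated {op}")

def run(vm):
    for op in vm.program:
        if is_privileged(op):
            simulate(vm, op)           # trapped and handled by the control program
        # else: problem-state instructions run directly on the real CPU

guest = VirtualMachine("CMS1", ["A 1,2", "SIO 009", "LPSW WAIT"])
run(guest)
print(guest.console)                   # ['simulated SIO 009', 'simulated LPSW WAIT']
```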
Note that, in CP-67, certain model-dependent and diagnostic instructions were not virtualized, notably the DIAG instruction. Ultimately, in later development at IBM and elsewhere, DIAG instructions were used to create a non-virtualized interface, to what became called a hypervisor. Client operating systems could use this mechanism to communicate directly with the control program; this offered dramatic performance improvements.
Any S/360 operating system could in fact be run under CP, but normal users ran Cambridge Monitor System (CMS), a simple, single-user operating system. CMS allowed users to run programs and manage their virtual devices. CP-67 versions 1 and 2 did not support virtual memory inside a virtual machine. This was added in version 3. At that point, testing and development of CP itself could be done by running a full copy of CP/CMS inside a single virtual machine. Some CP/CMS operating system work, such as CP-370 development and MVS testing, ran four- or five-level deep stacks of hardware and OS simulations.
The CP/CMS design differs from IBM's previous monolithic operating systems: it separates the complex "big system" (dispatching, hardware management, mass storage) from the "little system" (application program execution, file I/O, console input/output). Splitting the two systems into their own entities prevents a bug in one user's system from affecting another's. This is a characteristic feature of microkernel operating systems.
IBM's decision to implement virtualization and virtual memory features in the subsequent S/370 design (although missing from the initial S/370 series) reflects, at least in part, the success of the CP/CMS approach. In turn the survival and success of IBM's VM operating system family, and of virtualization technology in general, owe much to the S/360-67.
In many respects, IBM's CP-67 and CP/CMS products anticipated (and heavily influenced) contemporary virtualization software, such as VMware Workstation, Xen, and Microsoft Virtual PC.
Related terminology
CP: Control Program. CP-40 and CP-67 were implementations for CSC's customized S/360-40 and the standard S/360-67, respectively.
CMS: Cambridge Monitor System. This portion of the CP/CMS system was renamed Conversational Monitor System when IBM released VM/370. Unlike the CP-to-VM transition, however, which was a reimplementation, much of CMS was moved without modification from CP/CMS into VM/370.
VM: Virtual Machine. Initially the term pseudo-machine was used, but soon virtual machine was borrowed from the IBM M44/44X project; it was well established in CP/CMS by the time IBM introduced VM/370.
hypervisor: a mechanism for paravirtualization. This term was coined in IBM's reimplementation of CP-67 as VM/370.
See also
VP/CSS
History of IBM
Time-sharing system evolution
CMS file system
Footnotes
Citations
Primary CP/CMS sources
R. J. Creasy, "The origin of the VM/370 time-sharing system", IBM Journal of Research & Development, Vol. 25, No. 5 (September 1981), pp. 483–90, PDF― perspective on CP/CMS and VM history by the CP-40 project lead, also a CTSS author
E.W. Pugh, L.R. Johnson, and John H. Palmer, IBM's 360 and early 370 systems, MIT Press, Cambridge MA and London, ― extensive (819 pp.) treatment of IBM's offerings during this period; the limited coverage of CP/CMS in such a definitive work is telling
Melinda Varian, VM and the VM community, past present, and future, SHARE 89 Sessions 9059–61, 1997;― an outstanding source for CP/CMS and VM history
Bitsavers, Index of /pdf/ibm/360/cp67
Additional CP/CMS sources
R. J. Adair, R. U. Bayles, L. W. Comeau and R. J. Creasy, A Virtual Machine System for the 360/40, IBM Corporation, Cambridge Scientific Center Report No. 320‐2007 (May 1966)― a seminal paper describing implementation of the virtual machine concept, with descriptions of the customized CSC S/360-40 and the CP-40 design
International Business Machines Corporation, CP-67/CMS, Program 360D-05.2.005, IBM Program Information Department (June 1969)― IBM's reference manual
R. A. Meyer and L. H. Seawright, "A virtual machine time-sharing system," IBM Systems Journal, Vol. 9, No. 3, pp. 199–218 (September 1970)― describes the CP-67/CMS system, outlining features and applications
R. P. Parmelee, T. I. Peterson, C. C. Tillman, and D. J. Hatfield, "Virtual storage and virtual machine concepts," IBM Systems Journal, Vol. 11, No. 2 (June 1972)
Background CP/CMS sources
F. J. Corbató, et al., The Compatible Time-Sharing System, A Programmer’s Guide, M.I.T. Press, 1963
F. J. Corbató, M. Merwin-Daggett, and R. C. Daley, "An Experimental Time-sharing System," Proc. Spring Joint Computer Conference (AFIPS) 21, pp. 335–44 (1962) — description of CTSS
F. J. Corbató and V. A. Vyssotsky, "Introduction and Overview of the MULTICS System", Proc. Fall Joint Computer Conference (AFIPS) 27, pp. 185–96 (1965)
P. J. Denning, "Virtual Memory", Computing Surveys Vol. 2, pp. 153–89 (1970)
J. B. Dennis, "Segmentation and the Design of Multi-Programmed Computer Systems," JACM Vol. 12, pp. 589–602 (1965)― virtual memory requirements for Project MAC, destined for GE 645
C. A. R. Hoare and R. H. Perrott, Eds., Operating Systems Techniques, Academic Press, Inc., New York (1972)
T. Kilburn, D. B. G. Edwards, M. J. Lanigan, and F. H. Sumner, "One-Level Storage System", IRE Trans. Electron. Computers EC-11, pp. 223–35 (1962)― Manchester/Ferranti Atlas
R. A. Nelson, "Mapping Devices and the M44 Data Processing System," Research Report RC 1303, IBM Thomas J. Watson Research Center (1964)― about the IBM M44/44X
R. P. Parmelee, T. I. Peterson, C. C. Tillman, and D. J. Hatfield, "Virtual Storage and Virtual Machine Concepts", IBM Systems Journal, Vol. 11, pp. 99–130 (1972)
Additional on-line CP/CMS resources
febcm.club.fr — Information Technology Timeline, 1964–74
www.multicians.org — Tom Van Vleck's short essay The IBM 360/67 and CP/CMS
www.cap-lore.com — Norman Hardy's Short history of IBM's virtual machines
www.cap-lore.com — Norman Hardy's short description of the "Blaauw Box"
Detailed citations for points made in this article can be found in History of CP/CMS.
External links
VM/CMS handbook
Andreas C. Hofmann: CP-370/CMS – Control Program 370 / Cambridge Monitor System, 1970, in: Computing History Dictionary (german) [¹10.12.2022]
History of software
IBM mainframe operating systems
Time-sharing operating systems
Virtualization software
VM (operating system)
1960s software | CP/CMS | [
"Technology"
] | 4,226 | [
"History of software",
"History of computing"
] |
315,793 | https://en.wikipedia.org/wiki/Actinophrys | Actinophrys is a genus of heliozoa, amoeboid unicellular organisms with many axopodial filaments that radiate out of their cell. It contains one of the most common heliozoan species, Actinophrys sol. It is classified within the monotypic family Actinophryidae.
Characteristics
Actinophrys species belong to an informal group known as heliozoa, which are unicellular eukaryotes (or protists) that are heterotrophic (also known as protozoa) and present stiff radiating arms known as axopodia. In particular, Actinophrys species are characterized by axonemes consisting of double interlocking spirals of microtubules. Their axonemes end on a large central nucleus. They are also characterized by the siliceous material present in their cysts.
Systematics
Actinophrys was described in 1830 by German naturalist Christian Gottfried Ehrenberg, with the type species Actinophrys sol. The species originally belonged to a genus named Trichoda, described earlier by Otto Friedrich Müller and later declared obsolete. In 1824, Bory de St. Vincent transferred that species to a new genus Peritricha but, without any new observations to justify the change, it fell out of use.
Species
There are currently four accepted species of Actinophrys.
Actinophrys sol (=A. difformis ; A. marina ; A. stella ; A. oculata ; A. tenuipes ; A. fissipes ; A. longipes ; A. tunicata ; A. limbata ; A. paradoxa ; A. picta ; A. alveolata ; A. subalpina ; A. vesiculata )
Actinophrys pontica
Actinophrys salsuginosa
Actinophrys tauryanini
References
Ochrophyta
Ochrophyte genera
Taxa described in 1830 | Actinophrys | [
"Biology"
] | 411 | [
"Ochrophyta",
"Algae"
] |
315,897 | https://en.wikipedia.org/wiki/Microphylls%20and%20megaphylls | In plant anatomy and evolution a microphyll (or lycophyll) is a type of plant leaf with one single, unbranched leaf vein. Plants with microphyll leaves occur early in the fossil record, and few such plants exist today. In the classical concept of a microphyll, the leaf vein emerges from the protostele without leaving a leaf gap. Leaf gaps are small areas above the node of some leaves where there is no vascular tissue, as it has all been diverted to the leaf. Megaphylls, in contrast, have multiple veins within the leaf and leaf gaps above them in the stem.
Leaf vasculature
The clubmosses and horsetails have microphylls, as in all extant species there is only a single vascular trace in each leaf. These leaves are narrow because the width of the blade is limited by the distance water can efficiently diffuse cell-to-cell from the central vascular strand to the margin of the leaf. Despite their name, microphylls are not always small: those of Isoëtes can reach 25 centimetres in length, and the extinct Lepidodendron bore microphylls up to 78 cm long.
Evolution
The enation theory of microphyll evolution posits that small outgrowths, or enations, developed from the side of early stems (such as those found in the Zosterophylls). Outgrowths of the protostele (the central vasculature) later emerged towards the enations (as in Asteroxylon), and eventually continued to grow fully into the leaf to form the mid-vein (such as in Baragwanathia). The fossil record appears to display these traits in this order, but this may be a coincidence, as the record is incomplete. The telome theory proposes instead that microphylls originated by reduction of a single telome branch, while megaphylls evolved from branched portions of a telome system.
The simplistic evolutionary models, however, do not correspond well to evolutionary relationships. Some genera of ferns display complex leaves that are attached to the pseudostele by an outgrowth of the vascular bundle, leaving no leaf gap. Horsetails (Equisetum) bear only a single vein, and appear to be microphyllous; however, the fossil record suggests that their forebears had leaves with complex venation, and their current state is a result of secondary simplification. Some gymnosperms bear needles with only one vein, but these evolved later from plants with complex leaves.
An interesting case is that of Psilotum, which has a (simple) protostele, and enations devoid of vascular tissue. Some species of Psilotum have a single vascular trace that terminates at the base of the enations. Consequently, Psilotum was long thought to be a "living fossil" closely related to early land plants (rhyniophytes). However, genetic analysis has shown Psilotum to be a reduced fern.
It is not clear whether leaf gaps are a homologous trait of megaphyllous organisms or have evolved more than once.
While the simple definitions (microphylls: one vein, megaphylls: more than one) can still be used in modern botany, the evolutionary history is harder to decipher.
See also
Vegetation classification
References
Leaf morphology
Plant physiology
Plant anatomy | Microphylls and megaphylls | [
"Biology"
] | 704 | [
"Plant physiology",
"Plants"
] |
315,924 | https://en.wikipedia.org/wiki/Thermoacoustic%20heat%20engine | Thermoacoustic engines (sometimes called "TA engines") are thermoacoustic devices which use high-amplitude sound waves to pump heat from one place to another (this requires work, which is provided by the loudspeaker) or use a heat difference to produce work in the form of sound waves (these waves can then be converted into electrical current the same way as a microphone does).
These devices can be designed to use either a standing wave or a travelling wave.
Compared to vapor refrigerators, thermoacoustic refrigerators have no coolant and few moving parts (only the loudspeaker), and therefore require no dynamic sealing or lubrication.
History
The ability of heat to produce sound was noted by glassblowers centuries ago.
In the 1850s experiments showed that a temperature differential drove the phenomenon, and that acoustic volume and intensity vary with tube length and bulb size.
Rijke demonstrated that adding a heated wire screen a quarter of the way up the tube greatly magnified the sound, supplying energy to the air in the tube at its point of greatest pressure. Further experiments showed that cooling the air at its points of minimal pressure produced a similar amplifying effect. A Rijke tube converts heat into acoustic energy, using natural convection.
In about 1887, Lord Rayleigh discussed the possibility of pumping heat with sound.
In 1969, Rott reopened the topic. Using the Navier-Stokes equations for fluids, he derived equations specific for thermoacoustics.
Linear thermoacoustic models were developed to form a basic quantitative understanding, and numeric models for computation.
Swift continued with these equations, deriving expressions for the acoustic power in thermoacoustic devices.
In 1992 a thermoacoustic refrigeration device was used on Space Shuttle Discovery.
Orest Symko at University of Utah began a research project in 2005 called Thermal Acoustic Piezo Energy Conversion (TAPEC).
Niche applications include small- to medium-scale cryogenic cooling. Score Ltd. was awarded £2M in March 2007 to research a cooking stove that also delivers electricity and cooling for use in developing countries.
A radioisotope-heated thermoacoustic system was proposed and prototyped for deep space exploration missions by Airbus. The system has slight theoretical advantages over other generator systems like existing thermocouple based systems, or a proposed Stirling engine used in ASRG prototype.
SoundEnergy developed the THEAC system that turns heat, typically waste heat or solar heat into cooling with no other power source. The device uses argon gas. The device amplifies sound created by the waste heat, converts the resulting pressure back into another heat differential and uses a Stirling cycle to produce the cooling effect.
Operation
A thermoacoustic device takes advantage of the fact that in a sound wave parcels of gas alternately compress and expand adiabatically, so that pressure and temperature change together; when pressure reaches a maximum or minimum, so does the temperature. It basically consists of heat exchangers, a resonator, and a stack (on standing wave devices) or regenerator (on travelling wave devices). Depending on the type of engine, a driver or loudspeaker might be used to generate sound waves.
In a tube closed at both ends, interference can occur between two waves traveling in opposite directions at certain frequencies. The interference causes resonance and creates a standing wave. The stack consists of small parallel channels. When the stack is placed at a certain location in the resonator having a standing wave, a temperature differential develops across the stack. By placing heat exchangers at each side of the stack, heat can be moved. The opposite is possible as well: a temperature difference across the stack produces a sound wave. The first example is a heat pump, while the second is a prime mover.
Heat pump
Creating or moving heat from a cold to a warm reservoir requires work. Acoustic power provides this work. The stack creates a pressure drop. Interference between the incoming and reflected acoustic waves is now imperfect. The difference in amplitude causes the standing wave to travel, giving the wave acoustic power.
Heat pumping along a stack in a standing wave device follows the Brayton cycle.
A counter-clockwise Brayton cycle for a refrigerator consists of four processes that affect a parcel of gas between two plates of a stack.
Adiabatic compression of the gas. When a parcel of gas is displaced from its rightmost position to its leftmost position, the parcel is adiabatically compressed, increasing its temperature. At the leftmost position the parcel now has a higher temperature than the warm plate.
Isobaric heat transfer. The parcel's higher temperature causes it to transfer heat to the plate at constant pressure, cooling the gas.
Adiabatic expansion of the gas. The gas is displaced back from the leftmost position to the rightmost position. Due to adiabatic expansion the gas cools to a temperature lower than that of the cold plate.
Isobaric heat transfer. The parcel's lower temperature causes heat to be transferred from the cold plate to the gas at a constant pressure, returning the parcel's temperature to its original value.
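To put rough numbers on the four steps above, the short Python sketch below follows one gas parcel through the cycle for an ideal monatomic gas such as helium; the plate temperatures and pressure swing are invented illustrative values, not data from the article.

```python
# Follow one parcel of an ideal monatomic gas through the four Brayton steps above.
GAMMA = 5.0 / 3.0                               # heat-capacity ratio for a monatomic gas

def adiabatic_temperature(t_start, p_start, p_end):
    """Temperature after a reversible adiabatic change from p_start to p_end."""
    return t_start * (p_end / p_start) ** ((GAMMA - 1.0) / GAMMA)

t_cold_plate, t_warm_plate = 295.0, 305.0       # plate temperatures in kelvin
p_low, p_high = 1.00e5, 1.20e5                  # acoustic pressure swing in pascal

t_after_compression = adiabatic_temperature(t_cold_plate, p_low, p_high)
t_after_expansion = adiabatic_temperature(t_warm_plate, p_high, p_low)

print(round(t_after_compression, 1))  # ≈ 317.3 K, hotter than the warm plate -> heat rejected
print(round(t_after_expansion, 1))    # ≈ 283.5 K, colder than the cold plate -> heat absorbed
```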
Travelling wave devices can be described using the Stirling cycle.
Temperature gradient
Engines and heat pumps both typically use stacks and heat exchangers. The boundary between a prime mover and heat pump is given by the temperature gradient operator, which is the mean temperature gradient divided by the critical temperature gradient.
The mean temperature gradient is the temperature difference across the stack divided by the length of the stack.
The critical temperature gradient is a value that depends on characteristics of the device such as frequency, cross-sectional area and gas properties.
If the temperature gradient operator exceeds one, the mean temperature gradient is larger than the critical temperature gradient and the stack operates as a prime mover. If the temperature gradient operator is less than one, the mean temperature gradient is smaller than the critical gradient and the stack operates as a heat pump.
Theoretical efficiency
In thermodynamics the highest achievable efficiency is the Carnot efficiency. The efficiency of thermoacoustic engines can be compared to Carnot efficiency using the temperature gradient operator.
The efficiency of a thermoacoustic engine is given by
η = ηC / Γ,
where ηC is the Carnot efficiency and Γ is the temperature gradient operator (Γ > 1 for an engine).
The coefficient of performance of a thermoacoustic heat pump is given by
COP = Γ · COPC,
where COPC is the Carnot coefficient of performance (Γ < 1 for a heat pump).
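Assuming the short-stack relations given above (η = ηC/Γ for an engine and COP = Γ·COPC for a heat pump), the following Python sketch shows how the figures compare with the Carnot limits; the temperatures and Γ values are arbitrary illustrative numbers.

```python
def carnot_efficiency(t_hot, t_cold):
    """Carnot efficiency of an engine between two absolute temperatures (K)."""
    return 1 - t_cold / t_hot

def engine_efficiency(t_hot, t_cold, gamma_op):
    """Thermoacoustic engine efficiency under the short-stack estimate (Γ > 1)."""
    return carnot_efficiency(t_hot, t_cold) / gamma_op

def heat_pump_cop(t_hot, t_cold, gamma_op):
    """Thermoacoustic heat-pump COP under the short-stack estimate (Γ < 1)."""
    carnot_cop = t_hot / (t_hot - t_cold)
    return gamma_op * carnot_cop

print(engine_efficiency(700.0, 300.0, 2.0))  # ≈ 0.286, half of the 0.571 Carnot limit
print(heat_pump_cop(320.0, 280.0, 0.5))      # ≈ 4.0, half of the 8.0 Carnot limit
```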
Practical efficiency
The most efficient thermoacoustic devices have an efficiency approaching 40% of the Carnot limit, or about 20% to 30% overall (depending on the heat engine temperatures).
Higher hot-end temperatures may be possible with thermoacoustic devices because they have no moving parts, thus allowing the Carnot efficiency to be higher. This may partially offset their lower efficiency, compared to conventional heat engines, as a percentage of Carnot.
The ideal Stirling cycle, approximated by traveling wave devices, is inherently more efficient than the ideal Brayton cycle, approximated by standing wave devices. However, the narrower pores required to give good thermal contact in a travelling wave device, as compared to a standing wave stack which requires deliberately imperfect thermal contact, also gives rise to greater frictional losses, reducing practical efficiency. The toroidal geometry often used in traveling wave devices, but not required for standing wave devices, can also boost losses due to Gedeon streaming around the loop.
See also
Sound amplification by stimulated emission of radiation (SASER)
References
Further reading
Semipopular introduction to thermoacoustic effects and devices.
Frank Wighard "Double Acting Pulse Tube Electroacoustic System" US Patent 5,813,234
External links
Los Alamos National Laboratory, New Mexico, USA
Thermoacoustics at the University of Adelaide, Australia, web archive backup: Discussion Forum
Adelaide University
Hear That? The Fridge Is Chilling, Wired Magazine article
Cooling technology
Heat pumps
Acoustics | Thermoacoustic heat engine | [
"Physics"
] | 1,589 | [
"Classical mechanics",
"Acoustics"
] |
315,927 | https://en.wikipedia.org/wiki/Modern%20architecture | Modern architecture, also called modernist architecture, was an architectural movement and style that was prominent in the 20th century, between the earlier Art Deco and later postmodern movements. Modern architecture was based upon new and innovative technologies of construction (particularly the use of glass, steel, and concrete); the principle functionalism (i.e. that form should follow function); an embrace of minimalism; and a rejection of ornament.
According to Le Corbusier, the roots of the movement were to be found in the works of Eugène Viollet-le-Duc, while Mies van der Rohe was heavily inspired by Karl Friedrich Schinkel. The movement emerged in the first half of the 20th century and became dominant after World War II until the 1980s, when it was gradually replaced as the principal style for institutional and corporate buildings by postmodern architecture.
Origins
Modern architecture emerged at the end of the 19th century from revolutions in technology, engineering, and building materials, and from a desire to break away from historical architectural styles and invent something that was purely functional and new.
The revolution in materials came first, with the use of cast iron, drywall, plate glass, and reinforced concrete, to build structures that were stronger, lighter, and taller. The cast plate glass process was invented in 1848, allowing the manufacture of very large windows. The Crystal Palace by Joseph Paxton at the Great Exhibition of 1851 was an early example of iron and plate glass construction, followed in 1864 by the first glass and metal curtain wall. These developments together led to the first steel-framed skyscraper, the ten-story Home Insurance Building in Chicago, built in 1884 by William Le Baron Jenney and based on the works of Viollet le Duc.
French industrialist François Coignet was the first to use iron-reinforced concrete, that is, concrete strengthened with iron bars, as a technique for constructing buildings. In 1853 Coignet built the first iron reinforced concrete structure, a four-storey house in the suburbs of Paris. A further important step forward was the invention of the safety elevator by Elisha Otis, first demonstrated at the New York Crystal Palace exposition in 1854, which made tall office and apartment buildings practical. Another important technology for the new architecture was electric light, which greatly reduced the inherent danger of fires caused by gas in the 19th century.
The debut of new materials and techniques inspired architects to break away from the neoclassical and eclectic models that dominated European and American architecture in the late 19th century, most notably eclecticism, Victorian and Edwardian architecture, and the Beaux-Arts architectural style. This break with the past was particularly urged by the architectural theorist and historian Eugène Viollet-le-Duc. In his 1872 book Entretiens sur L'Architecture, he urged: "use the means and knowledge given to us by our times, without the intervening traditions which are no longer viable today, and in that way we can inaugurate a new architecture. For each function its material; for each material its form and its ornament." This book influenced a generation of architects, including Louis Sullivan, Victor Horta, Hector Guimard, and Antoni Gaudí.
Early modernism in Europe (1900–1914)
At the end of the 19th century, a few architects began to challenge the traditional Beaux Arts and Neoclassical styles that dominated architecture in Europe and the United States. The Glasgow School of Art (1896–99) designed by Charles Rennie Mackintosh, had a façade dominated by large vertical bays of windows. The Art Nouveau style was launched in the 1890s by Victor Horta in Belgium and Hector Guimard in France; it introduced new styles of decoration, based on vegetal and floral forms. In Barcelona, Antonio Gaudi conceived architecture as a form of sculpture; the façade of the Casa Batlló in Barcelona (1904–1907) had no straight lines; it was encrusted with colorful mosaics of stone and ceramic tiles.
Architects also began to experiment with new materials and techniques, which gave them greater freedom to create new forms. In 1903–1904 in Paris Auguste Perret and Henri Sauvage began to use reinforced concrete, previously only used for industrial structures, to build apartment buildings. Reinforced concrete, which could be molded into any shape, and which could create enormous spaces without the need of supporting pillars, replaced stone and brick as the primary material for modernist architects. The first concrete apartment buildings by Perret and Sauvage were covered with ceramic tiles, but in 1905 Perret built the first concrete parking garage on 51 rue de Ponthieu in Paris; here the concrete was left bare, and the space between the concrete was filled with glass windows. Henri Sauvage added another construction innovation in an apartment building on Rue Vavin in Paris (1912–1914); the reinforced concrete building was in steps, with each floor set back from the floor below, creating a series of terraces. Between 1910 and 1913, Auguste Perret built the Théâtre des Champs-Élysées, a masterpiece of reinforced concrete construction, with Art Deco sculptural bas-reliefs on the façade by Antoine Bourdelle. Because of the concrete construction, no columns blocked the spectator's view of the stage.
Otto Wagner, in Vienna, was another pioneer of the new style. In his book Moderne Architektur (1895) he had called for a more rationalist style of architecture, based on "modern life". He designed a stylized ornamental metro station at Karlsplatz in Vienna (1888–89), then an ornamental Art Nouveau residence, Majolika House (1898), before moving to a much more geometric and simplified style, without ornament, in the Austrian Postal Savings Bank (1904–1906). Wagner declared his intention to express the function of the building in its exterior. The reinforced concrete exterior was covered with plaques of marble attached with bolts of polished aluminum. The interior was purely functional and spare, a large open space of steel, glass, and concrete where the only decoration was the structure itself.
The Viennese architect Adolf Loos also began removing any ornament from his buildings. His Steiner House, in Vienna (1910), was an example of what he called rationalist architecture; it had a simple stucco rectangular façade with square windows and no ornament. The fame of the new movement, which became known as the Vienna Secession spread beyond Austria. Josef Hoffmann, a student of Wagner, constructed a landmark of early modernist architecture, the Stoclet Palace, in Brussels, in 1906–1911. This residence, built of brick covered with Norwegian marble, was composed of geometric blocks, wings, and a tower. A large pool in front of the house reflected its cubic forms. The interior was decorated with paintings by Gustav Klimt and other artists, and the architect even designed clothing for the family to match the architecture.
In Germany, a modernist industrial movement, Deutscher Werkbund (German Work Federation), had been created in Munich in 1907 by Hermann Muthesius, a prominent architectural commentator. Its goal was to bring together designers and industrialists, to turn out well-designed, high-quality products, and in the process to invent a new type of architecture. The organization originally included twelve architects and twelve business firms, but quickly expanded. The architects included Peter Behrens, Theodor Fischer (who served as its first president), Josef Hoffmann and Richard Riemerschmid. In 1909 Behrens designed one of the earliest and most influential industrial buildings in the modernist style, the AEG turbine factory, a functional monument of steel and concrete. In 1911–1913, Adolf Meyer and Walter Gropius, who had both worked for Behrens, built another revolutionary industrial plant, the Fagus Factory in Alfeld an der Leine, a building without ornament where every construction element was on display. The Werkbund organized a major exposition of modernist design in Cologne just a few weeks before the outbreak of the First World War in August 1914. For the 1914 Cologne exhibition, Bruno Taut built a revolutionary glass pavilion.
Early American modernism (1890s–1914)
Frank Lloyd Wright was a highly original and independent American architect who refused to be categorized in any one architectural movement. Like Le Corbusier and Ludwig Mies van der Rohe, he had no formal architectural training. From 1887 to 1893 he worked in the Chicago office of Louis Sullivan, who pioneered the first tall steel-frame office buildings in Chicago, and who famously stated "form follows function". Wright set out to break all the traditional rules. He was particularly famous for his Prairie Houses, including the Winslow House in River Forest, Illinois (1893–94); Arthur Heurtley House (1902) and Robie House (1909); sprawling, geometric residences without decoration, with strong horizontal lines which seemed to grow out of the earth, and which echoed the wide flat spaces of the American prairie. His Larkin Building (1904–1906) in Buffalo, New York, and Unity Temple (1905) in Oak Park, Illinois, had highly original forms and no connection with historical precedents.
Early skyscrapers
At the end of the 19th century, the first skyscrapers began to appear in the United States. They were a response to the shortage of land and high cost of real estate in the center of the fast-growing American cities, and the availability of new technologies, including fireproof steel frames and improvements in the safety elevator invented by Elisha Otis in 1852. The first steel-framed "skyscraper", The Home Insurance Building in Chicago, was ten stories high. It was designed by William Le Baron Jenney in 1883, and was briefly the tallest building in the world. Louis Sullivan built another monumental new structure, the Carson, Pirie, Scott and Company Building, in the heart of Chicago in 1904–1906. While these buildings were revolutionary in their steel frames and height, their decoration was borrowed from Neo-Renaissance, Neo-Gothic and Beaux-Arts architecture. The Woolworth Building, designed by Cass Gilbert, was completed in 1912, and was the tallest building in the world until the completion of the Chrysler Building in 1929. The structure was purely modern, but its exterior was decorated with Neo-Gothic ornament, complete with decorative buttresses, arches and spires, which caused it to be nicknamed the "Cathedral of Commerce".
Rise of modernism in Europe and Russia (1918–1931)
After the first World War, a prolonged struggle began between architects who favored the more traditional styles of neo-classicism and the Beaux-Arts architecture style, and the modernists, led by Le Corbusier and Robert Mallet-Stevens in France, Walter Gropius and Ludwig Mies van der Rohe in Germany, and Konstantin Melnikov in the new Soviet Union, who wanted only pure forms and the elimination of any decoration. Louis Sullivan popularized the axiom Form follows function to emphasize the importance of utilitarian simplicity in modern architecture. Art Deco architects such as Auguste Perret and Henri Sauvage often made a compromise between the two, combining modernist forms and stylized decoration.
International Style (1920s–1970s)
The dominant figure in the rise of modernism in France was Charles-Édouard Jeanneret, a Swiss-French architect who in 1920 took the name Le Corbusier. In 1920 he co-founded a journal called L'Esprit Nouveau and energetically promoted architecture that was functional, pure, and free of any decoration or historical associations. He was also a passionate advocate of a new urbanism, based on planned cities. In 1922 he presented a design of a city for three million people, whose inhabitants lived in identical sixty-story tall skyscrapers surrounded by open parkland. He designed modular houses, which would be mass-produced on the same plan and assembled into apartment blocks, neighborhoods, and cities. In 1923 he published "Toward an Architecture", with his famous slogan, "a house is a machine for living in" (Le Corbusier, Vers une architecture, 1923; Flammarion edition, 1995, pp. XVIII–XIX). He tirelessly promoted his ideas through slogans, articles, books, conferences, and participation in Expositions.
To illustrate his ideas, in the 1920s he built a series of houses and villas in and around Paris. They were all built according to a common system, based upon the use of reinforced concrete, and of reinforced concrete pylons in the interior which supported the structure, allowing glass curtain walls on the façade and open floor plans, independent of the structure. They were always white, and had no ornament or decoration on the outside or inside. The best-known of these houses was the Villa Savoye, built in 1928–1931 in the Paris suburb of Poissy. An elegant white box wrapped with a ribbon of glass windows around on the façade, with living space that opened upon an interior garden and countryside around, raised up by a row of white pylons in the center of a large lawn, it became an icon of modernist architecture.
Bauhaus and the German Werkbund (1919–1933)
In Germany, two important modernist movements appeared after the First World War. The Bauhaus was a school founded in Weimar in 1919 under the direction of Walter Gropius. Gropius was the son of the official state architect of Berlin; he studied before the war with Peter Behrens and designed the modernist Fagus Factory. The Bauhaus was a fusion of the prewar Academy of Arts and the school of technology. In 1926 it was transferred from Weimar to Dessau; Gropius designed the new school and student dormitories in the new, purely functional modernist style he was encouraging. The school brought together modernists in all fields; the faculty included the modernist painters Vasily Kandinsky, Joseph Albers and Paul Klee, and the designer Marcel Breuer.
Gropius became an important theorist of modernism, writing The Idea and Construction in 1923. He was an advocate of standardization in architecture, and the mass construction of rationally designed apartment blocks for factory workers. In 1928 he was commissioned by the Siemens company to build apartment for workers in the suburbs of Berlin, and in 1929 he proposed the construction of clusters of slender eight- to ten-story high-rise apartment towers for workers.
While Gropius was active at the Bauhaus, Ludwig Mies van der Rohe led the modernist architectural movement in Berlin. Inspired by the De Stijl movement in the Netherlands, he built clusters of concrete summer houses and proposed a project for a glass office tower. He became the vice president of the German Werkbund, and was the head of the Bauhaus from 1930 to 1933, proposing a wide variety of modernist plans for urban reconstruction. His most famous modernist work was the German pavilion for the 1929 international exposition in Barcelona. It was a work of pure modernism, with glass and concrete walls and clean, horizontal lines. Though it was only a temporary structure, and was torn down in 1930, it became, along with Le Corbusier's Villa Savoye, one of the best-known landmarks of modernist architecture. A reconstructed version now stands on the original site in Barcelona.
When the Nazis came to power in Germany, they viewed the Bauhaus as a training ground for communists, and closed the school in 1933. Gropius left Germany and went to England, then to the United States, where he and Marcel Breuer both joined the faculty of the Harvard Graduate School of Design, and became the teachers of a generation of American postwar architects. In 1937 Mies van der Rohe also moved to the United States; he became one of the most famous designers of postwar American skyscrapers.
Expressionist architecture (1918–1931)
Expressionism, which appeared in Germany between 1910 and 1925, was a counter-movement against the strictly functional architecture of the Bauhaus and Werkbund. Its advocates, including Bruno Taut, Hans Poelzig, Fritz Höger and Erich Mendelsohn, wanted to create architecture that was poetic, expressive, and optimistic. Many expressionist architects had fought in World War I, and their experiences, combined with the political turmoil and social upheaval that followed the German Revolution of 1919, resulted in a utopian outlook and a romantic socialist agenda. Economic conditions severely limited the number of built commissions between 1914 and the mid-1920s. As a result, many of the most innovative expressionist projects, including Bruno Taut's Alpine Architecture and Hermann Finsterlin's Formspiels, remained on paper. Scenography for theatre and films provided another outlet for the expressionist imagination, and provided supplemental incomes for designers attempting to challenge conventions in a harsh economic climate. A particular type, using bricks to create its forms (rather than concrete), is known as Brick Expressionism.
Erich Mendelsohn (who disliked the term Expressionism for his work) began his career designing churches, silos, and factories which were highly imaginative but, for lack of resources, were never built. In 1920, he finally was able to construct one of his works in the city of Potsdam: an observatory and research center called the Einsteinturm (Einstein Tower), named in tribute to Albert Einstein. It was supposed to be built of reinforced concrete, but because of technical problems it was finally built of traditional materials covered with plaster. His sculptural form, very different from the austere rectangular forms of the Bauhaus, first won him commissions to build movie theaters and retail stores in Stuttgart, Nuremberg, and Berlin. His Mossehaus in Berlin was an early model for the streamline moderne style. His Columbushaus on Potsdamer Platz in Berlin (1931) was a prototype for the modernist office buildings that followed. (It was torn down in 1957, because it stood in the zone between East and West Berlin, where the Berlin Wall was constructed.) Following the rise of the Nazis to power, he moved to England (1933), then to the United States (1941).
Fritz Höger was another notable Expressionist architect of the period. His Chilehaus was built as the headquarters of a shipping company, and was modeled after a giant steamship: a triangular building with a sharply pointed bow. It was constructed of dark brick, and used external piers to express its vertical structure. Its external decoration borrowed from Gothic cathedrals, as did its internal arcades. Hans Poelzig was another notable expressionist architect. In 1919 he built the Großes Schauspielhaus, an immense theater in Berlin seating five thousand spectators, for theater impresario Max Reinhardt. It featured elongated shapes like stalactites hanging down from its gigantic dome, and lights on massive columns in its foyer. He also constructed the IG Farben building, a massive corporate headquarters, now the main building of Goethe University in Frankfurt. Bruno Taut specialized in building large-scale apartment complexes for working-class Berliners. He built twelve thousand individual units, sometimes in buildings with unusual shapes, such as a giant horseshoe. Unlike most other modernists, he used bright exterior colors to give his buildings more life. The use of dark brick in the German projects gave that particular style a name, Brick Expressionism.
The Austrian philosopher, architect, and social critic Rudolf Steiner also departed as far as possible from traditional architectural forms. His Second Goetheanum, built from 1926 near Basel, Switzerland and Mendelsohn's Einsteinturm in Potsdam, Germany, were based on no traditional models and had entirely original shapes.
Constructivist architecture (1919–1931)
After the Russian Revolution of 1917, Russian avant-garde artists and architects began searching for a new Soviet style which could replace traditional neoclassicism. The new architectural movements were closely tied with the literary and artistic movements of the period, the futurism of poet Vladimir Mayakovskiy, the Suprematism of painter Kasimir Malevich, and the colorful Rayonism of painter Mikhail Larionov. The most startling design that emerged was the tower proposed by painter and sculptor Vladimir Tatlin for the Moscow meeting of the Third Communist International in 1920: he proposed two interlaced towers of metal four hundred meters high, with four geometric volumes suspended from cables. The movement of Russian Constructivist architecture was launched in 1921 by a group of artists led by Aleksandr Rodchenko. Their manifesto proclaimed that their goal was to find the "communist expression of material structures". Soviet architects began to construct workers' clubs, communal apartment houses, and communal kitchens for feeding whole neighborhoods.
One of the first prominent constructivist architects to emerge in Moscow was Konstantin Melnikov, who designed a number of workers' clubs – including the Rusakov Workers' Club (1928) – and his own residence, the Melnikov House (1929), near Arbat Street in Moscow. Melnikov traveled to Paris in 1925 where he built the Soviet Pavilion for the International Exhibition of Modern Decorative and Industrial Arts in Paris in 1925; it was a highly geometric vertical construction of glass and steel crossed by a diagonal stairway, and crowned with a hammer and sickle. The leading group of constructivist architects, led by the Vesnin brothers and Moisei Ginzburg, published the 'Contemporary Architecture' journal. This group created several major constructivist projects in the wake of the First Five Year Plan – including the colossal Dnieper Hydroelectric Station (1932) – and made an attempt to start the standardization of living blocks with Ginzburg's Narkomfin building. A number of architects from the pre-Soviet period also took up the constructivist style. The most famous example was Lenin's Mausoleum in Moscow (1924), by Alexey Shchusev.
The main centers of constructivist architecture were Moscow and Leningrad; however, during the industrialization many constructivist buildings were erected in provincial cities. The regional industrial centers, including Ekaterinburg, Kharkiv or Ivanovo, were rebuilt in the constructivist manner; some cities, like Magnitogorsk or Zaporizhzhia, were constructed anew (the so-called socgorod, or 'socialist city').
The style fell markedly out of favor in the 1930s, replaced by the more grandiose nationalist styles that Stalin favored. Constructivist architects, and even Le Corbusier, submitted projects for the new Palace of the Soviets competition held from 1931 to 1933, but the winning entry was an early Stalinist building in the style termed Postconstructivism. The last major Russian constructivist building, by Boris Iofan, was built for the Paris World Exhibition (1937), where it faced the pavilion of Nazi Germany by Hitler's architect Albert Speer.
New Objectivity (1920–1933)
The New Objectivity (in German Neue Sachlichkeit, sometimes also translated as New Sobriety) is a name often given to the Modern architecture that emerged in Europe, primarily German-speaking Europe, in the 1920s and 30s. It is also frequently called Neues Bauen (New Building). The New Objectivity took place in many German cities in that period, for example in Frankfurt with its Neues Frankfurt project.
Modernism becomes a movement: CIAM (1928)
By the late 1920s, modernism had become an important movement in Europe. Architecture, which previously had been predominantly national, began to become international. The architects traveled, met each other, and shared ideas. Several modernists, including Le Corbusier, had participated in the competition for the headquarters of the League of Nations in 1927. In the same year, the German Werkbund organized an architectural exposition at the Weissenhof Estate Stuttgart. Seventeen leading modernist architects in Europe were invited to design twenty-one houses; Le Corbusier, and Ludwig Mies van der Rohe played a major part. In 1927 Le Corbusier, Pierre Chareau, and others proposed the foundation of an international conference to establish the basis for a common style. The first meeting of the Congrès Internationaux d'Architecture Moderne or International Congresses of Modern Architects (CIAM), was held in a chateau on Lake Leman in Switzerland 26–28 June 1928. Those attending included Le Corbusier, Robert Mallet-Stevens, Auguste Perret, Pierre Chareau and Tony Garnier from France; Victor Bourgeois from Belgium; Walter Gropius, Erich Mendelsohn, Ernst May and Ludwig Mies van der Rohe from Germany; Josef Frank from Austria; Mart Stam and Gerrit Rietveld from the Netherlands, and Adolf Loos from Czechoslovakia. A delegation of Soviet architects was invited to attend, but they were unable to obtain visas. Later members included Josep Lluís Sert of Spain and Alvar Aalto of Finland. No one attended from the United States. A second meeting was organized in 1930 in Brussels by Victor Bourgeois on the topic "Rational methods for groups of habitations". A third meeting, on "The functional city", was scheduled for Moscow in 1932, but was cancelled at the last minute. Instead, the delegates held their meeting on a cruise ship traveling between Marseille and Athens. On board, they together drafted a text on how modern cities should be organized. The text, called The Athens Charter, after considerable editing by Corbusier and others, was finally published in 1957 and became an influential text for city planners in the 1950s and 1960s. The group met once more in Paris in 1937 to discuss public housing and was scheduled to meet in the United States in 1939, but the meeting was cancelled because of the war. The legacy of the CIAM was a roughly common style and doctrine which helped define modern architecture in Europe and the United States after World War II.
Art Deco
The Art Deco architectural style (called Style Moderne in France), was modern, but it was not modernist; it had many features of modernism, including the use of reinforced concrete, glass, steel, chrome, and it rejected traditional historical models, such as the Beaux-Arts style and Neo-classicism; but, unlike the modernist styles of Le Corbusier and Mies van der Rohe, it made lavish use of decoration and color. It reveled in the symbols of modernity; lightning flashes, sunrises, and zig-zags. Art Deco had begun in France before World War I and spread through Europe; in the 1920s and 1930s it became a highly popular style in the United States, South America, India, China, Australia, and Japan. In Europe, Art Deco was particularly popular for department stores and movie theaters. The style reached its peak in Europe at the International Exhibition of Modern Decorative and Industrial Arts in 1925, which featured art deco pavilions and decoration from twenty countries. Only two pavilions were purely modernist; the Esprit Nouveau pavilion of Le Corbusier, which represented his idea for a mass-produced housing unit, and the pavilion of the USSR, by Konstantin Melnikov in a flamboyantly futurist style.
Later French landmarks in the Art Deco style included the Grand Rex movie theater in Paris, La Samaritaine department store by Henri Sauvage (1926–28), the Social and Economic Council building in Paris (1937–38) by Auguste Perret, and the Palais de Tokyo and Palais de Chaillot, both built by collectives of architects for the 1937 Paris International Exposition.
American Art Deco; the skyscraper style (1919–1939)
In the late 1920s and early 1930s, an exuberant American variant of Art Deco appeared in the Chrysler Building, Empire State Building and Rockefeller Center in New York City, and the Guardian Building in Detroit. The first skyscrapers in Chicago and New York had been designed in a neo-gothic or neoclassical style, but these buildings were very different; they combined modern materials and technology (stainless steel, concrete, aluminum, chrome-plated steel) with Art Deco geometry: stylized zig-zags, lightning flashes, fountains, sunrises, and, at the top of the Chrysler Building, Art Deco "gargoyles" in the form of stainless steel radiator ornaments. The interiors of these new buildings, sometimes termed "Cathedrals of Commerce", were lavishly decorated in bright contrasting colors, with geometric patterns variously influenced by Egyptian and Mayan pyramids, African textile patterns, and European cathedrals. Frank Lloyd Wright himself experimented with Mayan Revival, in the concrete cube-based Ennis House of 1924 in Los Angeles. The style appeared in the late 1920s and 1930s in all major American cities. The style was used most often in office buildings, but it also appeared in the enormous movie palaces that were built in large cities when sound films were introduced.
Streamline style and Public Works Administration (1933–1939)
The beginning of the Great Depression in 1929 brought an end to lavishly decorated Art Deco architecture and a temporary halt to the construction of new skyscrapers. It also brought in a new style, called "Streamline Moderne" or sometimes just Streamline. This style, sometimes modeled after the form of ocean liners, featured rounded corners, strong horizontal lines, and often nautical features, such as superstructures and steel railings. It was associated with modernity and especially with transportation; the style was often used for new airport terminals, train and bus stations, and for gas stations and diners built along the growing American highway system. In the 1930s the style was used not only in buildings, but in railroad locomotives, and even refrigerators and vacuum cleaners. It both borrowed from industrial design and influenced it.
In the United States, the Great Depression led to a new style for government buildings, sometimes called PWA Moderne, for the Public Works Administration, which launched gigantic construction programs in the U.S. to stimulate employment. It was essentially classical architecture stripped of ornament, and was employed in state and federal buildings, from post offices to the largest office building in the world at that time, the Pentagon (1941–43), begun just before the United States entered the Second World War.
American modernism (1919–1939)
During the 1920s and 1930s, Frank Lloyd Wright resolutely refused to associate himself with any architectural movements. He considered his architecture to be entirely unique and his own. Between 1916 and 1922, he broke away from his earlier prairie house style and worked instead on houses decorated with textured blocks of cement; this became known as his "Mayan style", after the pyramids of the ancient Mayan civilization. He experimented for a time with modular mass-produced housing. He identified his architecture as "Usonian", a combination of USA, "utopian" and "organic social order". His business was severely affected by the beginning of the Great Depression that began in 1929; he had fewer wealthy clients who wanted to experiment. Between 1928 and 1935, he built only two buildings: a hotel near Chandler, Arizona, and the most famous of all his residences, Fallingwater (1934–37), a vacation house in Pennsylvania for Edgar J. Kaufman. Fallingwater is a remarkable structure of concrete slabs suspended over a waterfall, perfectly uniting architecture and nature.
The Austrian architect Rudolph Schindler designed what could be called the first house in the modern style in 1922, the Schindler house.
Schindler also contributed to American modernism with his design for the Lovell Beach House in Newport Beach. The Austrian architect Richard Neutra moved to the United States in 1923, worked for a short time with Frank Lloyd Wright, and quickly became a force in American architecture through his modernist design for the same client, the Lovell Health House in Los Angeles. Neutra's most notable architectural work was the Kaufmann Desert House in 1946, and he designed hundreds of further projects.
Paris International Exposition of 1937 and the architecture of dictators
The 1937 Paris International Exposition effectively marked the end of Art Deco, and of pre-war architectural styles. Most of the pavilions were in a neoclassical Deco style, with colonnades and sculptural decoration. The pavilions of Nazi Germany, designed by Albert Speer, in a German neoclassical style topped by eagle and swastika, faced the pavilion of the Soviet Union, topped by enormous statues of a worker and a peasant carrying a hammer and sickle. As to the modernists, Le Corbusier was practically, but not quite, invisible at the Exposition; he participated in the Pavilion des temps nouveaux, but focused mainly on his painting. The one modernist who did attract attention was a collaborator of Le Corbusier, Josep Lluís Sert, the Spanish architect, whose pavilion of the Second Spanish Republic was a pure modernist glass and steel box. Inside it displayed the most modernist work of the Exposition, the painting Guernica by Pablo Picasso. The original building was destroyed after the Exposition, but it was recreated in 1992 in Barcelona.
The rise of nationalism in the 1930s was reflected in the Fascist architecture of Italy, and Nazi architecture of Germany, based on classical styles and designed to express power and grandeur. The Nazi architecture, much of it designed by Albert Speer, was intended to awe the spectators by its huge scale. Adolf Hitler intended to turn Berlin into the capital of Europe, grander than Rome or Paris. The Nazis closed the Bauhaus, and the most prominent modern architects soon departed for Britain or the United States. In Italy, Benito Mussolini wished to present himself as the heir to the glory and empire of ancient Rome. Mussolini's government was not as hostile to modernism as the Nazis; the spirit of Italian Rationalism of the 1920s continued, with the work of architect Giuseppe Terragni. His Casa del Fascio in Como, headquarters of the local Fascist party, was a perfectly modernist building, with geometric proportions (33.2 meters long by 16.6 meters high), a clean façade of marble, and a Renaissance-inspired interior courtyard. Opposed to Terragni was Marcello Piacentini, a proponent of monumental fascist architecture, who rebuilt the University of Rome, designed the Italian pavilion at the 1937 Paris Exposition, and planned a grand reconstruction of Rome on the fascist model.
New York World's Fair (1939)
The 1939 New York World's Fair marked a turning point in architecture between Art Deco and modern architecture. The theme of the Fair was the World of Tomorrow, and its symbols were the purely geometric Trylon and Perisphere. It had many monuments to Art Deco, such as the Ford Pavilion in the Streamline Moderne style, but also included the new International Style that would replace Art Deco as the dominant style after the War. The pavilions of Finland, by Alvar Aalto, of Sweden by Sven Markelius, and of Brazil by Oscar Niemeyer and Lúcio Costa, looked forward to a new style. They became leaders in the postwar modernist movement.
World War II: wartime innovation and postwar reconstruction (1939–1945)
World War II (1939–1945) and its aftermath was a major factor in driving innovation in building technology, and in turn, architectural possibilities. Wartime industrial demands resulted in shortages of steel and other building materials, leading to the adoption of new materials, such as aluminum. The war and postwar period brought greatly expanded use of prefabricated buildings, largely for the military and government. The semi-circular metal Nissen hut of World War I was revived as the Quonset hut. The years immediately after the war saw the development of radical experimental houses, including the enameled-steel Lustron house (1947–1950), and Buckminster Fuller's experimental aluminum Dymaxion House.
The unprecedented destruction caused by the war was another factor in the rise of modern architecture. Large parts of major cities – from Berlin, Tokyo, and Dresden to Rotterdam and east London, and all the port cities of France, particularly Le Havre, Brest, Marseille, and Cherbourg – had been destroyed by bombing. In the United States, little civilian construction had been done since the 1920s; housing was needed for millions of American soldiers returning from the war. The postwar housing shortages in Europe and the United States led to the design and construction of enormous government-financed housing projects, usually in the run-down centers of American cities, and in the suburbs of Paris and other European cities, where land was available.
One of the largest reconstruction projects was that of the city center of Le Havre, destroyed by the Germans and by Allied bombing in 1944; 133 hectares of buildings in the center were flattened, destroying 12,500 buildings and leaving 40,000 persons homeless. The architect Auguste Perret, a pioneer in the use of reinforced concrete and prefabricated materials, designed and built an entirely new center to the city, with apartment blocks, cultural, commercial, and government buildings. He restored historic monuments when possible, and built a new church, St. Joseph, with a lighthouse-like tower in the center to inspire hope. His rebuilt city was declared a UNESCO World Heritage site in 2005.
Le Corbusier and the Cité Radieuse (1947–1952)
Shortly after the War, the French architect Le Corbusier, who was nearly sixty years old and had not constructed a building in ten years, was commissioned by the French government to construct a new apartment block in Marseille. He called it the Unité d'Habitation in Marseille, but it more popularly took the name of the Cité Radieuse (and later "Cité du Fada", "City of the crazy one", in Marseille French), after his book about futuristic urban planning. Following his doctrines of design, the building had a concrete frame raised up above the street on pylons. It contained 337 duplex apartment units, fit into the framework like pieces of a puzzle. Each unit had two levels and a small terrace. Interior "streets" had shops, a nursery school, and other services, and the flat terrace roof had a running track, ventilation ducts, and a small theater. Le Corbusier designed furniture, carpets, and lamps to go with the building, all purely functional; the only decoration was a choice of interior colors that Le Corbusier gave to residents. The Unité d'Habitation became a prototype for similar buildings in other cities, both in France and Germany. Combined with his equally radical organic design for the Chapel of Notre-Dame-du-Haut at Ronchamp, this work propelled Le Corbusier into the first rank of postwar modern architects.
Team X and the 1953 International Congress of Modern Architecture
In the early 1950s, Michel Écochard, director of urban planning under the French Protectorate in Morocco, commissioned GAMMA—which initially included the architects Elie Azagury, Georges Candilis, Alexis Josic and Shadrach Woods—to design housing in the Hay Mohammedi neighborhood of Casablanca that provided a "culturally specific living tissue" for laborers and migrants from the countryside. Sémiramis, (Honeycomb), and Carrières Centrales were some of the first examples of this Vernacular Modernism.
At the 1953 Congrès Internationaux d'Architecture Moderne (CIAM), ATBAT-Afrique—the African branch of ATBAT, founded in 1947 by figures including Le Corbusier, Vladimir Bodiansky, and André Wogenscky—prepared a study of Casablanca's bidonvilles entitled "Habitat for the Greatest Number". The presenters, Georges Candilis and Michel Écochard, argued—against doctrine—that architects must consider local culture and climate in their designs. This generated great debate among modernist architects around the world and eventually provoked a schism and the creation of Team 10. Écochard's 8x8 meter model at Carrières Centrales earned him recognition as a pioneer in the architecture of collective housing, though his Moroccan colleague Elie Azagury was critical of him for serving as a tool of the French colonial regime and for ignoring the economic and social necessity that Moroccans live in higher density vertical housing.
Late modernist architecture
Late modernist architecture is generally understood to include buildings designed between about 1968 and 1980, with exceptions, while modernist architecture proper includes the buildings designed between 1945 and the 1960s. The late modernist style is characterized by bold shapes and sharp corners, slightly more defined than Brutalist architecture.
Postwar modernism in the United States (1945–1985)
The International Style of architecture had appeared in Europe, particularly in the Bauhaus movement, in the late 1920s. In 1932 it was recognized and given a name at an exhibition at the Museum of Modern Art in New York City organized by architect Philip Johnson and architectural critic Henry-Russell Hitchcock. Between 1937 and 1941, following the rise of Hitler and the Nazis in Germany, most of the leaders of the German Bauhaus movement found a new home in the United States, and played an important part in the development of American modern architecture.
Frank Lloyd Wright and the Guggenheim Museum
Frank Lloyd Wright was eighty years old in 1947; he had been present at the beginning of American modernism, and though he refused to accept that he belonged to any movement, continued to play a leading role almost to its end. One of his most original late projects was the campus of Florida Southern College in Lakeland, Florida, begun in 1941 and completed in 1943. He designed nine new buildings in a style that he described as "The Child of the Sun". He wrote that he wanted the campus to "grow out of the ground and into the light, a child of the sun".
He completed several notable projects in the 1940s and 1950s, including the Johnson Wax Headquarters and the Price Tower in Bartlesville, Oklahoma (1956). The Price Tower is unusual in that it is supported by its central core of four elevator shafts; the rest of the building is cantilevered from this core, like the branches of a tree. Wright originally planned the structure for an apartment building in New York City. That project was cancelled because of the Great Depression, and he adapted the design for an oil pipeline and equipment company in Oklahoma. He wrote that in New York City his building would have been lost in a forest of tall buildings, but that in Oklahoma it stood alone. The design is asymmetrical; each side is different.
In 1943 he was commissioned by the art collector Solomon R. Guggenheim to design a museum for his collection of modern art. His design was entirely original; a bowl-shaped building with a spiral ramp inside that led museum visitors on an upward tour of the art of the 20th century. Work began in 1946 but it was not completed until 1959, the year that he died.
Walter Gropius and Marcel Breuer
Walter Gropius, the founder of the Bauhaus, moved to England in 1934 and spent three years there before being invited to the United States by Walter Hudnut of the Harvard Graduate School of Design; Gropius became the head of the architecture faculty. Marcel Breuer, who had worked with him at the Bauhaus, joined him and opened an office in Cambridge. The fame of Gropius and Breuer attracted many students, who themselves became famous architects, including Ieoh Ming Pei and Philip Johnson. They did not receive an important commission until 1941, when they designed housing for workers in Kensington, Pennsylvania, near Pittsburgh. In 1945 Gropius and Breuer associated with a group of younger architects under the name TAC (The Architects Collaborative). Their notable works included the building of the Harvard Graduate School of Design, the U.S. Embassy in Athens (1956–57), and the headquarters of Pan American Airways in New York (1958–63).
Ludwig Mies van der Rohe
Ludwig Mies van der Rohe described his architecture with the famous saying, "Less is more". As the director of the school of architecture of what is now called the Illinois Institute of Technology from 1939 to 1956, Mies (as he was commonly known) made Chicago the leading city for American modernism in the postwar years. He constructed new buildings for the Institute in the modernist style, as well as two high-rise apartment buildings on Lake Shore Drive (1948–51), which became models for high-rises across the country. Other major works included the Farnsworth House in Plano, Illinois (1945–1951), a simple horizontal glass box that had an enormous influence on American residential architecture. The Chicago Convention Center (1952–54), Crown Hall at the Illinois Institute of Technology (1950–56), and the Seagram Building in New York City (1954–58) also set a new standard for purity and elegance. Based on granite pillars, the smooth glass and steel walls were given a touch of color by the use of bronze-toned I-beams in the structure. He returned to Germany in 1962–68 to build the Neue Nationalgalerie in Berlin. His students and followers included Philip Johnson and Eero Saarinen, whose work was substantially influenced by his ideas.
Richard Neutra and Charles and Ray Eames
Influential residential architects in the new style in the United States included Richard Neutra and Charles and Ray Eames. The most celebrated work of the Eameses was the Eames House in Pacific Palisades, California (1949), designed by Charles Eames in collaboration with Eero Saarinen. It is composed of two structures, an architect's residence and his studio, joined in the form of an L. The house, influenced by Japanese architecture, is made of translucent and transparent panels organized in simple volumes, often using natural materials, supported on a steel framework. The frame of the house was assembled in sixteen hours by five workmen. Eames brightened up the buildings with panels of pure color.
Richard Neutra continued to build influential houses in Los Angeles, using the theme of the simple box. Many of these houses erased the distinction between indoor and outdoor spaces with walls of plate glass. Neutra's Constance Perkins House in Pasadena, California (1962) was a re-examination of the modest single-family dwelling. It was built of inexpensive materials – wood, plaster, and glass – and completed at a cost of just under $18,000. Neutra scaled the house to the physical dimensions of its owner, a small woman. It features a reflecting pool which meanders under the glass walls of the house. One of Neutra's most unusual buildings was Shepherd's Grove in Garden Grove, California, which featured an adjoining parking lot where worshippers could follow the service without leaving their cars.
Skidmore, Owings and Merrill and Wallace K. Harrison
Many of the notable modern buildings in the postwar years were produced by two architectural mega-agencies, which brought together large teams of designers for very complex projects. The firm of Skidmore, Owings & Merrill was founded in Chicago in 1936 by Louis Skidmore and Nathaniel Owings, and joined in 1939 by engineer John Merrill. It soon went under the name of SOM. Its first big project was Oak Ridge National Laboratory in Oak Ridge, Tennessee, the gigantic government installation that produced plutonium for the first nuclear weapons. In 1964 the firm had eighteen "partner-owners", 54 "associate participants", and 750 architects, technicians, designers, decorators, and landscape architects. Their style was largely inspired by the work of Ludwig Mies van der Rohe, and their buildings soon had a large place in the New York skyline, including the Manhattan House (1950–51), Lever House (1951–52) and the Manufacturers Trust Company Building (1954). Later buildings by the firm include the Beinecke Library at Yale University (1963), the Willis Tower, formerly Sears Tower, in Chicago (1973) and One World Trade Center in New York City (2013), which replaced the building destroyed in the terrorist attack of 11 September 2001.
Wallace Harrison played a major part in the modern architectural history of New York; as the architectural advisor of the Rockefeller family, he helped design Rockefeller Center, the major Art Deco architectural project of the 1930s. He was supervising architect for the 1939 New York World's Fair, and, with his partner Max Abramovitz, was the builder and chief architect of the headquarters of the United Nations; Harrison headed a committee of international architects, which included Oscar Niemeyer (who produced the original plan approved by the committee) and Le Corbusier. Other landmark New York buildings designed by Harrison and his firm included the Metropolitan Opera House, the master plan for Lincoln Center, and John F. Kennedy International Airport.
Philip Johnson
Philip Johnson (1906–2005) was one of the youngest and last major figures in American modern architecture. He trained at Harvard with Walter Gropius, then was director of the department of architecture and modern design at the Museum of Modern Art from 1946 to 1954. In 1947, he published a book about Ludwig Mies van der Rohe, and in 1953 designed his own residence, the Glass House in New Canaan, Connecticut, in a style modeled after Mies's Farnsworth House. Beginning in 1955 he began to go in his own direction, moving gradually toward expressionism with designs that increasingly departed from the orthodoxies of modern architecture. His final and decisive break with modern architecture was the AT&T Building (later known as the Sony Tower, and now 550 Madison Avenue) in New York City (1979), an essentially modernist skyscraper completely altered by the addition of a broken pediment with a circular opening. This building is generally considered to mark the beginning of Postmodern architecture in the United States.
Eero Saarinen
Eero Saarinen (1910–1961) was the son of Eliel Saarinen, the most famous Finnish architect of the Art Nouveau period, who emigrated to the United States in 1923, when Eero was thirteen. He studied art and sculpture at the academy where his father taught, and then at the Académie de la Grande Chaumière in Paris before studying architecture at Yale University. His architectural designs were more like enormous pieces of sculpture than traditional modern buildings; he broke away from the elegant boxes inspired by Mies van der Rohe and used instead sweeping curves and parabolas, like the wings of birds. In 1948 he conceived the idea of a monument in St. Louis, Missouri in the form of a parabolic arch 192 meters high, made of stainless steel. He then designed the General Motors Technical Center in Warren, Michigan (1949–55), a glass modernist box in the style of Mies van der Rohe, followed by the IBM Research Center in Yorktown, Virginia (1957–61). His next works were a major departure in style; he produced a particularly striking sculptural design for the Ingalls Rink in New Haven, Connecticut (1956–59), an ice skating rink with a parabolic roof suspended from cables, which served as a preliminary model for his next and most famous work, the TWA Terminal at JFK airport in New York (1956–1962). His declared intention was to design a building that was distinctive and memorable, and also one that would capture the particular excitement of passengers before a journey. The structure is separated into four white concrete parabolic vaults, which together resemble a bird on the ground perched for flight. Each of the four curving roof vaults has two sides attached to columns in a Y form just outside the structure. One of the angles of each shell is lightly raised, and the other is attached to the center of the structure. The roof is connected with the ground by curtain walls of glass. All of the details inside the building, including the benches, counters, escalators, and clocks, were designed in the same style.
Louis Kahn
Louis Kahn (1901–74) was another American architect who moved away from the Mies van der Rohe model of the glass box, and other dogmas of the prevailing international style. He borrowed from a wide variety of styles and idioms, including neoclassicism. He was a professor of architecture at Yale University from 1947 to 1957, where his students included Eero Saarinen. From 1957 until his death he was a professor of architecture at the University of Pennsylvania. His work and ideas influenced Philip Johnson, Minoru Yamasaki, and Edward Durell Stone as they moved toward a more neoclassical style. Unlike Mies, he did not try to make his buildings look light; he constructed mainly with concrete and brick, and made his buildings look monumental and solid. He drew from a wide variety of different sources; the towers of the Richards Medical Research Laboratories were inspired by the architecture of the Renaissance towns he had seen in Italy as a resident architect at the American Academy in Rome in 1950. Notable buildings by Kahn in the United States include the First Unitarian Church of Rochester, New York (1962), and the Kimbell Art Museum in Fort Worth, Texas (1966–72). Following the example of Le Corbusier and his design of the government buildings in Chandigarh, the joint capital of the Indian states of Punjab and Haryana, Kahn designed the Jatiyo Sangshad Bhaban (National Assembly Building) in Dhaka, Bangladesh (1962–74), when that country won independence from Pakistan. It was Kahn's last work.
I. M. Pei
I. M. Pei (1917–2019) was a major figure in late modernism and the debut of Post-modern architecture. He was born in China and educated in the United States, studying architecture at the Massachusetts Institute of Technology. While the architecture school there still trained in the Beaux-Arts architecture style, Pei discovered the writings of Le Corbusier, and a two-day visit by Le Corbusier to the campus in 1935 had a major impact on Pei's ideas of architecture. In the late 1930s, he moved to the Harvard Graduate School of Design, where he studied with Walter Gropius and Marcel Breuer and became deeply involved in Modernism. After the war he worked on large projects for the New York real estate developer William Zeckendorf, before breaking away and starting his own firm. One of the first buildings his own firm designed was the Green Building at the Massachusetts Institute of Technology. While the clean modernist façade was admired, the building developed an unexpected problem; it created a wind tunnel effect, and in strong winds the doors could not be opened. Pei was forced to construct a tunnel so visitors could enter the building during high winds.
Between 1963 and 1967 Pei designed the Mesa Laboratory for the National Center for Atmospheric Research outside Boulder, Colorado, in an open area at the foothills of the Rocky Mountains. The project differed from Pei's earlier urban work; his design was a striking departure from traditional modernism, and looked as if it were carved out of the side of the mountain.
In the late modernist era, art museums overtook skyscrapers as the most prestigious architectural projects; they offered greater possibilities for innovation in form and more visibility. Pei established himself with his design for the Herbert F. Johnson Museum of Art at Cornell University in Ithaca, New York (1973), which was praised for its imaginative use of a small space, and its respect for the landscape and other buildings around it. This led to the commission for one of the most important museum projects of the period, the new East Wing of the National Gallery of Art in Washington, completed in 1978, and to another of Pei's most famous projects, the pyramid at the entrance of the Louvre Museum in Paris (1983–89). Pei chose the pyramid as the form that best harmonized with the Renaissance and neoclassical forms of the historic Louvre, as well as for its associations with Napoleon and the Battle of the Pyramids. Each face of the pyramid is supported by 128 beams of stainless steel, supporting 675 panels of glass, each .
Fazlur Rahman Khan
In 1955, Khan joined the architectural firm Skidmore, Owings & Merrill (SOM) and began working in Chicago. He was made a partner in 1966. He worked the rest of his life side by side with architect Bruce Graham. Khan introduced design methods and concepts for efficient use of material in building architecture. His first building to employ the tube structure was the DeWitt-Chestnut apartment building. During the 1960s and 1970s, he became noted for his designs for Chicago's 100-story John Hancock Center, which was the first building to use the trussed-tube design, and the 110-story Sears Tower, since renamed Willis Tower, the tallest building in the world from 1973 until 1998, which was the first building to use the bundled-tube design.
He believed that engineers needed a broader perspective on life, saying, "The technical man must not be lost in his own technology; he must be able to appreciate life, and life is art, drama, music, and most importantly, people." Khan's personal papers, most of which were in his office at the time of his death, are held by the Ryerson & Burnham Libraries at the Art Institute of Chicago. The Fazlur Khan Collection includes manuscripts, sketches, audio cassette tapes, slides and other materials regarding his work.
Khan's seminal work on tall building structural systems is still used today as the starting point when considering design options for tall buildings. Tube structures have since been used in many skyscrapers, including the construction of the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, Bank of China Tower and most other buildings in excess of 40 stories constructed since the 1960s. The strong influence of tube structure design is also evident in the world's current tallest skyscraper, the Burj Khalifa in Dubai.
Minoru Yamasaki
In the United States, Minoru Yamasaki found major independent success in implementing unique engineering solutions to then-complicated problems, including the space that elevator shafts took up on each floor, and dealing with his personal fear of heights. During this period, he created a number of office buildings which led to his innovative design of the towers of the World Trade Center in 1964, which began construction 21 March 1966. The first of the towers was finished in 1970. Many of his buildings feature superficial details inspired by the pointed arches of Gothic architecture, and make use of extremely narrow vertical windows. This narrow-windowed style arose from his own personal fear of heights. One particular design challenge of the World Trade Center related to the efficacy of its elevator system, which was unique in the world. Yamasaki integrated the fastest elevators at the time, running at 1,700 feet per minute. Instead of placing a large traditional elevator shaft in the core of each tower, Yamasaki created the Twin Towers' "Skylobby" system. The Skylobby design created three separate, connected elevator systems which would serve different segments of the building, depending on which floor was chosen, saving approximately 70% of the space used for a traditional shaft. The space saved was then used for office space. In addition to these accomplishments, he also designed the Pruitt-Igoe housing project, the largest housing project ever built in the United States, which was fully torn down in 1976 due to bad market conditions and the decrepit state of the buildings themselves. Separately, he also designed the Century Plaza Towers and One Woodward Avenue, among 63 other projects he developed during his career.
Postwar modernism in Europe (1945–1975)
In France, Le Corbusier remained the most prominent architect, though he built few buildings there. His most prominent late work was the convent of Sainte Marie de La Tourette in Eveux-sur-l'Arbresle. The Convent, built of raw concrete, was austere and without ornament, inspired by the medieval monasteries he had visited on his first trip to Italy.
In Britain, the major figures in modernism included Wells Coates (1895–1958), FRS Yorke (1906–1962), James Stirling (1926–1992) and Denys Lasdun (1914–2001). Lasdun's best-known work is the Royal National Theatre (1967–1976) on the south bank of the Thames. Its raw concrete and blockish form offended British traditionalists; Charles III, King of the United Kingdom, compared it with a nuclear power station.
In Belgium, a major figure was Charles Vandenhove (born 1927) who constructed an important series of buildings for the University Hospital Center in Liège. His later work ventured into colorful rethinking of historical styles, such as Palladian architecture.
In Finland, the most influential architect was Alvar Aalto, who adapted his version of modernism to the Nordic landscape, light, and materials, particularly the use of wood. After World War II, he taught architecture in the United States. In Denmark, Arne Jacobsen was the best-known of the modernists, who designed furniture as well as carefully proportioned buildings.
In Italy, the most prominent modernist was Gio Ponti, who worked often with the structural engineer Pier Luigi Nervi, a specialist in reinforced concrete. Nervi created concrete beams of exceptional length, twenty-five meters, which allowed greater flexibility in forms and greater heights. Their best-known design was the Pirelli Building in Milan (1958–1960), which for decades was the tallest building in Italy.
The most famous Spanish modernist was the Catalan architect Josep Lluis Sert, who worked with great success in Spain, France, and the United States. In his early career, he worked for a time under Le Corbusier, and designed the Spanish pavilion for the 1937 Paris Exposition. His notable later work included the Fondation Maeght in Saint-Paul-de-Provence, France (1964), and the Harvard Science Center in Cambridge, Massachusetts. He served as Dean of Architecture at the Harvard School of Design.
Notable German modernists included Johannes Krahn, who played an important part in rebuilding German cities after World War II, and built several important museums and churches, notably St. Martin, Idstein, which artfully combined stone masonry, concrete, and glass. Leading Austrian architects of the style included Gustav Peichl, whose later works included the Art and Exhibition Center of the German Federal Republic in Bonn, Germany (1989).
Tropical Modernism
Tropical Modernism, or 'Tropical Modern', is a style of architecture that merges modernist architecture principles with tropical vernacular traditions, emerging in the mid-20th century. The term is used to describe modernist architecture in various regions of the world, including Latin America, Asia and Africa, as detailed below. Architects adapted to local conditions by using features which encouraged protection from harsh sunlight (such as solar shading) and encouraged the flow of cooling breezes through buildings (through narrow corridors). Some contend that the style originated in the 'hot, humid conditions' of West Africa in the 1940s. Typical features include geometric screens. Maxwell Fry and Jane Drew, of the Architectural Association architecture school in London, UK, made important contributions to research and practice in the Tropical Modernism style, after founding the School of Tropical Study at the AA. Speaking about the adoption of modernism in post-independence Ghana, Professor Ola Ukuku states that ‘those involved in developing Tropical Modernism were actually operating as agents of the colonies at the time’.
Latin America
Architectural historians sometimes label Latin American modernism as "tropical modernism". This reflects architects who adapted modernism to the tropical climate as well as the sociopolitical contexts of Latin America.
Brazil became a showcase of modern architecture in the late 1930s through the work of Lúcio Costa (1902–1998) and Oscar Niemeyer (1907–2012). Costa had the lead and Niemeyer collaborated on the Ministry of Education and Health in Rio de Janeiro (1936–43) and the Brazilian pavilion at the 1939 World's Fair in New York. Following the war, Niemeyer, along with Le Corbusier, conceived the form of the United Nations Headquarters constructed under Wallace Harrison.
Lúcio Costa also had overall responsibility for the plan of the most audacious modernist project in Brazil: the creation of a new capital, Brasília, constructed between 1956 and 1961. Costa made the general plan, laid out in the form of a cross, with the major government buildings in the center. Niemeyer was responsible for designing the government buildings, including the palace of the President and the National Assembly, composed of two towers for the two branches of the legislature and two meeting halls, one with a cupola and the other with an inverted cupola. Niemeyer also built the cathedral, eighteen ministries, and giant blocks of housing, each designed for three thousand residents, each with its own school, shops, and chapel. Modernism was employed both as an architectural principle and as a guideline for organizing society, as explored in The Modernist City.
Following a military coup d'état in Brazil in 1964, Niemeyer moved to France, where he designed the modernist headquarters of the French Communist Party in Paris (1965–1980), a miniature of his United Nations plan.
Mexico also had a prominent modernist movement. Important figures included Félix Candela, born in Spain, who emigrated to Mexico in 1939; he specialized in concrete structures in unusual parabolic forms. Another important figure was Mario Pani, who designed the National Conservatory of Music in Mexico City (1949) and the Torre Insignia (1988); Pani was also instrumental in the construction of the new University of Mexico City in the 1950s, alongside Juan O'Gorman, Eugenio Peschard, and Enrique del Moral. The Torre Latinoamericana, designed by Augusto H. Alvarez, was one of the earliest modernist skyscrapers in Mexico City (1956); it successfully withstood the 1985 Mexico City earthquake, which destroyed many other buildings in the city center. Pedro Ramirez Vasquez and Rafael Mijares designed the Olympic Stadium for the 1968 Olympics, and Antoni Peyri and Candela designed the Palace of Sports. Luis Barragán was another influential figure in Mexican modernism; his raw concrete residence and studio in Mexico City looks like a blockhouse on the outside, while inside it features great simplicity in form, pure colors, abundant natural light, and, one of his signatures, a stairway without a railing. He won the Pritzker Architecture Prize in 1980, and the house was declared a UNESCO World Heritage Site in 2004.
Asia and Australia
Japan, like Europe, had an enormous shortage of housing after the war, due to the bombing of many cities. 4.2 million housing units needed to be replaced. Japanese architects combined both traditional and modern styles and techniques. One of the foremost Japanese modernists was Kunio Maekawa (1905–1986), who had worked for Le Corbusier in Paris until 1930. His own house in Tokyo was an early landmark of Japanese modernism, combining traditional style with ideas he acquired working with Le Corbusier. His notable buildings include concert halls in Tokyo and Kyoto and the International House of Japan in Tokyo, all in the pure modernist style.
Kenzo Tange (1913–2005) worked in the studio of Kunio Maekawa from 1938 until 1945 before opening his own architectural firm. His first major commission was the Hiroshima Peace Memorial Museum. He designed many notable office buildings and cultural centers, as well as the Yoyogi National Gymnasium for the 1964 Summer Olympics in Tokyo. The gymnasium, built of concrete, features a roof suspended over the stadium on steel cables.
The Danish architect Jørn Utzon (1918–2008) worked briefly with Alvar Aalto, studied the work of Le Corbusier, and traveled to the United States to meet Frank Lloyd Wright. In 1957 he designed one of the most recognizable modernist buildings in the world; the Sydney Opera House. He is known for the sculptural qualities of his buildings, and their relationship with the landscape. The five concrete shells of the structure resemble seashells by the beach. Begun in 1957, the project encountered considerable technical difficulties making the shells and getting the acoustics right. Utzon resigned in 1966, and the opera house was not finished until 1973, ten years after its scheduled completion.
In India, modernist architecture was promoted by the postcolonial state under Prime Minister Jawaharlal Nehru, most notably by inviting Le Corbusier to design the city of Chandigarh. Although Nehru advocated for young Indians to be part of Le Corbusier's design team in order to refine their skills whilst building their city, the team included only one female Indian architect, Eulie Chowdhury. Important Indian modernist architects also include BV Doshi, Charles Correa, Raj Rewal, Achyut Kanvinde, and Habib Rahman. Much discussion around modernist architecture took place in the journal MARG. In Sri Lanka, Geoffrey Bawa pioneered Tropical Modernism. Minnette De Silva was an important Sri Lankan modernist architect.
Post-independence architecture in Pakistan is a blend of Islamic and modern styles of architecture, with influences from Mughal, Indo-Islamic and international architectural designs. The 1960s and 1970s were a period of architectural significance. Jinnah's Mausoleum, the Minar-e-Pakistan, Bab-e-Khyber, the Islamic Summit Minar and the Faisal Mosque date from this time.
Africa
Modernist architecture in Ghana is also considered part of Tropical Modernism.
Some notable modernist architects in Morocco were Elie Azagury and Jean-François Zevaco.
Asmara, the capital of Eritrea, is well known for its modernist architecture dating from the period of Italian colonization.
Preservation
Several works or collections of modern architecture have been designated by UNESCO as World Heritage Sites. In addition to the early experiments associated with Art Nouveau, these include a number of the structures mentioned above in this article: the Rietveld Schröder House in Utrecht, the Bauhaus structures in Weimar, Dessau, and Bernau, the Berlin Modernism Housing Estates, the White City of Tel Aviv, the city of Asmara, the city of Brasília, the Ciudad Universitaria of UNAM in Mexico City and the University City of Caracas in Venezuela, the Sydney Opera House, and the Centennial Hall in Wrocław, along with select works from Le Corbusier and Frank Lloyd Wright.
Private organizations such as Docomomo International, the World Monuments Fund, and the Recent Past Preservation Network are working to safeguard and document imperiled Modern architecture. In 2006, the World Monuments Fund launched Modernism at Risk, an advocacy and conservation program. The organization MAMMA is working to document and preserve modernist architecture in Morocco.
See also
Complementary architecture
Contemporary architecture
Critical regionalism
Ecomodernism
List of post-war Category A listed buildings in Scotland
Modern art
Modern furniture
Modernisme
New Urbanism
Organic architecture
References
Bibliography
Colquhoun, Alan, Modern Architecture, Oxford history of art, Oxford University Press, 2002,
Morgenthaler, Hans Rudolf, The Meaning of Modern Architecture: Its Inner Necessity and an Empathetic Reading, Ashgate Publishing, Ltd., 2015, .
Further reading
USA: Modern Architectures in History.
The work covers in depth the original main contributors to modern architecture.
Pfeiffer, Bruce Brooks. Frank Lloyd Wright, 1867–1959: Building for Democracy. Taschen, 2021.
This book goes into depth about Frank Lloyd Wright, his contributions to modern architecture, and the focus of his work within the movement.
"What Is Modern Architecture?" Hammond Historic District.
The article elaborates on the origins of modern architecture and on what constitutes it.
External links
Six Building Designers Who Are Redefining Modern Architecture, an April 2011 radio and Internet report by the Special English service of the Voice of America.
Architecture and Modernism
"Preservation of Modern Buildings" edition of AIA Architect
Brussels50s60s.be, Overview of the architecture of the 1950s and 1960s in Brussels
A Grand Design: The Toronto City Hall Design Competition Modernist designs from the 1958 international competition
Architectural history
Architectural design
Architectural theory | Modern architecture | [
"Engineering"
] | 14,654 | [
"Architectural history",
"Architectural theory",
"Postmodern architecture",
"Architectural design",
"Design",
"Architecture"
] |
315,928 | https://en.wikipedia.org/wiki/Continuous%20Tone-Coded%20Squelch%20System | In telecommunications, Continuous Tone-Coded Squelch System or CTCSS is one type of in-band signaling that is used to reduce the annoyance of listening to other users on a shared two-way radio communication channel. It is sometimes referred to as tone squelch or PL for Private Line, a trademark of Motorola. It does this by adding a low frequency audio tone to the voice. Where more than one group of users is on the same radio frequency (called co-channel users), CTCSS circuitry mutes those users who are using a different CTCSS tone or no CTCSS.
CTCSS tone codes are sometimes referred to as sub-channels, but this is a misnomer because no additional radio channels are created. All users with different CTCSS tones on the same channel are still transmitting on the identical radio frequency, and their transmissions interfere with each other; however, the interference is masked under most conditions. Although it provides some protection against interference, CTCSS does not offer any security against interception or jamming, and receivers without CTCSS enabled will still hear all traffic.
A receiver with just a carrier or noise squelch does not suppress any sufficiently strong signal; in CTCSS mode it unmutes only when the signal also carries the correct sub-audible audio tone. The tones are not actually below the range of human hearing, but are poorly reproduced by most communications-grade speakers and in any event are usually filtered out before being sent to the speaker or headphone.
Theory of operation
Radio transmitters using CTCSS always transmit their own tone code whenever the transmit button is pressed. The tone is transmitted at a low level simultaneously with the voice. This is called CTCSS encoding. CTCSS tones range from 67 to 257 Hz. The tones are usually referred to as sub-audible tones. In an FM two-way radio system, CTCSS encoder levels are usually set for 15% of system deviation. For example, in a 5 kHz deviation system, the CTCSS tone level would normally be set to 750 Hz deviation. Engineered systems may call for different level settings in the 500 Hz to 1 kHz (10–20%) range.
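The relationship between the encoder level and the resulting tone deviation can be illustrated with a short sketch. This is a hypothetical illustration rather than anything from a standard: the 5 kHz system deviation and the 15% encoder level follow the example above, while the sample rate, the 103.5 Hz tone choice, and the simulated "voice" signal are arbitrary assumptions.

```python
# Hypothetical sketch of CTCSS encoding at baseband (assumptions noted above).
import numpy as np

FS = 8000                  # audio sample rate in Hz (assumed)
SYSTEM_DEVIATION = 5000.0  # peak FM deviation of the system, Hz (from the example above)
CTCSS_FRACTION = 0.15      # encoder level: 15% of system deviation
CTCSS_TONE = 103.5         # chosen CTCSS tone in Hz (arbitrary choice)

tone_deviation = CTCSS_FRACTION * SYSTEM_DEVIATION  # 750 Hz, matching the example

t = np.arange(0, 1.0, 1.0 / FS)
voice = np.random.uniform(-1.0, 1.0, t.size)         # stand-in for speech audio

# The modulating signal that would drive an FM modulator: speech scaled to the
# remaining deviation budget plus the low-level sub-audible tone.
modulating = ((SYSTEM_DEVIATION - tone_deviation) * voice
              + tone_deviation * np.sin(2 * np.pi * CTCSS_TONE * t))
print(f"CTCSS tone deviation: {tone_deviation:.0f} Hz")
```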
The ability of a receiver to mute the audio until it detects a carrier with the correct CTCSS tone is called decoding. Receivers are equipped with features to allow the CTCSS "lock" to be disabled. On a base station console, a microphone may have a split push-to-talk button. Pressing one half of the button, (often marked with a speaker icon or the letters "MON", short for "MONitor") disables the CTCSS decoder and reverts the receiver to hearing any signal on the channel. This is called the monitor function. There is sometimes a mechanical interlock: the user must push down and hold the monitor button or the transmit button is locked and cannot be pressed. This interlock option is referred to as compulsory monitor before transmit (the user is forced to monitor by the hardware design of the equipment itself). On mobile radios, the microphone is usually stored in a hang-up clip or a hang-up box containing a microphone clip. When the user pulls the microphone out of the hang-up clip to make a call, a switch in the clip (box) forces the receiver to revert to conventional carrier squelch mode ("monitor"). Some designs relocate the switch into the body of the microphone itself. In hand-held radios, an LED indicator may glow green, yellow, or orange to indicate another user is talking on the channel. Hand-held radios usually have a switch or push-button to monitor. Some modern radios have a feature called "Busy Channel Lockout", which will not allow the user to transmit as long as the radio is receiving another signal.
A CTCSS decoder is based on a very narrow bandpass filter which passes the desired CTCSS tone. The filter's output is amplified and rectified, creating a DC voltage whenever the desired tone is present. The DC voltage is used to turn on, enable, or unmute the receiver's speaker audio stages. When the tone is present, the receiver is unmuted, when it is not present the receiver is silent.
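In software-defined receivers the same filter, detect and unmute chain is often implemented digitally. The sketch below is only an illustration of the idea and is not taken from any particular radio's firmware: it uses the Goertzel algorithm as the narrow single-frequency detector, and the function names and detection threshold are assumptions chosen for the example.

```python
import math

def goertzel_power(samples, sample_rate, tone_hz):
    """Estimate the signal power near a single frequency (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * tone_hz / sample_rate)   # nearest analysis bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Power in the selected bin, normalised by block length
    return (s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2) / n

def ctcss_unmute(samples, sample_rate, expected_tone_hz, threshold=1e-3):
    """Return True (unmute the audio path) when the expected sub-audible tone is present."""
    return goertzel_power(samples, sample_rate, expected_tone_hz) > threshold
```

Because the detector has to observe a reasonable number of tone cycles, the sample block must be long compared with the tone period, which is also why the decode delay discussed next grows as the tone frequency falls.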
Because period is the inverse of frequency, lower tone frequencies can take longer to decode (depending on the decoder design). Receivers in a system using 67.0 Hz can take noticeably longer to decode than ones using 203.5 Hz, which in turn can take longer than ones decoding 250.3 Hz. In some repeater systems, the time lag can be significant. The lower tone may cause one or two syllables to be clipped before the receiver audio is unmuted (is heard). This is because receivers are decoding in a chain. The repeater receiver must first sense the carrier signal on the input, then decode the CTCSS tone. When that occurs, the system transmitter turns on, encoding the CTCSS tone on its carrier signal (the output frequency). All radios in the system start decoding after they sense a carrier signal then recognize the tone on the carrier as valid. Any distortion on the encoded tone will also affect the decoding time.
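As a purely illustrative model of that delay, assume a decoder must observe some fixed number of complete tone cycles before declaring the tone valid; the wait then scales with the tone period. The cycle count of 10 below is an assumption for the sketch, not a figure from any standard.

```python
CYCLES_NEEDED = 10  # assumed number of complete cycles a decoder needs to validate a tone

for tone_hz in (67.0, 203.5, 250.3):
    decode_ms = CYCLES_NEEDED / tone_hz * 1000.0
    print(f"{tone_hz:6.1f} Hz -> about {decode_ms:5.1f} ms")
# Under this assumption, 67.0 Hz needs nearly four times as long as 250.3 Hz.
```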
Engineered systems often use tones in the 127.3 Hz to 162.2 Hz range to balance fast decoding with keeping the tones out of the audible part of the receive audio. Most amateur radio repeater controller manufacturers offer an audio delay option—this delays the repeated speech audio for a selectable number of milliseconds before it is retransmitted. During this fixed delay period (the amount of which is adjusted during installation, then locked down), the CTCSS decoder has enough time to recognize the right tone. This way the problem with lost syllables at the beginning of a transmission can be overcome without having to use higher frequency tones.
In early systems, it was common to avoid the use of adjacent tones. On channels where every available tone is not in use, this is good engineering practice. For example, an ideal would be to avoid using 97.4 Hz and 100.0 Hz on the same channel. The tones are so close that some decoders may periodically false trigger. The user occasionally hears a syllable or two of co-channel users on a different CTCSS tone talking. As electronic components age, or through production variances, some radios in a system may be better than others at rejecting nearby tone frequencies.
Digital-Coded Squelch
CTCSS is an analog system. A later Digital-Coded Squelch (DCS) system was developed by Motorola under the trademarked name Digital Private Line (DPL). General Electric responded with the same system under the name of Digital Channel Guard (DCG). The generic name is CDCSS (Continuous Digital-Coded Squelch System). The use of digital squelch on a channel that has existing tone squelch users precludes the use of the 131.8 and 136.5 Hz tones as the digital bit rate is 134.4 bits per second and the decoders set to those two tones will sense an intermittent signal (referred to in the two-way radio field as "falsing" the decoder).
List of tones
CTCSS tones are standardized by the EIA/TIA. The full list of the tones can be found in their original standard RS-220A, and the most recent EIA/TIA-603-E; the CTCSS tones also may be listed in manufacturers' instruction, maintenance, or operational manuals. Some systems use non-standard tones. NATO military radios use 150.0 Hz, and this can be found in the user manuals for the radios. Some areas do not use certain tones. For example, the tone of 100.0 Hz is avoided in the United Kingdom since this is twice the UK mains power line frequency; an inadequately smoothed power supply may cause unwanted squelch opening (this is true in many other areas that use 50 Hz power). Tones typically come from one of three series as listed below along with the two-character PL code used by Motorola to identify tones. The most common set of supported tones is a set of 39 tones including all tones with Motorola PL codes, except for the tones 8Z, 9Z, and 0Z (zero-Z). The lowest series has adjacent tones that are roughly in the ratio of 2^0.05 to 1 (≈1.035265), while the other two series have adjacent tones roughly in the ratio of 10^0.015 to 1 (≈1.035142). An example technical description can be found in a Philips technical information sheet about their CTCSS products.
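The geometric spacing mentioned above can be reproduced numerically. The sketch below is an approximation built only from those ratios; it is not the official EIA/TIA tone table, and real standardized tones differ from these computed values by up to about 0.1 Hz.

```python
LOW_SERIES_RATIO = 2 ** 0.05      # ≈ 1.035265, spacing of the lowest series
OTHER_SERIES_RATIO = 10 ** 0.015  # ≈ 1.035142, spacing of the other two series

def tone_series(start_hz, ratio, count):
    """Generate `count` tones spaced by a constant ratio, rounded to 0.1 Hz."""
    tones, tone = [], start_hz
    for _ in range(count):
        tones.append(round(tone, 1))
        tone *= ratio
    return tones

print(tone_series(67.0, LOW_SERIES_RATIO, 6))
# [67.0, 69.4, 71.8, 74.3, 77.0, 79.7] -- close to, but not identical with, the published tones
```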
Reverse CTCSS
Some professional systems use a phase-reversal of the CTCSS tone at the end of a transmission to eliminate the squelch crash or squelch tail. This is common with General Electric Mobile Radio and Motorola systems. When the user releases the push-to-talk button the CTCSS tone does a phase shift for about 200 milliseconds. In older systems, the tone decoders used mechanical reeds to decode CTCSS tones. When audio at a resonant pitch was fed into the reed, it would resonate, which would turn on the speaker audio. The end-of-transmission phase reversal (called "Reverse Burst" by Motorola (and trademarked by them) and "Squelch Tail Elimination" or "STE" by GE) caused the reed to abruptly stop vibrating, which would cause the receive audio to instantly mute. Initially, a phase shift of 180 degrees was used, but experience showed that a shift of ±120 to 135 degrees was optimal in halting the mechanical reeds. These systems often have audio muting logic set for CTCSS only. If a transmitter without the phase reversal feature is used, the squelch can remain unmuted for as long as the reed continues to vibrate—up to 1.5 seconds at the end of a transmission as it coasts to a stop (sometimes referred to as the "flywheel effect" or called "freewheeling").
Interference and CTCSS
In non-critical uses, CTCSS can also be used to hide the presence of interfering signals such as receiver-produced intermodulation. Receivers with poor specifications—such as scanners or low-cost mobile radios—cannot reject the strong signals present in urban environments. The interference will still be present and may block the receiver, but the decoder will prevent it from being heard. It will still degrade system performance but the user will not have to hear the noises produced by receiving the interference.
CTCSS is commonly used in VHF and UHF amateur radio operations for this purpose. Wideband and extremely sensitive radios are common in the amateur radio field, which imposes limits on achievable intermodulation and adjacent-channel performance.
Family Radio Service (FRS), PMR446 and other consumer-grade radios often include a CTCSS feature called "Interference Eliminator Codes", "sub-channels", "privacy tones", or "privacy codes". These do not afford privacy or security, but serve only to reduce annoying interference by other users or other noise sources; a receiver with the tone squelch turned off will hear everything on the channel. GMRS/FRS radios offering CTCSS codes typically provide a choice of 38 tones, but the tone number and the tone frequencies used may vary from one manufacturer to another (or even within product lines of one manufacturer) and should not be assumed to be consistent (i.e. "Tone 12" in one set of radios may not be "Tone 12" in another).
See also
Radiotelephony procedure
Signaling (telecommunications)
References
Radio technology
Telecommunication protocols | Continuous Tone-Coded Squelch System | [
"Technology",
"Engineering"
] | 2,437 | [
"Information and communications technology",
"Telecommunications engineering",
"Radio technology"
] |
315,949 | https://en.wikipedia.org/wiki/Airspeed%20indicator | The airspeed indicator (ASI) or airspeed gauge is a flight instrument indicating the airspeed of an aircraft in kilometres per hour (km/h), knots (kn or kt), miles per hour (MPH) and/or metres per second (m/s). ICAO recommends the use of km/h; however, knots (kt) is currently the most widely used unit. The ASI measures the pressure differential between static pressure from the static port, and total pressure from the pitot tube. This difference in pressure is registered with the ASI pointer on the face of the instrument.
Colour-coded speeds and ranges
The ASI has standard colour-coded markings to indicate safe operation within the limitations of the aircraft. At a glance, the pilot can determine a recommended speed (V speeds) or if speed adjustments are needed. Single and multi-engine aircraft have common markings. For instance, the green arc indicates the normal operating range of the aircraft, from VS1 to VNO. The white arc indicates the flap operating range, VSO to VFE, used for approaches and landings. The yellow arc cautions that flight should be conducted in this range only in smooth air, while the red line (VNE) at the top of the yellow arc indicates damage or structural failure may result at higher speeds.
The ASI in multi-engine aircraft includes two additional radial markings, one red and one blue, associated with potential engine failure. The radial red line near the bottom of the green arc indicates VMC, the minimum indicated airspeed at which the aircraft can be controlled with the critical engine inoperative. The radial blue line indicates VYSE, the speed for best rate of climb with the critical engine inoperative.
Operation
The ASI is the only flight instrument that uses both the static system and the pitot system. Static pressure enters the ASI case, while total pressure flexes the diaphragm, which is connected to the ASI pointer via mechanical linkage. The pressures are equal when the aircraft is stationary on the ground, and the instrument hence shows a reading of zero. When the aircraft is moving forward, air entering the pitot tube is at a greater pressure than the static line, which flexes the diaphragm, moving the pointer. The ASI is checked for a zero reading before takeoff, and during the takeoff roll to confirm that it is increasing appropriately.
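The quantity the diaphragm responds to is the dynamic pressure, and for low-speed, incompressible flow the relationship is approximately Δp = ½ρv². The sketch below applies that approximation at sea-level standard density; it is only an illustration of the principle, not how a real ASI is calibrated (calibration also accounts for compressibility).

```python
import math

RHO_SEA_LEVEL = 1.225  # kg/m^3, ISA sea-level air density

def airspeed_knots_from_pressure_diff(delta_p_pa, rho=RHO_SEA_LEVEL):
    """Incompressible approximation: delta_p = 0.5 * rho * v**2, result in knots."""
    v_ms = math.sqrt(2.0 * delta_p_pa / rho)
    return v_ms * 1.943844  # metres per second to knots

# A pitot-static pressure difference of 1,225 Pa corresponds to roughly 87 knots.
print(round(airspeed_knots_from_pressure_diff(1225.0), 1))
```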
The pitot tube may become blocked, because of insects, dirt or failure to remove the pitot cover. A blockage will prevent ram air from entering the system. If the pitot opening is blocked, but the drain hole is open, the system pressure will drop to ambient pressure, and the ASI pointer will drop to a zero reading. If both the opening and drain holes are blocked, the ASI will not indicate any change in airspeed. However, the ASI pointer will show altitude changes, as the associated static pressure changes. If both the pitot tube and the static system are blocked, the ASI pointer will read zero. If the static ports are blocked but the pitot tube remains open, the ASI will operate, but inaccurately.
Types of airspeeds
There are four types of airspeed that can be remembered with the acronym ICE-T. Indicated airspeed (IAS) is read directly off the ASI. It has no correction for air density variations, installation or instrument errors. Calibrated airspeed (CAS) is corrected for installation and instrument errors. Equivalent airspeed (EAS) is calibrated airspeed (CAS) corrected for the compressibility of air at a non-trivial Mach number. True airspeed (TAS) is CAS corrected for altitude and nonstandard temperature. TAS is used for flight planning. TAS increases as altitude increases, as air density decreases. TAS may be determined via a flight computer, such as the E6B. Some ASIs have a TAS ring. Alternatively, a rule of thumb is to add 2 percent to the CAS for every 1,000 feet (about 300 m) of altitude gained.
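That rule of thumb translates directly into a small calculation; the function below is only a sketch of the approximation and is no substitute for an E6B or a full density-altitude correction.

```python
def true_airspeed_estimate(cas_knots, altitude_ft):
    """Rule-of-thumb TAS: add roughly 2% to CAS per 1,000 ft of altitude."""
    return cas_knots * (1.0 + 0.02 * altitude_ft / 1000.0)

# Example: 100 knots CAS at 8,000 ft gives roughly 116 knots TAS.
print(round(true_airspeed_estimate(100.0, 8000.0)))
```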
Jet aircraft
Jet aircraft do not have VNO and VNE like piston-engined aircraft, but instead have a maximum operating speed expressed in knots (VMO) and a maximum operating Mach number (MMO). Thus, a pilot of a jet aeroplane needs both an airspeed indicator and a Machmeter, with appropriate red lines. An ASI will include a red-and-white striped pointer, or "barber's pole", that automatically moves to indicate the applicable speed limit at any given time.
Angle of attack and Lift Reserve Indicators
An aeroplane can stall at any speed, so monitoring the ASI alone will not prevent a stall. The critical angle of attack (AOA) determines when an aircraft will stall. For a particular configuration, it is a constant independent of weight, bank angle, temperature, density altitude, and the center of gravity of an aircraft. An AOA indicator provides stall situational awareness as a means for monitoring the onset of the critical AOA. The AOA indicator will show the current AOA and its proximity to the critical AOA.
Similarly, the Lift Reserve Indicator (LRI) provides a measure of the amount of lift being generated. It uses a pressure differential system to provide the pilot with a visual representation of reserve lift available.
See also
Acronyms and abbreviations in avionics
Flight instruments
Global Positioning System
Indicated airspeed
ICAO recommendations on use of the International System of Units
Position error
Speedometer
V speeds
References
Further reading
Installing and flying the Lift Reserve Indicator, article and photos by Sam Buchanan http://home.hiwaay.net/~sbuc/journal/liftreserve.htm
Aircraft instruments
Airspeed
Measuring instruments
Navigational flight instruments
Speed sensors | Airspeed indicator | [
"Physics",
"Technology",
"Engineering"
] | 1,169 | [
"Physical quantities",
"Measuring instruments",
"Airspeed",
"Aircraft instruments",
"Speed sensors",
"Wikipedia categories named after physical quantities",
"Navigational flight instruments"
] |
315,962 | https://en.wikipedia.org/wiki/NetKernel | NetKernel is a British software company and a software platform of the same name that is used for high-performance computing, enterprise application integration, and energy-efficient computation.
It allows developers to cleanly separate code from architecture. It can be used as an application server, embedded in a Java container or employed as a cloud computing platform.
As a platform, it is an implementation of the resource-oriented computing (ROC) abstraction. ROC is a logical computing model that resides on top of but is completely isolated from the physical realm of code and objects. In ROC, information and services are identified by logical addresses which are resolved to physical endpoints for the duration of a request and then released. Logical indirect addressing results in flexible systems that can be changed while the system is in operation. In NetKernel, the boundary between the logical and physical layers is intermediated by an operating-system-caliber microkernel that can perform various transparent optimizations.
The idea of using resources to model abstract information stems from the REST architectural style and the World Wide Web. The idea of using a uniform addressing model stems from the Unix operating system. NetKernel can be considered a unification of the Web and Unix implemented as a software operating system running on a monolithic microkernel within a single computer.
NetKernel was developed by 1060 Research and is offered under a dual open-source software and commercial software license.
History
NetKernel was started at Hewlett-Packard Labs in 1999. It was conceived by Dr. Russ Perry, Dr. Royston Sellman and Dr. Peter Rodgers as a general purpose XML operating environment that could address the needs of the exploding interest in XML dialects for intra-industry XML messaging.
Rodgers saw the web as an implementation of a general abstraction which he extrapolated as ROC, but whereas the web is limited to publishing information; he set about conceiving a solution that could perform computation using similar principles. Working in close partnership with co-founder Tony Butterfield, they discovered a method for writing software that could be executed across a logical model, separated from the physical realm of code and objects. Recognising the potential for this approach, they spun out of HP Labs.
Rodgers and Butterfield began their company, "1060 Research Limited", in Chipping Sodbury, a small market town on the edge of the Cotswolds region of England, in 2002, and over a number of years developed the platform that became NetKernel.
In early 2018, 1060 Research announced that it was appointing a new CEO, Charles Radclyffe. Radclyffe announced to the NetKernel community in February 2018 that the team were working on a new platform based on NKEE 6 which would be fully hosted, programmable and accessible via the web - NetKernel Cloud. Radclyffe resigned after six months.
Concepts
Resource
A resource is identifiable information within a computer system. Resources are an abstract notion and they cannot be manipulated directly. When a resource is requested, a concrete, immutable representation is provided which captures the current state of the resource. This is directly analogous to the way the World Wide Web functions. On the Web, a URL address identifies a globally accessible resource. When a browser issues a request for the resource it is sent a representation of the resource in the response.
Addresses
A resource is identified by an address within an address space. In NetKernel, Uniform Resource Identifier (URI) addresses are used to identify all resources. Unlike the Web, which has a single global address space, NetKernel supports an unlimited number of address spaces and supports relationships between address spaces.
NetKernel supports a variety of URI schemes and introduces new ones specifically applicable to URI addressing within a software system.
Request
The fundamental operation in NetKernel is a resource request, or request. A request consists of a resource URI address and a verb.
Supported verbs include SOURCE, SINK, NEW, DELETE, EXISTS and META. Each request is dispatched to a microkernel which resolves the URI address to a physical endpoint and assigns and schedules a thread for processing. When the endpoint completes processing the microkernel returns the response to the initiating client.
Programming
The fundamental instruction in NetKernel is a resource request, specified by a URI. Mechanisms that sequence URI requests are located above the microkernel. In the current Java-based implementation, requests are dispatched using a Java API. This implies that any language that can call a Java API can be used to program NetKernel.
The set of supported languages includes:
Java
Ruby
Scala
Clojure
JavaScript
Python 2
Groovy
Beanshell
PHP
DPML
XML related languages such as XQuery
The URI specification itself has sufficient richness to express a functional programming language.
Active URI Scheme
The active URI scheme was proposed by Hewlett-Packard as a means to encode a functional program within a URI.
active: {function-name} [+ {parameter-name} @ {parameter-value-URI}]*
For example, the following URI calls a random number generator
active:random
and the following uses an XSLT service to transform an XML document with an XSLT stylesheet:
active:xslt+operator@file:/style.xsl+operand@file:/document.xml
Because the argument values may be URI addresses themselves, a tree-structured set of function calls can be encoded in a single URI.
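The grammar above is simple enough to assemble mechanically. The helper below is only a string-building sketch of that grammar (it performs no escaping or nesting checks and is not part of any NetKernel API); the function and parameter names are those of the examples above.

```python
def active_uri(function_name, **params):
    """Assemble an 'active:' URI from a function name and named parameter URIs."""
    parts = [f"active:{function_name}"]
    for name, value_uri in params.items():
        parts.append(f"+{name}@{value_uri}")
    return "".join(parts)

print(active_uri("random"))
# active:random
print(active_uri("xslt", operator="file:/style.xsl", operand="file:/document.xml"))
# active:xslt+operator@file:/style.xsl+operand@file:/document.xml
```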
Transports
Transports are a mechanism used to introduce requests from outside of NetKernel to the NetKernel address space. Transports are available for the HTTP protocol, JMS (Java Message Service), and CRON. Other transports can be easily added as they are independent from the rest of NetKernel.
The role of the transport is to translate an external request based on one protocol into a NetKernel request with a URI and a specific verb (SOURCE, SINK, etc.) and then to send the returned representation back to the client via the supported protocol.
Two mappings are handled by a transport. The first is from the address space of the externally supported protocol to the internal NetKernel address space. The second is from the verb or action specified externally to a NetKernel verb.
For example, in the case of the HTTP transport, the external address space is a sub-space of a URL. The following mapping illustrates this point.
http://www.mywebsite.com/publications/...
|
v
file:/src/publications/...
In addition, the HTTP protocol supports methods such as GET, PUT, HEAD, etc. which are mapped to NetKernel verbs.
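In outline, a transport performs a pair of table lookups of the kind sketched below. This is a toy illustration of the two mappings, not the actual NetKernel transport API, and the particular method-to-verb pairings shown are assumptions.

```python
# Assumed, illustrative pairings between HTTP methods and NetKernel verbs.
VERB_MAP = {"GET": "SOURCE", "PUT": "SINK", "DELETE": "DELETE"}
# External-to-internal address-space mapping from the example above.
PREFIX_MAP = {"http://www.mywebsite.com/publications/": "file:/src/publications/"}

def to_internal_request(method, url):
    """Translate an external (method, URL) pair into a NetKernel-style (verb, URI) request."""
    verb = VERB_MAP[method]
    for external_prefix, internal_prefix in PREFIX_MAP.items():
        if url.startswith(external_prefix):
            return verb, internal_prefix + url[len(external_prefix):]
    raise ValueError("no mapping for " + url)

print(to_internal_request("GET", "http://www.mywebsite.com/publications/report.xml"))
# ('SOURCE', 'file:/src/publications/report.xml')
```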
Scripting languages
A mechanism is needed to issue the URI requests, capture the returned representations, and communicate with clients.
Scripting languages are executed by their runtime engine, which is itself a service. For example, the Groovy language runtime will run a program contained in the file file:/program.gy with the following:
active:groovy+operator@file:/program.gy
See also
Representational State Transfer
Web resource
Jolie
List of user interface markup languages
References
External links
1060 Research
Cross-platform software
Serverless computing
Distributed computing | NetKernel | [
"Technology"
] | 1,488 | [
"Computing platforms",
"Serverless computing"
] |
315,963 | https://en.wikipedia.org/wiki/Heading%20indicator | The heading indicator (HI), also known as a directional gyro (DG) or direction indicator (DI), is a flight instrument used in an aircraft to inform the pilot of the aircraft's heading.
Use
The primary means of establishing the heading in most small aircraft is the magnetic compass, which, however, suffers from several types of errors, including that created by the "dip" or downward slope of the Earth's magnetic field. Dip error causes the magnetic compass to read incorrectly whenever the aircraft is in a bank, or during acceleration or deceleration, making it difficult to use in any flight condition other than unaccelerated, perfectly straight and level. To remedy this, the pilot will typically maneuver the airplane with reference to the heading indicator, as the gyroscopic heading indicator is unaffected by dip and acceleration errors. The pilot will periodically reset the heading indicator to the heading shown on the magnetic compass.
Operation
The heading indicator works using a gyroscope, tied by an erection mechanism to the aircraft yawing plane, i.e. the plane defined by the longitudinal and the horizontal axis of the aircraft. As such, any configuration of the aircraft yawing plane that does not match the local Earth horizontal results in an indication error. The heading indicator is arranged such that the gyro axis is used to drive the display, which consists of a circular compass card calibrated in degrees. The gyroscope is spun either electrically, or using filtered air flow from a suction pump (sometimes a pressure pump in high altitude aircraft) driven from the aircraft's engine. Because the Earth rotates (ω, 15° per hour, apparent drift), and because of small accumulated errors caused by imperfect balancing of the gyro, the heading indicator will drift over time (real drift), and must be reset using a magnetic compass periodically. The apparent drift is predicted by ω sin Latitude and will thus be greatest over the poles. To counter the effect of Earth rate drift a latitude nut can be set (on the ground only) which induces a (hopefully equal and opposite) real wander in the gyroscope. Otherwise it would be necessary to manually realign the direction indicator once every ten to fifteen minutes during routine in-flight checks. Failure to do this is a common source of navigation errors among new pilots. Another sort of apparent drift exists in the form of transport wander, caused by the aircraft movement and the convergence of the meridian lines towards the poles. It equals the course change along a great circle (orthodrome) flight path.
Variations
Some more expensive heading indicators are "slaved" to a magnetic sensor, called a flux gate. The flux gate continuously senses the Earth's magnetic field, and a servo mechanism constantly corrects the heading indicator. These "slaved gyros" reduce pilot workload by eliminating the need for manual realignment every ten to fifteen minutes.
The prediction of drift in degrees per hour is as follows:
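A minimal sketch of that prediction, using the apparent-drift relation from the previous paragraph (Earth rate multiplied by the sine of latitude), is:

```python
import math

EARTH_RATE_DEG_PER_HR = 15.0  # the rounded value used above; the sidereal rate is about 15.04

def apparent_drift_deg_per_hr(latitude_deg):
    """Apparent (Earth-rate) drift of an uncompensated directional gyro."""
    return EARTH_RATE_DEG_PER_HR * math.sin(math.radians(latitude_deg))

for lat in (0, 30, 45, 60, 90):
    print(f"latitude {lat:2d} deg: {apparent_drift_deg_per_hr(lat):5.2f} deg/hr")
# 0.00, 7.50, 10.61, 12.99 and 15.00 deg/hr respectively
```

Real drift from gyro imperfections and transport wander add to this figure, which is why the prediction serves only as a baseline for the routine realignment checks described above.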
Although it is possible to predict the drift, there will be minor variations from this basic model, accounted for by gimbal error (operating the aircraft away from the local horizontal), among others. A common source of error here is the improper setting of the latitude nut (to the opposite hemisphere for example). Such a prediction, however, allows one to gauge whether an indicator is behaving as expected, and as such, is compared with the realignment corrections made with reference to the magnetic compass. Transport wander is an undesirable consequence of apparent drift.
See also
Acronyms and abbreviations in avionics
Earth Inductor Compass
Gyrocompass, a compass depending on gyroscopic precession effect instead of a basic gyroscopic effect
Horizontal situation indicator
Inertial navigation system, a far more complex system of gyroscopes that also employ accelerometers
References
Footnotes
Notes
Aircraft instruments
Avionics | Heading indicator | [
"Technology",
"Engineering"
] | 805 | [
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
315,968 | https://en.wikipedia.org/wiki/Attitude%20indicator | The attitude indicator (AI), also known as the gyro horizon or artificial horizon, is a flight instrument that informs the pilot of the aircraft orientation relative to Earth's horizon, and gives an immediate indication of the smallest orientation change. The miniature aircraft and horizon bar mimic the relationship of the aircraft relative to the actual horizon. It is a primary instrument for flight in instrument meteorological conditions.
Attitude is always presented to users in the unit degrees (°). However, inner workings such as sensors, data and calculations may use a mix of degrees and radians, as scientists and engineers may prefer to work with radians.
History
Before the advent of aviation, artificial horizons were used in celestial navigation. Proposals of such devices based on gyroscopes, or spinning tops, date back to the 1740s, including the work of John Serson. Later implementations, also known as bubble horizons, were based on bubble levels and attached to a sextant. In the 2010s, remnants of an artificial horizon using liquid mercury were recovered from the wreck of HMS Erebus.
Use
The essential components of the AI include a symbolic miniature aircraft mounted so that it appears to be flying relative to the horizon. An adjustment knob, to account for the pilot's line of vision, moves the aircraft up and down to align it against the horizon bar. The top half of the instrument is blue to represent the sky, while the bottom half is brown to represent the ground. The bank index at the top shows the aircraft angle of bank. Reference lines in the middle indicate the degree of pitch, up or down, relative to the horizon.
Most Russian-built aircraft have a somewhat different design. The background display is colored as in a Western instrument, but moves up and down only to indicate pitch. A symbol representing the aircraft (which is fixed in a Western instrument) rolls left or right to indicate bank angle. A proposed hybrid version of the Western and Russian systems would be more intuitive, but has never caught on.
Operation
The heart of the AI is a gyroscope (gyro) that spins at high speed, from either an electric motor, or through the action of a stream of air pushing on rotor vanes placed along its periphery. The stream of air is provided by a vacuum system, driven by a vacuum pump, or a venturi. Air passing through the narrowest portion of a venturi has lower air pressure through Bernoulli's principle. The gyro is mounted in a double gimbal, which allows the aircraft to pitch and roll as the gyro stays vertically upright. A self-erecting mechanism, actuated by gravity, counteracts any precession due to bearing friction. It may take a few minutes for the erecting mechanism to bring the gyros to a vertical upright position after the aircraft engine is first powered up.
Attitude indicators have mechanisms that keep the instrument level with respect to the direction of gravity. The instrument may develop small errors, in pitch or bank during extended periods of acceleration, deceleration, turns, or due to the earth curving underneath the plane on long trips. To start with, they often have slightly more weight in the bottom, so that when the aircraft is resting on the ground they will hang level and therefore they will be level when started. But once they are started, that pendulous weight in the bottom will not pull them level if they are out of level, but instead its pull will cause the gyro to precess. In order to let the gyro very slowly orient itself to the direction of gravity while in operation, the typical vacuum powered gyro has small pendulums on the rotor casing that partially cover air holes. When the gyro is out of level with respect to the direction of gravity, the pendulums will swing in the direction of gravity and either uncover or cover the holes, such that air is allowed or prevented from jetting out of the holes, and thereby applying a small force to orient the gyro towards the direction of gravity. Electric powered gyros may have different mechanisms to achieve a similar effect.
Older AIs were limited in the amount of pitch or roll that they would tolerate. Exceeding these limits would cause the gyro to tumble as the gyro housing contacted the gimbals, causing a precession force. Preventing this required a caging mechanism to lock the gyro if the pitch exceeded 60° and the roll exceeded 100°. Modern AIs do not have this limitation and therefore do not require a caging mechanism.
Attitude indicators are free from most errors, but depending upon the speed with which the erection system functions, there may be a slight nose-up indication during a rapid acceleration and a nose-down indication during a rapid deceleration. There is also a possibility of a small bank angle and pitch error after a 180° turn. These inherent errors are small and correct themselves within a minute or so after returning to straight-and-level flight.
Flight Director Attitude Indicator
Attitude indicators are also used on crewed spacecraft and are called Flight Director Attitude Indicators (FDAI), where they indicate the craft's yaw angle (nose left or right), pitch (nose up or down), roll, and orbit relative to a fixed-space inertial reference frame from an Inertial Measurement Unit (IMU). The FDAI can be configured to use known positions relative to Earth or the stars, so that the engineers, scientists and astronauts can communicate the relative position, attitude, and orbit of the craft.
Attitude and Heading Reference Systems
Attitude and Heading Reference Systems (AHRS) are able to provide three-axis information based on ring laser gyroscopes, that can be shared with multiple devices in the aircraft, such as "glass cockpit" primary flight displays (PFDs). Rather than using a spinning gyroscope, modern AHRS use solid-state electronics, low-cost inertial sensors, rate gyros, and magnetometers.
With most AHRS systems, if an aircraft's AIs have failed there will be a standby AI located in the center of the instrument panel, where other standby basic instruments such as the airspeed indicator and altimeter are also available. These mostly mechanical standby instruments may remain available even if the electronic flight instruments fail, although the standby attitude indicator may be electrically driven and will, after a short time, fail if its electrical power fails.
Attitude Direction Indicator
The Attitude Direction Indicator (ADI), or Flight Director Indicator (FDI), is an AI integrated with a Flight Director System (FDS). The ADI incorporates a computer that receives information from the navigation system, such as the AHRS, and processes this information to provide the pilot with a 3-D flight trajectory cue to maintain a desired path. The cue takes the form of V steering bars. The aircraft is represented by a delta symbol and the pilot flies the aircraft so that the delta symbol is placed within the V steering bars.
See also
Acronyms and abbreviations in avionics
Air India Flight 855
Korean Air Cargo Flight 8509
Peripheral vision horizon display (PVHD)
Turn and slip indicator
References
Avionics
Navigational flight instruments
Technology systems | Attitude indicator | [
"Technology",
"Engineering"
] | 1,481 | [
"Systems engineering",
"Technology systems",
"Avionics",
"Aircraft instruments",
"nan",
"Navigational flight instruments"
] |
316,038 | https://en.wikipedia.org/wiki/Blue%20moon | A blue moon refers either to the presence of a second full moon in a calendar month, to the third full moon in a season containing four, or to a moon that appears blue due to atmospheric effects.
The calendrical meaning of "blue moon" is unconnected to the other meanings. It is often referred to as “traditional”, but since no occurrences are known prior to 1937 it is better described as an invented tradition or “modern American folklore”. The practice of designating the second full moon in a month as "blue" originated with amateur astronomer James Hugh Pruett in 1946. It does not come from Native American lunar tradition, as is sometimes supposed.
The moon - not necessarily full - can sometimes appear blue due to atmospheric emissions from large forest fires or volcanoes, though the phenomenon is rare and unpredictable (hence the saying “once in a blue moon”). A calendrical blue moon (by Pruett's definition) is predictable and relatively common, happening 7 times in every 19 years (i.e. once every 2 or 3 years). Calendrical blue moons occur because the time between successive full moons (approximately 29.5 days) is shorter than the average calendar month. They are of no astronomical or historical significance, and are not a product of actual lunisolar timekeeping or intercalation.
Phrase origin
A 1528 satire, Rede Me and Be Nott Wrothe, contained the lines, “Yf they saye the mone is belewe / We must beleve that it is true.” The intended sense was of an absurd belief, like the moon being made of cheese. There is nothing to connect it with the later metaphorical or calendrical meanings of “blue moon”. However, a confusion of belewe (Middle English, “blue”) with belǽwan (Old English “to betray”) led to a false etymology for the calendrical term that remains widely circulated, despite its originator having acknowledged it as groundless.
Percy Bysshe Shelley’s poem "Alastor" (1816) mentioned an erupting volcano and a “blue moon / Low in the west.” It was written at a time when the eruption of Mount Tambora was causing global climate effects, and not long before the first recorded instances of “blue moon” as a metaphor.
The OED cites Pierce Egan’s Real Life in London (1821) as the earliest known occurrence of “blue moon” in the metaphorical sense of a long time. (“How's Harry and Ben?—haven't seen you this blue moon.”) An 1823 revision of Francis Grose’s Classical Dictionary of the Vulgar Tongue, edited by Egan, included the definition: “Blue moon. In allusion to a long time before such a circumstance happens. ‘O yes, in a blue moon.’” An earlier (1811) version of the same dictionary had not included the phrase, so it was likely coined some time in the 1810s. "Once in a blue moon" is recorded from 1833.
The use of blue moon to mean a specific calendrical event dates from 1937, when the Maine Farmers' Almanac used the term in a slightly different sense from the one now in common use. According to the OED, “Earlier occurrences of the sense given in the Maine Farmers' Almanac have not been traced, either in editions of the Almanac prior to 1937, or elsewhere; the source of this application of the term (if it is not a coinage by the editor, H. P. Trefethen) is unclear.” The conjecture of editorial invention is further supported by the spurious explanation the almanac gave:
The Moon usually comes full twelve times in a year, three times in each
season... However, occasionally the moon comes full thirteen times in a year.
This was considered a very unfortunate circumstance, especially by the
monks who had charge of the calendar. It became necessary for them
to make a calendar of thirteen months, and it upset the regular arrangement
of church festivals. For this reason thirteen came to be considered an
unlucky number. Also, this extra moon had a way of coming in each of
the seasons so that it could not be given a name appropriate to the time
of year like the other moons. It was usually called the Blue Moon... In olden times the almanac
makers had much difficulty calculating the occurrence of the Blue Moon
and this uncertainty gave rise to the expression "Once in a Blue Moon".
There is no evidence that an extra moon in a month, season or year was considered unlucky, or that it led to 13 being considered unlucky, or that the extra moon was called "blue", or that it led to the phrase "once in a blue moon". There is good reason to suspect that the 1937 article was a hoax, practical joke, or simply misinformed. It is however true that the date of the Christian festival of Easter depended on an accurate computation of full moon dates, and important work was done by the monks Dionysius Exiguus and Bede, explained by the latter in The Reckoning of Time, written c. 725 CE. According to Bede, “Whenever it was a common year, [the Anglo-Saxons] gave three lunar months to each season. When an embolismic year occurred (that is, one of 13 lunar months) they assigned the extra month to summer, so that three months together bore the name Litha; hence they called [the embolismic] year Thrilithi. It had four summer months, with the usual three for the other seasons.” The name Litha is now applied by some Neo-Pagans to midsummer.
The 1937 Maine Farmers' Almanac article was misinterpreted by James Hugh Pruett in a 1946 Sky and Telescope article, leading to the calendrical definition of “blue moon” that is now most commonly used, i.e. the second full moon in a calendar month. “A blue moon in the original Maine Farmers' Almanac sense can only occur in the months of February, May, August, and November. In the later sense, one can occur in any month except February." This later sense gained currency from its use in a United States radio programme, StarDate on January 31, 1980 and in a question in the Trivial Pursuit game in 1986.
Several songs have been titled "Blue Moon", seen as a "symbol of sadness and loneliness."
Visually blue moon
The moon (and sun) can appear blue under certain atmospheric conditions – for instance, if volcanic eruptions or large-scale fires release particles into the atmosphere of just the right size to preferentially scatter red light. According to the Encyclopaedia Britannica, scattering is the cause of “that epitome of rare occurrences, the blue Moon (seen when forest fires produce clouds composed of small droplets of organic compounds).”
A Royal Society report on the 1883 Krakatoa eruption gave a detailed account of “blue, green, and other coloured appearances of the sun and moon” seen in many places for months afterwards. The report mentioned that in February 1884 an observer in Central America saw the crescent moon as “a magnificent emerald-green” while its ashen part was “pale green”. Venus, bright stars and a comet were also green. The report authors suspected that green moons were a contrast effect, since in those cases the surrounding sky was seen as red.
People saw blue moons in 1983 after the eruption of the El Chichón volcano in Mexico, and there are reports of blue moons caused by Mount St. Helens in 1980 and Mount Pinatubo in 1991.
The moon looked blue after forest fires in Sweden and Canada in 1950 and 1951. On September 23, 1950, several muskeg fires that had been smoldering for several years in Alberta, Canada, suddenly blew up into major—and very smoky—fires. Winds carried the smoke eastward and southward with unusual speed, and the conditions of the fire produced large quantities of oily droplets of just the right size (about 1 micrometre in diameter) to scatter red and yellow light. Wherever the smoke cleared enough so that the sun was visible, it was lavender or blue. Ontario, Canada, and much of the east coast of the United States were affected by the following day, and two days later, observers in Britain reported an indigo sun in smoke-dimmed skies, followed by an equally blue moon that evening.
Ice particles might have a similar effect. The Antarctic diary of Robert Falcon Scott for July 11, 1911 mentioned "the air thick with snow, and the moon a vague blue".
The key to a blue moon is having many particles slightly wider than the wavelength of red light (0.7 micrometer)—and no other sizes present. Ash and dust clouds thrown into the atmosphere by fires and storms usually contain a mixture of particles with a wide range of sizes, with most smaller than 1 micrometer, and they tend to scatter blue light. This kind of cloud makes the moon turn red; thus red moons are far more common than blue moons.
Calendrical blue moon
Blue moon as a calendrical term originated with the 1937 Maine Farmers’ Almanac, a provincial U.S. magazine that is not to be confused with the Farmers' Almanac, Old Farmer's Almanac, or other American almanacs. There is no evidence of “blue moon” having been used as a specific calendrical term before 1937, and it was possibly invented by the magazine’s editor, Henry Porter Trefethen (1887-1957). As a term for the second full moon in a calendar month it began to be widely known in the U.S. in the mid-1980s and became internationally known in the late 1990s when calendrical matters were of special interest given the approaching millennium. It created a misapprehension that the calendrical meaning of “blue moon” had preceded the metaphorical one, and inspired various folk etymologies, e.g. the “betrayer” speculation mentioned earlier, or that it came from a printing convention in calendars or a saying in Czech. A 1997 Taiwanese movie, Blue Moon, had the log line “There is usually only one full moon every month, but occasionally there are two – and that second full moon is called the Blue Moon. It is said that when a person sees a blue moon and makes a wish, he will be granted a second chance in things.”
In 1999 folklorist Philip Hiscock presented a timeline for the calendrical term. First, the August page of the 1937 Maine Farmers' Almanac ran a sidebar claiming that the term was used “in olden times” for an extra full moon in a season, and gave some examples (21 November 1915, 22 August 1918, 21 May 1921, 20 February 1924, 21 November 1934, 22 August 1937, and 21 May 1940). Six years later, Laurence J. Lafleur (1907-66) quoted the almanac in the U.S. magazine Sky & Telescope (July 1943, page 17) in answer to a reader’s question about the meaning of “blue moon”. Then James Hugh Pruett (1886-1955) quoted it again in Sky & Telescope (March 1946, p3), saying “seven times in 19 years there were — and still are — 13 full moons in a year. This gives 11 months with one full moon each and one with two. This second in a month, so I interpret it, was called Blue Moon”. In 1980 the term was used (with Pruett’s definition) in a U.S. radio program, Star Date, and in 1985 it appeared in a U.S. children’s book, The Kids' World Almanac of Records and Facts (“What is a blue moon? When there are two full moons in a month, the second one is called a blue moon. It is a rare occurrence.”) In 1986 it was included as a question in Trivial Pursuit (likely taken from the children’s book), and in 1988 a forthcoming blue moon received widespread press coverage.
In 1999 U.S. astronomer Donald W. Olson researched the original articles and published the results in a Sky & Telescope article co-authored with Richard T Fienberg and Roger W. Sinnott. From the examples given by Trefethen in the 1937 Maine Farmers’ Almanac they deduced a “rule” he must effectively have used. “Seasonal Moon names are assigned near the spring equinox in accordance with the ecclesiastical rules for determining the dates of Easter and Lent. The beginnings of summer, fall, and winter are determined by the dynamical mean Sun. When a season contains four full Moons, the third is called a Blue Moon.” They termed this the “Maine rule” for blue moons, as distinct from Pruett’s 1946 definition that was seen to have been a misinterpretation.
In popular astronomy the Maine rule is sometimes called the “seasonal”, “true” or “traditional” rule (though of course no tradition of it exists prior to 1937). Blue moons by Pruett’s definition are sometimes called “calendar blue moons”. The "seasonal" blue moon rule is itself ambiguous since it depends which definition of season is used. The Maine rule used seasons of equal length with the ecclesiastical equinox (March 21). An alternative is to use the astronomical seasons, which are of unequal length.
There is also reference in modern popular astrology to “zodiacal blue moons”.
Blue moon dates
The table below has blue moon dates and times (UTC) calculated according to Pruett’s “calendar” rule (second full moon in a calendar month) and two versions of the “seasonal” rule (third full moon in a season with four). The Maine rule uses equal-length seasons defined by the dynamical mean sun, and is presumed to have been the original rule of Trefethen. The “astro-seasonal” rule uses the unequal astronomical seasons defined by the apparent sun. All calculations are by David Harper.
The fourth column shows blue moon dates that were actually printed in the Maine Farmers’ Almanac, as found by Olson, Fienberg and Sinnott in 1999. They studied issues published between 1819 and 1962, and found that all mentions occurred between 1937, when H.P. Trefethen introduced the term, and 1956, when Trefethen’s editorship ended (consistent with it being Trefethen’s own invention). Occasional discrepancies between the Maine rule and the almanac’s printed dates can be ascribed to clerical errors or miscalculation. In one case (August 1945) Trefethen appears to have used the apparent rather than mean sun.
The table shows that in 200 years there are 187 full moons that could be called "blue" by some definition - an average of nearly one per year. Two Pruett blue moons can occur in a single year (1915, 1961, 1999, 2018, 2037, 2094). 1915 had four blue moons (two Pruett, one Maine, one astro-seasonal). 1934 and 2048 have three (one of each type).
Despite the 187 blue moons appearing across the 200 years in this table, only 146 years have any of these 3 types of blue moons, leaving 54 years (thus averaging just over 1 year in every 4) which have none of the 3 rules represented in that calendar year.
While not totally unexpected (given the overlapping frequencies of these 3 rules), there are no two consecutive years, at least within these 200, in which none of the 3 types of blue moon occurs.
Conversely, despite the preponderance of years with blue moons of at least 1 type in this 200-year range, there are no instances of more than 4 consecutive years having a blue moon of any of these 3 types; i.e. at least 1 year out of every 5 consecutive years has none of the 3 types.
Remarks
One lunation (an average lunar cycle) is 29.53 days. There are about 365.24 days in a tropical year. Therefore, about 12.37 lunations (365.24 days divided by 29.53 days) occur in a tropical year. So the date of the full moon falls back by nearly one day every calendar month on average. Each calendar year contains roughly 11 days more than the number of days in 12 lunar cycles, so every two or three years (seven times in the 19 year Metonic cycle), there is an extra full moon in the year. The extra full moon necessarily falls in one of the four seasons (however defined), giving that season four full moons instead of the usual three.
Given that a year is approximately 365.2425 days and a synodic month is 29.5309 days, there are about 12.368 synodic months in a year. For this to add up to another full month would take 1/0.368 years. Thus it would take about 2.716 years, or 2 years, 8 months, and 18 days for another Pruett blue moon to occur. This is approximately once in every 32.5 months on average.
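The arithmetic in the two paragraphs above can be checked directly:

```python
YEAR_DAYS = 365.2425      # mean Gregorian year
LUNATION_DAYS = 29.5309   # mean synodic month

lunations_per_year = YEAR_DAYS / LUNATION_DAYS             # about 12.368
years_per_extra_full_moon = 1 / (lunations_per_year - 12)  # about 2.72 years
months_per_extra_full_moon = 12 * years_per_extra_full_moon

print(round(lunations_per_year, 3),
      round(years_per_extra_full_moon, 3),
      round(months_per_extra_full_moon, 1))
# 12.368 2.716 32.6
```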
When there are two Pruett blue moons in a single year, the first occurs in January and the second in March or April, and there is no full moon in February.
The next time New Year's Eve falls on a Pruett blue moon (as occurred on December 31, 2009 in time zones west of UTC+05) is after one Metonic cycle, in 2028 in time zones west of UTC+08. At that time there will be a total lunar eclipse.
See also
Blood moon – reddish color a totally eclipsed Moon takes on to observers on Earth
References
External links
Upcoming blue moon dates (timeanddate.com).
Blue moon calculator (obliquity.com)
Calendars
Moon myths
Full moon | Blue moon | [
"Physics",
"Astronomy"
] | 3,751 | [
"Calendars",
"Physical quantities",
"Time",
"Astronomical myths",
"Spacetime",
"Moon myths"
] |
316,042 | https://en.wikipedia.org/wiki/Partisan%20game | In combinatorial game theory, a game is partisan (sometimes partizan) if it is not impartial. That is, some moves are available to one player and not to the other, or the payoffs are not symmetric.
Most games are partisan. For example, in chess, only one player can move the white pieces. More strongly, when analyzed using combinatorial game theory, many chess positions have values that cannot be expressed as the value of an impartial game, for instance when one side has a number of extra tempos that can be used to put the other side into zugzwang.
Partisan games are more difficult to analyze than impartial games, as the Sprague–Grundy theorem does not apply. However, the application of combinatorial game theory to partisan games allows the significance of numbers as games to be seen, in a way that is not possible with impartial games.
References
Combinatorial game theory | Partisan game | [
"Mathematics"
] | 198 | [
"Recreational mathematics",
"Game theory",
"Combinatorial game theory",
"Combinatorics"
] |
316,061 | https://en.wikipedia.org/wiki/Sluice | A sluice ( ) is a water channel containing a sluice gate, a type of lock to manage the water flow and water level. It can also be an open channel which processes material, such as a river sluice used in gold prospecting or fossicking. A mill race, leet, flume, penstock or lade is a sluice channeling water toward a water mill. The terms sluice, sluice gate, knife gate, and slide gate are used interchangeably in the water and wastewater control industry.
Operation
"Sluice gate" refers to a movable gate allowing water to flow under it. When a sluice is lowered, water may spill over the top, in which case the gate operates as a weir. Usually, a mechanism drives the sluice up or down. This may be a simple, hand-operated, chain pulled/lowered, worm drive or rack-and-pinion drive, or it may be electrically or hydraulically powered. A flap sluice, however, operates automatically, without external intervention or inputs.
Types of sluice gates
Flap sluice gate – A fully automatic type, controlled by the pressure head across it; operation is similar to that of a check valve. It is a gate hinged at the top. When pressure is from one side, the gate is kept closed; a pressure from the other side opens the sluice when a threshold pressure is surpassed.
Vertical rising sluice gate – A plate sliding in the vertical direction, which may be controlled by machinery.
Radial sluice gate – A structure in which a small part of a cylindrical surface serves as the gate, supported by radial constructions going through the cylinder's radius. On occasion, a counterweight is provided.
Rising sector sluice gate – Also a part of a cylindrical surface, which rests at the bottom of the channel and rises by rotating around its centre.
Needle sluice – A sluice formed by a number of thin needles held against a solid frame through water pressure, as in a needle dam.
Fan gate – This type of gate was invented in 1808 by a Dutch hydraulic engineer who was at the time Inspector-General for Waterstaat (water resource management) of the Kingdom of Holland. The fan gate has the special property that it can open in the direction of high water solely using water pressure. This gate type was primarily used to purposely inundate certain regions, for instance in the case of the Hollandic Water Line. Nowadays this type of gate can still be found in a few places, for example in Gouda. A fan gate has a separate chamber that can be filled with water and is separated from the high-water-level side of the sluice by a large door. When a tube connecting the separate chamber with the high-water-level side of the sluice is opened, the water level, and with that the water pressure in this chamber, will rise to the same level as that on the high-water-level side. As there is no height difference across the larger gate, it exerts no force. However, the smaller gate has a higher level on the upstream side, which exerts a force to close the gate. When the tube to the low-water side is opened, the water level in the chamber will fall. Due to the difference in the surface areas of the doors there will be a net force opening the gate.
Designing the sluice gate
Sluice gates are among the most common hydraulic structures used to control or measure the flow in open channels. Vertical rising sluice gates are the most widely used type and can operate under two flow regimes: free flow and submerged flow. The most important depths in the design of a sluice gate are:
the upstream depth
the opening of the sluice gate
the minimum depth of flow after the sluice gate
the initial depth of the hydraulic jump
the secondary depth of the hydraulic jump
the downstream depth
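These depths feed into standard design formulas. As one hedged illustration, the sketch below uses a common textbook free-flow approximation, Q ≈ Cd·b·a·√(2g·y), which is not given in the text above; the discharge coefficient and the example numbers are assumptions.

```python
import math

G = 9.81  # m/s^2

def free_flow_discharge(gate_opening_m, gate_width_m, upstream_depth_m, cd=0.6):
    """Rough free-flow estimate: Q ~ Cd * width * opening * sqrt(2 * g * upstream depth).

    Cd is an assumed illustrative coefficient; real designs take it from hydraulic
    references and must also check whether the gate is actually in free flow.
    """
    return cd * gate_width_m * gate_opening_m * math.sqrt(2.0 * G * upstream_depth_m)

# Example: a 2 m wide gate opened 0.3 m with 1.5 m of upstream depth.
print(round(free_flow_discharge(0.3, 2.0, 1.5), 2), "m^3/s")  # about 1.95 m^3/s
```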
Logging sluices
In the mountains of the United States, sluices transported logs from steep hillsides to downslope sawmill ponds or yarding areas. Nineteenth-century logging was traditionally a winter activity for men who spent summers working on farms. Where there were freezing nights, water might be applied to logging sluices every night so a fresh coating of slippery ice would reduce friction of logs placed in the sluice the following morning.
Placer mining applications
Sluice boxes are often used in the recovery of black sands, gold, and other minerals from placer deposits during placer mining operations. They may be small-scale, as used in prospecting, or much larger, as in commercial operations, where the material is sometimes screened using a trommel, screening plant or sieve. Traditional sluices have transverse riffles over a carpet or rubber matting, which trap the heavy minerals, gemstones, and other valuable minerals. Since the early 2000s more miners and prospectors are relying on more modern and effective matting systems. The result is a concentrate which requires additional processing.
Types of material
Aluminium – Most sluices are formed from aluminium, using a press brake to form a U shape.
Wood – Traditionally, wood was the material of choice for sluice gates.
Cast iron – Cast iron has long been popular for sluice gates; it retains the strength needed to withstand high water loads.
Stainless steel – In most cases, stainless steel is lighter than the older cast iron material.
Fibre-reinforced plastic (FRP) – In modern times, newer materials such as fibre-reinforced plastic are being used to build sluices. These have many of the attributes of the older materials, while adding advantages such as corrosion resistance and much lighter weight.
Regional names for sluice gates
In the Somerset Levels, sluice gates are known as clyse or clyce.
Most of the inhabitants of Guyana refer to sluices as kokers.
The Sinhala people in Sri Lanka, who had an ancient civilization based on harvested rain water, refer to sluices as Horovuwa.
Gallery
See also
Control lock
Floodgate
Gatehouse (waterworks) – An (elaborate) structure to house a sluice gate
Lock
Rhyne
Zijlstra – A Dutch name referring to one who lives near a sluice
Van der Sluijs – A Dutch name originating from the Sluice
Hydraulic engineering
Canal
List of canals by country
References
Further reading
External links
Soar Valley Sluice Gates
Salt/Fresh water separating Sluice Complex (Part of DeltaWorks)
Canals
Hydraulic engineering
Water transport infrastructure
Dutch words and phrases | Sluice | [
"Physics",
"Engineering",
"Environmental_science"
] | 1,354 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
316,081 | https://en.wikipedia.org/wiki/Icosian%20calculus | The icosian calculus is a non-commutative algebraic structure discovered by the Irish mathematician William Rowan Hamilton in 1856.
In modern terms, he gave a group presentation of the icosahedral rotation group by generators and relations.
Hamilton's discovery derived from his attempts to find an algebra of "triplets" or 3-tuples that he believed would reflect the three Cartesian axes. The symbols of the icosian calculus correspond to moves between vertices on a dodecahedron. (Hamilton originally thought in terms of moves between the faces of an icosahedron, which is equivalent by duality. This is the origin of the name "icosian".) Hamilton's work in this area resulted indirectly in the terms Hamiltonian circuit and Hamiltonian path in graph theory. He also invented the icosian game as a means of illustrating and popularising his discovery.
Informal definition
The algebra is based on three symbols, ι, κ, and λ, that Hamilton described as "roots of unity", by which he meant that repeated application of any of them a particular number of times yields the identity, which he denoted by 1. Specifically, they satisfy the relations ι² = 1, κ³ = 1 and λ⁵ = 1.
Hamilton gives one additional relation between the symbols, λ = ικ,
which is to be understood as the two operations applied in succession. Hamilton points out that application in the reverse order produces a different result, implying that composition or multiplication of symbols is not generally commutative, although it is associative. The symbols generate a group of order 60, isomorphic to the group of rotations of a regular icosahedron or dodecahedron, and therefore to the alternating group of degree five. This, however, is not how Hamilton described them.
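These defining relations can be checked with a small computation. The sketch below is an illustrative realization in Python using the third-party SymPy library, not Hamilton's own construction: the two permutations standing in for ι and κ are arbitrary choices that happen to satisfy the stated relations, and the script confirms that together they generate a group of order 60.

```python
# Hypothetical realization of the icosian presentation as permutations of five points.
from sympy.combinatorics import Permutation, PermutationGroup

iota = Permutation([3, 1, 4, 0, 2])    # the involution (0 3)(2 4); applying it twice gives the identity
kappa = Permutation([1, 2, 0, 3, 4])   # the 3-cycle (0 1 2); applying it three times gives the identity
lam = iota * kappa                     # stands in for lambda = iota*kappa

print(iota.order(), kappa.order(), lam.order())   # -> 2 3 5
print(PermutationGroup([iota, kappa]).order())    # -> 60, the order of the icosahedral rotation group
```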
Hamilton drew comparisons between the icosians and his system of quaternions, but noted that, unlike quaternions, which can be added and multiplied, obeying a distributive law, the icosians could only, as far as he knew, be multiplied.
Hamilton understood his symbols by reference to the dodecahedron, which he represented in flattened form as a graph in the plane. The dodecahedron has 30 edges, and if arrows are placed on edges, there are two possible arrow directions for each edge, resulting in 60 directed edges. Each symbol corresponds to a permutation of the set of directed edges. The definitions below are stated with reference to a labeled drawing of the dodecahedron, in which a directed edge is written by giving its tail vertex followed by its head vertex.
The icosian symbol ι reverses the arrow on every directed edge, that is, it interchanges the head and tail of the edge to which it is applied.
The icosian symbol κ, applied to a directed edge, produces the directed edge that (1) has the same head and that (2) is encountered first as one moves around that head in the anticlockwise direction.
The icosian symbol λ, applied to a directed edge, produces the directed edge that results from making a right turn at the head of the original edge. Comparing the result of applying λ with the results of applying ι and κ to the same directed edge exhibits the rule λ = ικ.
It is also useful to define a symbol for the operation that produces the directed edge resulting from a left turn at the head of the directed edge to which it is applied; this left-turn symbol satisfies further relations given by Hamilton. Concrete instances of each of these operations can be read off a labeled drawing of the dodecahedron.
These permutations are not rotations of the dodecahedron. Nevertheless, the group of permutations generated by these symbols is isomorphic to the rotation group of the dodecahedron, a fact that can be deduced from a specific feature of symmetric cubic graphs, of which the dodecahedron graph is an example. The rotation group of the dodecahedron has the property that for a given directed edge there is a unique rotation that sends that directed edge to any other specified directed edge. Hence by choosing a reference edge, say , a one-to-one correspondence between directed edges and rotations is established: let be the rotation that sends the reference edge to directed edge . (Indeed, there are 60 directed edges and 60 rotations.) The rotations are permutations of the set of directed edges of a different sort. Let denote the image of edge under the rotation . The icosian associated to sends the reference edge to the same directed edge as does , namely to . The result of applying that icosian to any other directed edge is .
Application to Hamiltonian circuits on the edges of the dodecahedron
A word consisting of the right-turn and left-turn symbols corresponds to a sequence of right and left turns in the graph. Specifying such a word along with an initial directed edge therefore specifies a directed path along the edges of the dodecahedron. If the group element represented by the word equals the identity, then the path returns to the initial directed edge in the final step. If the additional requirement is imposed that every vertex of the graph be visited exactly once—specifically that every vertex occur exactly once as the head of a directed edge in the path—then a Hamiltonian circuit is obtained. Finding such a circuit was one of the challenges posed by Hamilton's icosian game. Hamilton exhibited such a word with the properties described above. Any of the 60 directed edges may serve as initial edge as a consequence of the symmetry of the dodecahedron, but only 30 distinct Hamiltonian circuits are obtained in this way, up to shift in starting point, because the word consists of the same sequence of 10 left and right turns repeated twice. The word with the roles of the two turn symbols interchanged has the same properties, but these give the same Hamiltonian cycles, up to shift in initial edge and reversal of direction. Hence Hamilton's word accounts for all Hamiltonian cycles in the dodecahedron, whose number is known to be 30.
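The figure of 30 Hamiltonian cycles can be confirmed computationally. The following minimal Python sketch, which is not Hamilton's method, backtracks over the dodecahedral graph supplied by the third-party networkx package; the choice of start vertex is arbitrary.

```python
# Count the Hamiltonian cycles of the dodecahedral graph by backtracking.
import networkx as nx

G = nx.dodecahedral_graph()      # 20 vertices, 30 edges
start = 0                        # fix a start vertex so shifted copies are not recounted

def count_directed_cycles(current, visited):
    """Count directed Hamiltonian cycles that begin and end at `start`."""
    if len(visited) == G.number_of_nodes():
        return 1 if G.has_edge(current, start) else 0
    total = 0
    for nxt in G.neighbors(current):
        if nxt not in visited:
            total += count_directed_cycles(nxt, visited | {nxt})
    return total

directed = count_directed_cycles(start, {start})
print(directed // 2)             # each undirected cycle is traversed in two directions -> 30
```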
Legacy
The icosian calculus is one of the earliest examples of many mathematical ideas, including:
presenting and studying a group by generators and relations;
visualization of a group by a graph, which led to combinatorial group theory and later geometric group theory;
Hamiltonian circuits and Hamiltonian paths in graph theory;
dessin d'enfant – see dessin d'enfant: history for details.
See also
Icosian
References
Graph theory
Abstract algebra
Binary operations
Rotational symmetry
William Rowan Hamilton | Icosian calculus | [
"Physics",
"Mathematics"
] | 1,331 | [
"Discrete mathematics",
"Algebra",
"Binary relations",
"Graph theory",
"Binary operations",
"Combinatorics",
"Mathematical relations",
"Abstract algebra",
"Symmetry",
"Rotational symmetry"
] |
316,405 | https://en.wikipedia.org/wiki/Non-directional%20beacon | A non-directional beacon (NDB) or non-directional radio beacon is a radio beacon which does not include directional information. Radio beacons are radio transmitters at a known location, used as an aviation or marine navigational aid. NDB are in contrast to directional radio beacons and other navigational aids, such as low-frequency radio range, VHF omnidirectional range (VOR) and tactical air navigation system (TACAN).
NDB signals follow the curvature of the Earth, so they can be received at much greater distances at lower altitudes, a major advantage over VOR. However, NDB signals are also affected more by atmospheric conditions, mountainous terrain, coastal refraction and electrical storms, particularly at long range. The system, developed by United States Army Air Corps (USAAC) Captain Albert Francis Hegenberger, was used to fly the world's first instrument approach on May 9, 1932.
Types of NDBs
NDBs used for aviation are standardised by the International Civil Aviation Organization (ICAO) Annex 10 which specifies that NDBs be operated on a frequency between 190 kHz and 1750 kHz, although normally all NDBs in North America operate between 190 kHz and 535 kHz. Each NDB is identified by a one, two, or three-letter Morse code callsign. In Canada, privately owned NDB identifiers consist of one letter and one number.
Non-directional beacons in North America are classified by power output: "low" power rating is less than 50 watts; "medium" from 50 W to 2,000 W; and "high" at more than 2,000 W.
There are four types of non-directional beacons in the aeronautical navigation service:
En route NDBs, used to mark airways
Approach NDBs
Localizer beacons
Locator beacons
The last two types are used in conjunction with an instrument landing system (ILS).
Automatic direction finder equipment
NDB navigation consists of two parts — the automatic direction finder (ADF) equipment on the aircraft that detects an NDB's signal, and the NDB transmitter. The ADF can also locate transmitters in the standard AM medium wave broadcast band (530 kHz to 1700 kHz at 10 kHz increments in the Americas, 531 kHz to 1602 kHz at 9 kHz increments in the rest of the world).
ADF equipment determines the direction or bearing to the NDB station relative to the aircraft by using a combination of directional and non-directional antennae to sense the direction in which the combined signal is strongest. This bearing may be displayed on a relative bearing indicator (RBI). This display looks like a compass card with a needle superimposed, except that the card is fixed with the 0 degree position corresponding to the centreline of the aircraft. In order to track toward an NDB (with no wind), the aircraft is flown so that the needle points to the 0 degree position. The aircraft will then fly directly to the NDB. Similarly, the aircraft will track directly away from the NDB if the needle is maintained on the 180 degree mark. With a crosswind, the needle must be maintained to the left or right of the 0 or 180 position by an amount corresponding to the drift due to the crosswind.
The formula to determine the compass heading to an NDB station (in a no wind situation) is to take the relative bearing between the aircraft and the station, and add the magnetic heading of the aircraft; if the total is greater than 360 degrees, then 360 must be subtracted. This gives the magnetic bearing that must be flown: (RB + MH) mod 360 = MB.
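As a small illustration of this rule, the Python sketch below computes the no-wind magnetic bearing from a relative bearing and a magnetic heading; the sample numbers are made up for the example.

```python
# Magnetic bearing to the station: (relative bearing + magnetic heading) mod 360.
def magnetic_bearing_to_station(relative_bearing_deg, magnetic_heading_deg):
    """Return the no-wind magnetic bearing (in degrees) to fly toward the NDB."""
    return (relative_bearing_deg + magnetic_heading_deg) % 360

print(magnetic_bearing_to_station(270, 135))   # -> 45
```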
When tracking to or from an NDB, it is also usual that the aircraft track on a specific bearing. To do this it is necessary to correlate the RBI reading with the compass heading. Having determined the drift, the aircraft must be flown so that the compass heading is the required bearing adjusted for drift at the same time as the RBI reading is 0 or 180 adjusted for drift. An NDB may also be used to locate a position along the aircraft's current track (such as a radial path from a second NDB or a VOR). When the needle reaches an RBI reading corresponding to the required bearing, then the aircraft is at the position. However, using a separate RBI and compass, this requires considerable mental calculation to determine the appropriate relative bearing.
To simplify this task, a compass card driven by the aircraft's magnetic compass is added to the RBI to form a radio magnetic indicator (RMI). The ADF needle is then referenced immediately to the aircraft's magnetic heading, which reduces the necessity for mental calculation. Many RMIs used for aviation also allow the device to display information from a second radio tuned to a VOR station; the aircraft can then fly directly between VOR stations (so-called "Victor" routes) while using the NDBs to triangulate their position along the radial, without the need for the VOR station to have a collocated distance measuring equipment (DME). This display, along with the omni bearing indicator (OBI) for VOR/ILS information, was one of the primary radio navigation instruments prior to the introduction of the horizontal situation indicator (HSI) and subsequent digital displays used in glass cockpits.
The principles of ADFs are not limited to NDB usage; such systems are also used to detect the locations of broadcast signals for many other purposes, such as finding emergency beacons.
Uses
Airways
A bearing is a line passing through the station that points in a specific direction, such as 270 degrees (due west). NDB bearings provide a charted, consistent method for defining paths aircraft can fly. In this fashion, NDBs can, like VORs, define airways in the sky. Aircraft follow these pre-defined routes to complete a flight plan. Airways are numbered and standardized on charts. Colored airways are used for low to medium frequency stations like the NDB and are charted in brown on sectional charts. Green and red airways are plotted east and west, while amber and blue airways are plotted north and south. As of September 2022, only one colored airway is left in the continental United States: located off the coast of North Carolina, it is called G13 or Green 13. Alaska is the only other state in the United States to make use of the colored airway systems. Pilots follow these routes by tracking bearings across various navigation stations, and turning at some. While most airways in the United States are based on VORs, NDB airways are common elsewhere, especially in the developing world and in lightly populated areas of developed countries, like the Canadian Arctic, since they can have a long range and are much less expensive to operate than VORs.
All standard airways are plotted on aeronautical charts, such as the United States sectional charts, issued by the National Oceanic and Atmospheric Administration (NOAA).
Fixes
NDBs have long been used by aircraft navigators, and previously mariners, to help obtain a fix of their geographic location on the surface of the Earth. Fixes are computed by extending lines through known navigational reference points until they intersect. For visual reference points, the angles of these lines can be determined by compass; the bearings of NDB radio signals are found using radio direction finder (RDF) equipment.
Plotting fixes in this manner allow crews to determine their position. This usage is important in situations where other navigational equipment, such as VORs with distance measuring equipment (DME), have failed. In marine navigation, NDBs may still be useful should Global Positioning System (GPS) reception fail.
Determining distance from an NDB station
To determine the distance to an NDB station, the pilot uses this method:
Turns the aircraft so that the station is directly off one of the wingtips.
Flies that heading, timing how long it takes to cross a specific number of NDB bearings.
Uses the formula: Time to station = 60 × number of minutes flown / degrees of bearing change
Computes the distance the aircraft is from the station; time × speed = distance (see the sketch below)
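A minimal Python sketch of this timing method follows; the function names and sample figures (2 minutes flown during 10° of bearing change at 120 knots) are illustrative assumptions, not values from the text.

```python
# Time to station (minutes) = 60 * minutes flown / degrees of bearing change;
# distance = time to station * ground speed.
def time_to_station_min(minutes_flown, bearing_change_deg):
    return 60.0 * minutes_flown / bearing_change_deg

def distance_to_station_nm(minutes_flown, bearing_change_deg, ground_speed_kt):
    """Distance in nautical miles, given ground speed in knots."""
    return time_to_station_min(minutes_flown, bearing_change_deg) * ground_speed_kt / 60.0

print(time_to_station_min(2, 10))          # -> 12.0 minutes
print(distance_to_station_nm(2, 10, 120))  # -> 24.0 nautical miles
```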
NDB approaches
A runway equipped with NDB or VOR (or both) as the only navigation aid is called a non-precision approach runway; if it is equipped with ILS, it is called a precision approach runway.
Instrument landing systems
NDBs are most commonly used as markers or "locators" for an instrument landing system (ILS) approach or standard approach. NDBs may designate the starting area for an ILS approach or a path to follow for a standard terminal arrival route, or STAR. In the United States, an NDB is often combined with the outer marker beacon in the ILS approach (called a locator outer marker, or LOM); in Canada, low-powered NDBs have replaced marker beacons entirely. Marker beacons on ILS approaches are now being phased out worldwide with DME ranges or GPS signals used, instead, to delineate the different segments of the approach.
Naval operational uses
German Navy U-boats during World War II were equipped with a Telefunken Spez 2113S homing beacon. This transmitter could operate on 100 kHz to 1500 kHz with a power of 150 W. It was used to send the submarine's location to other submarines or aircraft, which were equipped with DF receivers and loop antennas.
Antenna and signal characteristics
NDBs typically operate in the frequency range from 190 kHz to 535 kHz (although they are allocated frequencies from 190 to 1750 kHz) and transmit a carrier modulated by either 400 or 1020 Hz. NDBs can also be collocated with a DME in a similar installation for the ILS as the outer marker, only in this case, they function as the inner marker. NDB owners are mostly governmental agencies and airport authorities.
NDB radiators are vertically polarised. NDB antennas are usually too short for resonance at the frequency they operate – typically perhaps 20 metres length compared to a wavelength around 1000 m. Therefore, they require a suitable matching network that may consist of an inductor and a capacitor to "tune" the antenna. Vertical NDB antennas may also have a T-antenna, nicknamed a top hat, which is an umbrella-like structure designed to add loading at the end and improve its radiating efficiency. Usually a ground plane or counterpoise is connected underneath the antenna.
Other information transmitted by an NDB
Apart from Morse code identity of either 400 Hz or 1020 Hz, the NDB may broadcast:
Automatic terminal information service (ATIS)
Automatic weather information service (AWIS), or, in an emergency i.e. air-to-ground communication failure, an air traffic controller using a push-to-talk (PTT) function, may modulate the carrier with voice. The pilot uses their ADF receiver to hear instructions from the control tower.
Automated weather observing system (AWOS)
Automated surface observing system (ASOS)
VOLMET (meteorological information for aircraft in flight) or meteorological information broadcast
Transcribed weather broadcast (TWEB)
PIP monitoring. If an NDB has a problem, e.g. lower than normal power output, failure of mains power or standby transmitter is in operation, the NDB may be programmed to transmit an extra 'PIP' (a Morse dot), to alert pilots and others that the beacon may be unreliable for navigation.
Common adverse effects
Navigation using an ADF to track NDBs is subject to several common effects:
Night effect
Radio waves reflected back by the ionosphere can cause signal strength fluctuations from the transmitter, especially just before sunrise and just after sunset. This is more common on frequencies above 350 kHz. Because the returning sky waves travel over a different path, they have a different phase from the ground wave. This has the effect of suppressing the aerial signal in a fairly random manner. The needle on the indicator will start wandering. The indication will be most erratic during twilight at dusk and dawn.
Terrain effect
High terrain like mountains and cliffs can reflect radio waves, giving erroneous readings. Magnetic deposits can also cause erroneous readings.
Thunderstorm effect
Water droplets and ice crystals circulating within a storm cloud generate wideband noise. This high-power noise may affect the accuracy of the ADF bearing. Lightning, because of its high power output, will cause the needle of the RMI/RBI to point momentarily to the bearing of the strike.
Shoreline effect
Radio waves speed up over water, causing the wave front to bend away from its normal path and pull it towards the coast. Refraction is negligible perpendicular (90°) to the coast, but increases as the angle of incidence decreases. The effect can be minimised by flying higher or by using NDBs situated nearer the coast.
Station interference
Due to congestion of stations in the LF and MF bands, there is the possibility of interference from stations on or near the same frequency. This will cause bearing errors. By day, the use of an NDB within the DOC (designated operational coverage) will normally afford protection from interference. However, at night one can expect interference even within the DOC because of skywave contamination from stations out of range by day. Therefore, positive identification of the NDB at night should always be carried out.
Dip (bank) angle
During banking turns, the horizontal part of the loop aerial is no longer horizontal and will pick up a signal. This displaces the null, in a way similar to the night effect, and gives an erroneous reading on the indicator; the pilot should therefore not take a bearing unless the aircraft is wings-level.
While pilots study these effects during initial training, trying to compensate for them in flight is very difficult; instead, pilots generally simply choose a heading that seems to average out any fluctuations.
Radio-navigation aids must keep a certain degree of accuracy, given by international standards (Federal Aviation Administration (FAA), ICAO, etc.); to assure this is the case, flight inspection organizations periodically check critical parameters with properly equipped aircraft to calibrate and certify NDB precision. The ICAO minimum accuracy for NDBs is ±5°.
Monitoring NDBs
Besides their use in aircraft navigation, NDBs are also popular with long-distance radio enthusiasts (DXers). Because NDBs are generally low-power (usually 25 watts, some can be up to 5 kW), they normally cannot be heard over long distances, but favorable conditions in the ionosphere can allow NDB signals to travel much farther than normal. Because of this, radio DXers interested in picking up distant signals enjoy listening to faraway NDBs. Also, since the band allocated to NDBs is free of broadcast stations and their associated interference, and because most NDBs do little more than transmit their Morse code callsign, they are very easy to identify, making NDB monitoring an active niche within the DXing hobby.
In North America, the NDB band is from 190 to 435 kHz and from 510 to 530 kHz. In Europe, there is a longwave broadcasting band from 150 to 280 kHz, so the European NDB band is from 280 kHz to 530 kHz with a gap between 495 and 505 kHz because 500 kHz was the international maritime distress (emergency) frequency.
The beacons that transmit between 510 kHz and 530 kHz can sometimes be heard on AM radios that can tune below the beginning of the medium wave (MW) broadcast band. However, reception of NDBs generally requires a radio receiver that can receive frequencies below 530 kHz. Often "general coverage" shortwave radios receive all frequencies from 150 kHz to 30 MHz, and so can tune to the frequencies of NDBs. Specialized techniques (receiver preselectors, noise limiters and filters) are required for the reception of very weak signals from remote beacons.
The best time to hear NDBs that are very far away is the last three hours before sunrise. Reception of NDBs is also usually best during the fall and winter because during the spring and summer, there is more atmospheric noise on the LF and MF bands.
Beacon closures
As the adoption of satellite navigation systems such as GPS progressed, several countries began to decommission beacon installations such as NDBs and VOR. The policy has caused controversy in the aviation industry.
Airservices Australia began shutting down a number of ground-based navigation aids in May 2016, including NDBs, VORs and DMEs.
In the United States as of 2017, there were more than 1,300 NDBs, of which fewer than 300 were owned by the Federal Government. The FAA had begun decommissioning stand-alone NDBs. As of April 2018, the FAA had disabled 23 ground-based navaids including NDBs, and plans to shut down more than 300 by 2025. The FAA has no sustaining or acquisition system for NDBs and plans to phase out the existing NDBs through attrition, citing decreased pilot reliance on NDBs as more pilots use VOR and GPS navigation.
See also
Cardioid
Differential Global Positioning System (DGPS)
Electric beacon
Instrument flight rules (IFR)
Transponder landing system (TLS)
Notes
References
Further reading
International Civil Aviation Organization (2000). Annex 10 — Aeronautical Telecommunications, Vol. I (Radio Navigation Aids) (5th ed.).
U.S. Federal Aviation Administration (2004). Aeronautical Information Manual, § 1-1-2.
External links
List of North American navigation aids from airnav.com
UK Navaids Gallery with detailed Technical Descriptions of their operation
Flash-based ADF instrument simulator
Large selection of beacon related resources at the NDB List Website
The NDB List Radiobeacon Photo Gallery
On The art of NDB DXing [archived]
Database with NDBs
Automatic Direction Finder
Aeronautical navigation systems
Radio navigation
Beacons | Non-directional beacon | [
"Technology",
"Engineering"
] | 3,670 | [
"Aircraft instruments",
"Measuring instruments"
] |
316,522 | https://en.wikipedia.org/wiki/Fructose%201%2C6-bisphosphatase | The enzyme fructose bisphosphatase (EC 3.1.3.11; systematic name D-fructose-1,6-bisphosphate 1-phosphohydrolase) catalyses the conversion of fructose-1,6-bisphosphate to fructose 6-phosphate in gluconeogenesis and the Calvin cycle, which are both anabolic pathways:
D-fructose 1,6-bisphosphate + H2O = D-fructose 6-phosphate + phosphate
Phosphofructokinase (EC 2.7.1.11) catalyses the reverse conversion of fructose 6-phosphate to fructose-1,6-bisphosphate, but this is not just the reverse reaction, because the co-substrates are different (and so thermodynamic requirements are not violated). The two enzymes each catalyse the conversion in one direction only, and are regulated by metabolites such as fructose 2,6-bisphosphate so that high activity of one of them is accompanied by low activity of the other. More specifically, fructose 2,6-bisphosphate allosterically inhibits fructose 1,6-bisphosphatase, but activates phosphofructokinase-I. Fructose 1,6-bisphosphatase is involved in many different metabolic pathways and found in most organisms. FBPase requires metal ions for catalysis (Mg2+ and Mn2+ being preferred) and the enzyme is potently inhibited by Li+.
Structure
The fold of fructose-1,6-bisphosphatase from pigs was noted to be identical to that of inositol-1-phosphatase (IMPase). Inositol polyphosphate 1-phosphatase (IPPase), IMPase and FBPase share a sequence motif (Asp-Pro-Ile/Leu-Asp-Gly/Ser-Thr/Ser) which has been shown to bind metal ions and participate in catalysis. This motif is also found in the distantly-related fungal, bacterial and yeast IMPase homologues. It has been suggested that these proteins define an ancient structurally conserved family involved in diverse metabolic pathways, including inositol signalling, gluconeogenesis, sulphate assimilation and possibly quinone metabolism.
Species distribution
Three different groups of FBPases have been identified in eukaryotes and bacteria (FBPase I-III). None of these groups have been found in Archaea so far, though a new group of FBPases (FBPase IV), which also shows inositol monophosphatase activity, has recently been identified in Archaea.
A new group of FBPases (FBPase V) is found in thermophilic archaea and the hyperthermophilic bacterium Aquifex aeolicus. The characterised members of this group show strict substrate specificity for FBP and are suggested to be the true FBPase in these organisms. A structural study suggests that FBPase V has a novel fold for a sugar phosphatase, forming a four-layer alpha-beta-beta-alpha sandwich, unlike the more usual five-layered alpha-beta-alpha-beta-alpha arrangement. The arrangement of the catalytic side chains and metal ligands was found to be consistent with the three-metal ion assisted catalysis mechanism proposed for other FBPases.
The fructose 1,6-bisphosphatases found within the Bacillota (low GC Gram-positive bacteria) do not show any significant sequence similarity to the enzymes from other organisms. The Bacillus subtilis enzyme is inhibited by AMP, though this can be overcome by phosphoenolpyruvate, and is dependent on Mn(2+). Mutants lacking this enzyme are apparently still able to grow on gluconeogenic growth substrates such as malate and glycerol.
Hibernation and cold adaptation
Fructose 1,6-bisphosphatase also plays a key role in hibernation, which requires strict regulation of metabolic processes to facilitate entry into hibernation, maintenance, arousal from hibernation, and adjustments to allow long-term dormancy. During hibernation, an animal's metabolic rate may decrease to around 1/25 of its euthermic resting metabolic rate. FBPase is modified in hibernating animals to be much more temperature sensitive than it is in euthermic animals. FBPase in the liver of a hibernating bat showed a 75% decrease in Km for its substrate FBP at 5 °C compared with 37 °C. However, in a euthermic bat this decrease was only 25%, demonstrating the difference in temperature sensitivity between hibernating and euthermic bats. When sensitivity to allosteric inhibitors such as AMP, ADP, inorganic phosphate, and fructose-2,6-bisphosphate was examined, FBPase from hibernating bats was much more sensitive to inhibitors at low temperature than in euthermic bats.
During hibernation, respiration also dramatically decreases, resulting in conditions of relative anoxia in the tissues. Anoxic conditions inhibit gluconeogenesis, and therefore FBPase, while stimulating glycolysis, and this is another reason for reduced FBPase activity in hibernating animals. The substrate of FBPase, fructose 1,6-bisphosphate, has also been shown to activate pyruvate kinase in glycolysis, linking increased glycolysis to decreased gluconeogenesis when FBPase activity is decreased during hibernation.
In addition to hibernation, there is evidence that FBPase activity varies significantly between warm and cold seasons even for animals that do not hibernate.
In rabbits exposed to cold temperatures, FBPase activity decreased throughout the duration of cold exposure, increasing when temperatures became warmer again. The mechanism of this FBPase inhibition is thought to be digestion of FBPase by lysosomal proteases, which are released at higher levels during colder periods. Inhibition of FBPase through proteolytic digestion decreases gluconeogenesis relative to glycolysis during cold periods, similar to hibernation.
Fructose 1,6-bisphosphate aldolase is another temperature dependent enzyme that plays an important role in the regulation of glycolysis and gluconeogenesis during hibernation. Its main role is in glycolysis instead of gluconeogenesis, but its substrate is the same as FBPase's, so its activity affects that of FBPase in gluconeogenesis. Aldolase shows similar changes in activity to FBPase at colder temperatures, such as an upward shift in optimum pH at colder temperatures. This adaptation allows enzymes such as FBPase and fructose-1,6-bisphosphate aldolase to track intracellular pH changes in hibernating animals and match their activity ranges to these shifts. Aldolase also complements the activity of FBPase in anoxic conditions (discussed above) by increasing glycolytic output while FBPase inhibition decreases gluconeogenesis activity.
Diabetes
Fructose 1,6-bisphosphatase is also a key player in treating type 2 diabetes. In this disease, hyperglycemia causes many serious problems, and treatments often focus on lowering blood sugar levels. Gluconeogenesis in the liver is a major cause of glucose overproduction in these patients, and so inhibition of gluconeogenesis is a reasonable way to treat type 2 diabetes. FBPase is a good enzyme to target in the gluconeogenesis pathway because it is rate-limiting and controls the incorporation of all three-carbon substrates into glucose but is not involved in glycogen breakdown and is removed from mitochondrial steps in the pathway. This means that altering its activity can have a large effect on gluconeogenesis while reducing the risk of hypoglycemia and other potential side effects from altering other enzymes in gluconeogenesis.
Drug candidates have been developed that mimic the inhibitory activity of AMP on FBPase. Efforts were made to mimic the allosteric inhibitory effects of AMP while making the drug as structurally different from it as possible. Second-generation FBPase inhibitors have now been developed and have had good results in clinical trials with non-human mammals and now humans.
See also
Fructose bisphosphatase deficiency
Fructose
Gluconeogenesis
Metabolism
References
Further reading
External links
EC 3.1.3
Protein families | Fructose 1,6-bisphosphatase | [
"Biology"
] | 1,890 | [
"Protein families",
"Protein classification"
] |
316,524 | https://en.wikipedia.org/wiki/Fructose%20bisphosphatase%20deficiency | In fructose bisphosphatase deficiency, there is not enough fructose bisphosphatase for gluconeogenesis to occur correctly. Glycolysis (the breakdown of glucose) will still work, as it does not use this enzyme.
History
The first known description of a patient with this condition was published in 1970 in The Lancet journal.
Early research into the disorder was conducted by a team led by Anthony S. Pagliara and Barbara Illingworth Brown at Washington University Medical Center, based on the case of an infant girl from Oak Ridge, Missouri.
Presentation
Without effective gluconeogenesis (GNG), hypoglycaemia will set in after about 12 hours of fasting. This is the time when liver glycogen stores have been exhausted, and the body has to rely on GNG. When given a dose of glucagon (which would normally increase blood glucose) nothing will happen, as stores are depleted and GNG doesn't work. (In fact, the patient would already have high glucagon levels.)
There is no problem with the metabolism of glucose or galactose, but fructose and glycerol cannot be used by the liver to maintain blood glucose levels. If fructose or glycerol are given, there will be a buildup of phosphorylated three-carbon sugars. This leads to phosphate depletion within the cells, and also in the blood. Without phosphate, ATP cannot be made, and many cell processes cannot occur.
High levels of glucagon will tend to release fatty acids from adipose tissue, and this will combine with glycerol that cannot be used in the liver, to make triacylglycerides causing a fatty liver.
As three carbon molecules cannot be used to make glucose, they will instead be made into pyruvate and lactate. These acids cause a drop in the pH of the blood (a metabolic acidosis). Acetyl CoA (acetyl co-enzyme A) will also build up, leading to the creation of ketone bodies.
Diagnosis
Diagnosis is made by measurement of FDPase in cultured lymphocytes and confirmed by detection of mutation of FBP1, encoding FDPase.
Treatment
People with a deficiency of this enzyme must avoid situations in which gluconeogenesis is needed to make glucose. This can be accomplished by not fasting for long periods and by eating high-carbohydrate food. They should avoid fructose-containing foods (as well as sucrose, which breaks down to fructose).
As with all single-gene metabolic disorders, there is always hope for genetic therapy, inserting a healthy copy of the gene into existing liver cells.
See also
Fructose
Gluconeogenesis
Metabolism
References
External links
Inborn errors of carbohydrate metabolism | Fructose bisphosphatase deficiency | [
"Chemistry"
] | 601 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
316,528 | https://en.wikipedia.org/wiki/Thallus | Thallus (: thalli), from Latinized Greek (), meaning "a green shoot" or "twig", is the vegetative tissue of some organisms in diverse groups such as algae, fungi, some liverworts, lichens, and the Myxogastria. A thallus usually names the entire body of a multicellular non-moving organism in which there is no organization of the tissues into organs. Many of these organisms were previously known as the thallophytes, a polyphyletic group of distantly related organisms. An organism or structure resembling a thallus is called thalloid, thalloidal, thalliform, thalline, or thallose.
Even though thalli do not have organized and distinct parts (leaves, roots, and stems) as do the vascular plants, they may have analogous structures that resemble their vascular "equivalents". The analogous structures have similar function or macroscopic structure, but different microscopic structure; for example, no thallus has vascular tissue. In exceptional cases such as the Lemnoideae, where the structure of a vascular plant is in fact thallus-like, it is referred to as having a thalloid structure, or sometimes as a thalloid.
Although a thallus is largely undifferentiated in terms of its anatomy, there can be visible differences and functional differences. A kelp, for example, may have its thallus divided into three regions. The parts of a kelp thallus include the holdfast (anchor), stipe (supports the blades) and the blades (for photosynthesis).
The thallus of a fungus is usually called a mycelium. The term thallus is also commonly used to refer to the vegetative body of a lichen. In seaweed, thallus is sometimes also called 'frond'.
The gametophyte of some non-thallophyte plants – clubmosses, horsetails, and ferns – is termed "prothallus".
See also
Homothallism
Heterothallism
Wengania
References
Plant morphology
Fungal morphology and anatomy | Thallus | [
"Biology"
] | 438 | [
"Plant morphology",
"Plants"
] |
316,533 | https://en.wikipedia.org/wiki/Thallophyte | Thallophytes (Thallophyta, Thallophyto or Thallobionta) are a polyphyletic group of non-motile organisms traditionally described as "thalloid plants", "relatively simple plants" or "lower plants". They form a division of kingdom Plantae that include lichens and algae and occasionally bryophytes, bacteria and slime moulds. Thallophytes have a hidden reproductive system and hence they are also incorporated into the similar Cryptogamae category (together with ferns), as opposed to Phanerogamae. Thallophytes are defined by having undifferentiated bodies (thalloid, pseudotissue), as opposed to cormophytes (Cormophyta) with roots and stems. Various groups of thallophytes are major contributors to marine ecosystems.
Definitions
Several different definitions of the group have been used.
Thallophytes (Thallophyta or Thallobionta) are a polyphyletic group of non-mobile organisms traditionally described as "thalloid plants", "relatively simple plants" or "lower plants".
Stephan Endlicher, a 19th-century Austrian botanist, separated the vegetable kingdom into the thallophytes (algae, lichens, fungi) and the cormophytes (including bryophytes and thus being equivalent to Embryophyta in this case) in 1836. This definition of Thallophyta is approximately equivalent to Protophyta, which has always been a loosely defined group.
In the Lindley system (1830–1839), Endlicher's cormophytes were divided into the thallogens (including the bryophytes), and cormogens ("non-flowering" plants with roots), as well as the six other classes. Cormogens were a much smaller group than Endlicher's cormophytes, including just the ferns (and Equisetopsida) and the plants now known as lycopodiophytes.
Thallophyta is a division of the plant kingdom that includes primitive forms of plant life showing a simple plant body, ranging from unicellular organisms to large algae, fungi and lichens.
The first ten phyla are referred to as thallophytes. They are simple plants without roots, stems or leaves.
They are non-embryophytes. These plants grow mainly in water.
Subdivisions
The Thallophyta have been divided into two subdivisions:
Myxothallophyta (myxomycetes)
Euthallophyta (bacteria, fungi, lichens, algae)
The term Euthallophyta was originally used by Adolf Engler.
See also
Bryophyte
Pteridophyte
References
Bibliography
Cryptogams
Historically recognized plant taxa | Thallophyte | [
"Biology"
] | 588 | [
"Cryptogams",
"Eukaryotes"
] |
316,612 | https://en.wikipedia.org/wiki/Spring%20%28hydrology%29 | A spring is a natural exit point at which groundwater emerges from an aquifer and flows across the ground surface as surface water. It is a component of the hydrosphere, as well as a part of the water cycle. Springs have long been important for humans as a source of fresh water, especially in arid regions which have relatively little annual rainfall.
Springs are driven out onto the surface by various natural forces, such as gravity and hydrostatic pressure. A spring produced by the emergence of geothermally heated groundwater is known as a hot spring. The yield of spring water varies widely, from a volumetric flow rate of nearly zero to very large flows for the biggest springs.
Formation
Springs are formed when groundwater flows onto the surface. This typically happens when the water table reaches above the surface level, or if the terrain depresses sharply. Springs may also be formed as a result of karst topography, aquifers or volcanic activity. Springs have also been observed on the ocean floor, spewing warmer, low-salinity water directly into the ocean.
Springs formed as a result of karst topography create karst springs, in which ground water travels through a network of cracks and fissures—openings ranging from intergranular spaces to large caves, later emerging in a spring.
The forcing of the spring to the surface can be the result of a confined aquifer in which the recharge area of the spring water table rests at a higher elevation than that of the outlet. Spring water forced to the surface by such elevated sources forms artesian wells. This is possible even if the outlet is in the form of a cave. In this case the cave acts like a hose, allowing groundwater from the higher-elevation recharge area to exit through the lower-elevation opening.
Non-artesian springs may simply flow from a higher elevation through the earth to a lower elevation and exit in the form of a spring, using the ground like a drainage pipe. Still other springs are the result of pressure from an underground source in the earth, in the form of volcanic or magma activity. The result can be water at elevated temperature and pressure, i.e. hot springs and geysers.
The action of the groundwater continually dissolves permeable bedrock such as limestone and dolomite, creating vast cave systems.
Types
Depression springs occur along a depression, such as the bottom of alluvial valleys, basins, or valleys made of highly permeable materials.
Contact springs, which occur along the side of a hill or mountain, are created when the groundwater is underlain by an impermeable layer of rock or soil known as an aquiclude or aquifuge.
Fracture or joint springs occur when groundwater running along an impermeable layer of rock meets a crack (fracture) or joint in the rock.
Tubular springs occur when groundwater flows from circular fissures such as those found in caverns (solution tubular springs) or lava tubular springs found in lava tube caves.
Artesian springs typically occur at the lowest point in a given area. An artesian spring is created when the pressure of the groundwater becomes greater than the pressure from the atmosphere. In this case the water is pushed straight up out of the ground.
Wonky holes are submarine freshwater exit points from old river channels that have become filled with sediment and covered by coral and sediment.
Karst springs occur as outflows of groundwater that are part of a karst hydrological system.
Thermal springs are heated by geothermal activity; they have a water temperature significantly higher than the mean air temperature of the surrounding area. Geysers are a type of hot spring where steam is created underground by trapped superheated groundwater resulting in recurring eruptions of hot water and steam.
Carbonated springs, such as Soda Springs Geyser, are springs that emit naturally occurring carbonated water, due to dissolved carbon dioxide in the water content. They are sometimes called boiling springs or bubbling springs.
"Gushette springs pour from cliff faces"
Helocrene springs are diffuse that sustain marshlands with groundwater.
Flow
Spring discharge, or resurgence, is determined by the spring's recharge basin. Factors that affect the recharge include the size of the area in which groundwater is captured, the amount of precipitation, the size of capture points, and the size of the spring outlet. Water may leak into the underground system from many sources including permeable earth, sinkholes, and losing streams. In some cases entire creeks seemingly disappear as the water sinks into the ground via the stream bed. Grand Gulf State Park in Missouri is an example of an entire creek vanishing into the groundwater system. The water emerges some distance away, forming some of the discharge of Mammoth Spring in Arkansas. Human activity may also affect a spring's discharge—withdrawal of groundwater reduces the water pressure in an aquifer, decreasing the volume of flow.
Classification
Springs fall into three general classifications: perennial (springs that flow constantly during the year); intermittent (temporary springs that are active after rainfall, or during certain seasonal changes); and periodic (as in geysers that vent and erupt at regular or irregular intervals).
Springs are often classified by the volume of the water they discharge. The largest springs are called "first-magnitude", defined as springs that discharge water at a rate of at least 2800 liters (about 100 cubic feet) of water per second. Some locations contain many first-magnitude springs, such as Florida where there are at least 27 known to be that size; the Missouri and Arkansas Ozarks, which contain 10 known of first-magnitude; and 11 more in the Thousand Springs area along the Snake River in Idaho. The magnitude scale grades springs downward from first magnitude through successively smaller ranges of discharge.
Water content
Minerals become dissolved in the water as it moves through the underground rocks. This mineral content is measured as total dissolved solids (TDS). This may give the water flavor and even carbon dioxide bubbles, depending on the nature of the geology through which it passes. This is why spring water is often bottled and sold as mineral water, although the term is often the subject of deceptive advertising. Mineral water contains no less than 250 parts per million (ppm) of TDS. Springs that contain significant amounts of minerals are sometimes called 'mineral springs'. (Springs without such mineral content, meanwhile, are sometimes distinguished as 'sweet springs'.) Springs that contain large amounts of dissolved sodium salts, mostly sodium carbonate, are called 'soda springs'. Many resorts have developed around mineral springs and are known as spa towns. Mineral springs are alleged to have healing properties. Soaking in them is said to result in the absorption of the minerals from the water. Some springs contain arsenic levels that exceed the 10 ppb World Health Organization (WHO) standard for drinking water. Where such springs feed rivers they can also raise the arsenic levels in the rivers above WHO limits.
Water from springs is usually clear. However, some springs may be colored by the minerals that are dissolved in the water. For instance, water heavy with iron or tannins will have an orange color.
In parts of the United States a stream carrying the outflow of a spring to a nearby primary stream may be called a spring branch, spring creek, or run. Groundwater tends to maintain a relatively long-term average temperature of its aquifer; so flow from a spring may be cooler than other sources on a summer day, but remain unfrozen in the winter. The cool water of a spring and its branch may harbor species such as certain trout that are otherwise ill-suited to a warmer local climate.
Types of mineral springs
Sulfur springs contain a high level of dissolved sulfur or hydrogen sulfide in the water. Historically they have been used to alleviate the symptoms of arthritis and other inflammatory diseases.
Borax springs
Gypsum springs
Saline springs
Iron springs (chalybeate spring)
Radium springs (or radioactive springs) have a detectable level of radiation produced by the natural radioactive decay process
Uses
Springs have been used for a variety of human needs - including drinking water, domestic water supply, irrigation, mills, navigation, and electricity generation. Modern uses include recreational activities such as fishing, swimming, and floating; therapy; water for livestock; fish hatcheries; and supply for bottled mineral water or bottled spring water. Springs have taken on a kind of mythic quality in that some people falsely believe that springs are always healthy sources of drinking water. They may or may not be. One must take a comprehensive water quality test to know how to use a spring appropriately, whether for a mineral bath or drinking water. Springs that are managed as spas will already have such a test.
Drinking water
Springs are often used as sources for bottled water. When purchasing bottled water labeled as spring water one can often find the water test for that spring on the website of the company selling it.
Irrigation
Springs have been used as sources of water for gravity-fed irrigation of crops. Indigenous people of the American Southwest built spring-fed acequias that directed water to fields through canals. The Spanish missionaries later used this method.
Sacred springs
A sacred spring, or holy well, is a small body of water emerging from underground and revered in some religious context: Christian and/or pagan and/or other. The lore and mythology of ancient Greece was replete with sacred and storied springs—notably, the Corycian, Pierian and Castalian springs. In medieval Europe, pagan sacred sites frequently became Christianized as holy wells. The term "holy well" is commonly employed to refer to any water source of limited size (i.e., not a lake or river, but including pools and natural springs and seeps), which has some significance in local folklore. This can take the form of a particular name, an associated legend, the attribution of healing qualities to the water through the numinous presence of its guardian spirit or of a Christian saint, or a ceremony or ritual centered on the well site. Christian legends often recount how the action of a saint caused a spring's water to flow - a familiar theme, especially in the hagiography of Celtic saints.
Thermal springs
The geothermally heated groundwater that flows from thermal springs is warmer than human body temperature, and can be considerably hotter. Those springs with water cooler than body temperature but warmer than air temperature are sometimes referred to as warm springs.
Bathing and balneotherapy
Hot springs or geothermal springs have been used for balneotherapy, bathing, and relaxation for thousands of years. Because of the folklore surrounding hot springs and their claimed medical value, some have become tourist destinations and locations of physical rehabilitation centers.
Geothermal energy
Hot springs have been used as a heat source for thousands of years. In the 20th century, they became a renewable resource of geothermal energy for heating homes and buildings. The city of Beppu, Japan contains 2,217 hot spring well heads that provide the city with hot water. Hot springs have also been used as a source of sustainable energy for greenhouse cultivation and the growing of crops and flowers.
Terminology
Spring boil
Spring pool
Spring runs, also called rheocrene springs
Spring vent
Cultural representations
Springs have been represented in culture through art, mythology, and folklore throughout history. The Fountain of Youth is a mythical spring which was said to restore youth to anyone who drank from it. It has been claimed that the fountain is located in St. Augustine, Florida, and was discovered by Juan Ponce de León in 1513. However, it has not demonstrated the power to restore youth, and most historians dispute the veracity of Ponce de León's discovery.
Pythia, also known as the Oracle at Delphi was the high priestess of the Temple of Apollo. She delivered prophesies in a frenzied state of divine possession that were "induced by vapours rising from a chasm in the rock". It is believed that the vapors were emitted from the Kerna spring at Delphi.
The Greek myth of Narcissus describes a young man who fell in love with his reflection in the still pool of a spring. Narcissus gazed into "an unmuddied spring, silvery from its glittering waters, which neither shepherds nor she-goats grazing on the mountain nor any other cattle had touched, which neither bird nor beast nor branch fallen from a tree had disturbed." (Ovid)
The early 20th century American photographer, James Reuel Smith created a comprehensive series of photographs documenting the historical springs of New York City before they were capped by the city after the advent of the municipal water system. Smith later photographed springs in Europe leading to his book, Springs and Wells in Greek and Roman Literature, Their Legends and Locations (1922).
The 19th century Japanese artists Utagawa Hiroshige and Utagawa Toyokuni III created a series of wood-block prints, Two Artists Tour the Seven Hot Springs (Sōhitsu shichitō meguri) in 1854.
The Chinese city Jinan is known as "a City of Springs" (Chinese: 泉城), because of its 72 spring attractions and numerous micro spring holes spread over the city centre.
See also
Fountain
List of spa towns
Petroleum seep
Soakage
Spring line settlement
Spring supply
Water cycle
Well
References
Further reading
Springs of Missouri, Vineyard and Feder, Missouri Department of Natural Resources, Division of Geology and Land Survey in cooperation with U.S. Geological Survey and Missouri Department of Conservation, 1982
Cohen, Stan (Revised 1981 edition), Springs of the Virginias: A Pictorial History, Charleston, West Virginia: Quarrier Press.
External links
"The Science of Springs"
"What Is a Spring?"
Find a spring
Drinking water
Freshwater ecology
Geomorphology
Hydrology
Bodies of water
Articles containing video clips | Spring (hydrology) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,785 | [
"Hydrology",
"Springs (hydrology)",
"Environmental engineering"
] |
316,648 | https://en.wikipedia.org/wiki/Circular%20error%20probable | Circular error probable (CEP), also circular error probability or circle of equal probability, is a measure of a weapon system's precision in the military science of ballistics. It is defined as the radius of a circle, centered on the aimpoint, that is expected to enclose the landing points of 50% of the rounds; said otherwise, it is the median error radius, which is a 50% confidence interval. That is, if a given munitions design has a CEP of 100 m, when 100 munitions are targeted at the same point, an average of 50 will fall within a circle with a radius of 100 m about that point.
There are associated concepts, such as the DRMS (distance root mean square), which is the square root of the average squared distance error, a form of the standard deviation. Another is the R95, which is the radius of the circle where 95% of the values would fall, a 95% confidence interval.
The concept of CEP also plays a role when measuring the accuracy of a position obtained by a navigation system, such as GPS or older systems such as LORAN and Loran-C.
Concept
The original concept of CEP was based on a circular bivariate normal distribution (CBN) with CEP as a parameter of the CBN just as μ and σ are parameters of the normal distribution. Munitions with this distribution behavior tend to cluster around the mean impact point, with most reasonably close, progressively fewer and fewer further away, and very few at long distance. That is, if CEP is n metres, 50% of shots land within n metres of the mean impact, 43.7% between n and 2n, and 6.1% between 2n and 3n metres, and the proportion of shots that land farther than three times the CEP from the mean is only 0.2%.
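Those proportions can be checked with a quick simulation. The Python sketch below is only an illustration of the circular bivariate normal model described above; the sample size and seed are arbitrary choices.

```python
# Monte Carlo check of the quoted percentages: ~50% within CEP, ~43.7% between
# CEP and 2*CEP, ~6.1% between 2*CEP and 3*CEP, ~0.2% beyond 3*CEP.
import math, random

random.seed(0)
sigma = 1.0
cep = sigma * math.sqrt(2.0 * math.log(2.0))   # median radius of the Rayleigh-distributed error

counts = [0, 0, 0, 0]
trials = 1_000_000
for _ in range(trials):
    r = math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
    band = min(int(r // cep), 3)               # 0: <CEP, 1: 1-2 CEP, 2: 2-3 CEP, 3: beyond
    counts[band] += 1

print([round(100.0 * c / trials, 1) for c in counts])   # roughly [50.0, 43.7, 6.1, 0.2]
```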
CEP is not a good measure of accuracy when this distribution behavior is not met. Munitions may also have larger standard deviation of range errors than the standard deviation of azimuth (deflection) errors, resulting in an elliptical confidence region. Munition samples may not be exactly on target, that is, the mean vector will not be (0,0). This is referred to as bias.
To incorporate accuracy into the CEP concept in these conditions, CEP can be defined as the square root of the mean square error (MSE). The MSE will be the sum of the variance of the range error plus the variance of the azimuth error plus the covariance of the range error with the azimuth error plus the square of the bias. Thus the MSE results from pooling all these sources of error, geometrically corresponding to radius of a circle within which 50% of rounds will land.
Several methods have been introduced to estimate CEP from shot data. Included in these methods are the plug-in approach of Blischke and Halpin (1966), the Bayesian approach of Spall and Maryak (1992), and the maximum likelihood approach of Winkler and Bickert (2012). The Spall and Maryak approach applies when the shot data represent a mixture of different projectile characteristics (e.g., shots from multiple munitions types or from multiple locations directed at one target).
Conversion
While 50% is a very common definition for CEP, the circle dimension can be defined for percentages. Percentiles can be determined by recognizing that the horizontal position error is defined by a 2D vector which components are two orthogonal Gaussian random variables (one for each axis), assumed uncorrelated, each having a standard deviation . The distance error is the magnitude of that vector; it is a property of 2D Gaussian vectors that the magnitude follows the Rayleigh distribution, with scale factor . The distance root mean square (DRMS), is and doubles as a sort of standard deviation, since errors within this value make up 63% of the sample represented by the bivariate circular distribution. In turn, the properties of the Rayleigh distribution are that its percentile at level is given by the following formula:
or, expressed in terms of the DRMS:
The relation between and are given by the following table, where the values for DRMS and 2DRMS (twice the distance root mean square) are specific to the Rayleigh distribution and are found numerically, while the CEP, R95 (95% radius) and R99.7 (99.7% radius) values are defined based on the 68–95–99.7 rule
We can then derive a conversion table to convert values expressed for one percentile level to another: the coefficient for converting a value of one measure into the corresponding value of another is simply the ratio of their radii given above (the target measure's radius divided by the source measure's).
For example, a GPS receiver having a 1.25 m DRMS will have a 1.25 m × 1.73 = 2.16 m 95% radius.
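As an illustrative sketch (not from the article), the conversion coefficients can be computed directly from the Rayleigh quantile function given above; the snippet below reproduces the 1.73 factor and the 2.16 m result of this example.

```python
# Illustrative sketch: derive percentile radii from the Rayleigh quantile
# function and reproduce the DRMS -> R95 conversion used in the example above.
import math

def radius(p: float, sigma: float = 1.0) -> float:
    """Radius of the circle containing a fraction p of the position errors."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - p))

sigma = 1.0
drms = sigma * math.sqrt(2.0)        # ~63.2% radius
cep = radius(0.50, sigma)            # ~1.1774 * sigma
r95 = radius(0.95, sigma)            # ~2.4477 * sigma

print(round(cep, 4))                 # 1.1774
print(round(r95 / drms, 2))          # 1.73 -> DRMS-to-R95 coefficient
print(round(1.25 * r95 / drms, 2))   # 2.16 -> 95% radius for a 1.25 m DRMS receiver
```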
See also
Probable error
References
Further reading
Grubbs, F. E. (1964). "Statistical measures of accuracy for riflemen and missile engineers". Ann Arbor, MI: Edwards Brothers. Ballistipedia pdf
Winkler, V. and Bickert, B. (2012). "Estimation of the circular error probability for a Doppler-Beam-Sharpening-Radar-Mode," in EUSAR. 9th European Conference on Synthetic Aperture Radar, pp. 368–71, 23/26 April 2012. ieeexplore.ieee.org
Wollschläger, Daniel (2014), "Analyzing shape, accuracy, and precision of shooting results with shotGroups". Reference manual for shotGroups
External links
Circular Error Probable in Ballistipedia
Applied probability
Military terminology
Aerial bombs
Artillery operation
Ballistics
Weapon guidance techniques
Accuracy and precision
Statistical distance
Combat modeling | Circular error probable | [
"Physics",
"Mathematics"
] | 1,184 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Statistical distance",
"Distance",
"Applied probability",
"Applied mathematics",
"Ballistics",
"Combat modeling"
] |
316,667 | https://en.wikipedia.org/wiki/Network%20society | Network society is an expression coined in 1991 related to the social, political, economic and cultural changes caused by the spread of networked, digital information and communications technologies. The intellectual origins of the idea can be traced back to the work of early social theorists such as Georg Simmel who analyzed the effect of modernization and industrial capitalism on complex patterns of affiliation, organization, production and experience.
Origins
The term network society was coined by Jan van Dijk in his 1991 Dutch book De Netwerkmaatschappij (The Network Society) and by Manuel Castells in The Rise of the Network Society (1996), the first part of his trilogy The Information Age. In 1978 James Martin used the related term 'The Wired Society' indicating a society that is connected by mass- and telecommunication networks.
Van Dijk defines the network society as a society in which a combination of social and media networks shapes its prime mode of organization and most important structures at all levels (individual, organizational and societal). He compares this type of society to a mass society that is shaped by groups, organizations and communities ('masses') organized in physical co-presence.
Barry Wellman, Hiltz and Turoff
Wellman studied the network society as a sociologist at the University of Toronto. His first formal work was in 1973, "The Network City" with a more comprehensive theoretical statement in 1988. Since his 1979 "The Community Question", Wellman has argued that societies at any scale are best seen as networks (and "networks of networks") rather than as bounded groups in hierarchical structures. More recently, Wellman has contributed to the theory of social network analysis with an emphasis on individualized networks, also known as "networked individualism". In his studies, Wellman focuses on three main points of the network society: community, work and organizations. He states that with recent technological advances an individual's community can be socially and spatially diversified. Organizations can also benefit from the expansion of networks in that having ties with members of different organizations can help with specific issues.
In 1978, Roxanne Hiltz and Murray Turoff's The Network Nation explicitly built on Wellman's community analysis, taking the book's title from Craven and Wellman's "The Network City". The book argued that computer supported communication could transform society. It was remarkably prescient, as it was written well before the advent of the Internet. Turoff and Hiltz were the progenitors of an early computer supported communication system, called EIES.
Manuel Castells
According to Castells, networks constitute the new social morphology of our societies. When interviewed by Harry Kreisler from the University of California Berkeley, Castells said "...the definition, if you wish, in concrete terms of a network society is a society where the key social structures and activities are organized around electronically processed information networks. So it's not just about networks or social networks, because social networks have been very old forms of social organization. It's about social networks which process and manage information and are using micro-electronic based technologies." The diffusion of a networking logic substantially modifies the operation and outcomes in processes of production, experience, power, and culture. For Castells, networks have become the basic units of modern society. Van Dijk does not go that far; for him these units still are individuals, groups, organizations and communities, though they may increasingly be linked by networks.
The network society goes further than the information society that is often proclaimed. Castells argues that it is not purely the technology that defines modern societies, but also cultural, economic and political factors that make up the network society. Influences such as religion, cultural upbringing, political organizations, and social status all shape the network society. Societies are shaped by these factors in many ways. These influences can either raise or hinder these societies. For van Dijk, information forms the substance of contemporary society, while networks shape the organizational forms and infrastructures of this society.
The space of flows plays a central role in Castells' vision of the network society. It is a network of communications, defined by hubs where these networks crisscross. Élites in cities are not attached to a particular locality but to the space of flows.
Castells puts great importance on the networks and argues that the real power is to be found within the networks rather than confined in global cities. This contrasts with other theorists who rank cities hierarchically.
Jan van Dijk
Van Dijk has defined the idea "network society" as a form of society increasingly organizing its relationships in media networks gradually replacing or complementing the social networks of face-to-face communication. Personal and social-network communication is supported by digital technology. This means that social and media networks are shaping the prime mode of organization and most important structures of modern society.
Van Dijk's The Network Society describes what the network society is and what it might be like in the future. The first conclusion of this book is that modern society is in a process of becoming a network society. This means that on the internet interpersonal, organizational, and mass communication come together. People become linked to one another and have access to information and communication with one another constantly. Using the internet brings the “whole world” into homes and work places. Also, when media like the internet becomes even more advanced it will gradually appear as “normal media” in the first decade of the 21st century as it becomes used by larger sections of the population and by vested interests in the economy, politics and culture. It asserts that paper means of communication will become out of date, with newspapers and letters becoming ancient forms for spreading information.
Interaction with new media
New media are “media which are both integrated and interactive and also use digital code at the turn of the 20th and 21st centuries.”
In western societies, the individual linked by networks is becoming the basic unit of the network society. In eastern societies, this might still be the group (family, community, work team) linked by networks. In the contemporary process of individualisation, the basic unit of the network society has become the individual who is linked by networks. This is caused by simultaneous scale extension (nationalisation and internationalisation) and scale reduction (smaller living and working environments). Other kinds of communities arise. Daily living and working environments are getting smaller and more heterogeneous, while the range of the division of labour, interpersonal communications and mass media extends. So, the scale of the network society is both extended and reduced as compared to the mass society. The scope of the network society is both global and local, sometimes indicated as "glocal". The organization of its components (individuals, groups, organizations) is no longer tied to particular times and places. Aided by information and communication technology, these coordinates of existence can be transcended to create virtual times and places and to simultaneously act, perceive and think in global and local terms.
There is an explosion of horizontal networks of communication, quite independent from media business and governments, that allows the emergence of what can be called self-directed mass communication. It is mass communication because it is diffused throughout the Internet, so it potentially reaches the whole planet. It is self-directed because it is often initiated by individuals or groups by themselves, bypassing the media system. The explosion of blogs, vlogs, podding, streaming and other forms of interactive, computer-to-computer communication sets up a new system of global, horizontal communication networks that, for the first time in history, allow people to communicate with each other without going through the channels set up by the institutions of society for socialized communication.
What results from this evolution is that the culture of the network society is largely shaped by the messages exchanged in the composite electronic hypertext made by the technologically linked networks of different communication modes. In the network society, virtuality is the foundation of reality through the new forms of socialized communication. Society shapes technology according to the needs, values and interests of people who use the technology. Furthermore, information and communication technologies are particularly sensitive to the effects of social uses on technology itself. The history of the internet provides ample evidence that the users, particularly the first thousands of users, were, to a large extent, the producers of the technology. However, technology is a necessary, albeit not sufficient condition for the emergence of a new form of social organization based on networking, that is on the diffusion of networking in all realms of activity on the basis of digital communication networks.
Modern Examples
The concepts described by Jan van Dijk, Barry Wellman, Hiltz and Turoff, and Manuel Castells are embodied in much digital technology. Social networking sites such as Facebook and Twitter, instant messaging and email are prime examples of the network society at work. These web services allow people all over the world to communicate through digital means without face-to-face contact. This illustrates how a changing society reshapes whom we communicate with, and how, over time.
The network society is not confined to any one place and has reached a global scale. It has developed within modern society, allowing a great deal of information to be exchanged and thereby helping to improve information and communication technologies. The luxury of easier communication also has consequences: it enables globalization, as more and more people join the online society and learn new techniques through the World Wide Web. Users with internet access benefit by staying connected at all times to any topic they want, while individuals without internet access may be affected because they are not directly connected to this society. Such people can still find public spaces that provide computers with internet access, which allows them to keep up with the ever-changing system. The network society is constantly reshaping "cultural production in a hyper-connected world."
Social structures revolve around the relationships of "production/consumption, power, and experience." Together these create a culture that sustains itself by constantly taking in new information. The earlier mass-media system was a more general source of information; the current system is more individualized and customized for users, making the internet more personal. Messages sent into society can reach their audiences more inclusively, and more sources can be drawn on to improve communication. The network society is seen as a global system that supports globalization, which benefits people who have access to the internet and its media, while people without access are left out of this sense of the network society. These now-digitized networks connect people more efficiently: everything we know can be put into a computer and processed, and users post messages online for others to read and learn from, allowing knowledge to spread faster and more efficiently. The networked society lets people connect with one another more quickly and engage more actively. These networks move away from having a central theme, yet each retains a focus on what it is there to accomplish.
See also
References
External links
The Network Society on Googlebooks
Interview with Manuel Castells
Sociological terminology
Information society
Information Age
Information economy
Hyperreality
"Technology"
] | 2,269 | [
"Information Age",
"Information society",
"Science and technology studies",
"Computing and society",
"Hyperreality"
] |
316,678 | https://en.wikipedia.org/wiki/Gated%20community | A gated community (or walled community) is a form of residential community or housing estate containing strictly controlled entrances for pedestrians, bicycles, and automobiles, and often characterized by a closed perimeter of walls and fences. Historically, cities have built defensive city walls and controlled gates to protect their inhabitants, and such fortifications have also separated quarters of some cities. Today, gated communities usually consist of small residential streets and include various shared amenities. For smaller communities, these amenities may include only a park or other common area. For larger communities, it may be possible for residents to stay within the community for most daily activities. Gated communities are a type of common interest development, but are distinct from intentional communities.
For socio-historical reasons, in the developed world they exist primarily in the United States.
Given that gated communities are spatially a type of enclave, Setha M. Low, an anthropologist, has argued that they have a negative effect on the net social capital of the broader community outside the gated community. Some gated communities, usually called "guard-gated communities", are staffed by private security guards and are often home to high-value properties, and/or are set up as retirement villages.
Features
Besides the services of gatekeepers, many gated communities provide other amenities. These may depend on a number of factors including geographical location, demographic composition, community structure, and community fees collected. When there are sub-associations that belong to master associations, the master association may provide many of the amenities. In general, the larger the association the more amenities that can be provided.
Amenities also depend on the type of housing. For example, single-family-home communities may not have a common-area swimming pool, since individual home-owners have the ability to construct their own private pools. A condominium, on the other hand, may offer a community pool, since the individual units do not have the option of a private pool installation.
Typical amenities offered can include one or more:
Swimming pools
Bowling alleys
Tennis courts
Community centres or clubhouses
Golf courses
Marina
On-site dining
Playgrounds
Exercise rooms including workout machines
Spa
Coworking spaces
Around the world
In Brazil, the most widespread form of gated community is called "" (closed housing estate) and is the object of desire of the upper classes. Such a place is a small town with its own infrastructure (reserve power supply, sanitation, and security guards). The purpose of such a community is to protect its residents from exterior violence. The same philosophy is seen on closed buildings and most shopping centres (many of them can only be accessed from inside the parking lot or the garage).
In Mexico, the most common form of a gated community is called privada, fraccionamiento, or condominio, in which privadas and fraccionamientos are mostly composed of individual, single-family houses grouped, these may vary on size and shape, sometimes, houses will be individually developed and built while in others (primarily in fraccionamientos) houses are of the same design and shape. In fraccionamientos, these houses may have access to amenities within the gated zone as well, such as parks, gyms, party rooms, pools and maintenance by the residential's administration. Condominios are commonly similar to fraccionamientos and privadas, but applied in a scheme of apartment buildings.
In Pakistan, gated communities are located in big as well as small cities and are considered the standard of high quality living. Defence Housing Authority and Bahria Town are major private gated community developers and administrators and one of the largest in the world. The assets of Bahria Town itself are worth $30 billion. Most gated communities in Pakistan have public parks, schools, hospitals, shopping malls, gymnasiums, and country clubs.
In Argentina, they are called "barrios privados" (literal translation "private neighbourhoods") or just "countries" (which comes from a shortening of the term "country club") and are often seen as a symbol of wealth. However, gated communities enjoy dubious social prestige (many members of the middle and upper middle class regard gated community dwellers as nouveaux riches or snobs). While most gated communities have only houses, some bigger ones, such as Nordelta, have their own hospital, school, shopping mall, and more.
In post-apartheid South Africa, gated communities have mushroomed in response to high levels of violent crime. They are commonly referred to as "complexes" but also broadly classified as "security villages" (large-scale privately developed areas) or "enclosed neighbourhoods". Some of the newest neighbourhoods being developed are almost entirely composed of security villages, some with malls and few other essential services (such as hospitals). In part, property developers have adopted this response to counter squatting, which local residents fear due to associated crime, and which often results in a protracted eviction process.
They are popular in southern China and Hong Kong, where most of the new apartment compounds have 24/7 guards on duty, and for some high-end residences, facial recognition systems to grant residents and domestic workers entry into the compound. The most famous of which is Clifford Estates in Guangzhou.
In Saudi Arabia, gated communities have existed since the discovery of oil, mainly to accommodate families from Europe or North America. After threat levels increased from the late 1990s on against foreigners in general and U.S. citizens in particular, gates became armed, sometimes heavily, and all vehicles are inspected. Marksmen and Saudi Arabian National Guard armored vehicles appeared at certain times, notably after terrorist attacks in nearby areas targeting people from European or North American countries.
Gated communities are rare in continental Europe and Japan.
Criticism
Proponents of gated communities (and to a lesser degree, of cul-de-sacs) maintain that the reduction or exclusion of people who would be only passing through, or more generally, of all non-local people, makes any "stranger" much more recognisable in the closed local environment, and thus reduces crime danger. However, some have argued that, since only a very small proportion of all non-local people passing through the area are potential criminals, increased traffic should increase rather than decrease safety by having more people around whose presence could deter criminal behaviour or who could provide assistance during an incident.
Another criticism is that gated communities offer a false sense of security. Some studies indicate that safety in gated communities may be more illusion than reality and that gated communities in suburban areas of the United States have no less crime than similar non-gated neighbourhoods.
A commentary in The New York Times specifically blames the gated communities for the shooting death of Trayvon Martin as the columnist states that "gated communities churn a vicious cycle by attracting like-minded residents who seek shelter from outsiders and whose physical seclusion then worsens paranoid groupthink against outsiders."
In a paper, Vanessa Watson includes gated communities within a class of "African urban fantasies": attempts to remake African cities in the vein of Dubai or Singapore. In Watson's analysis, this kind of urban planning prizes exclusionary and self-contained spaces that limit opportunities for interaction between different classes, while worsening marginalization of the urban poor.
A study by Breetzke, Landman & Cohn (2014) investigated the effect of gated communities on individuals' risk of burglary victimization in South Africa. The results showed that gated communities are not only unable to reduce burglary but may even facilitate criminal activity. For both the gated communities and the areas surrounding them, burglary densities were found to be four times higher than those of Tshwane, and crime rates did not decrease in areas far away from the gated communities. The elevated risk of burglary was found to be consistent in both daytime and night-time. As this research reflects a negative correlation between the use of gated communities and crime prevention, the effectiveness of gated communities is in doubt.
Common economic model types
Life-style — country clubs, retirement developments.
Prestige — gates for status appeal
Physical security communities — gates for crime and traffic.
Purpose-designed communities — catering to foreigners (e.g. worker compounds in Mid-West Asia, built largely for the oil industry)
Comparison to closed cities
The closed cities of Russia are different from the gated communities.
Guard services in closed cities are free to residents (paid for by taxes).
Public transport in closed cities may pass through transit checkpoints, or passes/passports may be checked at technical stops, and the service remains available to people outside the security area. Within a gated community, at best the bus stop is opposite the checkpoint; a few communities offer free buses. Where buses do pass through gated communities, they do so by easement or goodwill.
Fares and utility payments in closed cities are often the same as (and never greater than) they would be in an ordinary city. In gated communities, service companies raise tariffs that are not legally fixed, because it is difficult for such communities either to negotiate lower terms or to dismiss the service company (in most cases the result is no quorum).
Countries
A limited number of gated communities have long been established for foreigners in various countries of the world:
The worker compounds in the Middle East, built largely for the oil industry.
The Arbor Oaks subdivision in El Monte, California, which appears in the film Back to the Future Part II as "Hilldale", is now gated because of the fans coming to see it in person. Residents are sometimes angry at fans who come by the development.
Argentina
There are many gated communities in Argentina, especially in Greater Buenos Aires, in the suburb of Pilar, 60 km N of Buenos Aires city, and in other suburban areas, such as Nordelta. Tortugas Country Club was the first gated community developed in Argentina, dating from the 1930s/1940s, but most date from the 1990s, when liberal reforms were consolidated.
Since Buenos Aires has been traditionally regarded as a socially integrated city, gated communities have been the subject of research by sociologists. Gated communities are an important way through middle and upper-class people cope with insecurity in Greater Buenos Aires.
As Mara Dicenta writes, "The story of Nordelta exposes how violent environments are enacted through whiteness and drives for elite distinction. Exemplified by Nordelta, MPCs generate profit by transforming rural into elite lands while rearticulating racial and spatial borders that make distinctions sharper, more guarded, and less porous—between centers and peripheries, grounded and flooded lands, or poachers and conservationists. MPCs originated in the U.S. and continue to circulate American imaginaries of race, segregation, and neoliberal commons worldwide. In this process, they are met with different forms of slow violence rooted in colonial and postcolonial national geographies. Furthermore, in seeking to capitalize on those racialized differences, global real estate corporations also circulate and help materialize homogenizing visions of racial formation."
Australia
Although gated communities have been rare in Australia, since the 1980s, a few have been built. The most well-known are those at Hope Island, in particular Sanctuary Cove, on the Gold Coast of Queensland. Other similar projects are being built in the area. In Victoria, the first such development is Sanctuary Lakes, in the local government area of Wyndham, about 16 km south west of Melbourne. In New South Wales, there is Macquarie Links gated community as well as Southgate Estate gated community. Many Australian gated communities are built within private golf courses.
In the ACT, the only example is Uriarra Village, based around community horse paddocks and dwellings jointly managed through strata title.
Bangladesh
Dhaka's capital development authority, RAJUK, stated in 2021 that the capital city had approximately 25 gated communities. While RAJUK claimed that these communities adhered to the global standard of keeping 60% of the land in the gated community for open spaces and common use, the Bangladesh Institute of Planners disputed this claim, advocating for fewer gated communities in the city centres.
Brazil
Brazil also has many gated communities, particularly in the metropolitan regions Rio de Janeiro and São Paulo. For example, one of São Paulo's suburbs, Tamboré, has at least 6 such compounds known as Tamboré 1, 2, 3, and so on. Each consists of generously spaced detached houses with very little to separate front gardens.
One of the first large-scale gated community projects in São Paulo city region was Barueri's Alphaville, planned and constructed during the 1970s military dictatorship when the big cities of Brazil faced steep increases of car ownership by the middle and higher-classes, rural exodus, poverty, crime, urban sprawl, and downtown decay.
Canada
Neighbourhoods with "physical" or explicit gating with security checkpoints and patrols are extremely rare, being absent in even some of Canada's richest neighbourhoods such as Bridle Path, Toronto. Furthermore, municipal planning laws in many Canadian provinces ban locked gates on public roads as a health issue since they deny emergency vehicles quick access.
A noted exception in Canada is Arbutus Ridge, an age-restricted community constructed between 1988 and 1992 on the southeastern coast of Vancouver Island.
More common in most Canadian neighbourhoods, especially the largest cities, is an implicit or symbolic gating which effectively partitions the private infrastructure and amenities of these communities from their surrounding neighbourhoods. A classic example of this is the affluent Montreal suburb of Mount Royal, which has a long fence running along its side of L'Acadie Boulevard that for all intents and purposes separates the community from the more working-class neighbourhood of Park Extension. Also, many newer suburban subdivisions employ decorative gates to give the impression of exclusivity and seclusion. Some gated communities have been planned in recent years in Greater Toronto, although they are infrequent.
China
In China, some of these compounds, like most other gated communities around the world, target the rich. Also many foreigners live in gated communities in Beijing. Often foreign companies choose the locations where their foreign employees will live, and in most cases, they pay the rent and associated costs (e.g. management fees and garden work).
Similar communities exist in Shanghai, another major Chinese city. Shanghai Links, an exclusive expatriate community enclosing a golf course and the Pudong campus of Shanghai American School, is an example. The Shanghai Links project began in 1994 with the signing by the then Prime Minister of Canada, Honourable Jean Chretien, of a Memorandum of Agreement with the Shanghai Pudong New Area Government controlled company, Huaxia Tourism Development Company. Other notable gated communities in Shanghai include Seasons Villas, a development by Hutchinson Whampoa; Thomson Golf Villas, and Green Villas.
Other gated communities in China such as those in the Daxing District of Beijing enclose the homes of rural migrants. These are intended to reduce crime and increase public order and safety, which the Chinese Communist Party-run People's Daily claims it has, by 73%. The system is controversial as it segregates migrants and the poor, with some claiming its true purpose is to keep track of migrants, but it is scheduled for implementation in Changping District also.
Ecuador
Ecuador has many gated communities, mostly in Guayaquil and Quito. In the coastal city of Guayaquil, gated communities are mostly located in Samborondón, and in Quito in the valleys surrounding the city. These are home mostly for the wealthiest. However, there is a trend -especially in Guayaquil- of houses in gated communities with moderate prices.
Egypt
Due to the population boom and increasing class segregation, gated communities have become increasingly established in Egypt since the 1970s. Greater Cairo, for example, is home to El Rehab, Dreamland, and Mountain View Egypt. There is also El Maamoura in Alexandria. They have been criticized for being a niche market that fails to address the crippling congestion problem in Cairo.
Indonesia
In Indonesia, some gated communities are luxurious (with up to 740 square metres (8000 sq ft)), and some are very affordable (with lots ranging from 40 to 120 square metres). From 2000, most of the new residential areas built by private developers are mostly composed of gated communities. Examples include the residential areas of Bumi Serpong Damai in South Tangerang, Tropicana Residence Community in Tangerang City, Telaga Golf Sawangan and Pesona Khayangan in Depok, and Sentul City in Bogor Regency. Gated communities in Indonesia still allow outsiders to use some of the facilities inside the community because there is a regulation that the social facilities in the residential development should be handed to the local government to be used by the public.
Italy
Two examples of gated community in Italy are Olgiata in Rome and the village of Roccamare, in Castiglione della Pescaia, Tuscany.
Lebanon
Because of the pollution, the lack of proper infrastructure, electrical power and green spaces in residential areas outside of Beirut, the high class society chooses to reside in gated communities for a better living environment.
Gated communities in Lebanon are mainly in the suburbs of the capital city Beirut.
Malaysia
In Malaysia, these are known as Gated and Guarded Communities and have been seeing a steady increase in popularity. Currently, according to the Town and Country Planning Department, there are four types of gated communities in Malaysia, namely:
Elite community: this type of gated community is primarily occupied by the upper-class or high-income group of people. It focuses on exclusion and status in which security is one of the major concerns due to the resident's status within the community.
Lifestyle community: the lifestyle community generally consists of retirement communities, leisure communities and suburban 'new towns'. Activities inside these communities can include golf courses, horseback riding and residents-oriented leisure activities.
Security zone community: the security zone community is the most popular type of gated community in which it offers a housing development that is surrounded by fences or gates. This development is normally provided with guard services.
Security zone community and lifestyle: this type of gated community housing development is usually developed within a city centre. It focuses on both the security aspect and the provision of lifestyle facilities for its residents.
The gated community is a concept that emerged in response to the rise of safety and security issues, and offers more advantages in terms of a calm environment and enhanced safety that is ideal for family development.
Mauritius
Several gated communities now exist on the island of Mauritius since the government introduced Integrated Resort Schemes (IRS) and Real Estate Schemes (RES) in the mid-1990s. Recently they have been amalgamated into Property Development Schemes (PDS). The government also introduced Smart City permits and in 2016, with the assistance of Saudi Arabia, launched its own version of a gated Smart City, known as the Heritage City Project, proposed at Minissy in the region of Ebène.
Mexico
Gated communities in Mexico are a result of the huge income gap existing in the country. A 2008 study found that the average income in an urban area of Mexico was $26,654, a rate higher than advanced regions like Spain or Greece while the average income in rural areas (sometimes just a short distance away) was only $8,403. This close a proximity of wealth and poverty has created a large security risk for Mexico's middle class. Gated communities can be found in virtually every medium and large-sized city in Mexico with the largest found in major cities, such as Monterrey, Mexico City or Guadalajara.
Luxury or "status" gated communities are very popular with middle to high income residents in Mexico. Gated luxury communities in Mexico are considerably cheaper than in countries such as the United States while retaining houses of similar size and quality due to the commonness of the communities and the lower cost to build them and are priced lower to attract middle class residents.
Many gated communities in Mexico have fully independent and self-contained infrastructure, such as schools, water and power facilities, security and fire forces, and medical facilities. Some of the larger gated communities even retain their own school districts and police departments. The Interlomas area of Mexico City contains hundreds of gated communities and is the largest concentration of gated communities in the world. The surrounding areas of Santa Fe, Bosques-Lomas, and Interlomas-Bosque Real are also made up predominantly of gated communities and span over 30% of Greater Mexico City.
Many smaller gated communities in Mexico are not officially classified as separate gated communities as many municipal rules prohibit closed off roads. Most of these small neighbourhoods cater to lower middle income residents and offer a close perimeter and check points similar to an "authentic" gated community. This situation is tolerated and sometimes even promoted by some city governments due to the lack of capacity to provide reliable and trusted security forces.
New Zealand
In New Zealand, gated communities have been developed in suburban areas of the main cities since the 1980s and 1990s.
Nigeria
Gated communities are widespread in Nigeria, where they are often referred to as housing estates or villas. These estates can cover between 50 and 100 hectares, with one, the Gwarimpa estate, spanning more than a thousand hectares. Additionally, some of them offer amenities like schools, daycares, gyms, playgrounds, supermarkets and parks. In urban centers like Lagos, Lagos Island, and Abuja, a number of estates are luxury estates catering exclusively to Nigeria's upper class, including politicians, government officials, CEOs, and celebrities. In contrast, there are also more affordable estates which target Nigeria's middle class.
Gated communities appeal to Nigerians primarily because of the exclusivity and sense of security they provide. Many estates have tightly controlled entrances, ensuring that unauthorized or unfamiliar individuals cannot gain access.
Another form of gated community common in Nigeria is the quarters, typically developed and reserved for employees of a particular organization and their families. For example, many Nigerian universities, including the University of Ibadan have staff quarters. These staff quarters are usually located within or near the organization's premises.
Pakistan
Pakistan has a very large number of gated communities, most of which target middle class Pakistanis. The largest being Bahria Town, which is also the largest in Asia and has communities in major cities. Defence Housing Authority is also a major developer of gated communities.
Others include WAPDA Town, Gulberg, Islamabad and Schon Properties, while Emaar Properties also maintains several gated communities in the country targeting primarily upper-class people. Gated communities in Pakistan are mostly immune from the problems of law enforcement and lack of energy faced by the majority of other housing societies. In a short time, property prices in such communities have greatly increased – in 2007 a 20-square-meter house in Bahria Town, Lahore cost around four million Pakistani rupees ($40,000); a similar property in 2012 cost nine million rupees, while houses are priced around 100-300 million rupees. Bahria Town in Karachi is currently constructing the Rafi Cricket Stadium, which when completed will be the largest stadium in the country, and the Grand Jamia Mosque, which when completed will be the largest in South Asia and third-largest in the world.
Peru
Lima, Peru has several gated communities, especially in the wealthy districts of La Molina and Santiago de Surco. They are home to many prominent Peruvians.
Philippines
The Philippines has a large number of gated communities which are known in Philippine English as "subdivisions" or "villages". Gated communities represent one of the main residency types for upper and middle class Filipinos up and down the country, along with condominiums. Regardless of their names, such communities may either form part of a larger barangay (village), or constitute a single barangay in and of themselves. Gated communities are often grouped by the phase of build, or by project number, and homes within these communities are designated with a lot and block number on a street as opposed to conventional house numbers or building names. Gated communities are divided between two types:
Executive subdivisions/villages: Along with condominiums, these are the residential neighbourhoods of the wealthiest in Philippine society. Examples of these include Bel-Air, Greenwoods Executive Villages, Magallanes village, Dasmarinas Village and Ayala Alabang village.
Subdivisions/villages: Gated communities targeted to the Philippine middle classes. These may begin with a name and end with a number E.g. "Kalayaan Village 3", the 3 designating that it is the third subdivision project for the land development company under the "Kalayaan" homes project. Examples of subdivisions include BF Homes, Camella Homes, among a majority of wealthier suburban areas.
Russia
The Aerobus residential complex is de facto the first gated community in Moscow. A business centre is part of the complex but lies outside the residential area.
Potapovo (now part of New Moscow, originally developed together with North Butovo for members of the Russian Academy of Sciences) was known for fencing off the Kommunarka–Butovo road between 1994 and 2000, so that Moscow had to build another road. The service company holds a bus licence, and a uniformed guard always rides the bus (unusual even under strict restricted access). A trade centre later opened, but buses are allowed neither to cut the corner nor to use both entrances from the northbound side. The fence lowers the effectiveness of bus routes and of a direct or loop extension of line 12. The area was set to introduce buses on demand, but passengers were required to walk outside the gates.
Ozero is a cooperative formed around Vladimir Putin's dacha on Lake Komsomolskoe, Priozersky District, Leningrad Oblast.
Saudi Arabia
In Saudi Arabia, many expatriate workers are required to live in company-provided housing. After the 2003 attack on Al Hambra, Jadawel, Siyanco and Vinell by militant Saudi dissidents, the government established tight military security for those compounds with large western populations. Many western individuals also reside in the many other gated compounds or non-gated villas and apartments in the cities that they work. Saudi Aramco provides a compound in Dhahran which is one of the largest of its kind within Saudi Arabia. Gated communities are also popular with many Saudis, which accounts for the limited availability of open villas in these communities and the premium rent paid for that housing. These compounds can be found in many of Saudi Arabia's cities, including but not limited to Abha, Dhahran, Riyadh, and Taif.
Singapore
Sentosa Cove is the only gated residential community in Singapore, containing 350 bungalows on a 99-year leasehold term. It is the only landed housing that can be purchased by non-citizens. Despite this, as of 2021, the properties transact at a lower price than comparable areas on the mainland.
South Africa
South Africa has an increasing number of gated communities, where the wealthy sometimes live in close proximity to the urban poor (yet with little contact between the two).
Thailand
Many housing estates in Thailand take the form of gated communities, targeted to the upper and middle classes. They may be managed by the development company or by resident committees.
Gated communities are often referred to as mubans in Thailand.
Turkey
Turkey has several gated communities, especially in Istanbul and Ankara. Called "site" in Turkish, they are mostly located around the edge of the city.
United Arab Emirates
In the United Arab Emirates, gated communities have exploded in popularity, particularly in Dubai, where the 2002 decision to allow foreigners to own freehold properties has resulted in the construction of numerous such communities built along various themes. Examples include The Lakes, Springs, Meadows, and Arabian Ranches.
United Kingdom
In the United Kingdom, gated communities are relatively rare. In 2004 there were an estimated 1,000 such communities in England (i.e. not including Scotland, Wales and Northern Ireland). As of 2002, the majority of these communities were found to be in the South East and most were small developments; only four local authorities had one or more gated communities with over 300 dwellings. They generally consisted of a gated street of up to 60 or 100 houses, or a single block of flats.
The bulk of gated communities in the United Kingdom have been constructed by private developers, with smaller proportions built by social landlords and through public-private partnerships. Research has found that they typically exist within surrounding areas that are also highly affluent, if not necessarily gated. It has been noted that as far as buyer motivation is concerned, issues of security, exclusivity and prestige were often subsumed by a desire to obtain property that would maintain its value. Although the appeal of these communities is said to be mainly to "young affluent singles" and older couples, this varies significantly according to location and the availability of accommodation types. Unlike in America, marketing materials rarely refer to any community aspect of living in a gated development.
There is "considerable diversity in terms of the built form" of gated communities. In London, many new-build and converted spaces that have been turned into gated communities consist largely of flats and are therefore unlikely to be suitable for families. The redevelopment of these inner city sites for "secure, upscale, condominium-style housing in central London" was partly fuelled by a cultural shift among young professionals towards "loft-living" in the city centre. Examples from London include the Docklands developments of New Caledonian Wharf, Kings and Queen Wharf and Pan Peninsula, and East London locations like the Bow Quarter in Bow.
Outside London, gated communities tend to be more spacious. The most prestigious gated communities, or "private estates", are generally seen as being those around the fringes of London; these include the large estates at St George's Hill and Wentworth, both of which are in Surrey. In general, there is a heavy concentration around Surrey's Cobham-Esher-Weybridge triangle – examples are Burwood Park and Kingswood Warren, while Blackhills and Clare Hill represent smaller competitors with somewhat lower property values. In suburban Kent, Keston Park and Farnborough Park are most notable.
United States
The earliest American gated communities date to the 1850s, though it was in the early 1900s when they first began to proliferate. Most gated communities are today located in the Sun Belt region in the South and West, where there is more land available for development. They are usually developed privately and for this reason it is difficult to determine the total number. Although they are often unincorporated, there are numerous incorporated gated cities in Southern California, namely Bradbury, Canyon Lake, Hidden Hills, Laguna Woods, and Rolling Hills.
In 2002, USA Today reported that approximately 40% of new builds in California were behind walls. By 1997, an estimated 20,000 gated communities had been built across the country. That year, estimates of the number of people in gated communities ranged from 4 million in 30,000 communities up to around 8 million. This growth continued and by 2009 figures from the American Housing Survey indicated that the number of people living in gated communities had risen to 11 million households. Setha Low, a psychologist who has studied gated communities and their residents, suggested this was likely an underestimate.
The notion that gated communities in the United States are "bastions of affluence" has been challenged by academic research. A 2005 study published in the Journal of Planning Education and Research revealed they often contain both owner-occupied and renter-occupied properties and that socioeconomic status and income levels can vary significantly across communities. Criminologists Nicholas Branic and Charis Kubrin have argued that the "affluent, high socioeconomic status gated communities" of popular imagination in fact represent just one type of gated community.
Ed Blakely distinguishes between "prestige communities" of the rich and socially distinguished and "security zones", in which safety is the main goal. Nevertheless, much of the literature points to gated communities as contributing to greater social seclusion and segregation. Common motivations for moving into them are fear of crime and a desire to escape demographic change — gated communities are generally ethnically homogenous. Alternatively, a 2015 Urban Studies article argued that while gated communities in the southwestern United States entrenched social segregation, no effect was present for racial segregation, as the streets they bordered tended to already be segregated on a racial level.
Uruguay
Gated communities in Uruguay are known locally as 'barrios privados' or private neighbourhoods. There is no official data on how widespread gated communities in Uruguay are, but by 2013 at least 20 were known to exist in the Maldonado Department, seven in the Canelones Department, seven in Rocha, one in Soriano, and two in Rio Negro. Gated communities had experienced "huge growth" by 2020, and were a popular choice for foreigners with families, although rental returns are reportedly low and the properties will not necessarily appreciate in value. While socially exclusive areas exist in the capital Montevideo, such as the Carrasco barrio, the closing of streets is not permitted and attempts to build gated communities within the city limits have been refused.
Contrary to what might be assumed, research has found that gated communities in this country do not increase residential segregation as they select for affluent people who already experience a high degree of segregation. These residents tend to be a very homogenous group in age, family stage, and social class. La Tahona and Carmel are two of the most prominent gated communities located just outside Montevideo in the Canelones Department and nearby to the Zonamerica business park and Carrasco International Airport. La Tahona and Carmel both adjoin Camino de los Horneros, a road immediately north of Ciudad de la Costa that connects to multiple gated communities with 2000 homes in total, though these numbers are expected to eventually treble. Numerous gated communities have also been constructed in Punta Del Este.
Demand for gated communities in Uruguay has increased in recent years due to the growing availability of remote working, and amenities like swimming pools can be found more often than in Montevideo. They have also become more accessible to the middle class as pre-designed new builds have become common, as opposed to the older model of requiring individual owners to buy a plot and then hire their own builder. The resulting economies of scale have helped to keep average prices lower than they otherwise would be.
In popular media
In comics
Early in The Walking Dead comic book series, Rick Grimes attempts to settle in a gated condominium neighbourhood. This move proves disastrous as one of the group members is killed after the survivors realize the neighbourhood is infested with zombies.
In film
John Duigan's Lawn Dogs (1997), starring Mischa Barton and Sam Rockwell, follows a young girl from a gated community who befriends a landscape worker, and examines the societal repercussions of their friendship.
Mexican film La Zona, directed by Rodrigo Plá, talks about a gated community invaded by a group of very young and poor children.
In the ABC Family movie Picture This starring Ashley Tisdale, Drew lives in a gated community called Camelot.
In the Spanish movie Secuestrados, kidnappers take the daughter of a family living in an allegedly secure gated community.
In the 2007 animated movie Alvin and the Chipmunks, the chipmunks and their guardian Dave Seville live in a gated community.
In the 2022 film Cheaper by the Dozen, the Baker family relocated from Los Angeles to a gated community in Calabasas, California, where they moved into a larger house to accommodate their large family. By the end of the film, the Baker family move back to Los Angeles.
In games
Grand Theft Auto IV has a gated community called Beachgate, a fictional rendition of Sea Gate, Brooklyn. It includes the home of early antagonist Mikhail Faustin.
In literature
(Alphabetical by author's last name)
In Margaret Atwood's novel Oryx and Crake, the characters Snowman and Crake live and work in corporate-owned gated communities known as compounds.
J.G. Ballard has examined the phenomenon in his novella Running Wild (1988) and in his novel Super-Cannes (2000).
T.C. Boyle's novel The Tortilla Curtain (1995) is set in and near a gated community in California.
The novel Parable of the Sower, by Octavia Butler, takes place in a world where much of civilization lives within gated communities.
In the novel I Will Fear No Evil by Robert A. Heinlein, the wealthiest citizens shelter from urban poverty inside fortress-like guarded gated communities.
Ira Levin's novel The Stepford Wives (1972) takes place inside an idyllic gated community that secretly enslaves its female members to conform to the standards of the men.
In the Percy Jackson & the Olympians and The Heroes of Olympus series by Rick Riordan, Elysium is depicted as a gated community from which one could hear laughter and smell barbecue cooking. Besides having the Isle of the Blessed in the center of a large lake there, Elysium has neighbourhoods of beautiful houses from every time period in history such as Roman villas, medieval castles, and Victorian mansions. It also has flowers of silver and gold blooming on the lawns and grass rippling in rainbow colors.
In Mexico, Fernanda Melchor's third novel Paradais tells the crime committed by two teenagers inside a Mexican gated community of Veracruz, hometown of Melchor.
In Argentina, Claudia Piñeiro's Las viudas de los jueves (Thursday Widows) became a local bestseller after winning the 2005 edition of El Clarín newspaper book award. The novel depicts life of dwellers of a gated community, among them, families who enjoyed high incomes now facing economic hardships.
In Egypt, Ahmed Khaled Towfik's novel Utopia takes readers on a chilling journey beyond the gated communities of the North Coast where the wealthy are insulated from the bleakness of life outside the walls.
In the novel Snow Crash (1992) by Neal Stephenson, gated communities have evolved into "burbclaves" (suburban enclaves) which are effectively sovereign city-states.
In songs
In "Versace", Drake says: "This is a gated community please get the fuck off the property"–referring to the gated community of Hidden Hills, California.
In television
In the Season Six episode of The X-Files entitled "Arcadia", Mulder and Scully investigate disappearances within a gated community that seems to be harboring a terrible secret involving a Tulpa.
In the cartoon As Told By Ginger, the Griplings reside in Protective Pines.
In the SpongeBob SquarePants episode "Squidville", Squidward Tentacles temporarily moves to a gated community of squids to get away from SpongeBob SquarePants and Patrick Star.
The Bravo network reality television series The Real Housewives of Orange County was initially set primarily in the gated community of Coto de Caza, California ("Coto") and followed the lavish lifestyles of "housewives" and their families who resided within Coto. One housewife, Lauri Waring, who lived in a Ladera Ranch townhouse, was the exception to the rule, used as a foil to the extravagant lifestyle of the other four housewives, who lived "behind the Coto gates".
Most of The Starter Wife miniseries is set within the gated community where the main character's friend lives. She even becomes friends with the security guard at the front gate.
The pilot episode of the 2002 revival of The Twilight Zone titled "Evergreen" deals with a gated community called Evergreen Estates which has a sinister way of dealing with nonconformity that involves rebellious children committing those acts being taken to "Arcadia Fertilizer Company" where they are executed and made into red fertilizer for the small trees in front of the houses there.
In one episode of VeggieTales (Sherlock Holmes and the Golden Ruler), Larry sings a song about gated communities in Silly Songs with Larry.
In seasons 1-3 of Weeds, Nancy Botwin and her family inhabit the gated community of Agrestic.
In The Neighbors, a 2012 TV series in the United States, a family has relocated to a gated townhouse community called "Hidden Hills" in New Jersey only they discover that the entire community is populated by residents from another planet who identify themselves by the names of sport celebrities, patrol the community in golf carts, receive nourishment through their eyes and mind by reading books rather than eating, and cry green goo from their ears.
In Safe, a Netflix original series, it tells the mystery in and around a gated community.
In the short-lived TV series The Gates, Nick Monohan and his family move from Chicago to a quiet, upscale planned community called The Gates where he will be its chief of police. They soon realize that their neighbors are not who they seem to be. The Gates is filled with such beings as vampires, witches, werewolves, and a succubus.
See also
Age-restricted community
Barracks
Castle
Closed city
Closed community
Condominium
Defensive wall
Fort
Homeowners association
Peace lines
Planned unit development
Private community
References
Further reading
Arizaga, Maria Cecilia: El Mito de comunidad en la Ciudad Mundializada.
Arizaga, Maria Cecilia: Murallas y barrios cerrados, La morfología espacial del ajuste en Buenos Aires. Nueva Sociedad, 166, 2000
Blakely, Edward J. and Mary Gail Snyder; Fortress America: Gated Communities in the United States; Brookings Institution Press, New Ed edition (15 June 1999);
Gasior-Niemiec, Anna; Glasze, Georg and Pütz, Robert (2009): A Glimpse over the Rising Walls: The Reflection of Post-Communist Transformation in the Polish Discourse of Gated Communities. In: East European Politics & Societies 23 (2009) 2: 244–265.
Glasze, Georg, Chris Webster and Klaus Frantz (2006): "Introduction: global and local perspectives on the rise of private neighbourhoods". In: Georg Glasze, Chris Webster and Klaus Frantz (Eds.): Private Cities: Global and Local Perspectives. Routledge. London und New York: 1–8.
Glasze, Georg (2003): Private Neighbourhoods as Club Economies and Shareholder Democracies. – In: BelGeo 1/2003 Theme Issue "Privatization of Urban Spaces in Contemporary European Cities": 87-98
Libertun de Duren, Nora. "Planning à la Carte: The location patterns of gated communities around Buenos Aires in a decentralized planning context". International Journal of Urban and Regional Research 30.2 (2006): 308–327.
Low, Setha M: Behind the Gates: Life, Security and the Pursuit of Happiness in Fortress America. Routledge: New York and London: 2003.
Webster, Chris, Georg Glasze und Klaus Frantz (2002): "The global spread of gated communities". In: Environment and Planning B 29 (2002) 3: 315–320
Davis, Mike : City of Quartz, 1990.
External links
Built Metaphors: Gated Communities and Fiction, by Stéphane Degoutin and Gwenola Wagon
Gated communities as an urban pathology?, by Renaud Le Goix
Land Use and Design Innovations in Private Communities
China's Transition at a Turning Point
Forbes: Most Expensive Gated Communities In America 2004
Fortress Bulgaria: gated communities
Access control
Types of communities
Security engineering
Human habitats | Gated community | [
"Engineering"
] | 9,026 | [
"Systems engineering",
"Security engineering"
] |
316,737 | https://en.wikipedia.org/wiki/Autoignition%20temperature | The autoignition temperature or self-ignition temperature, often called spontaneous ignition temperature or minimum ignition temperature (or simply ignition temperature) and formerly also known as kindling point, of a substance is the lowest temperature at which it spontaneously ignites in a normal atmosphere without an external source of ignition, such as a flame or spark. This temperature is required to supply the activation energy needed for combustion. The temperature at which a chemical ignites decreases as the pressure is increased.
Substances which spontaneously ignite in a normal atmosphere at naturally ambient temperatures are termed pyrophoric.
Autoignition temperatures of liquid chemicals are typically measured using a flask placed in a temperature-controlled oven in accordance with the procedure described in ASTM E659.
For plastics, the autoignition temperature can also be measured under elevated pressure and at 100% oxygen concentration. The resulting value is used as a predictor of viability for high-oxygen service. The main testing standard for this is ASTM G72.
Autoignition time equation
The time it takes for a material to reach its autoignition temperature when exposed to a heat flux is given by the following equation:
where k = thermal conductivity, ρ = density, and c = specific heat capacity of the material of interest, and $T_0$ is the initial temperature of the material (or the temperature of the bulk material).
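A commonly used form of this relation, for a thermally thick solid exposed to a constant net heat flux $q''$ (given here as a representative reconstruction rather than the exact expression intended above), is

$$t_{\mathrm{ig}} = \frac{\pi}{4}\, k \rho c \left( \frac{T_{\mathrm{ig}} - T_0}{q''} \right)^{2},$$

where $T_{\mathrm{ig}}$ is the autoignition temperature of the material and $q''$ is the imposed heat flux.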
Autoignition temperature of selected substances
Temperatures vary widely in the literature and should only be used as estimates. Factors that may cause variation include partial pressure of oxygen, altitude, humidity, and amount of time required for ignition. Generally the autoignition temperature for hydrocarbon/air mixtures decreases with increasing molecular mass and increasing chain length. The autoignition temperature is also higher for branched-chain hydrocarbons than for straight-chain hydrocarbons.
See also
Fire point
Flash point
Gas burner (for flame temperatures, combustion heat energy values and ignition temperatures)
Spontaneous combustion
References
External links
Analysis of Effective Thermal Properties of Thermally Thick Materials.
Chemical properties
Fire
Threshold temperatures
sv:Självantändning#Självantändningspunkt | Autoignition temperature | [
"Physics",
"Chemistry"
] | 427 | [
"Physical phenomena",
"Phase transitions",
"Threshold temperatures",
"Combustion",
"nan",
"Fire"
] |
316,760 | https://en.wikipedia.org/wiki/Polymer-bonded%20explosive | Polymer-bonded explosives, also called PBX or plastic-bonded explosives, are explosive materials in which explosive powder is bound together in a matrix using small quantities (typically 5–10% by weight) of a synthetic polymer. PBXs are normally used for explosive materials that are not easily melted into a casting, or are otherwise difficult to form.
PBX was first developed in 1952 at Los Alamos National Laboratory, as RDX embedded in polystyrene with diisooctyl phthalate (DEHP) plasticizer. HMX compositions with Teflon-based binders were developed in the 1960s and 1970s for gun shells and for Apollo Lunar Surface Experiments Package (ALSEP) seismic experiments, although the latter experiments are usually cited as using hexanitrostilbene (HNS).
Potential advantages
Polymer-bonded explosives have several potential advantages:
If the polymer matrix is an elastomer (rubbery material), it tends to absorb shocks, making the PBX very insensitive to accidental detonation, and thus ideal for insensitive munitions.
Hard polymers can produce PBX that is very rigid and maintains a precise engineering shape even under severe stress.
PBX powders can be pressed into a desired shape at room temperature; casting normally requires hazardous melting of the explosive. High pressure pressing can achieve density for the material very close to the theoretical crystal density of the base explosive material.
Many PBXes are safe to machine; turning solid blocks into complex three-dimensional shapes. For example, a billet of PBX can be precisely shaped on a lathe or CNC machine. This technique is used to machine explosive lenses necessary for modern nuclear weapons.
Binders
Fluoropolymers
Fluoropolymers are advantageous as binders due to their high density (yielding high detonation velocity) and inert chemical behavior (yielding long shelf stability and low aging). They are somewhat brittle, as their glass transition temperature is at room temperature or above. This limits their use to insensitive explosives (e.g. TATB) where the brittleness does not have detrimental effects on safety. They are also difficult to process.
Elastomers
Elastomers have to be used with more mechanically sensitive explosives like HMX. The elasticity of the matrix lowers sensitivity of the bulk material to shock and friction; their glass transition temperature is chosen to be below the lower boundary of the temperature working range (typically below -55 °C). Crosslinked rubber polymers are however sensitive to aging, mostly by action of free radicals and by hydrolysis of the bonds by traces of water vapor. Rubbers like Estane or hydroxyl-terminated polybutadiene (HTPB) are used for these applications extensively. Silicone rubbers and thermoplastic polyurethanes are also in use.
Fluoroelastomers, e.g. Viton, combine the advantages of both.
Energetic polymers
Energetic polymers (e.g. nitro or azido derivates of polymers) can be used as a binder to increase the explosive power in comparison with inert binders. Energetic plasticizers can be also used. The addition of a plasticizer lowers the sensitivity of the explosive and improves its processibility.
Insults (potential explosive inhibitors)
Explosive yields can be affected by the introduction of mechanical loads or the application of temperature; such damages are called insults. The mechanism of a thermal insult at low temperatures on an explosive is primarily thermomechanical, at higher temperatures it is primarily thermochemical.
Thermomechanical
Thermomechanical mechanisms involve stresses by thermal expansion (namely differential thermal expansions, as thermal gradients tend to be involved), melting/freezing or sublimation/condensation of components, and phase transitions of crystals (e.g. transition of HMX from beta phase to delta phase at 175 °C involves a large change in volume and causes extensive cracking of its crystals).
Thermochemical
Thermochemical changes involve decomposition of the explosives and binders, loss of strength of binder as it softens or melts, or stiffening of the binder if the increased temperature causes crosslinking of the polymer chains. The changes can also significantly alter the porosity of the material, whether by increasing it (fracturing of crystals, vaporization of components) or decreasing it (melting of components). The size distribution of the crystals can be also altered, e.g. by Ostwald ripening. Thermochemical decomposition starts to occur at the crystal nonhomogeneities, e.g. intragranular interfaces between crystal growth zones, on damaged parts of the crystals, or on interfaces of different materials (e.g. crystal/binder). Presence of defects in crystals (cracks, voids, solvent inclusions...) may increase the explosive's sensitivity to mechanical shocks.
Some example PBXs
References
Cooper, Paul W. Explosives Engineering. New York: Wiley-VCH, 1996. .
Norris, Robert S., Hans M. Kristensen, and Joshua Handler. "The B61 family of bombs", http://thebulletin.org, The Bulletin of the Atomic Scientists, Jan/Feb 2003.
Explosives
Physical chemistry | Polymer-bonded explosive | [
"Physics",
"Chemistry"
] | 1,098 | [
"Applied and interdisciplinary physics",
"Explosives",
"nan",
"Explosions",
"Physical chemistry"
] |
316,824 | https://en.wikipedia.org/wiki/Nozzle | A nozzle is a device designed to control the direction or characteristics of a fluid flow (especially to increase velocity) as it exits (or enters) an enclosed chamber or pipe.
A nozzle is often a pipe or tube of varying cross sectional area, and it can be used to direct or modify the flow of a fluid (liquid or gas). Nozzles are frequently used to control the rate of flow, speed, direction, mass, shape, and/or the pressure of the stream that emerges from them. In a nozzle, the velocity of fluid increases at the expense of its pressure energy.
Types
Jet
A gas jet, fluid jet, or hydro jet is a nozzle intended to eject gas or fluid in a coherent stream into a surrounding medium. Gas jets are commonly found in gas stoves, ovens, or barbecues. Gas jets were commonly used for light before the development of electric light. Other types of fluid jets are found in carburetors, where smooth calibrated orifices are used to regulate the flow of fuel into an engine, and in jacuzzis or spas.
Another specialized jet is the laminar jet. This is a water jet that contains devices to smooth out the pressure and flow, and gives laminar flow, as its name suggests. This gives better results for fountains.
The foam jet is another type of jet which uses foam instead of a gas or fluid.
Nozzles used for feeding hot blast into a blast furnace or forge are called tuyeres.
Jet nozzles are also used in large rooms where the distribution of air via ceiling diffusers is not possible or not practical. Diffusers that use jet nozzles are called jet diffusers; they are arranged in the side wall areas in order to distribute air. When the temperature difference between the supply air and the room air changes, the supply air stream is deflected upwards, to supply warm air, or downwards, to supply cold air.
High velocity
Frequently, the goal of a nozzle is to increase the kinetic energy of the flowing medium at the expense of its pressure and internal energy.
Nozzles can be described as convergent (narrowing down from a wide diameter to a smaller diameter in the direction of the flow) or divergent (expanding from a smaller diameter to a larger one). A de Laval nozzle has a convergent section followed by a divergent section and is often called a convergent-divergent (CD) nozzle ("con-di nozzle").
Convergent nozzles accelerate subsonic fluids. If the nozzle pressure ratio is high enough, then the flow will reach sonic velocity at the narrowest point (i.e. the nozzle throat). In this situation, the nozzle is said to be choked.
Increasing the nozzle pressure ratio further will not increase the throat Mach number above one. Downstream (i.e. external to the nozzle) the flow is free to expand to supersonic velocities; however, Mach 1 can be a very high speed for a hot gas because the speed of sound varies as the square root of absolute temperature. This fact is used extensively in rocketry where hypersonic flows are required and where propellant mixtures are deliberately chosen to further increase the sonic speed.
Divergent nozzles slow fluids if the flow is subsonic, but they accelerate sonic or supersonic fluids.
Convergent-divergent nozzles can therefore accelerate fluids that have choked in the convergent section to supersonic speeds. This CD process is more efficient than allowing a convergent nozzle to expand supersonically externally.
The shape of the divergent section also ensures that the direction of the escaping gases is directly backwards, as any sideways component would not contribute to thrust.
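As a rough numerical illustration of these relations (a minimal sketch using textbook isentropic-flow formulas for a calorically perfect gas; the property values and operating conditions below are assumed example values for air, not figures from this article):

```python
# Isentropic converging-nozzle relations for a perfect gas (illustrative sketch).

def critical_pressure_ratio(gamma=1.4):
    """Reservoir-to-throat pressure ratio p0/p* at which the throat just reaches Mach 1."""
    return ((gamma + 1) / 2) ** (gamma / (gamma - 1))

def exit_velocity(p0, T0, p_exit, gamma=1.4, R=287.0):
    """Exit velocity in m/s for an unchoked converging nozzle (pressures in Pa, T0 in K)."""
    return (2 * gamma / (gamma - 1) * R * T0
            * (1 - (p_exit / p0) ** ((gamma - 1) / gamma))) ** 0.5

print(round(critical_pressure_ratio(), 3))        # ~1.893 for air: above this ratio the nozzle is choked
print(round(exit_velocity(2.0e5, 300.0, 1.5e5)))  # ~218 m/s, a subsonic exit flow
```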
Propelling
A jet exhaust produces thrust from the energy obtained from burning fuel. The hot gas is at a higher pressure than the outside air and escapes from the engine through a propelling nozzle, which increases the speed of the gas.
Exhaust speed needs to be faster than the aircraft speed in order to produce thrust but an excessive speed difference wastes fuel (poor propulsive efficiency). Jet engines for subsonic flight use convergent nozzles with a sonic exit velocity. Engines for supersonic flight, such as used for fighters and SST aircraft (e.g. Concorde) achieve the high exhaust speeds necessary for supersonic flight by using a divergent extension to the convergent engine nozzle which accelerates the exhaust to supersonic speeds.
Rocket motors maximise thrust and exhaust velocity by using convergent-divergent nozzles with very large area ratios and therefore extremely high pressure ratios. Mass flow is at a premium because all the propulsive mass is carried with vehicle, and very high exhaust speeds are desirable.
Magnetic
Magnetic nozzles have also been proposed for some types of propulsion, such as VASIMR, in which the flow of plasma is directed by magnetic fields instead of walls made of solid matter.
Spray
Many nozzles produce a very fine spray of liquids.
Atomizer nozzles are used for spray painting, perfumes, carburetors for internal combustion engines, spray on deodorants, antiperspirants and many other similar uses.
Air-aspirating nozzles use an opening in the cone shaped nozzle to inject air into a stream of water based foam (CAFS/AFFF/FFFP) to make the concentrate "foam up". Most commonly found on foam extinguishers and foam handlines.
Swirl nozzles inject the liquid in tangentially, and it spirals into the center and then exits through the central hole. Due to the vortexing this causes the spray to come out in a cone shape.
Vacuum
Vacuum nozzles, used in vacuum cleaners, come in several different shapes.
Shaping
Some nozzles are shaped to produce a stream that is of a particular shape. For example, extrusion molding is a way of producing lengths of metals or plastics or other materials with a particular cross-section. This nozzle is typically referred to as a die.
See also
Fire hose#Forces on fire hoses and nozzles
Rocket engine nozzle
SERN
References
External links
Fluid mechanics | Nozzle | [
"Engineering"
] | 1,286 | [
"Civil engineering",
"Fluid mechanics"
] |
316,826 | https://en.wikipedia.org/wiki/Fibration | The notion of a fibration generalizes the notion of a fiber bundle and plays an important role in algebraic topology, a branch of mathematics.
Fibrations are used, for example, in Postnikov systems or obstruction theory.
In this article, all mappings are continuous mappings between topological spaces.
Formal definitions
Homotopy lifting property
A mapping satisfies the homotopy lifting property for a space if:
for every homotopy and
for every mapping (also called lift) lifting (i.e. )
there exists a (not necessarily unique) homotopy lifting (i.e. ) with
The following commutative diagram shows the situation:
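In one standard notation (the symbols are chosen here for the restatement and may differ from those used elsewhere in this article): a map $p \colon E \to B$ has the homotopy lifting property for a space $X$ if, for every homotopy $h \colon X \times [0,1] \to B$ and every map $\tilde{h}_0 \colon X \to E$ with $p \circ \tilde{h}_0 = h(\cdot, 0)$, there exists a homotopy $\tilde{h} \colon X \times [0,1] \to E$ with

$$p \circ \tilde{h} = h \qquad \text{and} \qquad \tilde{h}(\cdot, 0) = \tilde{h}_0.$$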
Fibration
A fibration (also called Hurewicz fibration) is a mapping $p \colon E \to B$ satisfying the homotopy lifting property for all spaces $X$. The space $B$ is called the base space and the space $E$ is called the total space. The fiber over $b \in B$ is the subspace $F_b = p^{-1}(b) \subseteq E$.
Serre fibration
A Serre fibration (also called weak fibration) is a mapping satisfying the homotopy lifting property for all CW-complexes.
Every Hurewicz fibration is a Serre fibration.
Quasifibration
A mapping is called quasifibration, if for every and holds that the induced mapping is an isomorphism.
Every Serre fibration is a quasifibration.
Examples
The projection onto the first factor is a fibration. That is, trivial bundles are fibrations.
Every covering is a fibration. Specifically, for every homotopy and every lift there exists a uniquely defined lift with
Every fiber bundle satisfies the homotopy lifting property for every CW-complex.
A fiber bundle with a paracompact and Hausdorff base space satisfies the homotopy lifting property for all spaces.
An example of a fibration which is not a fiber bundle is given by the mapping induced by the inclusion where a topological space and is the space of all continuous mappings with the compact-open topology.
The Hopf fibration is a non-trivial fiber bundle and, specifically, a Serre fibration.
Basic concepts
Fiber homotopy equivalence
A mapping between total spaces of two fibrations and with the same base space is a fibration homomorphism if the following diagram commutes:
The mapping is a fiber homotopy equivalence if in addition a fibration homomorphism exists, such that the mappings and are homotopic, by fibration homomorphisms, to the identities and
Pullback fibration
Given a fibration and a mapping , the mapping is a fibration, where is the pullback and the projections of onto and yield the following commutative diagram:
The fibration is called the pullback fibration or induced fibration.
Pathspace fibration
With the pathspace construction, any continuous mapping can be extended to a fibration by enlarging its domain to a homotopy equivalent space. This fibration is called pathspace fibration.
The total space of the pathspace fibration for a continuous mapping between topological spaces consists of pairs with and paths with starting point where is the unit interval. The space carries the subspace topology of where describes the space of all mappings and carries the compact-open topology.
The pathspace fibration is given by the mapping with The fiber is also called the homotopy fiber of and consists of the pairs with and paths where and holds.
For the special case of the inclusion of the base point , an important example of the pathspace fibration emerges. The total space consists of all paths in which starts at This space is denoted by and is called path space. The pathspace fibration maps each path to its endpoint, hence the fiber consists of all closed paths. The fiber is denoted by and is called loop space.
Properties
The fibers over are homotopy equivalent for each path component of
For a homotopy the pullback fibrations and are fiber homotopy equivalent.
If the base space is contractible, then the fibration is fiber homotopy equivalent to the product fibration
The pathspace fibration of a fibration is very similar to itself. More precisely, the inclusion is a fiber homotopy equivalence.
For a fibration with fiber and contractible total space, there is a weak homotopy equivalence
Puppe sequence
For a fibration with fiber and base point the inclusion of the fiber into the homotopy fiber is a homotopy equivalence. The mapping with , where and is a path from to in the base space, is a fibration. Specifically it is the pullback fibration of the pathspace fibration along . This procedure can now be applied again to the fibration and so on. This leads to a long sequence:
The fiber of over a point consists of the pairs where is a path from to , i.e. the loop space . The inclusion of the fiber of into the homotopy fiber of is again a homotopy equivalence and iteration yields the sequence:Due to the duality of fibration and cofibration, there also exists a sequence of cofibrations. These two sequences are known as the Puppe sequences or the sequences of fibrations and cofibrations.
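In the same standard notation, the resulting sequence of fibrations can be written as

$$\cdots \longrightarrow \Omega^2 B \longrightarrow \Omega F \longrightarrow \Omega E \longrightarrow \Omega B \longrightarrow F \longrightarrow E \longrightarrow B.$$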
Principal fibration
A fibration with fiber is called principal, if there exists a commutative diagram:
The bottom row is a sequence of fibrations and the vertical mappings are weak homotopy equivalences. Principal fibrations play an important role in Postnikov towers.
Long exact sequence of homotopy groups
For a Serre fibration there exists a long exact sequence of homotopy groups. For base points and this is given by:The homomorphisms and are the induced homomorphisms of the inclusion and the projection
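In one standard notation, for a Serre fibration $p \colon E \to B$ with fiber $F = p^{-1}(b_0)$ and base points $x_0 \in F$, $b_0 = p(x_0)$, the sequence reads

$$\cdots \longrightarrow \pi_n(F, x_0) \xrightarrow{i_*} \pi_n(E, x_0) \xrightarrow{p_*} \pi_n(B, b_0) \xrightarrow{\partial} \pi_{n-1}(F, x_0) \longrightarrow \cdots \longrightarrow \pi_0(E, x_0).$$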
Hopf fibration
Hopf fibrations are a family of fiber bundles whose fiber, total space and base space are spheres. The long exact sequence of homotopy groups of the Hopf fibration yields:
This sequence splits into short exact sequences, as the fiber in is contractible to a point:This short exact sequence splits because of the suspension homomorphism and there are isomorphisms:The homotopy groups are trivial for so there exist isomorphisms between and for
Analogously, the fibers in and in are contractible to a point. Further, the short exact sequences split and there are families of isomorphisms:
and
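For reference, in standard notation the Hopf fibrations are

$$S^1 \hookrightarrow S^3 \to S^2, \qquad S^3 \hookrightarrow S^7 \to S^4, \qquad S^7 \hookrightarrow S^{15} \to S^8,$$

and the argument sketched above yields, among other things, $\pi_n(S^2) \cong \pi_n(S^3)$ for $n \geq 3$, in particular $\pi_3(S^2) \cong \mathbb{Z}$.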
Spectral sequence
Spectral sequences are important tools in algebraic topology for computing (co-)homology groups.
The Leray-Serre spectral sequence connects the (co-)homology of the total space and the fiber with the (co-)homology of the base space of a fibration. For a fibration with fiber where the base space is a path connected CW-complex, and an additive homology theory there exists a spectral sequence:
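One standard formulation (indexing conventions vary between sources) is

$$E^2_{p,q} = H_p\bigl(B; h_q(F)\bigr) \;\Longrightarrow\; h_{p+q}(E),$$

where $h_*$ denotes the additive homology theory in question.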
Fibrations do not yield long exact sequences in homology, as they do in homotopy. But under certain conditions, fibrations provide exact sequences in homology. For a fibration with fiber where base space and fiber are path connected, the fundamental group acts trivially on and in addition the conditions for and for hold, an exact sequence exists (also known under the name Serre exact sequence):This sequence can be used, for example, to prove Hurewicz's theorem or to compute the homology of loopspaces of the form
For the special case of a fibration where the base space is a -sphere with fiber there exist exact sequences (also called Wang sequences) for homology and cohomology:
Orientability
For a fibration with fiber and a fixed commutative ring with a unit, there exists a contravariant functor from the fundamental groupoid of to the category of graded -modules, which assigns to the module and to the path class the homomorphism where is a homotopy class in
A fibration is called orientable over if for any closed path in the following holds:
Euler characteristic
For an orientable fibration over the field $\mathbb{K}$ with fiber $F$ and path connected base space $B$, the Euler characteristic of the total space $E$ is given by $\chi(E) = \chi(B)\,\chi(F)$. Here the Euler characteristics of the base space and the fiber are defined over the field $\mathbb{K}$.
See also
Approximate fibration
References
Algebraic topology
Topological spaces | Fibration | [
"Mathematics"
] | 1,728 | [
"Mathematical structures",
"Algebraic topology",
"Space (mathematics)",
"Topological spaces",
"Fields of abstract algebra",
"Topology"
] |
316,837 | https://en.wikipedia.org/wiki/Incenter | In geometry, the incenter of a triangle is a triangle center, a point defined for any triangle in a way that is independent of the triangle's placement or scale. The incenter may be equivalently defined as the point where the internal angle bisectors of the triangle cross, as the point equidistant from the triangle's sides, as the junction point of the medial axis and innermost point of the grassfire transform of the triangle, and as the center point of the inscribed circle of the triangle.
Together with the centroid, circumcenter, and orthocenter, it is one of the four triangle centers known to the ancient Greeks, and the only one of the four that does not in general lie on the Euler line. It is the first listed center, X(1), in Clark Kimberling's Encyclopedia of Triangle Centers, and the identity element of the multiplicative group of triangle centers.
For polygons with more than three sides, the incenter only exists for tangential polygons: those that have an incircle that is tangent to each side of the polygon. In this case the incenter is the center of this circle and is equally distant from all sides.
Definition and construction
It is a theorem in Euclidean geometry that the three interior angle bisectors of a triangle meet in a single point. In Euclid's Elements, Proposition 4 of Book IV proves that this point is also the center of the inscribed circle of the triangle. The incircle itself may be constructed by dropping a perpendicular from the incenter to one of the sides of the triangle and drawing a circle with that segment as its radius. (Euclid's Elements, Book IV, Proposition 4: "To inscribe a circle in a given triangle." David Joyce, Clark University, retrieved 2014-10-28.)
The incenter lies at equal distances from the three line segments forming the sides of the triangle, and also from the three lines containing those segments. It is the only point equally distant from the line segments, but there are three more points equally distant from the lines, the excenters, which form the centers of the excircles of the given triangle. The incenter and excenters together form an orthocentric system.
The medial axis of a polygon is the set of points whose nearest neighbor on the polygon is not unique: these points are equidistant from two or more sides of the polygon. One method for computing medial axes is using the grassfire transform, in which one forms a continuous sequence of offset curves, each at some fixed distance from the polygon; the medial axis is traced out by the vertices of these curves. In the case of a triangle, the medial axis consists of three segments of the angle bisectors, connecting the vertices of the triangle to the incenter, which is the unique point on the innermost offset curve. The straight skeleton, defined in a similar way from a different type of offset curve, coincides with the medial axis for convex polygons and so also has its junction at the incenter.
Proofs
Ratio proof
Let the bisection of and meet at , and the bisection of and meet at , and and meet at .
And let and meet at .
Then we have to prove that is the bisection of .
In , , by the Angle bisector theorem.
In , .
Therefore, , so that .
So is the bisection of .
Perpendicular proof
Every point on an angle bisector is equidistant, as measured along perpendiculars, from the two lines forming that angle. The point where two of the bisectors intersect is therefore equidistant from all three sides of the triangle; in particular it is equidistant from the two lines forming the third angle, and so it also lies on the third angle bisector.
Relation to triangle sides and vertices
Trilinear coordinates
The trilinear coordinates for a point in the triangle give the ratio of distances to the triangle sides. Trilinear coordinates for the incenter are given by
$$1 : 1 : 1.$$
The collection of triangle centers may be given the structure of a group under coordinatewise multiplication of trilinear coordinates; in this group, the incenter forms the identity element.
Barycentric coordinates
The barycentric coordinates for a point in a triangle give weights such that the point is the weighted average of the triangle vertex positions.
Barycentric coordinates for the incenter are given by
$$a : b : c,$$
where $a$, $b$, and $c$ are the lengths of the sides of the triangle, or equivalently (using the law of sines) by
$$\sin A : \sin B : \sin C,$$
where $A$, $B$, and $C$ are the angles at the three vertices.
Cartesian coordinates
The Cartesian coordinates of the incenter are a weighted average of the coordinates of the three vertices using the side lengths of the triangle relative to the perimeter—i.e., using the barycentric coordinates given above, normalized to sum to unity—as weights. (The weights are positive so the incenter lies inside the triangle as stated above.) If the three vertices are located at $(x_a, y_a)$, $(x_b, y_b)$, and $(x_c, y_c)$, and the sides opposite these vertices have corresponding lengths $a$, $b$, and $c$, then the incenter is at
$$\left(\frac{a x_a + b x_b + c x_c}{a + b + c},\; \frac{a y_a + b y_b + c y_c}{a + b + c}\right) = \frac{a\,(x_a, y_a) + b\,(x_b, y_b) + c\,(x_c, y_c)}{a + b + c}.$$
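A short numerical check of this weighted average (an illustrative sketch; the 3-4-5 right triangle below is an example chosen for the check, not one taken from the text):

```python
# Incenter as the side-length-weighted average of the vertices.
# For a right triangle with legs 3 and 4 placed at the origin, the inradius
# is 1 and the incenter is (1, 1), which the formula reproduces.
from math import dist

A, B, C = (0.0, 3.0), (4.0, 0.0), (0.0, 0.0)
a = dist(B, C)   # length of the side opposite vertex A
b = dist(C, A)   # length of the side opposite vertex B
c = dist(A, B)   # length of the side opposite vertex C

s = a + b + c
incenter = ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)
print(incenter)  # (1.0, 1.0)
```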
Distances to vertices
Denoting the incenter of triangle ABC as I, the distances from the incenter to the vertices combined with the lengths of the triangle sides obey the equation
$$\frac{IA^2}{CA \cdot AB} + \frac{IB^2}{AB \cdot BC} + \frac{IC^2}{BC \cdot CA} = 1.$$
Additionally,
$$IA \cdot IB \cdot IC = 4Rr^2,$$
where R and r are the triangle's circumradius and inradius respectively.
Related constructions
Other centers
The distance from the incenter to the centroid is less than one third the length of the longest median of the triangle.
By Euler's theorem in geometry, the squared distance from the incenter I to the circumcenter O is given by
$$OI^2 = R(R - 2r),$$
where R and r are the circumradius and the inradius respectively; thus the circumradius is at least twice the inradius, with equality only in the equilateral case.
The distance from the incenter to the center N of the nine point circle is
$$IN = \tfrac{1}{2}(R - 2r) < \tfrac{1}{2}R.$$
The squared distance from the incenter to the orthocenter H is
Inequalities include:
The incenter is the Nagel point of the medial triangle (the triangle whose vertices are the midpoints of the sides) and therefore lies inside this triangle. Conversely the Nagel point of any triangle is the incenter of its anticomplementary triangle.
The incenter must lie in the interior of a disk whose diameter connects the centroid G and the orthocenter H (the orthocentroidal disk), but it cannot coincide with the nine-point center, whose position is fixed 1/4 of the way along the diameter (closer to G). Any other point within the orthocentroidal disk is the incenter of a unique triangle.
Euler line
The Euler line of a triangle is a line passing through its circumcenter, centroid, and orthocenter, among other points.
The incenter generally does not lie on the Euler line; it is on the Euler line only for isosceles triangles, for which the Euler line coincides with the symmetry axis of the triangle and contains all triangle centers.
Denoting the distance from the incenter to the Euler line as d, the length of the longest median as v, the length of the longest side as u, the circumradius as R, the length of the Euler line segment from the orthocenter to the circumcenter as e, and the semiperimeter as s, the following inequalities hold:
Area and perimeter splitters
Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter; every line through the incenter that splits the area in half also splits the perimeter in half. There are either one, two, or three of these lines for any given triangle.
Relative distances from an angle bisector
Let X be a variable point on the internal angle bisector of A. Then X = I'' (the incenter) maximizes or minimizes the ratio along that angle bisector.
References
External links
Triangle centers | Incenter | [
"Physics",
"Mathematics"
] | 1,713 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
316,849 | https://en.wikipedia.org/wiki/Hazard%20analysis%20and%20critical%20control%20points | Hazard analysis critical control points, or HACCP (), is a systematic preventive approach to food safety from biological, chemical, and physical hazards in production processes that can cause the finished product to be unsafe and designs measures to reduce these risks to a safe level. In this manner, HACCP attempts to avoid hazards rather than attempting to inspect finished products for the effects of those hazards. The HACCP system can be used at all stages of a food chain, from food production and preparation processes including packaging, distribution, etc. The Food and Drug Administration (FDA) and the United States Department of Agriculture (USDA) require mandatory HACCP programs for juice and meat as an effective approach to food safety and protecting public health. Meat HACCP systems are regulated by the USDA, while seafood and juice are regulated by the FDA. All other food companies in the United States that are required to register with the FDA under the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, as well as firms outside the US that export food to the US, are transitioning to mandatory hazard analysis and risk-based preventive controls (HARPC) plans.
It is believed to stem from a production process monitoring used during World War II because traditional "end of the pipe" testing on artillery shells' firing mechanisms could not be performed, and a large percentage of the artillery shells made at the time were either duds or misfiring. HACCP itself was conceived in the 1960s when the US National Aeronautics and Space Administration (NASA) asked Pillsbury to design and manufacture the first foods for space flights. Since then, HACCP has been recognized internationally as a logical tool for adapting traditional inspection methods to a modern, science-based, food safety system. Based on risk-assessment, HACCP plans allow both industry and government to allocate their resources efficiently by establishing and auditing safe food production practices. In 1994, the organization International HACCP Alliance was established, initially to assist the US meat and poultry industries with implementing HACCP. As of 2007, its membership spread over other professional and industrial areas.
HACCP has been increasingly applied to industries other than food, such as cosmetics and pharmaceuticals. This method, which in effect seeks to plan out unsafe practices based on science, differs from traditional "produce and sort" quality control methods that do nothing to prevent hazards from occurring and must identify them at the end of the process. HACCP is focused only on the health safety issues of a product and not the quality of the product, yet HACCP principles are the basis of most food quality and safety assurance systems. In the United States, HACCP compliance is regulated by 21 CFR part 120 and 123. Similarly, FAO and WHO published a guideline for all governments to handle the issue in small and less developed food businesses.
History
In the early 1960s, a collaborative effort between the Pillsbury Company, NASA, and the U.S. Army Laboratories began with the objective of providing safe food for space expeditions. People involved in this collaboration included Herbert Hollander, Mary Klicka, and Hamed El-Bisi of the United States Army Laboratories in Natick, Massachusetts, Paul A. Lachance of the Manned Spacecraft Center in Houston, Texas, and Howard E. Baumann representing Pillsbury as its lead scientist.
To ensure that the food sent to space was safe, Lachance imposed strict microbial requirements, including pathogen limits (including E. coli, Salmonella, and Clostridium botulinum). Using the traditional end product testing method, it was soon realized that almost all of the food manufactured was being used for testing and very little was left for actual use. Therefore, a new approach was needed.
NASA's own requirements for critical control points (CCP) in engineering management would be used as a guide for food safety. CCP derived from failure mode and effects analysis (FMEA) from NASA via the munitions industry to test weapon and engineering system reliability. Using that information, NASA and Pillsbury required contractors to identify "critical failure areas" and eliminate them from the system, a first in the food industry then. Baumann, a microbiologist by training, was so pleased with Pillsbury's experience in the space program that he advocated for his company to adopt what would become HACCP at Pillsbury.
Soon, Pillsbury was confronted with a food safety issue of its own when glass contamination was found in farina, a cereal commonly used in infant food. Baumann's leadership promoted HACCP in Pillsbury for producing commercial foods, and applied to its own food production. This led to a panel discussion at the 1971 National Conference on Food Protection that included examining CCPs and good manufacturing practices in producing safe foods. Several botulism cases were attributed to under-processed low-acid canned foods in 1970–71. The United States Food and Drug Administration (FDA) asked Pillsbury to organize and conduct a training program on the inspection of canned foods for FDA inspectors. This 21-day program was first held in September 1972 with 11 days of classroom lecture and 10 days of canning plant evaluations. Canned food regulations (21 CFR 108, 21 CFR 110, 21 CFR 113, and 21 CFR 114) were first published in 1969. Pillsbury's training program, which was submitted to the FDA for review in 1969, entitled "Food Safety through the Hazard Analysis and Critical Control Point System" was the first use of the acronym HACCP.
HACCP was initially set on three principles, now shown as principles one, two, and four in the section below. Pillsbury quickly adopted two more principles, numbers three and five, to its own company in 1975. It was further supported by the National Academy of Sciences (NAS) when they wrote that the FDA inspection agency should transform itself from reviewing plant records into an HACCP system compliance auditor.
Over the period 1986 to 1990, a team consisting of National Sea Products and the Department of Fisheries and Oceans developed the first mandatory food inspection programme based on HACCP principles in the world. Together, these Canadian innovators developed and implemented a Total Quality Management Program and HACCP plans for all their groundfish trawlers and production facilities.
A second proposal by the NAS led to the development of the National Advisory Committee on Microbiological Criteria for Foods (NACMCF) in 1987. NACMCF was initially responsible for defining HACCP's systems and guidelines for its application and were coordinated with the Codex Alimentarius Committee for Food Hygiene, that led to reports starting in 1992 and further harmonization in 1997. By 1997, the seven HACCP principles listed below became the standard.
A year earlier, the American Society for Quality offered their first certifications for HACCP Auditors. First known as Certified Quality Auditor-HACCP, they were changed to Certified HACCP Auditor (CHA) in 2004.
HACCP expanded in all realms of the food industry, going into meat, poultry, seafood, dairy, and has spread now from the farm to the fork.
Principles
Conduct a hazard analysis
Plan to determine the food safety hazards and identify the preventive measures that can be applied to control these hazards. A food safety hazard is any biological, chemical, or physical property that may cause a food to be unsafe for human consumption.
Identify critical control points
A critical control point (CCP) is a point, step, or procedure in a food manufacturing process at which control can be applied and, as a result, a food safety hazard can be prevented, eliminated, or reduced to an acceptable level.
Establish critical limits for each critical control point
A critical limit is the maximum or minimum value to which a physical, biological, or chemical hazard must be controlled at a critical control point to prevent, eliminate, or reduce that hazard to an acceptable level.
Establish critical control point monitoring requirements
Monitoring activities are necessary to ensure that the process is under control at each critical control point. In the United States, the FSIS requires that each monitoring procedure and its frequency be listed in the HACCP plan.
Establish corrective actions
These are actions to be taken when monitoring indicates a deviation from an established critical limit. The final rule requires a plant's HACCP plan to identify the corrective actions to be taken if a critical limit is not met. Corrective actions are intended to ensure that no product that is injurious to health or otherwise adulterated as a result of the deviation enters commerce.
Establish procedures for ensuring the HACCP system is working as intended
Validation ensures that the plans do what they were designed to do; that is, they are successful in ensuring the production of a safe product. Plants will be required to validate their own HACCP plans. FSIS will not approve HACCP plans in advance, but will review them for conformance with the final rule.
Verification ensures the HACCP plan is adequate, that is, working as intended. Verification procedures may include such activities as review of HACCP plans, CCP records, critical limits and microbial sampling and analysis. FSIS is requiring that the HACCP plan include verification tasks to be performed by plant personnel. Verification tasks would also be performed by FSIS inspectors. Both FSIS and industry will undertake microbial testing as one of several verification activities.
Verification also includes 'validation' – the process of finding evidence for the accuracy of the HACCP system (e.g. scientific evidence for critical limitations).
Establish record keeping procedures
The HACCP regulation requires that all plants maintain certain documents, including the hazard analysis and written HACCP plan, and records documenting the monitoring of critical control points, critical limits, verification activities, and the handling of processing deviations. Implementation involves monitoring, verifying, and validating that the daily work complies with regulatory requirements at all stages, at all times. The differences among those three types of work are given by Saskatchewan Agriculture and Food.
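As an illustration of the kind of information such records capture, the following is a minimal, hypothetical sketch of a single CCP monitoring entry; the field names and values are illustrative assumptions, not a regulatory template from FSIS or Codex:

```python
# Hypothetical sketch of one CCP monitoring record (illustrative field names only).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CCPMonitoringRecord:
    ccp: str                 # the critical control point being monitored
    hazard: str              # the food safety hazard controlled at this CCP
    critical_limit: str      # the limit the process must meet
    observed_value: str      # what the monitor actually measured
    within_limit: bool
    corrective_action: str   # required whenever within_limit is False
    monitored_by: str
    timestamp: datetime

record = CCPMonitoringRecord(
    ccp="Cooking step",
    hazard="Salmonella survival",
    critical_limit="internal temperature of at least 74 C held for 15 s",
    observed_value="71 C",
    within_limit=False,
    corrective_action="Batch re-cooked, re-checked and held pending verification",
    monitored_by="Line operator 12",
    timestamp=datetime(2024, 5, 3, 10, 15),
)
```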
Standards
The seven HACCP principles are included in the international standard ISO 22000. This standard is a complete food safety and quality management system incorporating the elements of prerequisite programmes(GMP & SSOP), HACCP and the quality management system, which together form an organization's Total Quality Management system.
Other schemes with recognition from the Global Food Safety Initiative (GFSI), such as Safe Quality Food Institute's SQF Code, also relies upon the HACCP methodology as the basis for developing and maintaining food safety (level 2) and food quality (level 3) plans and programs in concert with the fundamental prerequisites of good manufacturing practices.
Training
Training for developing and implementing HACCP food safety management systems is offered by several quality assurance companies. However, ASQ does provide a Certified HACCP Auditor (CHA) exam to individuals seeking professional training. In the UK the Chartered Institute of Environmental Health (CIEH) and Royal Society for Public Health offer HACCP for Food Manufacturing qualifications, accredited by the QCA (Qualifications and Curriculum Authority).
Application
Consequent to the promulgation of US Seafood Regulation on HACCP on 18 December 1995, it became mandatory that every processor exporting to USA to comply with HACCP with effect from 18.12.1997. The Marine Products Export Development Authority of India (MPEDA) constituted an HACCP Cell in early 1996 to assist the Indian seafood industry in the effective implementation of HACCP. Technical personnel of MPEDA are trained in India and abroad on various aspects of HACCP including HACCP Audit. Seafood Exporters Association of India has eight regional offices to monitor compliance and members use the latest sustainable aquaculture practices and a high-tech hatchery that provides disease-resistant baby shrimp and fingerlings to its own farm, and to hundreds of farmers who supply raw shrimp to major brands Falcon Marine, Devi Seafoods, Ananda Group, Gadre Marine and Mukka Seafood.
Fish and fishery products
Fresh-cut produce
Juice and nectar products
Food outlets
Meat and poultry products
School food and services
Water quality management
The use of HACCP for water quality management was first proposed in 1994. Thereafter, a number of water quality initiatives applied HACCP principles and steps to the control of infectious disease from water, and provided the basis for the Water Safety Plan (WSP) approach in the third edition of the WHO Guidelines for Drinking-water Quality report. This WSP has been described as "a way of adapting the HACCP approach to drinking water systems".
Water quality management programme guidelines
Program Modernization: According to Ongley, 1998, a series of steps could be taken to execute a more useful transition – from technical programmes to policy to management decisions. Various aspects of the modernization process have been discussed by Ongley in ESCAP (1997):
Policy reform – A consultative process must define all the policy tenets and should review the execution of the said policy tenets.
Legal reform – Legal reform with respect to water quality management is one of the most crucial elements. This could be addressed by the creation of national data standards as well as the creation of a national process to analyze and review collected data.
Institutional reform – This is a complex issue and has no simple answers. Still, there are some key principles that can be helpful for institutional reform in light of water quality management. One of them is water quality monitoring as a service function. Apart from that, both technical efficiency and capacity issues emerge as major factors in reformed water quality programs.
Technical reform – This is the area that garners the most attention as well as investment. Such a reform targets facility modernization, including other co-factors like data programmes/networks, technical innovation, data management/data products and remediation.
HACCP for building water systems
Hazards associated with water systems in buildings include physical, chemical and microbial hazards. In 2013, NSF International, a public health and safety NGO, established education, training and certification programs in HACCP for building water systems. The programs, championed by NSF Executive VP Clif McLellan, were developed by subject matter experts Aaron Rosenblatt (Co-founder of Gordon & Rosenblatt, LLC) and William McCoy (Co-founder of Phigenics, Inc.), and center on the use of HACCP principles adapted to the specific requirements of domestic (hot and cold) and utility (Cooling Towers, etc.) water systems in buildings, to prevent plumbing-associated hazards from harming people. Hazards addressed include scalding, lead, and disinfection byproducts as well as a range of clinically-important pathogens, such as Legionella, Pseudomonas, nontuberculous mycobacteria (NTM). Early adopters of HACCP for building water systems include leading healthcare institutions, notably the Mayo Clinic in Rochester, Minnesota.
ISO 22000 Food Safety Management System
ISO 22000 is a standard designed to help augment HACCP on issues related to food safety. Although several companies, especially big ones, have either implemented or are on the point of implementing ISO 22000, there are many others which are hesitant to do so. The main reason behind that is the lack of information and the fear that the new standard is too demanding in terms of bureaucratic work. ISO 22000 references the Codex Alimentarius General Principles of Food Hygiene, CXC 1-1969 which includes HACCP principles and 12 HACCP application steps. This is explained in a joint publication from ISO and United Nations Industrial Development Organization (UNIDO) which provides guidance to assist all organizations (including small and medium-sized) that recognize the potential benefits of implementing a Food Safety Management System.
See also
Failure mode and effects analysis
Failure mode, effects, and criticality analysis
Fault tree analysis
Food safety
Design Review Based on Failure Mode
Fast food restaurant
ISO 22000
Hazard analysis
Hazard analysis and risk-based preventive controls
Hazop
Hygiene
Sanitation
Sanitation Standard Operating Procedures
Codex Alimentarius
Total quality management
References
Food safety
Food technology
Quality management
Hazard analysis
United States Department of Agriculture
Drinking water | Hazard analysis and critical control points | [
"Engineering"
] | 3,286 | [
"Safety engineering",
"Hazard analysis"
] |
316,904 | https://en.wikipedia.org/wiki/Ehrhart%20polynomial | In mathematics, an integral polytope has an associated Ehrhart polynomial that encodes the relationship between the volume of a polytope and the number of integer points the polytope contains. The theory of Ehrhart polynomials can be seen as a higher-dimensional generalization of Pick's theorem in the Euclidean plane.
These polynomials are named after Eugène Ehrhart who studied them in the 1960s.
Definition
Informally, if $P$ is a polytope, and $tP$ is the polytope formed by expanding $P$ by a factor of $t$ in each dimension, then $L(P, t)$ is the number of integer lattice points in $tP$.
More formally, consider a lattice $\mathcal{L}$ in Euclidean space $\mathbb{R}^n$ and a $d$-dimensional polytope $P$ in $\mathbb{R}^n$ with the property that all vertices of the polytope are points of the lattice. (A common example is $\mathcal{L} = \mathbb{Z}^n$ and a polytope $P$ for which all vertices have integer coordinates.) For any positive integer $t$, let $tP$ be the $t$-fold dilation of $P$ (the polytope formed by multiplying each vertex coordinate, in a basis for the lattice, by a factor of $t$), and let
$$L(P, t) = \#\left(tP \cap \mathcal{L}\right)$$
be the number of lattice points contained in the polytope $tP$. Ehrhart showed in 1962 that $L$ is a rational polynomial of degree $d$ in $t$, i.e. there exist rational numbers $L_0(P), \ldots, L_d(P)$ such that:
$$L(P, t) = L_d(P) t^d + L_{d-1}(P) t^{d-1} + \cdots + L_0(P)$$
for all positive integers $t$.
The Ehrhart polynomial of the interior of a closed convex polytope $P$ can be computed as:
$$L(\operatorname{int}(P), t) = (-1)^d L(P, -t),$$
where $d$ is the dimension of $P$. This result is known as Ehrhart–Macdonald reciprocity.
Examples
Let $P$ be a $d$-dimensional unit hypercube whose vertices are the integer lattice points all of whose coordinates are 0 or 1. In terms of inequalities,
$$P = \left\{ x \in \mathbb{R}^d : 0 \leq x_i \leq 1,\; 1 \leq i \leq d \right\}.$$
Then the $t$-fold dilation of $P$ is a cube with side length $t$, containing $(t+1)^d$ integer points. That is, the Ehrhart polynomial of the hypercube is $L(P, t) = (t+1)^d$. Additionally, if we evaluate $L(P, t)$ at negative integers, then
$$L(P, -t) = (-1)^d (t - 1)^d = (-1)^d L(\operatorname{int}(P), t),$$
as we would expect from Ehrhart–Macdonald reciprocity.
Many other figurate numbers can be expressed as Ehrhart polynomials. For instance, the square pyramidal numbers are given by the Ehrhart polynomials of a square pyramid with an integer unit square as its base and with height one; the Ehrhart polynomial in this case is $\frac{(t+1)(t+2)(2t+3)}{6}$.
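A brute-force check of this example (an illustrative sketch; the pyramid is realized concretely here with apex $(0,0,1)$ over the unit square, one choice consistent with the description above):

```python
# Count lattice points in t-fold dilations of the square pyramid
# P = conv{(0,0,0), (1,0,0), (0,1,0), (1,1,0), (0,0,1)} and compare with
# the square pyramidal numbers (t+1)(t+2)(2t+3)/6.
def count_points(t):
    # tP = { (x, y, z) integer : 0 <= z <= t, 0 <= x <= t - z, 0 <= y <= t - z }
    return sum((t - z + 1) ** 2 for z in range(t + 1))

def ehrhart(t):
    return (t + 1) * (t + 2) * (2 * t + 3) // 6

assert all(count_points(t) == ehrhart(t) for t in range(10))
print([ehrhart(t) for t in range(5)])  # [1, 5, 14, 30, 55]
```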
Ehrhart quasi-polynomials
Let be a rational polytope. In other words, suppose
where and (Equivalently, is the convex hull of finitely many points in ) Then define
In this case, is a quasi-polynomial in . Just as with integral polytopes, Ehrhart–Macdonald reciprocity holds, that is,
Examples of Ehrhart quasi-polynomials
Let be a polygon with vertices (0,0), (0,2), (1,1) and (, 0). The number of integer points in will be counted by the quasi-polynomial
Interpretation of coefficients
If $P$ is closed (i.e. the boundary faces belong to $P$), some of the coefficients of $L(P, t)$ have an easy interpretation:
the leading coefficient, $L_d(P)$, is equal to the $d$-dimensional volume of $P$, divided by $d(\mathcal{L})$ (see lattice for an explanation of the content or covolume $d(\mathcal{L})$ of a lattice);
the second coefficient, $L_{d-1}(P)$, can be computed as follows: the lattice $\mathcal{L}$ induces a lattice $\mathcal{L}_F$ on any face $F$ of $P$; take the $(d-1)$-dimensional volume of each $(d-1)$-dimensional face $F$, divide by $2\, d(\mathcal{L}_F)$, and add those numbers for all such faces of $P$;
the constant coefficient, $L_0(P)$, is the Euler characteristic of $P$. When $P$ is a closed convex polytope, $L_0(P) = 1$.
The Betke–Kneser theorem
Ulrich Betke and Martin Kneser established the following characterization of the Ehrhart coefficients. A functional $Z$ defined on integral polytopes is a unimodularly invariant and translation invariant valuation if and only if there are real numbers $c_0, \ldots, c_d$ such that $Z = c_0 L_0 + c_1 L_1 + \cdots + c_d L_d$, where $L_j$ assigns to an integral polytope the $j$-th coefficient of its Ehrhart polynomial.
Ehrhart series
We can define a generating function for the Ehrhart polynomial of an integral $d$-dimensional polytope $P$ as
$$\operatorname{Ehr}_P(z) = \sum_{t \geq 0} L(P, t)\, z^t.$$
This series can be expressed as a rational function. Specifically, Ehrhart proved (1962) that there exist complex numbers $h_j^*$, $0 \leq j \leq d$, such that the Ehrhart series of $P$ is
$$\operatorname{Ehr}_P(z) = \frac{\sum_{j=0}^{d} h_j^* z^j}{(1 - z)^{d+1}}.$$
Additionally, Richard P. Stanley's non-negativity theorem states that under the given hypotheses, the $h_j^*$ will be non-negative integers, for $0 \leq j \leq d$.
Another result by Stanley shows that if $P$ is a lattice polytope contained in $Q$, then $h_j^*(P) \leq h_j^*(Q)$ for all $j$. The $h^*$-vector is in general not unimodal, but it is whenever it is symmetric and the polytope has a regular unimodular triangulation.
Ehrhart series for rational polytopes
As in the case of polytopes with integer vertices, one defines the Ehrhart series for a rational polytope. For a d-dimensional rational polytope , where is the smallest integer such that is an integer polytope ( is called the denominator of ), then one has
where the are still non-negative integers.
Non-leading coefficient bounds
The polynomial's non-leading coefficients in the representation
can be upper bounded:
where is a Stirling number of the first kind. Lower bounds also exist.
Toric variety
The case and of these statements yields Pick's theorem. Formulas for the other coefficients are much harder to get; Todd classes of toric varieties, the Riemann–Roch theorem as well as Fourier analysis have been used for this purpose.
If is the toric variety corresponding to the normal fan of , then defines an ample line bundle on , and the Ehrhart polynomial of coincides with the Hilbert polynomial of this line bundle.
Ehrhart polynomials can be studied for their own sake. For instance, one could ask questions related to the roots of an Ehrhart polynomial. Furthermore, some authors have pursued the question of how these polynomials could be classified.
Generalizations
It is possible to study the number of integer points in a polytope if we dilate some facets of but not others. In other words, one would like to know the number of integer points in semi-dilated polytopes. It turns out that such a counting function will be what is called a multivariate quasi-polynomial. An Ehrhart-type reciprocity theorem will also hold for such a counting function.
Counting the number of integer points in semi-dilations of polytopes has applications in enumerating the number of different dissections of regular polygons and the number of non-isomorphic unrestricted codes, a particular kind of code in the field of coding theory.
See also
Quasi-polynomial
Stanley's reciprocity theorem
References
Further reading
. Introduces the Fourier analysis approach and gives references to other related articles.
.
Figurate numbers
Polynomials
Lattice points
Polytopes | Ehrhart polynomial | [
"Mathematics"
] | 1,339 | [
"Polynomials",
"Lattice points",
"Mathematical objects",
"Number theory",
"Figurate numbers",
"Numbers",
"Algebra"
] |
316,915 | https://en.wikipedia.org/wiki/Ecological%20damage | Ecological damage may refer to:
environmental degradation
something adversely affecting ecological health
something adversely affecting ecosystem health
Ecology | Ecological damage | [
"Biology"
] | 23 | [
"Ecology"
] |
316,958 | https://en.wikipedia.org/wiki/City%20rhythm | City rhythm is a metaphor for the regular coming and going in cities, the repetitive activities, the sounds and smells that occur regularly in cities. The recognition of city rhythms is a useful metaphor, helping to understand modern city life. The concept of city rhythm makes it possible to understand the multitude of aspects of city life. Traditional approaches to urban thinking focus on one such rhythm only, normally the dominant one. This leads to the omission of many aspects of city life.
Dominant rhythm is a metaphor used in conjunction with city rhythm. It is the most powerful rhythm in a city, enabling the shaping and forming of time and space, both within the city and in faraway places through networks.
These dominant rhythms are not fixed and indeed change. Religious rhythms were more dominant in the past, whereas at present economic rhythms prevail.
References
Crang, M.A. (2001) Rhythms of the City in Thrift, N. and May, J. 2001 Timespace.
Lefebvre, Henri (2004) Rhythmanalysis – Space, Time and Everyday Life. Continuum, London
Lynch, Kevin (1972) What time is this place? MIT Press, Cambridge, Massachusetts
Parkes, D.N.; Thrift, Nigel; Carlstein, T. (1978) Timing Space and Spacing Time, Volumes 1, 2 and 3. Arnold Publishers, London
Parkes, D.N.; Thrift, N. (1980) Times, Spaces and Places – A Chronogeographic Perspective. John Wiley & Sons, Chichester
Urban planning | City rhythm | [
"Engineering"
] | 326 | [
"Urban planning",
"Architecture"
] |
316,966 | https://en.wikipedia.org/wiki/Ecological%20modernization | Ecological modernization is a school of thought that argues that both the state and the market can work together to protect the environment. It has gained increasing attention among scholars and policymakers in the last several decades internationally. It is an analytical approach as well as a policy strategy and environmental discourse (Hajer, 1995).
Origins and key elements
Ecological modernization emerged in the early 1980s within a group of scholars at the Free University and the Social Science Research Centre in Berlin, among them Joseph Huber. Various authors pursued similar ideas at the time, e.g. Arthur H. Rosenfeld, Amory Lovins, Donald Huisingh, René Kemp, or Ernst Ulrich von Weizsäcker. Further substantial contributions were made by Arthur P.J. Mol, Gert Spaargaren and David A. Sonnenfeld (Mol and Sonnenfeld, 2000; Mol, 2001).
One basic assumption of ecological modernization relates to environmental readaptation of economic growth and industrial development. On the basis of enlightened self-interest, economy and ecology can be favourably combined: Environmental productivity, i.e. productive use of natural resources and environmental media (air, water, soil, ecosystems), can be a source of future growth and development in the same way as labour productivity and capital productivity. This includes increases in energy and resource efficiency as well as product and process innovations such as environmental management and sustainable supply chain management, clean technologies, benign substitution of hazardous substances, and product design for environment. Radical innovations in these fields can not only reduce quantities of resource turnover and emissions, but also change the quality or structure of the industrial metabolism. In the co-evolution of humans and nature, and in order to upgrade the environment's carrying capacity, ecological modernization gives humans an active role to play, which may entail conflicts with nature conservation.
There are different understandings of the scope of ecological modernization - whether it is just about techno-industrial progress and related aspects of policy and economy, and to what extent it also includes cultural aspects (ecological modernization of mind, value orientations, attitudes, behaviour and lifestyles). Similarly, there is some pluralism as to whether ecological modernization would need to rely mainly on government, or markets and entrepreneurship, or civil society, or some sort of multi-level governance combining the three. Some scholars explicitly refer to general modernization theory as well as non-Marxist world-system theory, others don't.
Ultimately, however, there is a common understanding that ecological modernization will have to result in innovative structural change. So research is now still more focused on environmental innovations, or eco-innovations, and the interplay of various societal factors (scientific, economic, institutional, legal, political, cultural) which foster or hamper such innovations (Klemmer et al., 1999; Huber, 2004; Weber and Hemmelskamp, 2005; Olsthoorn and Wieczorek, 2006).
Ecological modernization shares a number of features with neighbouring, overlapping approaches. Among the most important are
the concept of sustainable development
the approach of industrial metabolism (Ayres and Simonis, 1994)
the concept of industrial ecology (Socolow, 1994)
Additional elements
A special topic of ecological modernization research during recent years was sustainable household, i.e. environment-oriented reshaping of lifestyles, consumption patterns, and demand-pull control of supply chains (Vergragt, 2000; OECD 2002).
Some scholars of ecological modernization share an interest in industrial symbiosis, i.e. inter-site recycling that helps to reduce the consumption of resources via increasing efficiency (i.e. pollution prevention, waste reduction), typically by taking externalities from one economic production process and using them as raw material inputs for another (Christoff, 1996).
Ecological modernization also relies on product life-cycle assessment and the analysis of materials and energy flows. In this context, ecological modernization promotes 'cradle to cradle' manufacturing (Braungart and McDonough, 2002), contrasted against the usual 'cradle to grave' forms of manufacturing - where waste is not re-integrated back into the production process.
Another special interest in the ecological modernization literature has been the role of social movements and the emergence of civil society as a key agent of change (Fisher and Freudenburg, 2001).
As a strategy of change, some forms of ecological modernization may be favored by business interests because they seemingly meet the triple bottom line of economics, society, and environment, which, it is held, underpin sustainability, yet do not challenge free market principles. This contrasts with many environmental movement perspectives, which regard free trade and its notion of business self-regulation as part of the problem, or even an origin of environmental degradation. Under ecological modernization, the state is seen in a variety of roles and capacities: as the enabler for markets that help produce the technological advances via competition; as the regulatory (see regulation) medium through which corporations are forced to 'take back' their various wastes and re-integrate them in some manner into the production of new goods and services (e.g. the way that car corporations in Germany are required to accept back cars they manufactured once those vehicles have reached the end of their product lifespan); and in some cases as an institution that is incapable of addressing critical local, national, and global environmental problems. In the latter case, ecological modernization shares with Ulrich Beck (1999, 37-40) and others notions of the necessity of emergence of new forms of environmental governance, sometimes referred to as subpolitics or political modernization, where the environmental movement, community groups, businesses, and other stakeholders increasingly take on direct and leadership roles in stimulating environmental transformation. Political modernization of this sort requires certain supporting norms and institutions such as a free, independent, or at least critical press, basic human rights of expression, organization, and assembly, etc. New media such as the Internet greatly facilitate this.
Criticisms
Critics argue that ecological modernization will fail to protect the environment and does nothing to alter the impulses within the capitalist economic mode of production (see capitalism) that inevitably lead to environmental degradation (Foster, 2002). As such, it is just a form of 'green-washing'. Critics question whether technological advances alone can achieve resource conservation and better environmental protection, particularly if left to business self-regulation practices (York and Rosa, 2003). For instance, many technological improvements are currently feasible but not widely utilized. The most environmentally friendly product or manufacturing process (which is often also the most economically efficient) is not always the one automatically chosen by self-regulating corporations (e.g. hydrogen or biofuel vs. peak oil). In addition, some critics have argued that ecological modernization does not redress gross injustices that are produced within the capitalist system, such as environmental racism - where people of color and low income earners bear a disproportionate burden of environmental harm such as pollution, and lack access to environmental benefits such as parks, and social justice issues such as eliminating unemployment (Bullard, 1993; Gleeson and Low, 1999; Harvey, 1996) - environmental racism is also referred to as issues of the asymmetric distribution of environmental resources and services (Everett & Neu, 2000). Moreover, the theory seems to have limited global efficacy, applying primarily to its countries of origin - Germany and the Netherlands, and having little to say about the developing world (Fisher and Freudenburg, 2001). Perhaps the harshest criticism though, is that ecological modernization is predicated upon the notion of 'sustainable growth', and in reality this is not possible because growth entails the consumption of natural and human capital at great costs to ecosystems and societies.
Ecological modernization, its effectiveness and applicability, strengths and limitations, remains a dynamic and contentious area of environmental social science research and policy discourse in the early 21st century.
See also
Bright green environmentalism
Ecological design
Ecological civilization
Ecomodernism
Environmental sociology
Reflexive modernization
Restoration ecology
Sustainable development
References
Ayres, R. U. and Simonis, U. E., 1994, Industrial Metabolism. Restructuring for Sustainable Development, Tokyo, UN University Press.
Beck, U., 1999, World Risk Society, Cambridge, UK, Polity Press, .
Braungart, M., and McDonough, W., 2002, Cradle to Cradle. Remaking the way we make things, New York, N.Y., North Point Press.
Bullard, R., (ed.) 1993, Confronting Environmental Racism: Voices from the Grassroots, Boston, South End Press.
Dickens, P. 2004, Society & Nature: Changing Our Environment, Changing Ourselves, Cambridge, UK, Polity, .
Everett, J., and Neu, D., 2000, "Ecological Modernization and the Limits of Environmental Accounting?", Accounting Forum, 24(1), pp. 5–29.
Fisher, D.R., and Freudenburg, W.R., 2001, "Ecological modernization and its critics: Assessing the past and looking toward the future", Society and Natural Resources, 14, pp. 701–709.
Foster, J.B., 2002, Ecology Against Capitalism, New York, Monthly Review Press.
Gleeson, B. and Low, N. (eds.) 1999, Global Ethics and Environment, London, Routledge.
Hajer, M.A., 1995, The Politics of Environmental Discourse: Ecological Modernization and the Policy Process, Oxford, UK, Oxford University Press, .
Harvey, D., 1996, Justice, Nature and the Geography of Difference, Malden, Ma., Blackwell, p. 377-402.
Huber, J., 2004, New Technologies and Environmental Innovation, Cheltenham, UK, Edward Elgar.
Klemmer, P., et al., 1999, Environmental Innovations. Incentives and Barriers, Berlin, Analytica.
Mol, A.P.J., 2001, Globalization and Environmental Reform: The Ecological Modernization of the Global Economy, Cambridge, Ma., MIT Press, .
Mol, A.P.J., and Sonnenfeld, D.A., (eds.) 2000, Ecological Modernisation around the World: Perspectives and Critical Debates, London and Portland, OR, Frank Cass/ Routledge, .
Mol, A.P.J., Sonnenfeld, D.A., and Spaargaren, G., (eds.) 2009, The Ecological Modernisation Reader: Environmental Reform in Theory and Practice, London and New York, Routledge, hardback, paperback.
OECD (ed.), Towards Sustainable Household Consumption? Trends and Policies in OECD Countries, Paris, OECD Publ., 2002.
Olsthoorn, X., and Wieczorek, A., (eds.) 2006, Understanding Industrial Transformation. Views from Different Disciplines, Dordrecht: Springer.
Redclift, M. R., and Woodgate, G. (eds.) 1997, The International Handbook of Environmental Sociology, Cheltenham, UK, Edward Elgar, .
Redclift, M. R., and Woodgate, G., (eds.) 2005, New Developments in Environmental Sociology, Cheltenham, Edward Elgar, .
Socolow, R. et al., (eds.) 1994, Industrial Ecology and Global Change, Cambridge University Press.
Vergragt, Ph., Strategies Towards the Sustainable Household, SusHouse Project Final Report, Delft University of Technology, NL, 2000.
Environmental sociology
Environmental social science concepts
Environmental policy
Industrial ecology
Global ethics
Modernity | Ecological modernization | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,375 | [
"Industrial engineering",
"Environmental social science concepts",
"Environmental sociology",
"Environmental engineering",
"Industrial ecology",
"Environmental social science"
] |
316,993 | https://en.wikipedia.org/wiki/Emulsion%20polymerization | In polymer chemistry, emulsion polymerization is a type of radical polymerization that usually starts with an emulsion incorporating water, monomers, and surfactants. The most common type of emulsion polymerization is an oil-in-water emulsion, in which droplets of monomer (the oil) are emulsified (with surfactants) in a continuous phase of water. Water-soluble polymers, such as certain polyvinyl alcohols or hydroxyethyl celluloses, can also be used to act as emulsifiers/stabilizers. The name "emulsion polymerization" is a misnomer that arises from a historical misconception. Rather than occurring in emulsion droplets, polymerization takes place in the latex/colloid particles that form spontaneously in the first few minutes of the process. These latex particles are typically 100 nm in size, and are made of many individual polymer chains. The particles are prevented from coagulating with each other because each particle is surrounded by the surfactant ('soap'); the charge on the surfactant repels other particles electrostatically. When water-soluble polymers are used as stabilizers instead of soap, the repulsion between particles arises because these water-soluble polymers form a 'hairy layer' around a particle that repels other particles, because pushing particles together would involve compressing these chains.
Emulsion polymerization is used to make several commercially important polymers. Many of these polymers are used as solid materials and must be isolated from the aqueous dispersion after polymerization. In other cases the dispersion itself is the end product. A dispersion resulting from emulsion polymerization is often called a latex (especially if derived from a synthetic rubber) or an emulsion (even though "emulsion" strictly speaking refers to a dispersion of an immiscible liquid in water). These emulsions find applications in adhesives, paints, paper coating and textile coatings. They are often preferred over solvent-based products in these applications due to the absence of volatile organic compounds (VOCs) in them.
Advantages of emulsion polymerization include:
High molecular weight polymers can be made at fast polymerization rates. By contrast, in bulk and solution free-radical polymerization, there is a tradeoff between molecular weight and polymerization rate.
The continuous water phase is an excellent conductor of heat, enabling fast polymerization rates without loss of temperature control.
Since polymer molecules are contained within the particles, the viscosity of the reaction medium remains close to that of water and is not dependent on molecular weight.
The final product can be used as is and does not generally need to be altered or processed.
Disadvantages of emulsion polymerization include:
Surfactants and other polymerization adjuvants remain in the polymer or are difficult to remove
For dry (isolated) polymers, water removal is an energy-intensive process
Emulsion polymerizations are usually designed to operate at high conversion of monomer to polymer. This can result in significant chain transfer to polymer.
Can not be used for condensation, ionic, or Ziegler-Natta polymerization, although some exceptions are known.
History
The early history of emulsion polymerization is connected with the field of synthetic rubber. The idea of using an emulsified monomer in an aqueous suspension or emulsion was first conceived at Bayer, before World War I, in an attempt to prepare synthetic rubber. The impetus for this development was the observation that natural rubber is produced at room temperature in dispersed particles stabilized by colloidal polymers, so the industrial chemists tried to duplicate these conditions. The Bayer workers used naturally occurring polymers such as gelatin, ovalbumin, and starch to stabilize their dispersion. By today's definition these were not true emulsion polymerizations, but suspension polymerizations.
The first "true" emulsion polymerizations, which used a surfactant and polymerization initiator, were conducted in the 1920s to polymerize isoprene. Over the next twenty years, through the end of World War II, efficient methods for production of several forms of synthetic rubber by emulsion polymerization were developed, but relatively few publications in the scientific literature appeared: most disclosures were confined to patents or were kept secret due to wartime needs.
After World War II, emulsion polymerization was extended to production of plastics. Manufacture of dispersions to be used in latex paints and other products sold as liquid dispersions commenced. Ever more sophisticated processes were devised to prepare products that replaced solvent-based materials. Ironically, synthetic rubber manufacture turned more and more away from emulsion polymerization as new organometallic catalysts were developed that allowed much better control of polymer architecture.
Theoretical overview
The first successful theory to explain the distinct features of emulsion polymerization was developed by Smith and Ewart, and Harkins in the 1940s, based on their studies of polystyrene. Smith and Ewart arbitrarily divided the mechanism of emulsion polymerization into three stages or intervals. Subsequently, it has been recognized that not all monomers or systems undergo these particular three intervals. Nevertheless, the Smith-Ewart description is a useful starting point to analyze emulsion polymerizations.
The Smith-Ewart-Harkins theory for the mechanism of free-radical emulsion polymerization is summarized by the following steps:
A monomer is dispersed or emulsified in a solution of surfactant and water, forming relatively large droplets in water.
Excess surfactant creates micelles in the water.
Small amounts of monomer diffuse through the water to the micelle.
A water-soluble initiator is introduced into the water phase where it reacts with monomer in the micelles. (This characteristic differs from suspension polymerization where an oil-soluble initiator dissolves in the monomer, followed by polymer formation in the monomer droplets themselves.) This is considered Smith-Ewart interval 1.
The total surface area of the micelles is much greater than the total surface area of the fewer, larger monomer droplets; therefore the initiator typically reacts in the micelle and not the monomer droplet.
Monomer in the micelle quickly polymerizes and the growing chain terminates. At this point the monomer-swollen micelle has turned into a polymer particle. When both monomer droplets and polymer particles are present in the system, this is considered Smith-Ewart interval 2.
More monomer from the droplets diffuses to the growing particle, where more initiators will eventually react.
Eventually the free monomer droplets disappear and all remaining monomer is located in the particles. This is considered Smith-Ewart interval 3.
Depending on the particular product and monomer, additional monomer and initiator may be continuously and slowly added to maintain their levels in the system as the particles grow.
The final product is a dispersion of polymer particles in water. It can also be known as a polymer colloid, a latex, or commonly and inaccurately as an 'emulsion'.
Smith-Ewart theory does not predict the specific polymerization behavior when the monomer is somewhat water-soluble, like methyl methacrylate or vinyl acetate. In these cases homogeneous nucleation occurs: particles are formed without the presence or need for surfactant micelles.
High molecular weights are developed in emulsion polymerization because the concentration of growing chains within each polymer particle is very low. In conventional radical polymerization, the concentration of growing chains is higher, which leads to termination by coupling, which ultimately results in shorter polymer chains. The original Smith-Ewart-Harkins mechanism required each particle to contain either zero or one growing chain. Improved understanding of emulsion polymerization has relaxed that criterion to include more than one growing chain per particle; however, the number of growing chains per particle is still considered to be very low.
Because of the complex chemistry that occurs during an emulsion polymerization, including polymerization kinetics and particle formation kinetics, quantitative understanding of the mechanism of emulsion polymerization has required extensive computer simulation. Robert Gilbert has summarized a recent theory.
More detailed treatment of Smith-Ewart theory
Interval 1
When radicals generated in the aqueous phase encounter the monomer within the micelle, they initiate polymerization. The conversion of monomer to polymer within the micelle lowers the monomer concentration and generates a monomer concentration gradient. Consequently, the monomer from monomer droplets and uninitiated micelles begin to diffuse to the growing, polymer-containing, particles. Those micelles that did not encounter a radical during the earlier stage of conversion begin to disappear, losing their monomer and surfactant to the growing particles. The theory predicts that after the end of this interval, the number of growing polymer particles remains constant.
Interval 2
This interval is also known as the steady state reaction stage. Throughout this stage, monomer droplets act as reservoirs supplying monomer to the growing polymer particles by diffusion through the water. While at steady state, the average number of free radicals per particle can be divided into three cases. When the number of free radicals per particle is less than 1/2, this is called Case 1. When the number of free radicals per particle equals 1/2, this is called Case 2. And when the number of free radicals per particle is greater than 1/2, this is called Case 3. Smith-Ewart theory predicts that Case 2 is the predominant scenario for the following reasons. A monomer-swollen particle that has been struck by a radical contains one growing chain. Because only one radical (at the end of the growing polymer chain) is present, the chain cannot terminate, and it will continue to grow until a second initiator radical enters the particle. As the rate of termination is much greater than the rate of propagation, and because the polymer particles are extremely small, chain growth is terminated immediately after the entrance of the second initiator radical. The particle then remains dormant until a third initiator radical enters, initiating the growth of a second chain. Consequently, the polymer particles in this case either have zero radicals (dormant state), or 1 radical (polymer growing state) and a very short period of 2 radicals (terminating state) which can be ignored for the free radicals per particle calculation. At any given time, a micelle contains either one growing chain or no growing chains (assumed to be equally probable). Thus, on average, there is around 1/2 radical per particle, leading to the Case 2 scenario. The polymerization rate in this stage can be expressed by

R_p = k_p [M]_p [M·]

where k_p is the homogeneous propagation rate constant for polymerization within the particles and [M]_p is the equilibrium monomer concentration within a particle. [M·] represents the overall concentration of polymerizing radicals in the reaction. For Case 2, where the average number of free radicals per micelle is 1/2, [M·] can be calculated with the following expression:

[M·] = N / (2 N_A)

where N is the number concentration of micelles (number of micelles per unit volume), and N_A is the Avogadro constant (≈ 6.022 × 10²³ mol⁻¹). Consequently, the rate of polymerization is then

R_p = k_p [M]_p N / (2 N_A).
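As a numerical illustration of the Case 2 rate expression above, the following Python sketch evaluates R_p for placeholder values of k_p, [M]_p and N (the numbers are illustrative assumptions, not measured data).

# Smith-Ewart Case 2: R_p = k_p * [M]_p * (n_avg * N / N_A), with n_avg = 1/2.
AVOGADRO = 6.022e23                                   # per mole

def polymerization_rate(k_p, monomer_conc, particle_conc, n_avg=0.5):
    """Return R_p in mol L^-1 s^-1 for concentrations in mol/L and particles per litre."""
    radical_conc = n_avg * particle_conc / AVOGADRO   # mol/L of growing radicals
    return k_p * monomer_conc * radical_conc

# Placeholder inputs chosen only to show the orders of magnitude involved:
rate = polymerization_rate(k_p=250.0,                 # L mol^-1 s^-1
                           monomer_conc=5.0,          # mol/L of monomer inside the particles
                           particle_conc=1e17)        # particles per litre of water
print(f"R_p is roughly {rate:.1e} mol L^-1 s^-1")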
Interval 3
Separate monomer droplets disappear as the reaction continues. Polymer particles in this stage may be sufficiently large enough that they contain more than 1 radical per particle.
Process considerations
Emulsion polymerizations have been used in batch, semi-batch, and continuous processes. The choice depends on the properties desired in the final polymer or dispersion and on the economics of the product. Modern process control schemes have enabled the development of complex reaction processes, with ingredients such as initiator, monomer, and surfactant added at the beginning, during, or at the end of the reaction.
Early styrene-butadiene rubber (SBR) recipes are examples of true batch processes: all ingredients added at the same time to the reactor. Semi-batch recipes usually include a programmed feed of monomer to the reactor. This enables a starve-fed reaction to ensure a good distribution of monomers into the polymer backbone chain. Continuous processes have been used to manufacture various grades of synthetic rubber.
Some polymerizations are stopped before all the monomer has reacted. This minimizes chain transfer to polymer. In such cases the monomer must be removed or stripped from the dispersion.
Colloidal stability is a factor in design of an emulsion polymerization process. For dry or isolated products, the polymer dispersion must be isolated, or converted into solid form. This can be accomplished by simple heating of the dispersion until all water evaporates. More commonly, the dispersion is destabilized (sometimes called "broken") by addition of a multivalent cation. Alternatively, acidification will destabilize a dispersion with a carboxylic acid surfactant. These techniques may be employed in combination with application of shear to increase the rate of destabilization. After isolation of the polymer, it is usually washed, dried, and packaged.
By contrast, products sold as a dispersion are designed with a high degree of colloidal stability. Colloidal properties such as particle size, particle size distribution, and viscosity are of critical importance to the performance of these dispersions.
Living polymerization processes that are carried out via emulsion polymerization such as iodine-transfer polymerization and RAFT have been developed.
Controlled coagulation techniques can enable better control of the particle size and distribution.
Components
Monomers
Typical monomers are those that undergo radical polymerization, are liquid or gaseous at reaction conditions, and are poorly soluble in water. Solid monomers are difficult to disperse in water. If monomer solubility is too high, particle formation may not occur and the reaction kinetics reduce to that of solution polymerization.
Ethene and other simple olefins must be polymerized at very high pressures (up to 800 bar).
Comonomers
Copolymerization is common in emulsion polymerization. The same rules and comonomer pairs that exist in radical polymerization operate in emulsion polymerization. However, copolymerization kinetics are greatly influenced by the aqueous solubility of the monomers. Monomers with greater aqueous solubility will tend to partition in the aqueous phase and not in the polymer particle. They will not get incorporated as readily in the polymer chain as monomers with lower aqueous solubility. This can be avoided by a programmed addition of monomer using a semi-batch process.
Ethene and other alkenes are used as minor comonomers in emulsion polymerization, notably in vinyl acetate copolymers.
Small amounts of acrylic acid or other ionizable monomers are sometimes used to confer colloidal stability to a dispersion.
Initiators
Both thermal and redox generation of free radicals have been used in emulsion polymerization. Persulfate salts are commonly used in both initiation modes. The persulfate ion readily breaks up into sulfate radical ions above about 50 °C, providing a thermal source of initiation. Redox initiation takes place when an oxidant such as a persulfate salt, a reducing agent such as glucose, Rongalite, or sulfite, and a redox catalyst such as an iron compound are all included in the polymerization recipe. Redox recipes are not limited by temperature and are used for polymerizations that take place below 50 °C.
Although organic peroxides and hydroperoxides are used in emulsion polymerization, initiators are usually water soluble and partition into the water phase. This enables the particle generation behavior described in the theory section. In redox initiation, either the oxidant or the reducing agent (or both) must be water-soluble, but one component can be water-insoluble.
Surfactants
Selection of the correct surfactant is critical to the development of any emulsion polymerization process. The surfactant must enable a fast rate of polymerization, minimize coagulum or fouling in the reactor and other process equipment, prevent an unacceptably high viscosity during polymerization (which leads to poor heat transfer), and maintain or even improve properties in the final product such as tensile strength, gloss, and water absorption.
Anionic, nonionic, and cationic surfactants have been used, although anionic surfactants are by far most prevalent. Surfactants with a low critical micelle concentration (CMC) are favored; the polymerization rate shows a dramatic increase when the surfactant level is above the CMC, and minimization of the surfactant is preferred for economic reasons and the (usually) adverse effect of surfactant on the physical properties of the resulting polymer. Mixtures of surfactants are often used, including mixtures of anionic with nonionic surfactants. Mixtures of cationic and anionic surfactants form insoluble salts and are not useful.
Examples of surfactants commonly used in emulsion polymerization include fatty acids, sodium lauryl sulfate, and alpha-olefin sulfonate.
Non-surfactant stabilizers
Some grades of polyvinyl alcohol and other water-soluble polymers can promote emulsion polymerization even though they do not typically form micelles and do not act as surfactants (for example, they do not lower surface tension). It is believed that growing polymer chains graft onto these water-soluble polymers, which stabilize the resulting particles.
Dispersions prepared with such stabilizers typically exhibit excellent colloidal stability (for example, dry powders may be mixed into the dispersion without causing coagulation). However, they often result in products that are very water sensitive due to the presence of the water-soluble polymer.
Other ingredients
Other ingredients found in emulsion polymerization include chain transfer agents, buffering agents, and inert salts. Preservatives are added to products sold as liquid dispersions to retard bacterial growth. These are usually added after polymerization, however.
Applications
Polymers produced by emulsion polymerization can roughly be divided into three categories.
Synthetic rubber
Some grades of styrene-butadiene (SBR)
Some grades of Polybutadiene
Polychloroprene (Neoprene)
Nitrile rubber
Acrylic rubber
Fluoroelastomer (FKM)
Plastics
Some grades of PVC
Some grades of polystyrene
Some grades of PMMA
Acrylonitrile-butadiene-styrene terpolymer (ABS)
Polyvinylidene fluoride
Polyvinyl fluoride
PTFE
Dispersions (i.e. polymers sold as aqueous dispersions)
polyvinyl acetate
polyvinyl acetate copolymers
polyacrylates
Styrene-butadiene
VAE (vinyl acetate – ethylene copolymers)
See also
International Union of Pure and Applied Chemistry
Radical polymerization
RAFT (chemistry)
Robert Gilbert
Dispersion polymerization
Ray P. Dinsmore
References
Chemical processes
Polymerization reactions
fr:Procédé de polymérisation#Polymérisation en émulsion | Emulsion polymerization | [
"Chemistry",
"Materials_science"
] | 3,929 | [
"Chemical processes",
"nan",
"Polymer chemistry",
"Chemical process engineering",
"Polymerization reactions"
] |
317,018 | https://en.wikipedia.org/wiki/Quantization%20%28signal%20processing%29 | Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error, noise or distortion. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer.
Example
For example, rounding a real number x to the nearest integer value forms a very basic type of quantizer – a uniform one. A typical (mid-tread) uniform quantizer with a quantization step size equal to some value Δ can be expressed as

Q(x) = Δ ⋅ ⌊x/Δ + 1/2⌋,

where the notation ⌊ ⌋ denotes the floor function.

Alternatively, the same quantizer may be expressed in terms of the ceiling function, as

Q(x) = Δ ⋅ ⌈x/Δ − 1/2⌉.

(The notation ⌈ ⌉ denotes the ceiling function).

The essential property of a quantizer is having a countable set of possible output values smaller than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size Δ is equal to 1. With Δ = 1 or with Δ equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs.
When the quantization step size (Δ) is small relative to the variation in the signal being quantized, it is relatively simple to show that the mean squared error produced by such a rounding operation will be approximately Δ²/12. Mean squared error is also called the quantization noise power. Adding one bit to the quantizer halves the value of Δ, which reduces the noise power by the factor 1/4. In terms of decibels, the noise power change is 10 ⋅ log₁₀(1/4) ≈ −6.02 dB.
Because the set of possible output values of a quantizer is countable, any quantizer can be decomposed into two distinct stages, which can be referred to as the classification stage (or forward quantization stage) and the reconstruction stage (or inverse quantization stage), where the classification stage maps the input value to an integer quantization index k and the reconstruction stage maps the index k to the reconstruction value y_k that is the output approximation of the input value. For the example uniform quantizer described above, the forward quantization stage can be expressed as

k = ⌊x/Δ + 1/2⌋,

and the reconstruction stage for this example quantizer is simply

y_k = k ⋅ Δ.
This decomposition is useful for the design and analysis of quantization behavior, and it illustrates how the quantized data can be communicated over a communication channel – a source encoder can perform the forward quantization stage and send the index information through a communication channel, and a decoder can perform the reconstruction stage to produce the output approximation of the original input data. In general, the forward quantization stage may use any function that maps the input data to the integer space of the quantization index data, and the inverse quantization stage can conceptually (or literally) be a table look-up operation to map each quantization index to a corresponding reconstruction value. This two-stage decomposition applies equally well to vector as well as scalar quantizers.
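A minimal Python sketch of this two-stage view, assuming the mid-tread rule given above (the function names are illustrative):

import math

def classify(x, step):
    """Forward quantization: map an input value to an integer index k."""
    return math.floor(x / step + 0.5)

def reconstruct(k, step):
    """Inverse quantization: map the index k back to the output value k * step."""
    return k * step

step = 0.25
for x in (0.11, -0.37, 0.9):
    k = classify(x, step)
    y = reconstruct(k, step)
    print(f"x = {x:+.2f}  index = {k:+d}  Q(x) = {y:+.2f}  error = {x - y:+.3f}")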
Mathematical properties
Because quantization is a many-to-few mapping, it is an inherently non-linear and irreversible process (i.e., because the same output value is shared by multiple input values, it is impossible, in general, to recover the exact input value when given only the output value).
The set of possible input values may be infinitely large, and may possibly be continuous and therefore uncountable (such as the set of all real numbers, or all real numbers within some limited range). The set of possible output values may be finite or countably infinite. The input and output sets involved in quantization can be defined in a rather general way. For example, vector quantization is the application of quantization to multi-dimensional (vector-valued) input data.
Types
Analog-to-digital converter
An analog-to-digital converter (ADC) can be modeled as two processes: sampling and quantization. Sampling converts a time-varying voltage signal into a discrete-time signal, a sequence of real numbers. Quantization replaces each real number with an approximation from a finite set of discrete values. Most commonly, these discrete values are represented as fixed-point words. Though any number of quantization levels is possible, common word lengths are 8-bit (256 levels), 16-bit (65,536 levels) and 24-bit (16.8 million levels). Quantizing a sequence of numbers produces a sequence of quantization errors which is sometimes modeled as an additive random signal called quantization noise because of its stochastic behavior. The more levels a quantizer uses, the lower is its quantization noise power.
Rate–distortion optimization
Rate–distortion optimized quantization is encountered in source coding for lossy data compression algorithms, where the purpose is to manage distortion within the limits of the bit rate supported by a communication channel or storage medium. The analysis of quantization in this context involves studying the amount of data (typically measured in digits or bits or bit rate) that is used to represent the output of the quantizer and studying the loss of precision that is introduced by the quantization process (which is referred to as the distortion).
Mid-riser and mid-tread uniform quantizers
Most uniform quantizers for signed input data can be classified as being of one of two types: mid-riser and mid-tread. The terminology is based on what happens in the region around the value 0, and uses the analogy of viewing the input-output function of the quantizer as a stairway. Mid-tread quantizers have a zero-valued reconstruction level (corresponding to a tread of a stairway), while mid-riser quantizers have a zero-valued classification threshold (corresponding to a riser of a stairway).
Mid-tread quantization involves rounding. The formulas for mid-tread uniform quantization are provided in the previous section:

Q(x) = Δ ⋅ ⌊x/Δ + 1/2⌋.
Mid-riser quantization involves truncation. The input–output formula for a mid-riser uniform quantizer is given by:

Q(x) = Δ ⋅ (⌊x/Δ⌋ + 1/2),

where the classification rule is given by

k = ⌊x/Δ⌋

and the reconstruction rule is

y_k = Δ ⋅ (k + 1/2).
Note that mid-riser uniform quantizers do not have a zero output value – their minimum output magnitude is half the step size. In contrast, mid-tread quantizers do have a zero output level. For some applications, having a zero output signal representation may be a necessity.
In general, a mid-riser or mid-tread quantizer may not actually be a uniform quantizer – i.e., the size of the quantizer's classification intervals may not all be the same, or the spacing between its possible output values may not all be the same. The distinguishing characteristic of a mid-riser quantizer is that it has a classification threshold value that is exactly zero, and the distinguishing characteristic of a mid-tread quantizer is that it has a reconstruction value that is exactly zero.
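The behaviour near zero is easy to see numerically; the following sketch (illustrative only) evaluates both uniform variants:

import math

def mid_tread(x, delta):
    return delta * math.floor(x / delta + 0.5)    # has a zero output level

def mid_riser(x, delta):
    return delta * (math.floor(x / delta) + 0.5)  # smallest output magnitude is delta/2

delta = 1.0
for x in (-0.7, -0.2, 0.0, 0.2, 0.7):
    print(f"x = {x:+.1f}  mid-tread -> {mid_tread(x, delta):+.1f}  "
          f"mid-riser -> {mid_riser(x, delta):+.1f}")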
Dead-zone quantizers
A dead-zone quantizer is a type of mid-tread quantizer with symmetric behavior around 0. The region around the zero output value of such a quantizer is referred to as the dead zone or deadband. The dead zone can sometimes serve the same purpose as a noise gate or squelch function. Especially for compression applications, the dead-zone may be given a different width than that for the other steps. For an otherwise-uniform quantizer, the dead-zone width can be set to any value by using the forward quantization rule
k = sgn(x) ⋅ max(0, ⌊(|x| − w/2)/Δ + 1⌋),

where sgn( ) is the sign function (also known as the signum function) and w is the width of the dead zone. The general reconstruction rule for such a dead-zone quantizer is given by

y_k = sgn(k) ⋅ (w/2 + Δ ⋅ (|k| − 1 + r)) for k ≠ 0, and y_0 = 0,

where r is a reconstruction offset value in the range of 0 to 1 as a fraction of the step size. Ordinarily, r lies between 0 and 1/2 when quantizing input data with a typical probability density function (PDF) that is symmetric around zero and reaches its peak value at zero (such as a Gaussian, Laplacian, or generalized Gaussian PDF). Although r may depend on k in general and can be chosen to fulfill the optimality condition described below, it is often simply set to a constant, such as 1/2. (Note that in this definition, y_0 = 0 due to the definition of the sgn( ) function, so r has no effect on the index k = 0.)

A very commonly used special case (e.g., the scheme typically used in financial accounting and elementary mathematics) is to set w = Δ and r = 1/2 for all k. In this case, the dead-zone quantizer is also a uniform quantizer, since the central dead-zone of this quantizer has the same width as all of its other steps, and all of its reconstruction values are equally spaced as well.
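A small Python sketch following the dead-zone rules as reconstructed above (the exact formulas and parameter names are assumptions of this edit, not a definitive specification):

import math

def dz_classify(x, delta, w):
    """Forward rule: index 0 for inputs inside the dead zone of width w."""
    if x == 0:
        return 0
    sign = 1 if x > 0 else -1
    return sign * max(0, math.floor((abs(x) - w / 2) / delta + 1))

def dz_reconstruct(k, delta, w, r=0.5):
    """Reconstruction with offset r (a fraction of the step size); y_0 = 0."""
    if k == 0:
        return 0.0
    return math.copysign(w / 2 + delta * (abs(k) - 1 + r), k)

delta, w = 1.0, 2.0     # a dead zone twice as wide as the other steps
for x in (-2.4, -0.7, 0.0, 0.7, 1.3, 2.4):
    k = dz_classify(x, delta, w)
    print(f"x = {x:+.1f}  index = {k:+d}  output = {dz_reconstruct(k, delta, w):+.2f}")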
Noise and error characteristics
Additive noise model
A common assumption for the analysis of quantization error is that it affects a signal processing system in a similar manner to that of additive white noise – having negligible correlation with the signal and an approximately flat power spectral density. The additive noise model is commonly used for the analysis of quantization error effects in digital filtering systems, and it can be very useful in such analysis. It has been shown to be a valid model in cases of high-resolution quantization (Δ small relative to the signal strength) with smooth PDFs.
Additive noise behavior is not always a valid assumption. Quantization error (for quantizers defined as described here) is deterministically related to the signal and not entirely independent of it. Thus, periodic signals can create periodic quantization noise. And in some cases, it can even cause limit cycles to appear in digital signal processing systems. One way to ensure effective independence of the quantization error from the source signal is to perform dithered quantization (sometimes with noise shaping), which involves adding random (or pseudo-random) noise to the signal prior to quantization.
Quantization error models
In the typical case, the original signal is much larger than one least significant bit (LSB). When this is the case, the quantization error is not significantly correlated with the signal and has an approximately uniform distribution. When rounding is used to quantize, the quantization error has a mean of zero and the root mean square (RMS) value is the standard deviation of this distribution, given by Δ/√12 ≈ 0.289 Δ. When truncation is used, the error has a non-zero mean of Δ/2 and the RMS value is Δ/√3. Although rounding yields less RMS error than truncation, the difference is only due to the static (DC) term of Δ/2. The RMS values of the AC error are exactly the same in both cases, so there is no special advantage of rounding over truncation in situations where the DC term of the error can be ignored (such as in AC-coupled systems). In either case, the standard deviation, as a percentage of the full signal range, changes by a factor of 2 for each 1-bit change in the number of quantization bits. The potential signal-to-quantization-noise power ratio therefore changes by 4, or 10 ⋅ log₁₀(4) ≈ 6.02 dB, approximately 6 dB per bit.
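These statistics can be checked by simulation; the sketch below (with an arbitrary random test signal) estimates the mean and RMS of the error x − Q(x) for rounding and for truncation:

import math
import random

delta = 0.1
samples = [random.uniform(-100, 100) for _ in range(200_000)]

round_err = [x - delta * math.floor(x / delta + 0.5) for x in samples]
trunc_err = [x - delta * math.floor(x / delta) for x in samples]

def mean(errors):
    return sum(errors) / len(errors)

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

print("rounding:   mean %.4f  RMS %.4f  (expected 0 and %.4f)"
      % (mean(round_err), rms(round_err), delta / math.sqrt(12)))
print("truncation: mean %.4f  RMS %.4f  (expected %.4f and %.4f)"
      % (mean(trunc_err), rms(trunc_err), delta / 2, delta / math.sqrt(3)))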
At lower amplitudes the quantization error becomes dependent on the input signal, resulting in distortion. This distortion is created after the anti-aliasing filter, and if these distortions are above 1/2 the sample rate they will alias back into the band of interest. In order to make the quantization error independent of the input signal, the signal is dithered by adding noise to the signal. This slightly reduces signal-to-noise ratio, but can completely eliminate the distortion.
Quantization noise model
Quantization noise is a model of quantization error introduced by quantization in the ADC. It is a rounding error between the analog input voltage to the ADC and the output digitized value. The noise is non-linear and signal-dependent. It can be modeled in several different ways.
In an ideal ADC, where the quantization error is uniformly distributed between −1/2 LSB and +1/2 LSB, and the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio (SQNR) can be calculated from

SQNR = 20 ⋅ log₁₀(2^Q) ≈ 6.02 ⋅ Q dB,

where Q is the number of quantization bits.
The most common test signals that fulfill this are full amplitude triangle waves and sawtooth waves.
For example, a 16-bit ADC has a maximum signal-to-quantization-noise ratio of 6.02 × 16 = 96.3 dB.
When the input signal is a full-amplitude sine wave, the distribution of the signal is no longer uniform, and the corresponding equation is instead

SQNR ≈ 6.02 ⋅ Q + 1.761 dB.
Here, the quantization noise is once again assumed to be uniformly distributed. When the input signal has a high amplitude and a wide frequency spectrum this is the case. In this case a 16-bit ADC has a maximum signal-to-noise ratio of 98.09 dB. The 1.761 difference in signal-to-noise only occurs due to the signal being a full-scale sine wave instead of a triangle or sawtooth.
For complex signals in high-resolution ADCs this is an accurate model. For low-resolution ADCs, low-level signals in high-resolution ADCs, and for simple waveforms the quantization noise is not uniformly distributed, making this model inaccurate. In these cases the quantization noise distribution is strongly affected by the exact amplitude of the signal.
The calculations are relative to full-scale input. For smaller signals, the relative quantization distortion can be very large. To circumvent this issue, analog companding can be used, but this can introduce distortion.
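A short calculation sketch of the two rules of thumb above (the function names are illustrative):

import math

def sqnr_uniform_signal(bits):
    """Full-amplitude triangle/sawtooth input: 20*log10(2**bits), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

def sqnr_sine_signal(bits):
    """Full-amplitude sine input: about 1.76 dB higher than the uniform-signal case."""
    return 6.02 * bits + 1.761

for q in (8, 12, 16, 24):
    print(f"{q:2d} bits: {sqnr_uniform_signal(q):6.2f} dB (uniform)   "
          f"{sqnr_sine_signal(q):6.2f} dB (sine)")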
Design
Granular distortion and overload distortion
Often the design of a quantizer involves supporting only a limited range of possible output values and performing clipping to limit the output to this range whenever the input exceeds the supported range. The error introduced by this clipping is referred to as overload distortion. Within the extreme limits of the supported range, the amount of spacing between the selectable output values of a quantizer is referred to as its granularity, and the error introduced by this spacing is referred to as granular distortion. It is common for the design of a quantizer to involve determining the proper balance between granular distortion and overload distortion. For a given supported number of possible output values, reducing the average granular distortion may involve increasing the average overload distortion, and vice versa. A technique for controlling the amplitude of the signal (or, equivalently, the quantization step size ) to achieve the appropriate balance is the use of automatic gain control (AGC). However, in some quantizer designs, the concepts of granular error and overload error may not apply (e.g., for a quantizer with a limited range of input data or with a countably infinite set of selectable output values).
Rate–distortion quantizer design
A scalar quantizer, which performs a quantization operation, can ordinarily be decomposed into two stages:
Classification
A process that classifies the input signal range into M non-overlapping intervals {I_k}, by defining M − 1 decision boundary values {b_k}, such that I_k = [b_{k−1}, b_k) for k = 1, 2, …, M, with the extreme limits defined by b_0 = −∞ and b_M = ∞. All the inputs x that fall in a given interval range I_k are associated with the same quantization index k.
Reconstruction
Each interval I_k is represented by a reconstruction value y_k, which implements the mapping x ∈ I_k ⇒ y = y_k.
These two stages together comprise the mathematical operation of y = Q(x).
Entropy coding techniques can be applied to communicate the quantization indices from a source encoder that performs the classification stage to a decoder that performs the reconstruction stage. One way to do this is to associate each quantization index k with a binary codeword c_k. An important consideration is the number of bits used for each codeword, denoted here by length(c_k). As a result, the design of an M-level quantizer and an associated set of codewords for communicating its index values requires finding the values of {b_k}, {c_k} and {y_k} which optimally satisfy a selected set of design constraints such as the bit rate R and distortion D.
Assuming that an information source produces random variables X with an associated PDF f(x), the probability p_k that the random variable falls within a particular quantization interval I_k is given by:

p_k = P[x ∈ I_k] = ∫_{b_{k−1}}^{b_k} f(x) dx.

The resulting bit rate R, in units of average bits per quantized value, for this quantizer can be derived as follows:

R = Σ_{k=1}^{M} p_k ⋅ length(c_k).

If it is assumed that distortion is measured by mean squared error, the distortion D is given by:

D = E[(x − Q(x))²] = Σ_{k=1}^{M} ∫_{b_{k−1}}^{b_k} (x − y_k)² f(x) dx.
A key observation is that the rate R depends on the decision boundaries {b_k} and the codeword lengths {length(c_k)}, whereas the distortion D depends on the decision boundaries {b_k} and the reconstruction levels {y_k}.
After defining these two performance metrics for the quantizer, a typical rate–distortion formulation for a quantizer design problem can be expressed in one of two ways:
Given a maximum distortion constraint D ≤ D_max, minimize the bit rate R
Given a maximum bit rate constraint R ≤ R_max, minimize the distortion D
Often the solution to these problems can be equivalently (or approximately) expressed and solved by converting the formulation to the unconstrained problem min(D + λ ⋅ R), where the Lagrange multiplier λ is a non-negative constant that establishes the appropriate balance between rate and distortion. Solving the unconstrained problem is equivalent to finding a point on the convex hull of the family of solutions to an equivalent constrained formulation of the problem. However, finding a solution – especially a closed-form solution – to any of these three problem formulations can be difficult. Solutions that do not require multi-dimensional iterative optimization techniques have been published for only three PDFs: the uniform, exponential, and Laplacian distributions. Iterative optimization approaches can be used to find solutions in other cases.
Note that the reconstruction values {y_k} affect only the distortion – they do not affect the bit rate – and that each individual y_k makes a separate contribution d_k to the total distortion D, as shown below:

D = Σ_{k=1}^{M} d_k

where

d_k = ∫_{b_{k−1}}^{b_k} (x − y_k)² f(x) dx

This observation can be used to ease the analysis – given the set of {b_k} values, the value of each y_k can be optimized separately to minimize its contribution to the distortion D.
For the mean-square error distortion criterion, it can be easily shown that the optimal set of reconstruction values is given by setting the reconstruction value y_k within each interval to the conditional expected value (also referred to as the centroid) within the interval, as given by:

y_k* = E[x | x ∈ I_k] = (1/p_k) ∫_{b_{k−1}}^{b_k} x f(x) dx.
The use of sufficiently well-designed entropy coding techniques can result in the use of a bit rate that is close to the true information content of the indices {k}, such that effectively

length(c_k) ≈ −log₂(p_k)

and therefore

R ≈ Σ_{k=1}^{M} p_k ⋅ (−log₂(p_k)).
The use of this approximation can allow the entropy coding design problem to be separated from the design of the quantizer itself. Modern entropy coding techniques such as arithmetic coding can achieve bit rates that are very close to the true entropy of a source, given a set of known (or adaptively estimated) probabilities {p_k}.
In some designs, rather than optimizing for a particular number of classification regions M, the quantizer design problem may include optimization of the value of M as well. For some probabilistic source models, the best performance may be achieved when M approaches infinity.
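The quantities defined above can be evaluated numerically; the sketch below (assuming a standard normal source, an arbitrary step size, and simple midpoint integration) computes p_k, an entropy-based estimate of R, and D for a uniform quantizer with centroid reconstruction:

import math

def pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def integrate(f, a, b, n=2000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h   # midpoint rule

def clip(v, limit=8.0):
    return max(-limit, min(limit, v))

delta = 0.5
boundaries = [-math.inf] + [delta * (k + 0.5) for k in range(-8, 8)] + [math.inf]

rate = 0.0
distortion = 0.0
for lo, hi in zip(boundaries[:-1], boundaries[1:]):
    a, b = clip(lo), clip(hi)             # clip the infinite tails for numerical integration
    p_k = integrate(pdf, a, b)
    if p_k <= 0:
        continue
    y_k = integrate(lambda x: x * pdf(x), a, b) / p_k          # centroid reconstruction value
    rate += p_k * (-math.log2(p_k))                            # entropy-coded bit-rate estimate
    distortion += integrate(lambda x: (x - y_k) ** 2 * pdf(x), a, b)

print(f"R is about {rate:.3f} bits/sample, D is about {distortion:.5f}"
      f" (delta^2/12 = {delta ** 2 / 12:.5f})")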
Neglecting the entropy constraint: Lloyd–Max quantization
In the above formulation, if the bit rate constraint is neglected by setting the Lagrange multiplier λ equal to 0, or equivalently if it is assumed that a fixed-length code (FLC) will be used to represent the quantized data instead of a variable-length code (or some other entropy coding technology such as arithmetic coding that is better than an FLC in the rate–distortion sense), the optimization problem reduces to minimization of distortion alone.
The indices produced by an M-level quantizer can be coded using a fixed-length code using R = ⌈log₂(M)⌉ bits/symbol. For example, when M = 256 levels, the FLC bit rate R is 8 bits/symbol. For this reason, such a quantizer has sometimes been called an 8-bit quantizer. However using an FLC eliminates the compression improvement that can be obtained by use of better entropy coding.
Assuming an FLC with M levels, the rate–distortion minimization problem can be reduced to distortion minimization alone. The reduced problem can be stated as follows: given a source X with PDF f(x) and the constraint that the quantizer must use only M classification regions, find the decision boundaries {b_k} and reconstruction levels {y_k} to minimize the resulting distortion

D = Σ_{k=1}^{M} ∫_{b_{k−1}}^{b_k} (x − y_k)² f(x) dx.
Finding an optimal solution to the above problem results in a quantizer sometimes called a MMSQE (minimum mean-square quantization error) solution, and the resulting PDF-optimized (non-uniform) quantizer is referred to as a Lloyd–Max quantizer, named after two people who independently developed iterative methods to solve the two sets of simultaneous equations resulting from ∂D/∂b_k = 0 and ∂D/∂y_k = 0, as follows:

b_k = (y_k + y_{k+1}) / 2,

which places each threshold at the midpoint between each pair of reconstruction values, and

y_k = E[x | x ∈ I_k] = (1/p_k) ∫_{b_{k−1}}^{b_k} x f(x) dx

which places each reconstruction value at the centroid (conditional expected value) of its associated classification interval.
Lloyd's Method I algorithm, originally described in 1957, can be generalized in a straightforward way for application to vector data. This generalization results in the Linde–Buzo–Gray (LBG) or k-means classifier optimization methods. Moreover, the technique can be further generalized in a straightforward way to also include an entropy constraint for vector data.
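A rough Python sketch of the resulting iteration, assuming a standard normal source and replacing the exact integrals with a dense sample grid (an illustrative approximation, not an optimized implementation):

import math

def pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def lloyd_max(levels, iterations=200, lo=-6.0, hi=6.0, grid=4000):
    xs = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    ws = [pdf(x) for x in xs]
    y = [lo + (hi - lo) * (k + 0.5) / levels for k in range(levels)]  # initial levels
    for _ in range(iterations):
        b = [(y[k] + y[k + 1]) / 2 for k in range(levels - 1)]  # thresholds at midpoints
        num = [0.0] * levels
        den = [0.0] * levels
        for x, w in zip(xs, ws):
            k = sum(1 for t in b if x > t)                      # classification index
            num[k] += w * x
            den[k] += w
        y = [num[k] / den[k] if den[k] else y[k] for k in range(levels)]  # centroids
    return y

# For a unit Gaussian and 4 levels this converges to roughly +/-0.45 and +/-1.51.
print([round(v, 3) for v in lloyd_max(4)])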
Uniform quantization and the 6 dB/bit approximation
The Lloyd–Max quantizer is actually a uniform quantizer when the input PDF is uniformly distributed over the quantizer's supported input range. However, for a source that does not have a uniform distribution, the minimum-distortion quantizer may not be a uniform quantizer. The analysis of a uniform quantizer applied to a uniformly distributed source can be summarized in what follows:

A symmetric source X can be modelled with f(x) = 1/(2 X_max) for −X_max ≤ x ≤ X_max, and 0 elsewhere.

The step size is Δ = 2 X_max / M and the signal-to-quantization-noise ratio (SQNR) of the quantizer is

SQNR = 10 ⋅ log₁₀(σ_x² / σ_q²) = 10 ⋅ log₁₀(M²) = 20 ⋅ log₁₀(M).

For a fixed-length code using N bits, M = 2^N, resulting in

SQNR = 20 ⋅ log₁₀(2^N) = N ⋅ (20 ⋅ log₁₀ 2) ≈ 6.02 ⋅ N dB,

or approximately 6 dB per bit. For example, for N = 8 bits, M = 256 levels and SQNR = 8 × 6 = 48 dB; and for N = 16 bits, M = 65536 and SQNR = 16 × 6 = 96 dB. The property of 6 dB improvement in SQNR for each extra bit used in quantization is a well-known figure of merit. However, it must be used with care: this derivation is only for a uniform quantizer applied to a uniform source. For other source PDFs and other quantizer designs, the SQNR may be somewhat different from that predicted by 6 dB/bit, depending on the type of PDF, the type of source, the type of quantizer, and the bit rate range of operation.
However, it is common to assume that for many sources, the slope of a quantizer SQNR function can be approximated as 6 dB/bit when operating at a sufficiently high bit rate. At asymptotically high bit rates, cutting the step size in half increases the bit rate by approximately 1 bit per sample (because 1 bit is needed to indicate whether the value is in the left or right half of the prior double-sized interval) and reduces the mean squared error by a factor of 4 (i.e., 6 dB) based on the approximation.
At asymptotically high bit rates, the 6 dB/bit approximation is supported for many source PDFs by rigorous theoretical analysis. Moreover, the structure of the optimal scalar quantizer (in the rate–distortion sense) approaches that of a uniform quantizer under these conditions.
In other fields
Many physical quantities are actually quantized by physical entities. Examples of fields where this limitation applies include electronics (due to electrons), optics (due to photons), biology (due to DNA), physics (due to Planck limits) and chemistry (due to molecules).
See also
Beta encoder
Color quantization
Data binning
Discretization
Discretization error
Posterization
Pulse-code modulation
Quantile
Quantization (image processing)
Regression dilution – a bias in parameter estimates caused by errors such as quantization in the explanatory or independent variable
Notes
References
Further reading
See also
Least count
Digital signal processing
Computer graphic artifacts
Digital audio
Noise (electronics)
Signal processing
Telecommunication theory
Data compression | Quantization (signal processing) | [
"Technology",
"Engineering"
] | 5,104 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
317,034 | https://en.wikipedia.org/wiki/Roommate | A roommate is a person with whom one shares a living facility such as a room or dormitory except when being family or romantically involved. Similar terms include dorm-mate, suite-mate, housemate, or flatmate ("flat": the usual term in British English for an apartment). Flatmate is the term most commonly used in New Zealand, when referring to the rental of an unshared room within any type of dwelling. Another similar term is sharemate (shared living spaces are often called sharehouses in Australia and other Commonwealth countries). A sharehome is a model of household in which a group of usually unrelated people reside together, including lease-by-room arrangements. The term generally applies to people living together in rental properties rather than in properties in which any resident is an owner occupier. In the United Kingdom, the term "roommate" means a person sharing the same bedroom; in the United States and Canada, "roommate" and "housemate" are used interchangeably regardless of whether a bedroom is shared, although at US universities having a roommate commonly implies sharing a bedroom. This article uses the term "roommate" in the US sense of a person one shares a residence with who is not a relative or significant other. The informal term for roommate is roomie, which is commonly used by university students and members of the younger generation.
The most common reason for sharing housing is to reduce the cost of housing. In many rental markets, the monthly rent for a two- or three-bedroom apartment is proportionately less per bedroom than the rent for a one-bedroom apartment (in other words, a three-bedroom flat costs more than a one-bedroom, but not three times as much). By pooling their monthly housing money, a group of people can achieve a lower housing expense at the cost of less privacy. Other motivations are to gain better amenities than those available in single-person housing, to share the work of maintaining a household, and to have the companionship of other people.
People become roommates when they move into a rental property, with one or more of them having applied to rent the property through a real estate agent, being accepted and having signed a lease.
Demographics
Housemates and roommates are typically unmarried young adults, including workers and students. It is not rare for middle-aged and elderly adults who are single, divorced, or widowed to have housemates. Married couples, however, typically discontinue living with roommates, especially when they have children.
Those moving to another city or another country may decide to look for a shared house or apartment to avoid loneliness. Social changes such as the declining affordability of home ownership and decreasing marriage rates are reasons why people may choose to live with roommates. Despite this rise, shared housing is little researched.
Roommates are a fairly common point of reference in Western culture. In the United States, most young adults spend at least a short part of their lives living with roommates after they leave their family's home. Very often this involves moving out of the home and to college, where the primary option for living is with a roommate. Therefore, many novels, movies, plays, and television programs employ roommates as a basic principle or a plot device (such as the popular series Friends or The Big Bang Theory). Sharing a house or a flat is also very common in European countries such as France (French colocation, corenting) or Germany (German WG for Wohngemeinschaft, living [together] community). Many websites are specialized in finding a flatmate. On the other hand, it is less common for people of any age to live with roommates in some countries, such as Japan, where single-person one-room apartments are plentiful.
There are many different forms of flat shares also, from the more established flat shares where the flatmate will get their own room that is furnished to "couch surfing" where people lend their sofa for a short period.
Sharehome residents are typically unrelated to each other in that they generally come from different families, although they may be composed of some siblings and sometimes single parents and their children. Perhaps because of the social cohesion required for their formation, sharehomes will often be composed of members of the same peer group. For example, university students who have relocated to a new area to commence a course of study often need to form a sharehome. Share housing often occurs in the 18–35 age bracket—during a life stage between leaving home and having children. Sharehome residents may have pre-existing friendships or other interpersonal relationships or they may form new relationships whilst living together.
Many universities in the United States require first-year students to live in on-campus residence halls, sharing a dormitory room with a same-sex roommate.
Popularity
According to the American Community Survey, 7.7% of Americans lived with a roommate in 2014. From 2000 to 2014, the proportion of Americans living with roommates increased by 13%, revealing that it is an increasingly common housing situation.
The change in the cost of housing makes the consideration of roommates more attractive. As housing costs rise, so too does the rate of living as roommates. When prices drop, the opposite can be expected. This has been seen extensively in cities such as Washington D.C., Phoenix, and San Diego.
Student exchanges have become increasingly popular with globalization and have contributed substantially to the growth in shared housing. The Erasmus program, the biggest exchange program in Europe, has been a major contributor. Exchange students can live in university residences, but a growing number prefer to share apartments with other international students.
Roommates and house-sharing are not limited to students and young adults however. American politicians Chuck Schumer, William Delahunt, Richard Durbin, and George Miller famously share a house in Washington, D.C., while Congress is in session.
In Indian universities and colleges it is quite common for students to share a room with one or two others. Students in master's or doctoral programs are usually allocated rooms of their own.
Sharing an apartment is quite popular with young adults (most of them university students) in countries like Germany, Austria and Switzerland, while sharing a bedroom is less common.
Cities with most roommates in the United States
The following table lists the top 25 US cities by proportion of people who live with roommates according to the US Census 2016 and a 2017 Zillow housing trends report.
Challenges
One difficulty is finding suitable roommates. Living with a roommate can mean much less privacy than having a residence of one's own, and for some people this can cause a lot of stress. Another thing to consider when choosing a roommate is how to divide the cost of living: who pays for what, and how shared expenses are divided among the two or more roommates. Also, the potential roommate should be trusted to pay their share and trusted to pay it on time. Sleeping patterns can also be disrupted when living with a number of people. Some of the challenges that come with share housing may include advertising for, interviewing and choosing potential housemates; sharing communal household goods, rent (often this may be determined by the size or position of respective bedrooms); sharing household bills and grocery costs; and sharing housework, cleaning, and cooking responsibilities. Conflicts may arise if, for example, residents have different standards of cleanliness, different diets, or different hours of employment or study. Guests and partners may also begin to board frequently, which can raise complications pertaining to utility expenses, additional rent and further possible cleaning duties. Often when these responsibilities go untended, friction may result between co-tenants. For this reason, responsibilities should be delegated and fairly assigned as early as possible in any living arrangement with roommates. A clear and defined list of alternating chores and bills is easy to see and enforce.
Roommates matter, as they have a great influence upon the people they live with and therefore surround themselves by. A growing body of research has been produced in order to properly understand this impact. The areas of impact can vary greatly in both positive and negative ways; most important is that individuals should be aware of the possible behavioural and social changes that may happen when living with a roommate.
Eating and drinking habits
Living with an individual who exercises and diets can be beneficial because it very often "rubs off" on the other roommates, while a calorie-cutting roommate could be a potential negative influence. College is a time when students often start drinking, specifically binge drinking (more than 4 or 5 drinks in a row). By the end of the second semester of college, 53% of freshman students had binged. Students explained that having a drinking roommate provided a “buddy” to go through it all and was a big influence in the decision to do so.
Mood susceptibility: "Each happy friend a person has increases that person's probability of being happy by 9 percent and each unhappy friend decreases it by 7 percent," says Nicholas A. Christakis, a co-author of "Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives". Whether or not the roommates are friends, the interactions and behaviors they share and express will undoubtedly have an effect on one another. Although moods are not as impressionable as eating habits, they can change, particularly among roommates, in response to the others' emotions.
Effects on studies: Studies showed that having a roommate who plays video games makes the other more likely to participate, which was reflected in a half-hour less of studying and in GPAs 0.02 points lower than others. When dealing with a college roommate, the choice to study or sleep should take precedence over the choice to party or play loud music. This understanding allows those who choose to focus differently on school to do so without harm to the roommate relationship or grades.
Addressing an issue:
The best approach to addressing an issue with a roommate is an upfront, in-person conversation, preferably one on one. While approaching the issue, understand and respect each other's differences. When discussing it, allow both sides to express their thoughts and feelings, and after both listening and speaking, present a resolution. Creating a win-win situation allows the conflict to be resolved more easily. The resolution may not be either person's preferred idea, but it should help the situation to some degree.
See also
Co-living
Cohabitation
Communal apartment
Family
Home
Household
Stable roommates problem
Tenant
References
External links
Student culture
Living arrangements
Interpersonal relationships
Family economics | Roommate | [
"Biology"
] | 2,153 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
317,062 | https://en.wikipedia.org/wiki/Significant%20figures | Significant figures, also referred to as significant digits or sig figs, are specific digits within a number written in positional notation that carry both reliability and necessity in conveying a particular quantity. When presenting the outcome of a measurement (such as length, pressure, volume, or mass), if the number of digits exceeds what the measurement instrument can resolve, only the number of digits within the resolution's capability are dependable and therefore considered significant.
For instance, if a length measurement yields 114.8 mm, using a ruler with the smallest interval between marks at 1 mm, the first three digits (1, 1, and 4, representing 114 mm) are certain and constitute significant figures. Further, digits that are uncertain yet meaningful are also included in the significant figures. In this example, the last digit (8, contributing 0.8 mm) is likewise considered significant despite its uncertainty. Therefore, this measurement contains four significant figures.
Another example involves a volume measurement of 2.98 L with an uncertainty of ± 0.05 L. The actual volume falls between 2.93 L and 3.03 L. Even if certain digits are not completely known, they are still significant if they are meaningful, as they indicate the actual volume within an acceptable range of uncertainty. In this case, the actual volume might be 2.94 L or possibly 3.02 L, so all three digits are considered significant. Thus, there are three significant figures in this example.
The following types of digits are not considered significant:
Leading zeros. For instance, 013 kg has two significant figures—1 and 3—while the leading zero is insignificant since it does not impact the mass indication; 013 kg is equivalent to 13 kg, rendering the zero unnecessary. Similarly, in the case of 0.056 m, there are two insignificant leading zeros since 0.056 m is the same as 56 mm, thus the leading zeros do not contribute to the length indication.
Trailing zeros when they serve as placeholders. In the measurement 1500 m, when the measurement resolution is 100 m, the trailing zeros are insignificant as they simply stand for the tens and ones places. In this instance, 1500 m indicates the length is approximately 1500 m rather than an exact value of 1500 m.
Spurious digits that arise from calculations resulting in a higher precision than the original data or a measurement reported with greater precision than the instrument's resolution.
A zero after a decimal (e.g., 1.0) is significant, and care should be used when appending such a decimal of zero. Thus, in the case of 1.0, there are two significant figures, whereas 1 (without a decimal) has one significant figure.
Among a number's significant digits, the most significant digit is the one with the greatest exponent value (the leftmost significant digit/figure), while the least significant digit is the one with the lowest exponent value (the rightmost significant digit/figure). For example, in the number "123" the "1" is the most significant digit, representing hundreds (10^2), while the "3" is the least significant digit, representing ones (10^0).
To avoid conveying a misleading level of precision, numbers are often rounded. For instance, it would create false precision to present a measurement as 12.34525 kg when the measuring instrument only provides accuracy to the nearest gram (0.001 kg). In this case, the significant figures are the first five digits (1, 2, 3, 4, and 5) from the leftmost digit, and the number should be rounded to these significant figures, resulting in 12.345 kg as the accurate value. The rounding error (in this example, 0.00025 kg = 0.25 g) approximates the numerical resolution or precision. Numbers can also be rounded for simplicity, not necessarily to indicate measurement precision, such as for the sake of expediency in news broadcasts.
Significance arithmetic encompasses a set of approximate rules for preserving significance through calculations. More advanced scientific rules are known as the propagation of uncertainty.
Radix 10 (base-10, decimal numbers) is assumed in the following. (See unit in the last place for extending these concepts to other bases.)
Identifying significant figures
Rules to identify significant figures in a number
Identifying the significant figures in a number requires knowing which digits are meaningful, which requires knowing the resolution with which the number is measured, obtained, or processed. For example, if the measurable smallest mass is 0.001 g, then in a measurement given as 0.00234 g the "4" is not useful and should be discarded, while the "3" is useful and should often be retained.
Non-zero digits within the given measurement or reporting resolution are significant.
91 has two significant figures (9 and 1) if they are measurement-allowed digits.
123.45 has five significant digits (1, 2, 3, 4 and 5) if they are within the measurement resolution. If the resolution is, say, 0.1, then the 5 shows that the true value to 4 sig figs is equally likely to be 123.4 or 123.5.
Zeros between two significant non-zero digits are significant (significant trapped zeros).
101.12003 consists of eight significant figures if the resolution is to 0.00001.
125.340006 has seven significant figures if the resolution is to 0.0001: 1, 2, 5, 3, 4, 0, and 0.
Zeros to the left of the first non-zero digit (leading zeros) are not significant.
If a length measurement gives 0.052 km, then 0.052 km = 52 m so 5 and 2 are only significant; the leading zeros appear or disappear, depending on which unit is used, so they are not necessary to indicate the measurement scale.
0.00034 has 2 significant figures (3 and 4) if the resolution is 0.00001.
Zeros to the right of the last non-zero digit (trailing zeros) in a number with the decimal point are significant if they are within the measurement or reporting resolution.
1.200 has four significant figures (1, 2, 0, and 0) if they are allowed by the measurement resolution.
0.0980 has three significant digits (9, 8, and the last zero) if they are within the measurement resolution.
120.000 consists of six significant figures (1, 2, and the four subsequent zeroes) if, as before, they are within the measurement resolution.
Trailing zeros in an integer may or may not be significant, depending on the measurement or reporting resolution.
45,600 has 3, 4 or 5 significant figures depending on how the last zeros are used. For example, if the length of a road is reported as 45600 m without information about the reporting or measurement resolution, then it is not clear if the road length is precisely measured as 45600 m or if it is a rough estimate. If it is the rough estimation, then only the first three non-zero digits are significant since the trailing zeros are neither reliable nor necessary; 45600 m can be expressed as 45.6 km or as 4.56 × 10^4 m in scientific notation, and neither expression requires the trailing zeros.
An exact number has an infinite number of significant figures.
If the number of apples in a bag is 4 (exact number), then this number is 4.0000... (with infinite trailing zeros to the right of the decimal point). As a result, 4 does not impact the number of significant figures or digits in the result of calculations with it.
A mathematical or physical constant has significant figures to its known digits.
π is a specific real number with several equivalent definitions. All of the digits in its exact decimal expansion 3.14159265358979323... are significant. Although many properties of these digits are known — for example, they do not repeat, because π is irrational — not all of the digits are known. As of March 2024, more than 102 trillion digits have been calculated. A 102 trillion-digit approximation has 102 trillion significant digits. In practical applications, far fewer digits are used. The everyday approximation 3.14 has three significant figures and 7 correct binary digits. The approximation 22/7 has the same three correct decimal digits but has 10 correct binary digits. Most calculators and computer programs can handle the 16-digit expansion 3.141592653589793, which is sufficient for interplanetary navigation calculations.
The Planck constant is 6.62607015 × 10^-34 J·s and is defined as an exact value so that it is more properly defined as 6.62607015000... × 10^-34 J·s.
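The rules above for written decimal numbers can be mechanized. The following is a minimal Python sketch (the helper name count_sig_figs is illustrative, and trailing zeros of a bare integer are treated as not significant, one possible reading of the ambiguity noted above):

```python
def count_sig_figs(s: str) -> int:
    """Count significant figures in a decimal string such as '0.0980' or '1.30e3'.

    Trailing zeros of an integer written without a decimal point (e.g. '1300')
    are ambiguous; this sketch conservatively treats them as not significant.
    """
    s = s.strip().lstrip("+-").lower()
    if "e" in s:                                 # scientific notation: only the significand counts
        s = s.split("e")[0]
    digits = s.replace(".", "").lstrip("0")      # leading zeros are never significant
    if "." not in s:                             # bare integer: drop the ambiguous trailing zeros
        digits = digits.rstrip("0")
    return len(digits) if digits else 1

for text in ["0.052", "101.12003", "1.200", "0.0980", "1300", "1.30e3"]:
    print(text, "->", count_sig_figs(text))      # 2, 8, 4, 3, 2, 3
```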
Ways to denote significant figures in an integer with trailing zeros
The significance of trailing zeros in a number not containing a decimal point can be ambiguous. For example, it may not always be clear if the number 1300 is precise to the nearest unit (just happens coincidentally to be an exact multiple of a hundred) or if it is only shown to the nearest hundreds due to rounding or uncertainty. Many conventions exist to address this issue. However, these are not universally used and would only be effective if the reader is familiar with the convention:
An overline, sometimes also called an overbar, or less accurately, a vinculum, may be placed over the last significant figure; any trailing zeros following this are insignificant. For example, 1300 with an overline over the first zero has three significant figures (and hence indicates that the number is precise to the nearest ten).
Less often, using a closely related convention, the last significant figure of a number may be underlined; for example, "1300" with the 3 underlined has two significant figures.
A decimal point may be placed after the number; for example "1300." indicates specifically that trailing zeros are meant to be significant.
As the conventions above are not in general use, the following more widely recognized options are available for indicating the significance of number with trailing zeros:
Eliminate ambiguous or non-significant zeros by changing the unit prefix in a number with a unit of measurement. For example, the precision of measurement specified as 1300 g is ambiguous, while if stated as 1.30 kg it is not. Likewise 0.0123 L can be rewritten as 12.3 mL.
Eliminate ambiguous or non-significant zeros by using Scientific Notation: For example, 1300 with three significant figures becomes 1.30 × 10^3. Likewise 0.0123 can be rewritten as 1.23 × 10^-2. The part of the representation that contains the significant figures (1.30 or 1.23) is known as the significand or mantissa. The digits in the base and exponent (10^3 or 10^-2) are considered exact numbers so for these digits, significant figures are irrelevant.
Explicitly state the number of significant figures (the abbreviation s.f. is sometimes used): For example "20 000 to 2 s.f." or "20 000 (2 sf)".
State the expected variability (precision) explicitly with a plus–minus sign, as in 20 000 ± 1%. This also allows specifying a range of precision in-between powers of ten.
Rounding to significant figures
Rounding to significant figures is a more general-purpose technique than rounding to n digits, since it handles numbers of different scales in a uniform way. For example, the population of a city might only be known to the nearest thousand and be stated as 52,000, while the population of a country might only be known to the nearest million and be stated as 52,000,000. The former might be in error by hundreds, and the latter might be in error by hundreds of thousands, but both have two significant figures (5 and 2). This reflects the fact that the significance of the error is the same in both cases, relative to the size of the quantity being measured.
To round a number to n significant figures:
If the n + 1 digit is greater than 5 or is 5 followed by other non-zero digits, add 1 to the n digit. For example, if we want to round 1.2459 to 3 significant figures, then this step results in 1.25.
If the n + 1 digit is 5 not followed by other digits or followed by only zeros, then rounding requires a tie-breaking rule. For example, to round 1.25 to 2 significant figures:
Round half away from zero rounds up to 1.3. This is the default rounding method implied in many disciplines if the required rounding method is not specified.
Round half to even, which rounds to the nearest even number. With this method, 1.25 is rounded down to 1.2. If this method applies to 1.35, then it is rounded up to 1.4. This is the method preferred by many scientific disciplines, because, for example, it avoids skewing the average value of a long list of values upwards.
For an integer in rounding, replace the digits after the n digit with zeros. For example, if 1254 is rounded to 2 significant figures, then 5 and 4 are replaced to 0 so that it will be 1300. For a number with the decimal point in rounding, remove the digits after the n digit. For example, if 14.895 is rounded to 3 significant figures, then the digits after 8 are removed so that it will be 14.9.
In financial calculations, a number is often rounded to a given number of places. For example, to two places after the decimal separator for many world currencies. This is done because greater precision is immaterial, and usually it is not possible to settle a debt of less than the smallest currency unit.
In UK personal tax returns, income is rounded down to the nearest pound, whilst tax paid is calculated to the nearest penny.
As an illustration, the decimal quantity 12.345 can be expressed with various numbers of significant figures or decimal places. If insufficient precision is available then the number is rounded in some manner to fit the available precision. The following table shows the results for various total precision at two rounding ways (N/A stands for Not Applicable).
Another example for 0.012345. (Remember that the leading zeros are not significant.)
The representation of a non-zero number x to a precision of p significant digits has a numerical value that is given by the formula
10^n · round(x / 10^n),
where
n = floor(log10(|x|)) + 1 - p,
which may need to be written with a specific marking as detailed above to specify the number of significant trailing zeros.
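A minimal Python sketch of this rounding (using the decimal module so that ties such as 1.25 are represented exactly; the function name round_sig is illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def round_sig(value: str, p: int, mode=ROUND_HALF_UP) -> Decimal:
    """Round a decimal string to p significant figures.

    ROUND_HALF_UP is "round half away from zero"; pass ROUND_HALF_EVEN for
    the "round half to even" tie-breaking rule.
    """
    d = Decimal(value)
    if d == 0:
        return d
    n = d.adjusted() + 1 - p                    # the exponent n from the formula above
    return d.quantize(Decimal(f"1E{n}"), rounding=mode)

print(round_sig("1.2459", 3))                   # 1.25
print(round_sig("1254", 2))                     # 1.3E+3, i.e. 1300
print(round_sig("14.895", 3))                   # 14.9
print(round_sig("1.25", 2))                     # 1.3 (half away from zero)
print(round_sig("1.25", 2, ROUND_HALF_EVEN))    # 1.2 (half to even)
```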
Writing uncertainty and implied uncertainty
Significant figures in writing uncertainty
It is recommended for a measurement result to include the measurement uncertainty such as xbest ± σx, where xbest and σx are the best estimate and uncertainty in the measurement respectively. xbest can be the average of measured values and σx can be the standard deviation or a multiple of the measurement deviation. The rules to write xbest ± σx are:
σx should usually be quoted to only one or two significant figures, as more precision is unlikely to be reliable or meaningful:
1.79 ± 0.06 (correct), 1.79 ± 0.96 (correct), 1.79 ± 1.96 (incorrect).
The digit positions of the last significant figures in xbest and σx are the same, otherwise the consistency is lost. For example, "1.79 ± 0.067" is incorrect, as it does not make sense to have more accurate uncertainty than the best estimate.
1.79 ± 0.06 (correct), 1.79 ± 0.96 (correct), 1.79 ± 0.067 (incorrect).
Implied uncertainty
Uncertainty may be implied by the last significant figure if it is not explicitly expressed. The implied uncertainty is ± the half of the minimum scale at the last significant figure position. For example, if the mass of an object is reported as 3.78 kg without mentioning uncertainty, then ± 0.005 kg measurement uncertainty may be implied. If the mass of an object is estimated as 3.78 ± 0.07 kg, so the actual mass is probably somewhere in the range 3.71 to 3.85 kg, and it is desired to report it with a single number, then 3.8 kg is the best number to report since its implied uncertainty ± 0.05 kg gives a mass range of 3.75 to 3.85 kg, which is close to the measurement range. If the uncertainty is a bit larger, i.e. 3.78 ± 0.09 kg, then 3.8 kg is still the best single number to quote, since if "4 kg" was reported then a lot of information would be lost.
If there is a need to write the implied uncertainty of a number, then it can be written as x ± σx, stating that σx is the implied uncertainty (to prevent readers from recognizing it as the measurement uncertainty), where x and σx are the number with an extra zero digit (to follow the rules to write uncertainty above) and the implied uncertainty of it respectively. For example, 6 kg with the implied uncertainty ± 0.5 kg can be stated as 6.0 ± 0.5 kg.
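A minimal sketch of reading off the implied uncertainty from a written value (assuming every written digit is significant, so it does not handle the ambiguous trailing zeros of bare integers; the helper name is illustrative):

```python
from decimal import Decimal

def implied_uncertainty(value: str) -> Decimal:
    """Half a unit in the place of the last written digit, e.g. '3.78' -> 0.005."""
    exponent = Decimal(value).as_tuple().exponent   # '3.78' -> -2, '6' -> 0
    return Decimal("0.5") * Decimal(10) ** exponent

for v in ["3.78", "3.8", "6"]:
    print(v, "±", implied_uncertainty(v))           # ± 0.005, ± 0.05, ± 0.5
```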
Arithmetic
As there are rules to determine the significant figures in directly measured quantities, there are also guidelines (not rules) to determine the significant figures in quantities calculated from these measured quantities.
Significant figures in measured quantities are most important in the determination of significant figures in calculated quantities with them. A mathematical or physical constant (e.g., π in the formula for the area of a circle with radius r, A = πr^2) has no effect on the determination of the significant figures in the result of a calculation with it if its known digits are equal to or more than the significant figures in the measured quantities used in the calculation. An exact number such as ½ in the formula for the kinetic energy of a mass m with velocity v, E = ½mv^2, has no bearing on the significant figures in the calculated kinetic energy since its number of significant figures is infinite (0.500000...).
The guidelines described below are intended to avoid a calculation result more precise than the measured quantities, but it does not ensure the resulted implied uncertainty close enough to the measured uncertainties. This problem can be seen in unit conversion. If the guidelines give the implied uncertainty too far from the measured ones, then it may be needed to decide significant digits that give comparable uncertainty.
Multiplication and division
For quantities created from measured quantities via multiplication and division, the calculated result should have as many significant figures as the least number of significant figures among the measured quantities used in the calculation. For example,
1.234 × 2 = 2.468 ≈ 2
1.234 × 2.0 = 2.468 ≈ 2.5
0.01234 × 2 = 0.02468 ≈ 0.02
0.012345678 / 0.00234 = 5.27593... ≈ 5.28
with one, two, one, and three significant figures respectively. (2 here is assumed not an exact number.) For the first example, the first multiplication factor has four significant figures and the second has one significant figure. The factor with the fewest or least significant figures is the second one with only one, so the final calculated result should also have one significant figure.
Exception
For unit conversion, the implied uncertainty of the result can be unsatisfactorily higher than that in the previous unit if this rounding guideline is followed; for example, 8 inch has the implied uncertainty of ± 0.5 inch = ± 1.27 cm. If it is converted to the centimeter scale and the rounding guideline for multiplication and division is followed, then 20.32 cm ≈ 20 cm with the implied uncertainty of ± 5 cm. If this implied uncertainty is considered as too overestimated, then more proper significant digits in the unit conversion result may be 20.32 cm ≈ 20. cm with the implied uncertainty of ± 0.5 cm.
Another exception to applying the above rounding guideline is multiplying a number by an integer, such as 1.234 × 9. If the above guideline is followed, then the result is rounded as 1.234 × 9.000.... = 11.106 ≈ 11.11. However, this multiplication is essentially adding 1.234 to itself 9 times, such as 1.234 + 1.234 + … + 1.234, so the rounding guideline for addition and subtraction described below is the more proper rounding approach. As a result, the final answer is 1.234 + 1.234 + … + 1.234 = 11.106 (one significant digit increase).
Addition and subtraction of significant figures
For quantities created from measured quantities via addition and subtraction, the last significant figure position (e.g., hundreds, tens, ones, tenths, hundredths, and so forth) in the calculated result should be the same as the leftmost or largest digit position among the last significant figures of the measured quantities in the calculation. For example,
1.234 + 2 = 3.234 ≈ 3
1.234 + 2.0 = 3.234 ≈ 3.2
0.01234 + 2 = 2.01234 ≈ 2
12000 + 77 = 12077 ≈ 12000
with the last significant figures in the ones place, tenths place, ones place, and thousands place respectively. (2 here is assumed not an exact number.) For the first example, the first term has its last significant figure in the thousandths place and the second term has its last significant figure in the ones place. The leftmost or largest digit position among the last significant figures of these terms is the ones place, so the calculated result should also have its last significant figure in the ones place.
The rule to calculate significant figures for multiplication and division are not the same as the rule for addition and subtraction. For multiplication and division, only the total number of significant figures in each of the factors in the calculation matters; the digit position of the last significant figure in each factor is irrelevant. For addition and subtraction, only the digit position of the last significant figure in each of the terms in the calculation matters; the total number of significant figures in each term is irrelevant. However, greater accuracy will often be obtained if some non-significant digits are maintained in intermediate results which are used in subsequent calculations.
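The two guidelines can be contrasted in a short sketch (illustrative only: the helper names are arbitrary, and trailing zeros of integers are treated as significant, so an example such as 12000 + 77 would first need its resolution made explicit):

```python
from decimal import Decimal, ROUND_HALF_UP

def sig_figs(d: Decimal) -> int:
    # Decimal drops leading zeros but keeps trailing zeros written after the
    # decimal point, so the coefficient length is the significant-figure count.
    return len(d.as_tuple().digits)

def multiply(a: str, b: str) -> Decimal:
    """Product rounded to the smaller significant-figure count of the factors."""
    da, db = Decimal(a), Decimal(b)
    exact = da * db
    n = exact.adjusted() + 1 - min(sig_figs(da), sig_figs(db))
    return exact.quantize(Decimal(f"1E{n}"), rounding=ROUND_HALF_UP)

def add(a: str, b: str) -> Decimal:
    """Sum rounded to the coarser (leftmost) last-digit place of the terms."""
    da, db = Decimal(a), Decimal(b)
    place = max(da.as_tuple().exponent, db.as_tuple().exponent)
    return (da + db).quantize(Decimal(f"1E{place}"), rounding=ROUND_HALF_UP)

print(multiply("1.234", "2.0"), multiply("0.01234", "2"))   # 2.5 0.02
print(add("1.234", "2.0"), add("0.01234", "2"))             # 3.2 2
```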
Logarithm and antilogarithm
The base-10 logarithm of a normalized number (i.e., a × 10b with 1 ≤ a < 10 and b as an integer), is rounded such that its decimal part (called mantissa) has as many significant figures as the significant figures in the normalized number.
log10(3.000 × 10^4) = log10(10^4) + log10(3.000) = 4.000000... (exact number so infinite significant digits) + 0.4771212547... = 4.4771212547 ≈ 4.4771.
When taking the antilogarithm of a normalized number, the result is rounded to have as many significant figures as the significant figures in the decimal part of the number to be antiloged.
10^4.4771 = 29998.5318119... ≈ 30000 = 3.000 × 10^4.
Transcendental functions
If a transcendental function f(x) (e.g., the exponential function, the logarithm, and the trigonometric functions) is differentiable at its domain element x, then its number of significant figures (denoted as "significant figures of f(x)") is approximately related with the number of significant figures in x (denoted as "significant figures of x") by the formula
significant figures of f(x) ≈ significant figures of x - log10(|x·f'(x)/f(x)|),
where |x·f'(x)/f(x)| is the condition number.
Round only on the final calculation result
When performing multiple stage calculations, do not round intermediate stage calculation results; keep as many digits as is practical (at least one more digit than the rounding rule allows per stage) until the end of all the calculations to avoid cumulative rounding errors while tracking or recording the significant figures in each intermediate result. Then, round the final result, for example, to the fewest number of significant figures (for multiplication or division) or leftmost last significant digit position (for addition or subtraction) among the inputs in the final calculation.
(2.3494 + 1.345) × 1.2 = 3.6944 × 1.2 = 4.43328 ≈ 4.4.
(2.3494 × 1.345) + 1.2 = 3.159943 + 1.2 = 4.359943 ≈ 4.4.
Estimating an extra digit
When using a ruler, initially use the smallest mark as the first estimated digit. For example, if a ruler's smallest mark is 0.1 cm, and 4.5 cm is read, then it is 4.5 (±0.1 cm) or 4.4 cm to 4.6 cm as to the smallest mark interval. However, in practice a measurement can usually be estimated by eye to closer than the interval between the ruler's smallest mark, e.g. in the above case it might be estimated as between 4.51 cm and 4.53 cm.
It is also possible that the overall length of a ruler may not be accurate to the degree of the smallest mark, and the marks may be imperfectly spaced within each unit. However assuming a normal good quality ruler, it should be possible to estimate tenths between the nearest two marks to achieve an extra decimal place of accuracy. Failing to do this adds the error in reading the ruler to any error in the calibration of the ruler.
Estimation in statistics
When estimating the proportion of individuals carrying some particular characteristic in a population, from a random sample of that population, the number of significant figures should not exceed the maximum precision allowed by that sample size.
Relationship to accuracy and precision in measurement
Traditionally, in various technical fields, "accuracy" refers to the closeness of a given measurement to its true value; "precision" refers to the stability of that measurement when repeated many times. Thus, it is possible to be "precisely wrong". Hoping to reflect the way in which the term "accuracy" is actually used in the scientific community, there is a recent standard, ISO 5725, which keeps the same definition of precision but defines the term "trueness" as the closeness of a given measurement to its true value and uses the term "accuracy" as the combination of trueness and precision. (See the accuracy and precision article for a full discussion.) In either case, the number of significant figures roughly corresponds to precision, not to accuracy or the newer concept of trueness.
In computing
Computer representations of floating-point numbers use a form of rounding to significant figures (while usually not keeping track of how many), in general with binary numbers. The number of correct significant figures is closely related to the notion of relative error (which has the advantage of being a more accurate measure of precision, and is independent of the radix, also known as the base, of the number system used).
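As a concrete illustration (standard properties of the IEEE 754 double-precision format, not specific to this article), the roughly 16 significant decimal digits of a double follow directly from its relative rounding error:

```python
import math
import sys

# A double carries a 53-bit significand (52 stored bits plus one implicit bit),
# so its worst-case relative rounding error is 2**-53.
eps = sys.float_info.epsilon        # gap between 1.0 and the next float: 2**-52
print(eps)                          # 2.220446049250313e-16
print(53 * math.log10(2))           # ~15.95 equivalent decimal significant digits
print(-math.log10(eps / 2))         # the same figure, from the maximum relative error
```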
Electronic calculators supporting a dedicated significant figures display mode are relatively rare.
Among the calculators to support related features are the Commodore M55 Mathematician (1976) and the S61 Statistician (1976), which support two display modes: one gives n significant digits in total, while the other gives n decimal places.
The Texas Instruments TI-83 Plus (1999) and TI-84 Plus (2004) families of graphical calculators support a Sig-Fig Calculator mode in which the calculator will evaluate the count of significant digits of entered numbers and display it in square brackets behind the corresponding number. The results of calculations will be adjusted to only show the significant digits as well.
For the HP 20b/30b-based community-developed WP 34S (2011) and WP 31S (2014) calculators, significant figures display modes (one of them with zero padding) are available as a compile-time option. The SwissMicros DM42-based community-developed calculators WP 43C (2019) / C43 (2022) / C47 (2023) support a significant figures display mode as well.
See also
Benford's law (first-digit law)
Engineering notation
Error bar
False precision
Guard digit
IEEE 754 (IEEE floating-point standard)
Interval arithmetic
Kahan summation algorithm
Precision (computer science)
Round-off error
References
Further reading
ASTM E29-06b, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications
External links
Significant Figures Video by Khan academy
The Decimal Arithmetic FAQ — Is the decimal arithmetic ‘significance’ arithmetic?
Advanced methods for handling uncertainty and some explanations of the shortcomings of significance arithmetic and significant figures.
Significant Figures Calculator – Displays a number with the desired number of significant digits.
Measurements and Uncertainties versus Significant Digits or Significant Figures – Proper methods for expressing uncertainty, including a detailed discussion of the problems with any notion of significant digits.
Arithmetic
Numerical analysis | Significant figures | [
"Mathematics"
] | 5,937 | [
"Computational mathematics",
"Arithmetic",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Number theory"
] |
317,087 | https://en.wikipedia.org/wiki/Piperonyl%20butoxide | Piperonyl butoxide (PBO) is a pale yellow to light brown liquid organic compound used as an adjuvant component of pesticide formulations for synergy. That is, despite having no pesticidal activity of its own, it enhances the potency of certain pesticides such as carbamates, pyrethrins, pyrethroids, and rotenone.
It is a semisynthetic derivative of safrole and is produced from the condensation of the sodium salt of 2-(2-butoxyethoxy) ethanol and the chloromethyl derivative of hydrogenated safrole (dihydrosafrole);
or through 1,2-Methylenedioxybenzene.
History
PBO was developed in the late 1930s and early 1940s to enhance the performance of the naturally derived insecticide pyrethrum. Pyrethrum is a type of potent insecticide that kills mosquitoes and other disease-carrying vectors, thereby providing public health benefits, such as preventing malaria. Although exhibiting little intrinsic insecticidal activity of its own, PBO increases the effectiveness of pyrethrins, thus it is called a synergist. PBO was first patented in 1947 in the US by Herman Wachs.
There are three known manufacturers of PBO in the world: Endura, Tagros, and Catasynth (Anthea), which manufacture PBO through the MDB route.
Uses
PBO was first registered in the United States in the 1950s. PBO is mainly used in combination with insecticides, such as natural pyrethrins or synthetic pyrethroids, in ratios (PBO: pyrethrins) ranging from 3:1 to 20:1. Appearing in over 1,500 United States EPA-registered products, PBO is one of the most commonly registered synergists as measured by the number of formulas in which it is present. It is approved for pre- and postharvest application to a wide variety of crops and commodities, including grain, fruits and vegetables. The application rates are low; the highest single rate is 0.5 lbs PBO/acre.
It is used extensively as an ingredient with insecticides to control insect pests in and around the home, in food-handling establishments such as restaurants, and for human and veterinary applications against ectoparasites (head lice, ticks, fleas). A wide variety of water-based PBO-containing products such as crack and crevice sprays, total release foggers, and flying insect sprays are produced for and sold to consumers for home use. PBO has an important public health role as a synergist used in pyrethrins and pyrethroid formulations used for mosquito control (e.g. space sprays, surface sprays and bed nets). Because of its limited, if any, insecticidal properties, PBO is never used alone.
Mechanism of action
PBO acts as an insecticide synergist by inhibiting the natural defense mechanisms of the insect, the most important of which is the mixed-function oxidase (MFO) system, also known as the cytochrome P-450 system. The MFO system is the primary route of detoxification in insects, and causes the oxidative breakdown of insecticides such as pyrethrins and the synthetic pyrethroids – thus when PBO is added, higher insecticide levels remain in the insect to exercise their lethal effect. An important consequence of this property is that, by enhancing the activity of a given insecticide, less may be used to achieve the same result.
PBO does not appear to have a significant effect on the MFO system in humans. PBO is found to be an efficacious, low-potency, neutral antagonist of G-protein-coupled CB1 receptors.
Other synergists for pyrethroid insecticides include Sesamex and "Sulfoxide" (not to be confused with the functional group).
Regulatory
PBO is regulated in the United States and some other countries as a pesticide, even though PBO does not have this property. The United States Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), the law that gives United States EPA its authority to regulate pesticides, includes certain synergists in its definition of a “pesticide” and is thus subject to the same approval and registration as products that kill pests, like the insecticides with which PBO is formulated. Pesticide registration is the process through which United States EPA examines the ingredients of a pesticide, where and how the pesticide is used (e.g., whole room fogger, crack-and-crevice, etc.), and the specific use pattern (amount and frequency of its use). United States EPA also evaluates the pesticide to ensure that it will not have unreasonable adverse effects on humans, the environment and non-target species. The United States EPA must register pesticides before they may be sold or distributed in the United States. Registration is required for the pesticide itself, as well as for all products containing it. The World Health Organization recognizes the public health value of PBO when used in conjunction with the synthetic pyrethroids deltamethrin or permethrin used in mosquito nets.
Hazard assessment
Numerous toxicology studies have been conducted over the past 40 years on PBO examining the full range of potential toxic effects. These studies were conducted in accord with regulatory requirements put forth by the United States EPA or other international agencies. Many were conducted following United States EPA Good Laboratory Practices (GLPs), a system of processes and controls to ensure the consistency, integrity, quality, and reproducibility of laboratory studies conducted in support of pesticides registration. The following types of studies have been conducted in support of PBO registration:
Acute toxicity studies
Acute toxicity studies are designed to identify potential hazards from acute exposures. The studies usually employ a single or a few high doses over a short time period. The data are used for the development of appropriate precautionary statements for pesticide product labels. Acute studies identify:
Dermal toxicity
Eye irritation
Inhalation toxicity
Oral toxicity
Skin irritation
Skin sensitization
PBO has a low acute toxicity by oral, inhalation, and dermal routes in adults. It is minimally irritating to the eyes and skin. It is a not a skin sensitizer.
Dermal absorption
The available data indicate that less than 3% of the amount on the skin (forearm) is absorbed over an 8-hour period. Other studies with a pediculicide formulation indicate that about 2% crossed the skin and about 8% crossed the scalp.
Endocrine disruption
The Food Quality Protection Act (FQPA) of 1996 required the United States EPA to address the issue of endocrine disruption. Since the passage of the FQPA, the US EPA has developed a two-tiered endocrine disruptor screening program (EDSP) designed to examine potential effects of substances on the estrogenic, androgenic, and thyroid (EAT) hormone systems in both humans and wildlife. Tier 1 consists of 11 assays, and is designed to determine whether a substance has the potential to interact with the EAT hormone systems. If results indicate a relationship, the chemical progresses to Tier 2 testing. The purpose of Tier 2 is to determine whether a substance that interacts with the EAT hormone system exerts an adverse effect in humans or wildlife, and to develop a dose-response that, in association with exposure data, can be used to assess risk.
PBO is one of the chemicals selected by EPA to be part of the initial effort under the EDSP. The EPA issued its first list of chemicals for EDSP testing in 2009, consisting of over 60 pesticide chemicals, including the insecticide synergist PBO. The first list of chemicals for EDSP screening is not based on a potential for endocrine activity or a potential for adverse effects. Rather, the list is based on an EPA prioritization regarding exposure potential. PBO was added to this list because of its wide use pattern (1500 products registered with US EPA), and people may be exposed to low levels of PBO in their diets, from treated surfaces in their homes (e.g., carpet), and in certain occupations (e.g., pest control operators).
No evidence suggests that PBO disrupts the normal functioning of the endocrine system. This includes the recently developed data to assess the possible interaction of PBO with the endocrine system. The Piperonyl Butoxide Task Force II, a group of companies that produces or markets PBO-containing products, has conducted all 11 EDSP Tier 1 screens and has submitted all required documentation and study reports.
The US EPA intends to use a weight of evidence (WoE) approach for assessing EDSP Tier 1 results. While the agency issued WOE guidelines, no actual WOE assessments have yet been conducted and released to the registrants. The PBTFII has conducted a WoE analysis for PBO that is consistent with EPA’s guidelines. The WoE analysis for PBO examines each EDSP Tier 1 assay conducted for PBO. It discusses the purpose of the assay, and summarizes the study design and results and provides an overall conclusion for each assay. All 11 individual assays are then considered together to arrive at an overall conclusion for the outcome of the Tier 1 battery. For some assays, other scientifically relevant information is also considered as part of the assessment. The purpose of the WoE analysis is to determine whether PBO has the potential to interact with the endocrine system, as determined by EDSP Tier 1 assays, the Tier 1 battery as a whole and OSRI. A determination that a chemical has the potential to interact with the endocrine system would trigger a need for EDSP Tier 2 testing. The EPA is planning to issue their WOE assessment in late 2014 or early 2015.
Subchronic and chronic/carcinogenicity studies
Subchronic and chronic studies examine the toxicity of longer-term, repeated exposure to chemicals. They may range from 90 days for subchronic studies, to 12–24 months for full lifetime chronic studies, designed to determine potential for carcinogenesis. They are also intended to identify any noncancer effects, as well as a clear no observable adverse effect level (NOAEL) that is used for risk assessment. Studies conducted on PBO include:
90-day inhalation toxicology study
18-month chronic toxicity/carcinogenicity study in mice
24-month chronic toxicity/carcinogenicity study in rats
NOAELs were derived for PBO from both subchronic and chronic studies. These NOAELs are used by the EPA to conduct risk assessments for all individual uses of PBO to ensure that all registered products with PBO pose a reasonable certainty of no harm used according to the label directions.
PBO caused an increase in liver tumors in mice that ingested high levels of PBO in the diet for their entire lifetimes. The scientific identification and analysis of the key events leading to the formation of the mouse liver tumors suggest that the events are not likely to occur in humans.
The EPA classifies PBO as a group C carcinogen – "possibly carcinogenic to humans." Under the auspices of the United Nations, the Food and Agriculture Organization/World Health Organization (FAO/WHO) Joint Meeting on Pesticide Residues evaluated the entire body of toxicology of PBO several times since 1965. They concluded that, at doses up to internationally accepted standards for a maximum tolerated dose, PBO is not considered to be carcinogenic in the mouse or rat, thus leading to the conclusion that PBO is not carcinogenic to humans.
Developmental toxicity studies
PBO has been found to inhibit the Hedgehog signaling pathway, a critical regulator of brain and face development in all vertebrates, via antagonism of the protein Smoothened (SMO). PBO was found to be capable of causing dose-dependent brain and face malformations in mice exposed during early development, including the rare human birth defect holoprosencephaly. Even doses of PBO that did not cause overt holoprosencephaly associated facial abnormalities were found to cause subtle neuroanatomical defects, for which the cognitive or behavioral consequences are unknown.
An epidemiology study found that PBO exposure was correlated with dose-dependent reductions in neurocognitive development in 3-year old children.
Animal impacts
PBO is moderately to highly toxic to aquatic invertebrates, such as water fleas and shrimp. At lower, long-term doses, water flea reproduction was affected. PBO is highly toxic to amphibians in the tadpole stage.
Exposure assessment
Given the extensive non-dietary use of PBO, manufacturers of PBO and marketers of PBO-containing products formed the Non-Dietary Exposure Task Force (NDETF) in 1996 to develop a long-term research program to more fully understand the phenomenon of human exposure to insecticides used in the home. Most of the studies were conducted with formulations of pyrethrins/PBO and synthetic pyrethroids/PBO, and focused on the indoor use of fogger and aerosol products. Carpet and vinyl flooring surfaces were selected because of their different physical and chemical properties, and because they represent a significant percentage of the floor coverings used in homes in North America.
While the focus of the NDETF effort was on total-release foggers, a study was also conducted to determine both dispersion (air levels) and deposition (on flooring) of pyrethrins/PBO resulting from the use of a hand held aerosol spray can. Potential direct exposure of the user was also measured. Air sampling from the breathing zone of the applicator and analysis of residues on cotton gloves was performed. This data was submitted to the United States EPA and was key to the agency’s comprehensive risk assessment for PBO.
Risk assessment
The US EPA, in their re-registration eligibility decision, determined "no risks of concern" existed for householders mixing, loading, handling, or applying PBO-containing products.
References
Household chemicals
Insecticides
Ethers
Benzodioxoles
Glycol ethers | Piperonyl butoxide | [
"Chemistry"
] | 2,980 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
317,092 | https://en.wikipedia.org/wiki/Cliff%20Shaw | John Clifford Shaw (February 23, 1922 – February 9, 1991) was a systems programmer at the RAND Corporation. He is a coauthor of the first artificial intelligence program, the Logic Theorist, and was one of the developers of General Problem Solver (universal problem solver machine) and Information Processing Language (a programming language of the 1950s). Information Processing Language is considered the true "father" of the JOSS language. One of the most significant events that occurred in the programming was the development of the concept of list processing by Allen Newell, Herbert A. Simon and Cliff Shaw during the development of the language IPL-V. He invented the linked list, which remains fundamental in many strands of modern computing technology.
References
External links
Simon, Herbert A. Allen Newell - a referenced biography of Newell and Shaw at the National Academy of Sciences.
People in information technology
Artificial intelligence researchers
Carnegie Mellon University faculty
Place of birth missing
1922 births
1991 deaths | Cliff Shaw | [
"Technology"
] | 192 | [
"People in information technology",
"Information technology",
"Computer specialist stubs",
"Computing stubs"
] |
317,205 | https://en.wikipedia.org/wiki/Table%20saw | A table saw (also known as a sawbench or bench saw in England) is a woodworking tool, consisting of a circular saw blade, mounted on an arbor, that is driven by an electric motor (directly, by belt, by cable, or by gears). The drive mechanism is mounted below a table that provides support for the material, usually wood, being cut, with the blade protruding up through the table into the material.
In most modern table saws, the table is fixed and the blade position can be adjusted. Moving the blade up or down affects the depth of the cut by controlling how much of the blade is protruding above the table surface. Many saws also have an adjustable angle, where the blade can be tilted relative to the table. Some earlier saws instead had a fixed blade and the table could be adjusted for height (exposure of blade) and angle relative to the blade.
Types
The general types of table saws are compact, benchtop, jobsite, contractor, hybrid, cabinet, and sliding table saws.
Benchtop
Benchtop table saws are lightweight and are designed to be placed on a table or other support for operation. This type of saw is most often used by homeowners and DIYers. They almost always have a direct-drive (blade driven directly by the motor) universal motor. Some early models used small induction motors, which weren't very powerful, made the saw heavy, and caused a lot of vibration. Most modern saws can be lifted and carried by one person. These saws often have parts made of steel, aluminum and plastic and are designed to be compact and light.
Benchtop table saws are the least expensive (typically costing in the $100-$200 range) and least capable of the table saws; however, they can offer adequate ripping capacity and precision for most tasks. The universal motor is not as durable or as quiet as an induction motor, but it offers more power relative to its size and weight. The top of a benchtop table saw is narrower than those of contractor and cabinet saws, so the width of stock that can be ripped is reduced. Another restriction results from the top being smaller from the front of the tabletop to the rear. This results in a shorter rip fence, which makes it harder to make a clean, straight cut when ripping. Also, there is less distance from the front edge of the tabletop to the blade, which makes cross cutting stock using a miter more difficult (the miter and/or stock may not be fully supported by the table in front of the blade). Benchtop saws are the smallest type of table saw and have the least mass, potentially resulting in increased vibration during a cut. Nowadays, these models are being phased out in favor of more practical jobsite models.
Jobsite
Jobsite table saws are slightly larger than benchtop models, and usually are placed on a folding or stationary stand during operation. These saws are mostly used by carpenters, contractors, and tradesmen on the jobsite (hence the name). Many of these saws are more expensive than benchtop saws (typically in the $300–$600 range). Most saws in this category have small but powerful 15-ampere universal motors. Many higher-end saws have gear-driven motors. Most of these saws are relatively light, and can be easily transported to a job location. Many of these saws are built more ruggedly and are generally more accurate than the entry-level benchtop models. The motors, gears, and cases are generally designed to better withstand the abuse of construction sites. When compared to benchtop saws, many jobsite models have miter slots, better fences, better overall alignment, sliding extension tables, larger rip capacities, and folding stands with wheels.
Compact
Compact table saws are much larger than portable saws, and sit on a stationary stand. The motor is still a universal type motor, however these are usually driven by small toothed belts. Some saws have cast iron tops, and are similar in appearance to larger contractor saws, although the tables are usually smaller and the build is of lighter construction. Some models even feature sliding-miter tables, with a built-in miter sled that could be tilted to many different angles.
Contractor
Contractor table saws (also sometimes referred to as open-stand saws) are heavier (200 - 400 lbs), larger saws that are attached to a stand or base, often with wheels. On these saws, the motor (usually an induction-type motor) hinges off the rear of the saw on a pivoting bracket (although direct drive models have existed) and drives the blade with one, or rarely, two rubber v-belts. This is the type often used by hobbyists and homeowners because standard electrical circuits provide adequate power to run them, and because of their generally low cost when compared to larger saws. Because the motor hangs off the rear of the saw, dust collection is usually problematic or even ineffective.
Contractor saws were originally designed to be somewhat portable, often having wheels, and were frequently brought to job sites before the introduction of smaller benchtop models in the early 1980s. Contractor saws are heavier than benchtop saws, but are still lightweight when compared to cabinet saws. Their larger size and greater power allow them to be used for larger projects and make them more durable, accurate, and longer-lasting than benchtop saws.
Cabinet
Cabinet table saws are heavy (600–900 lb), using large amounts of cast iron and steel to minimize vibration and increase accuracy.
A cabinet saw is characterized by having an enclosed base. Cabinet saws usually have induction motors in the range, single-phase, but motors in the range, three-phase, are common in commercial/industrial sites. For home use, this type of motor typically requires that a heavy-duty circuit be installed. The motor is enclosed within the cabinet and drives the blade with one or more parallel V-belts, often "A" belts as "A" belts may be ganged without having to be specially selected (otherwise, specially selected sets of light-duty "4L" belts are used). Cabinet saws offer the following advantages over contractor saws: heavier construction for lower vibration and increased durability; a cabinet-mounted trunnion (the mechanism that incorporates the saw blade mount and allows for height and tilt adjustment); improved dust collection due to the totally enclosed cabinet and common incorporation of a dust collection port. Cabinet saws are designed for, and are capable of very high duty-cycles, such as are encountered in commercial/industrial applications. Where some of the advantages of a cabinet saw are desired in a home shop application, so-called "hybrid" saws have emerged to address this need.
Cabinet saws have an easily replaceable insert around the blade in the table top allowing the use of zero-clearance inserts, which greatly reduce tear out on the bottom of the workpiece. It is common for this type of saw to be equipped with a table extension that increases ripping capacity for sheet goods to . These saws are characterized by a cast iron top on a full-length steel base, generally square in section, with radiused corners. Two miter slots ( wide on the largest saws) are located parallel to the blade, one to the left of the blade and one to the right.
American-style cabinet saws generally follow the model of the Delta Unisaw, a design that has evolved since 1939. Saws of this general type are made in the US, Canada, Taiwan, and China. The most common type of rip fence mounted to this type of saw is characterized by the standard model made by Biesemeyer (now a subsidiary of Delta). It has a sturdy steel T-type fence mounted to a steel rail at the front of the saw, with replaceable laminate faces. American cabinet saws are normally designed to accept a stacked dado blade in addition to a standard saw blade. The most common size of blade is in diameter with a blade arbor diameter of , but in diameter with a blade arbor diameter of are found in commercial/industrial sites. American saws normally include an anti-kickback device that incorporates a splitter or riving knife, toothed anti-kickback pawls, and a clear plastic blade cover. The saw blade can tilt to either the left side or the right side of the saw, depending on the model. The original Delta Unisaw and early cabinet saws based on it are all right-tilt units, while newer Delta Unisaws and many competitive cabinet saws made after 2000 are left-tilt saws. The change to left-tilt was due to a lower perceived propensity for the cut piece to become trapped between the rip fence and blade and kick back when the blade tilts away from the rip fence (left-tilt saw) rather than towards the rip fence (right-tilt saw).
While conceptually simple in design, these saws are highly evolved and are capable of efficient, high volume, precision work.
Hybrid
Hybrid table saws are designed to compete in the market with high-end contractor table saws. They offer some of the advantages of cabinet saws at a lower price than traditional cabinet saws. Hybrid saws on the market today offer an enclosed cabinet to help improve dust collection. The cabinet can either be similar to a cabinet saw with a full enclosure from the table top to the floor or a shorter cabinet on legs. Some hybrid saws have cabinet-mounted trunnions and some have table-mounted trunnions. In general, cabinet-mounted trunnions are easier to adjust than table-mounted trunnions. Hybrid saws tend to be heavier than contractor saws and lighter than cabinet saws. Some hybrid saws offer a sliding table as an option to improve cross cutting capability. Hybrid saw drive mechanisms vary more than contractor saws and cabinet saws. Drive mechanisms can be a single v-belt, a serpentine belt or multiple v-belts. Hybrid saws have a motor and thus the ability to run on a standard 15- or 20-ampere 120-volt North American household circuit, while a cabinet saw's or larger motor requires a 240-volt supply.
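A rough power calculation illustrates why the supply circuit matters. The 3 hp figure used below as a typical cabinet-saw rating is an assumption introduced for illustration, not a value taken from this article. Electrical power is voltage times current:

P = VI, \qquad 120\ \mathrm{V} \times 15\ \mathrm{A} = 1800\ \mathrm{W} \approx 2.4\ \mathrm{hp}

A standard 120 V, 15 A household circuit can therefore supply at most about 1.8 kW of input power, enough for a motor of roughly 1.5 to 2 hp once motor losses and continuous-load limits are taken into account, whereas a 3 hp motor needs about 3 × 746 W ≈ 2.2 kW at the shaft and correspondingly more at the input, hence the 240-volt supply.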
Mini and micro
Mini and micro table saws have a blade diameter of 4 inches (100 mm) or less. Mini table saws typically use a 4-inch blade, while micro table saws use blades smaller than 4 inches, although the naming of these saws is not well defined.
Mini and micro table saws are generally used by hobbyists and model builders, although the mini table saws (4 inch) have gained some popularity with building contractors who need only a small saw to cut small pieces (such as wood trim). Being a fraction of the size (and weight) of a normal table saw, they are much easier to carry and transport.
Being much smaller than a normal table saw, they are substantially safer to use when cutting very small pieces. Using blades that have a smaller kerf (cutting width) than normal blades, there is less material lost and the possibility of kickback is reduced as well.
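The material saved can be put in rough numbers; the kerf widths below are illustrative assumptions, not specifications from this article. Ripping a board into n strips takes n − 1 through-cuts, so the total width lost to the kerf is

\text{waste} = (n - 1)\,k

With nine cuts, a full-size blade with a kerf of about k = 3 mm consumes roughly 27 mm of stock, while a small hobby blade with k = 1 mm consumes only about 9 mm.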
Sliding
A sliding table saw, also called a European cabinet saw or panel saw, is a variation on the traditional cabinet saw. They are generally used to cut large panels and sheet goods, such as plywood or MDF. Sliding table saws have a sliding table on the left side of the blade, usually attached to a folding arm mounted under the table, that is used for crosscutting and ripping larger materials. Sliding table saws are the largest type of table saw, and are mostly used by large production cabinet shops. Most saws use 3–5 hp, or even 7 hp, three-phase induction motors. Sliding table saws usually incorporate a riving knife to prevent kickback from occurring.
Sliding saws sometimes offer a scoring blade, which is a second, smaller-diameter blade mounted in front of the regular saw blade. The scoring blade helps reduce splintering of the lower face in certain types of stock, especially laminated stock. European models are sometimes available in multi-purpose tool configurations (combination machines) that offer jointer, planer, shaper (spindle moulder in Europe), or boring features. The blade arbor typically has a diameter of 30 mm, around twice that of a US saw. Many American woodworkers are likely to use a dado stack or wobble dado to cut dados (square-sectioned grooves), while most European woodworkers would use a shaper or a router table for this task.
In recent years, European-style sliding table saws have gained a small following in North America. They are usually either imported from European manufacturers such as Felder and its subsidiaries, Altendorf, and Robland, or from Taiwanese companies such as Grizzly Industrial, or sold directly by U.S.-based companies such as Powermatic.
History
Table saws have been an integral part of woodworking for centuries, revolutionizing the way woodworkers manipulate wood to create intricate designs and structures. The table saw has had a profound impact on the field of woodworking by enabling woodworkers to achieve greater precision, efficiency, and versatility in their craft. With the ability to make a wide range of cuts, such as rip cuts, crosscuts, bevel cuts, and dado cuts, the table saw has become an indispensable tool in woodworking workshops worldwide.
The history of the table saw dates back to the late 18th century, when the first known patent for a table saw was filed in 1777 by the English inventor Samuel Miller. Miller's design featured a circular saw blade mounted on an arbor, with a table to support the wood being cut. This invention laid the foundation for the development of modern-day table saws.
Over the years, advancements in technology and design have led to the evolution of the table saw into various types, including benchtop, contractor, cabinet, and hybrid table saws. Each type offers different features and capabilities to meet the needs of woodworkers, from hobbyists to professionals.
A key figure in the development of the table saw is Wilhelm Altendorf, a German carpenter. Altendorf revolutionized the design of table saws by introducing a sliding table that allowed woodworkers to make precise crosscuts and rip cuts with ease. This innovation set a new standard for accuracy and versatility in table saws.
Looking ahead, the future of the table saw will likely be influenced by new technology, like digital controls and sensors that can automate and improve cutting. Also, new blade designs and materials may make cutting even more precise and efficient.
Safety
Table saws are especially dangerous tools because the operator holds the material being cut, instead of the saw, making it easy to accidentally move the hands into the spinning blade. When using other types of circular saws, the material remains stationary as the operator guides the saw into it. However, a push stick, a riving knife, and a protective cover over the spinning blade can reduce the chance of an accident.
Kickback
Kickback occurs when a piece of wood being ripped either pinches the blade or turns outward against the spinning blade and is propelled back towards the operator at high speed. The two main injuries that result are caused either by the wood striking the operator's head, chest, or torso, or by the wood moving so quickly that the operator's hands stay on it and are pulled across the saw blade.
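The speeds involved can be estimated from the rim speed of the blade; the 10 in (0.254 m) diameter and 4,000 RPM used below are typical assumed values rather than figures from this article:

v = \pi D n = \pi \times 0.254\ \mathrm{m} \times \frac{4000}{60}\ \mathrm{s^{-1}} \approx 53\ \mathrm{m/s} \approx 190\ \mathrm{km/h}

A workpiece thrown back at even a fraction of this speed can cause serious injury, which is why splitters, riving knives, and anti-kickback pawls (described under Accessories) are fitted.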
Dust extractor
A dust extractor should be fitted if sawdust is prone to build up under the cutting blade. Through friction the spinning blade will quickly ignite the accumulated dust, and the smoke can be mistaken for an overheated blade. The extractor also reduces the risk of a dust explosion and facilitates a healthier working environment.
Magnetic featherboard
The magnetic featherboard was developed in 1990. The patented Grip-Tite is held to a cast iron table top or steel sub fence by high-strength permanent magnets. The advantage of a magnetic featherboard is the fast setup time on any cast iron tool deck or steel-faced fence. When used in conjunction with a steel-faced rip fence, they are used to hold down ripped wood on any saw deck and prevent kickback. Feed wheels added to the Grip-Tite base pull ripped wood to the fence, allowing the operator to rip wood on any table saw with no hands near the blade.
Miter slot featherboard
When a table saw has a table top made of a material other than cast iron, such as aluminum, a miter slot featherboard should be used to keep pressure on the stock against the fence where a hand would otherwise be in dangerous proximity to the saw blade. This style of featherboard takes more time to set up than the magnetic style, which is worth considering when deciding on a tool purchase; a safety device that is more convenient is likely to be used more often. Never place a featherboard past the leading edge of the blade, or kickback will occur.
Safety precautions
Read the instruction manual: Always read and understand your table saw's manual before use.
Wear proper clothing: Avoid loose clothing, long sleeves, and jewelry, and tie back long hair. Wear closed-toe shoes.
Use personal protective equipment (PPE): Wear safety glasses and earmuffs or earplugs.
Keep your work area tidy: Clear your work area of clutter and be sure there are no tripping hazards like power cords.
Minimize distractions: Avoid distractions such as TVs or phones that can divert your attention from operating the saw safely.
Disconnect power before blade changes: Unplug your table saw before changing blades to prevent accidental start-ups.
Avoid wearing gloves: Do not wear gloves while operating the table saw to maintain a secure grip and avoid hazards.
Blade height
There are two competing schools of thought when it comes to properly setting the height of the blade for sawing. The first is commonly expressed thus: "Only allow the blade to rise above the work by the amount of finger you wish to lose." That is, the blade should protrude above the piece as little as possible, to prevent the loss of a finger in case of a sawing accident.
Another competing view is that the saw functions at its best when the angle of the blade teeth arc relative to the top surface of the workpiece is as extreme as possible. This facilitates chip ejection, shortens the overall distance through which the teeth act on the part, reduces power consumption and heat generation, substantially reduces the peak pushing force required, thus improving control, and causes the blade's force on the wood to act mostly downward rather than largely horizontally.
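The trade-off can be made concrete with a simplified geometric model; the symbols and numbers here are introduced for illustration and do not appear in this article. If a blade of radius r protrudes a distance p above the workpiece, the angle θ between a tooth's path and the top surface of the work at the point where the teeth enter the cut is

\theta = \arccos\!\left(1 - \frac{p}{r}\right)

For a hypothetical 250 mm blade (r = 125 mm), a low setting of p = 6 mm gives θ ≈ 18°, so the teeth strike nearly horizontally, while p = 50 mm gives θ ≈ 53°, directing much more of the cutting force downward into the table at the cost of more exposed blade.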
Uses
Although table saws are used primarily for cutting wood, both hardwood and softwood, they can also cut many other materials, including metal and plastic. A table saw is commonly used for ripping wood (cutting it to width), crosscutting (cutting it to length), kerfing (making small cuts to bend the wood), and cutting rabbets and grooves for joints. It can also make angled bevel cuts and other wood joints. This makes the table saw a versatile and essential tool in any woodworker's workshop.
Accessories
Outfeed tables: Table saws are often used to rip long boards or sheets of plywood or other sheet materials. The use of an outfeed table makes this process safer and easier. Many of these are shop built, while others are commercially available.
Infeed tables: Used to assist in feeding long boards or sheets of plywood. In the past, roller stands were essentially the only option, but commercially available infeed units that are more efficient and easier to use now exist.
Downdraft tables: Used to draw harmful dust particles away from the user without obstructing the user's movement or productivity.
Rip fence: Table saws commonly have a fence (guide) running from the front of the table (the side nearest the operator) to the back, parallel to the cutting plane of the blade. The distance of the fence from the blade can be adjusted, which determines where on the workpiece the cut is made. The fence is commonly called a "rip fence", referring to its use in guiding the workpiece during the process of making a rip cut. Most table saws come standard with a rip fence, but some high-end saws are available without a fence so that a fence of the user's choice can be purchased separately.
Featherboard: Featherboards are used to keep wood against the rip fence. They can consist of a single spring or many springs, and are often shop made from wood. They are held in place by high-strength magnets, clamps, or expansion bars in the miter slot.
Hold down: The circular blade of a table saw will lift a piece of wood that is not held down. Hold downs can be a vertical version of featherboards, attached to a fence with magnets or clamps. Another type of hold down uses wheels on a spring-loaded mechanism to push down on a workpiece as it is guided past the blade.
Sub fence: A sacrificial piece of wood clamped to the rip fence allows a dado set to cut into the sub fence rather than the fence itself, permitting rabbet cuts with a dado blade.
Miter gauge: The table has one or two slots (grooves) running from front to back, also parallel to the cutting plane of the blade. These miter slots (or miter grooves) are used to position and guide either a miter gauge (also known as a crosscut fence) or crosscut sled. The miter gauge is usually set to be at 90 degrees to the plane of the blade's cut, to cause the cut made in the workpiece to be made at a right angle. The miter gauge can also be adjusted to cause the cut to be made at a precisely controlled angle (a so-called miter cut).
Crosscut sled: A crosscut sled is generally used to hold the workpiece at a fixed 90-degree angle to the blade, allowing precise repeatable cuts at the most commonly used angle. The sled is normally guided by a runner fastened under it that slides in a miter slot. This device is normally shop made, but can be purchased.
Tenon jig: A tenon jig is a device that holds the workpiece vertically so cuts can be made across the end. This allows tenons to be formed. Often this is a purchased item, but it can be shop made. The tenon jig is guided by a miter slot or a fence.
Stacked dado: Saws made for the US market are generally capable of using a stacked dado blade set. This is a kit with two outer blades and a number of inner "chip breakers" that can be used to cut dados (grooves in the workpiece) of any width up to the maximum (generally ). Stacked dado sets are available in diameters of . 8- and 10-inch stacked dado sets are not recommended for saws with or less. Although 10-inch stacked dado sets are available with a bore, these are recommended with a bore.
Inserts: Table saws have a changeable insert in the table through which the blade projects. Purchasable inserts are usually made out of metal. Zero-clearance inserts can be made of a sawable material such as plastic or wood. When a zero-clearance insert is initially inserted, the blade is raised through the insert creating the slot. This creates a slot with no gaps around the blade. The zero clearance insert helps prevent tearout by providing support for wood fibers right next to the blade thus helping to make a very clean cut. Other inserts can be bought or created in the same manner, such as a dado insert.
Splitter: A splitter or riving knife is a vertical projection located behind the saw blade. This can be a pin or a fin. It is slightly narrower in width than the blade and located directly in line with the blade. The splitter prevents the material being cut from being rotated thereby helping to prevent kickback. Splitters may incorporate pawls, a mechanism with teeth designed to bite into wood and preventing kickback. Splitters can take many forms, including being part of the blade guard that comes standard with the saw. Another type of splitter is simply a vertical pin or fin attached to an insert. Splitters are available commercially or can be made from wood, metal or plastic.
Anti-kickback pawls: Most modern US table saws are fitted with kickback pawls, a set of small spring-loaded metal teeth on a free-swinging pawl (usually attached to the guard) which help to put a strong downward force on a board. This can help to immobilize the board in the event of a kickback. However, these have sometimes been found to be somewhat ineffective when compared to a splitter.
Push stick: A handheld safety device used to safely maneuver a workpiece, keeping it flat against the machine table and fence while it is being cut.
References
Further reading
Tolpin, Jim (2004). Table Saw Magic. Popular Woodworking Books, an imprint of F&W Publications.
Anthony, Paul (2009). Taunton's Complete Illustrated Guide to Tablesaws. Newtown, CT: Taunton Press.
Saws
Woodworking machines
Power tools | Table saw | [
"Physics",
"Technology"
] | 5,235 | [
"Machines",
"Power tools",
"Physical quantities",
"Physical systems",
"Power (physics)",
"Woodworking machines"
] |
317,212 | https://en.wikipedia.org/wiki/Reactive%20center | In chemistry, a reactive center, also called a propagating center, is a particular location, usually an atom, within a chemical compound that is the likely center of the reactions in which the compound is involved. In chain-growth polymer chemistry, this is also the point of propagation for a growing chain. The reactive center is commonly radical, anionic, or cationic, but can also take other forms.
References
Polymer chemistry | Reactive center | [
"Chemistry",
"Materials_science",
"Engineering"
] | 89 | [
"Polymer stubs",
"Organic chemistry stubs",
"Materials science",
"Polymer chemistry"
] |