| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
61,295,438 | https://en.wikipedia.org/wiki/Catovirus | Catovirus (CatV) is a genus of giant double-stranded DNA viruses (nucleocytoplasmic large DNA viruses). The genus was detected during the analysis of metagenomic samples from bottom sediments of basins at the wastewater treatment plant in Klosterneuburg, Austria.
The new genera Klosneuvirus (KNV), Hokovirus and Indivirus (all found in the same sewage waters) were described together with Catovirus, forming a putative virus subfamily, Klosneuvirinae (klosneuviruses), with KNV as the type genus.
Catovirus has a large genome of 1.53 million base pairs (1,176 gene families), the second largest among known klosneuviruses after that of KNV (1.57 million base pairs, 1,272 gene families). The GC content is 26.4%.
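GC content is simply the percentage of bases in a sequence that are guanine or cytosine; a minimal Python sketch of the calculation, using an invented fragment rather than actual Catovirus sequence data:

```python
def gc_content(seq: str) -> float:
    """Return the GC content of a DNA sequence as a percentage."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

# Hypothetical fragment for illustration only (not Catovirus sequence data).
fragment = "ATATTGCAATTAAATACGTTAA"
print(f"GC content: {gc_content(fragment):.1f}%")
```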
Metagenomic classification based on 18S rRNA analysis indicates that its hosts are related to the Cercozoa, a group of simple protists.
The phylogenetic tree topology of Mimiviridae is still under discussion. Some authors (CNS 2018) propose grouping the klosneuviruses with Cafeteria roenbergensis virus (CroV) and Bodo saltans virus (BsV) into a tentative subfamily called Aquavirinae. Another proposal places all of these, together with the mimiviruses, in a subfamily Megavirinae.
See also
Nucleocytoplasmic large DNA viruses
Girus
Mimiviridae
References
Further reading
Mitch Leslie: "Giant viruses found in Austrian sewage fuel debate over potential fourth domain of life". Science, 5 April 2017, doi:10.1126/science.aal1005.
Virus genera
Mimiviridae
Unaccepted virus taxa | Catovirus | Biology | 364 |
43,329,220 | https://en.wikipedia.org/wiki/The%20World%27s%20Online%20Festival | The World's Online Festival (WOLF) (formerly Palringo) is a British messaging and gaming platform with apps for iOS and Android. Users can create groups where they can send text, image, and short audio messages. Groups feature a Stage which provides five live microphone slots for users to chat. The app features a store where users can purchase in-app credits that can be used to buy additional features, utility chatbots and games, and to send in-app gifts to other users. Users have a reputation level that increases from actions such as playing chat games or purchasing credits.
WOLF is headquartered in London, with branch offices in Newcastle, UK, and Amman, Jordan. As of September 2014, the platform had 28 million registered accounts. Palringo offers a technology (Palringo Local) that allows users to establish and view the location of their peers by means of a manually set position, GPS, triangulation, or estimation from the internet connection in use at that moment. It uses the Google Maps API to show peer locations and also offers a "nearby" section in friends and group lists to show users who are close by.
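The article does not say how Palringo computes "nearby", but any such feature ultimately reduces to a distance check between coordinate pairs; a sketch using the standard haversine formula (the 5 km threshold and all names here are illustrative, not Palringo's API):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Hypothetical peers; list anyone within 5 km as "nearby".
me, peer = (51.5074, -0.1278), (51.5155, -0.0922)  # two London coordinates
if haversine_km(*me, *peer) < 5.0:
    print("peer is nearby")
```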
History
Palringo was developed by Martin Rosinski in 2006. In August 2006, Northstar Ventures invested £650,000.
In 2010, Palringo launched an enterprise version of the app, offering companies private group communication networks.
In August 2012, Palringo began a shift towards the consumer market. The current business model involves selling virtual products such as premium accounts, decorative profile stickers (called Charms), chat bots, and functional utilities.
In May 2014, Palringo acquired Swedish games developer Free Lunch Design (FLD).
In May 2015, Palringo acquired Finnish chat games developer Tribe Studios.
In late 2019, Palringo introduced Stages, which allows up to five users to broadcast live audio within their chat group.
In February 2020, Palringo officially re-branded to The World's Online Festival (WOLF).
Charity
In 2013, the company launched a charitable initiative aimed at their users in the Gulf region during the month of Ramadan, which enabled users to donate Palringo credits through a special charity Bot. Palringo users raised over US$230,000 for Charity Right and Islamic Relief. During Ramadan in 2015, Palringo raised more than $300,000 to be split between Action Against Hunger and Islamic Relief Worldwide.
References
Android (operating system) software
IOS software
Symbian software
Instant messaging clients
2006 software | The World's Online Festival | Technology | 510 |
47,375,191 | https://en.wikipedia.org/wiki/Dyadic%20space%20%28cell%20biology%29 | The dyadic space is the name for the volume of cytoplasm between pairs (dyads) of areas where the cell membrane and an organelle such as the endoplasmic reticulum (or sarcoplasmic reticulum) come into close contact (within 10-12 nanometers) of each other, creating what are known as dyadic clefts.
The space is important for ionic signalling, for example in the phenomenon of calcium-induced calcium release: extracellular calcium enters the cell through ion channels in the T-tubules, rapidly raising the calcium concentration in the dyadic space; this triggers ryanodine receptors on the sarcoplasmic reticulum to release more calcium, which in turn triggers cardiac myocyte contraction, the heartbeat.
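A deliberately simplified sketch of the trigger logic described above; the threshold and concentration values are invented for illustration, and real calcium-induced calcium release involves continuous channel kinetics rather than a single threshold:

```python
# Toy illustration of calcium-induced calcium release (CICR).
# All numbers are invented; real CICR is governed by continuous
# channel kinetics, not a single threshold.
RYR_THRESHOLD_UM = 10.0   # hypothetical RyR activation threshold (uM)

dyadic_ca_um = 0.1        # resting dyadic Ca2+ concentration (uM)
dyadic_ca_um += 15.0      # Ca2+ influx through T-tubule ion channels

if dyadic_ca_um > RYR_THRESHOLD_UM:
    # Ryanodine receptors open: the sarcoplasmic reticulum amplifies
    # the small trigger influx with a much larger release.
    dyadic_ca_um += 100.0
    print(f"CICR triggered: dyadic [Ca2+] ~ {dyadic_ca_um:.1f} uM -> contraction")
```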
References
Cell biology
Cell anatomy | Dyadic space (cell biology) | Biology | 171 |
52,129,980 | https://en.wikipedia.org/wiki/%CE%92-Fuoxymorphamine | β-Fuoxymorphamine is an opioid acting at μ-opioid receptors. It is used experimentally.
See also
β-Funaltrexamine
References
Opioids
Heterocyclic compounds with 5 rings
Oxygen heterocycles
Nitrogen heterocycles
Amides
Methyl esters
Fumarate esters
Hydroxyarenes | Β-Fuoxymorphamine | Chemistry | 76 |
47,404,165 | https://en.wikipedia.org/wiki/Leptopharsa%20tacanae | Leptopharsa tacanae is an extinct species of lace bug in the family Tingidae. The species is known solely from Late Oligocene to Early Miocene Mexican amber deposits. It is the first lace bug described from Mexican amber.
History and classification
Leptopharsa tacanae is known from the holotype specimen, collection number TOT158.1, an inclusion in a transparent chunk of Mexican amber, also known as Chiapas amber. As of 2014, the type insect was part of the David Coty fossil collection provisionally housed at the Muséum National d'Histoire Naturelle, Paris, France. The amber dates to a range between 22.5 million years, for the youngest sediments of the Balumtun Sandstone, and 26 million years, for the La Quinta Formation. This age range, which straddles the boundary between the Late Oligocene and Early Miocene, is complicated by both formations being secondary deposits for the amber; consequently, the given age range is only the youngest that the fossil might be. The L. tacanae fossil was recovered from amber deposits along the Yalbantuc River, near Totolapa in the Chiapas depression, distant from the major Mexican amber deposits in the Simojovel region. The geology of the Totolapa region is currently identified as Eocene in age, but the fauna of the amber is very similar to both the Simojovel fauna and to Dominican amber, indicating that a reassessment of the geology may be needed.
The holotype was first studied by the paleoentomologists David Coty, Romain Garrouste and André Nel of the Muséum National d'Histoire Naturelle. Their type description of the species was published in the Annales de la Société Entomologique de France in 2014. The specific epithet tacanae is derived from the Tacaná volcano, which lies on the border of Mexico and Guatemala and is the second highest volcano in Central America.
Leptopharsa tacanae is the first lace bug to be described from Mexican amber fossils, while the related Dominican amber fauna is much more diverse, with six species described as of 2014: Eocader balyrussus, Leptopharsa evsyunini, Leptopharsa frater, Leptopharsa poinari, Stephanitis rozanovi and Phymacysta stysi.
Description
The L. tacanae type specimen is a male that has an approximately long body, and is long with the wings included. The original coloration of the individual is not clear due to the amber; however, the color patterning of light and dark is well preserved. The venation on the hemelytra has the typical thickening, and four of the cross veins in the costal area show a distinct darkened color tone. The flattened extensions along both the hemelytra and the abdomen are edged with small spines, each of which bears short, upright setae. The extensions are divided into two rows of subrectangular cells by a center vein. The antennae, nearly as long as the body, are composed of four elongated segments, and the last two segments are both covered with a dense, semi-erect covering of setae. The last antennal segment is also visibly darker in coloration. The head has five spines in total: three located towards the front of the head and two at the eyes. Two of the front spines are a pair arising from the antennae bases, while the third arises between the two. The two occipital spines are curved and lie against the head capsule.
References
Tingidae
Oligocene insects of North America
Fossil taxa described in 2014
Miocene insects of North America
Mexican amber
Species known from a single specimen | Leptopharsa tacanae | Biology | 772 |
29,555,674 | https://en.wikipedia.org/wiki/BICEP%20and%20Keck%20Array | BICEP (Background Imaging of Cosmic Extragalactic Polarization) and the Keck Array are a series of cosmic microwave background (CMB) experiments. They aim to measure the polarization of the CMB; in particular, measuring the B-mode of the CMB. The experiments have had five generations of instrumentation, consisting of BICEP1 (or just BICEP), BICEP2, the Keck Array, BICEP3, and the BICEP Array. The Keck Array started observations in 2012 and BICEP3 has been fully operational since May 2016, with the BICEP Array beginning installation in 2017/18.
Purpose and collaboration
The purpose of the BICEP experiment is to measure the polarization of the cosmic microwave background. Specifically, it aims to measure the B-modes (curl component) of the polarization of the CMB. BICEP operates from Antarctica, at the Amundsen–Scott South Pole Station. All three instruments have mapped the same part of the sky, around the south celestial pole.
The institutions involved in the various instruments are Caltech, Cardiff University, University of Chicago, Center for Astrophysics Harvard & Smithsonian, Jet Propulsion Laboratory, CEA Grenoble (FR), University of Minnesota and Stanford University (all experiments); UC San Diego (BICEP1 and 2); National Institute of Standards and Technology (NIST), University of British Columbia and University of Toronto (BICEP2, Keck Array and BICEP3); and Case Western Reserve University (Keck Array).
The series of experiments began at the California Institute of Technology in 2002. In collaboration with the Jet Propulsion Laboratory, physicists Andrew Lange, Jamie Bock, Brian Keating, and William Holzapfel began the construction of the BICEP1 telescope which deployed to the Amundsen-Scott South Pole Station in 2005 for a three-season observing run. Immediately after deployment of BICEP1, the team, which now included Caltech postdoctoral fellows John Kovac and Chao-Lin Kuo, among others, began work on BICEP2. The telescope remained the same, but new detectors were inserted into BICEP2 using a completely different technology: a printed circuit board on the focal plane that could filter, process, image, and measure radiation from the cosmic microwave background. BICEP2 was deployed to the South Pole in 2009 to begin its three-season observing run which yielded the detection of B-mode polarization in the cosmic microwave background.
BICEP1
The first BICEP instrument (known during development as the "Robinson gravitational wave background telescope") observed the sky at 100 and 150 GHz (3 mm and 2 mm wavelength) with angular resolutions of 1.0 and 0.7 degrees respectively. It had an array of 98 detectors (50 at 100 GHz and 48 at 150 GHz), which were sensitive to the polarisation of the CMB. A pair of detectors constitutes one polarization-sensitive pixel. The instrument, a prototype for future instruments, was first described in Keating et al. 2003; it started observing in January 2006 and ran until the end of 2008.
BICEP2
The second-generation instrument was BICEP2. Featuring a greatly improved focal-plane transition edge sensor (TES) bolometer array of 512 sensors (256 pixels) operating at 150 GHz, this 26 cm aperture telescope replaced the BICEP1 instrument, and observed from 2010 to 2012.
Reports stated in March 2014 that BICEP2 had detected B-modes from gravitational waves in the early universe (called primordial gravitational waves), a result reported by the four co-principal investigators of BICEP2: John M. Kovac of the Center for Astrophysics Harvard & Smithsonian; Chao-Lin Kuo of Stanford University; Jamie Bock of the California Institute of Technology; and Clem Pryke of the University of Minnesota.
An announcement was made on 17 March 2014 from the Center for Astrophysics Harvard & Smithsonian. The reported detection was of B-modes at the level of r = 0.20 (+0.07, −0.05), disfavouring the null hypothesis (r = 0) at the level of 7 sigma (5.9σ after foreground subtraction). However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported; the accepted and reviewed version of the discovery paper contains an appendix discussing the possible production of the signal by cosmic dust. In part because the large value of the tensor-to-scalar ratio contradicts limits from the Planck data, many scientists consider dust the most likely explanation for the detected signal. For example, on June 5, 2014 at a conference of the American Astronomical Society, astronomer David Spergel argued that the B-mode polarization detected by BICEP2 could instead be the result of light emitted from dust between the stars in our Milky Way galaxy.
A preprint released by the Planck team in September 2014, eventually accepted in 2016, provided the most accurate measurement yet of dust, concluding that the signal from dust is the same strength as that reported from BICEP2. On January 30, 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.
BICEP2 has combined its data with the Keck Array and Planck in a joint analysis. A March 2015 publication in Physical Review Letters set an upper limit on the tensor-to-scalar ratio of r < 0.12.
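For context, the tensor-to-scalar ratio r compares the amplitude of primordial tensor (gravitational-wave) perturbations to that of scalar (density) perturbations at a chosen pivot scale (BICEP/Keck limits are conventionally quoted at 0.05 Mpc⁻¹); in single-field slow-roll inflation it is set by the slow-roll parameter ε:

```latex
r \;\equiv\; \left.\frac{A_t}{A_s}\right|_{k = k_*},
\qquad
r \simeq 16\,\epsilon \quad \text{(single-field slow roll)}
```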
The BICEP2 affair forms the subject of a book by Brian Keating.
Keck Array
Immediately next to the BICEP telescope at the Martin A. Pomerantz Observatory building at the South Pole was an unused telescope mount previously occupied by the Degree Angular Scale Interferometer. The Keck Array was built to take advantage of this larger telescope mount. This project was funded by $2.3 million from W. M. Keck Foundation, as well as funding from the National Science Foundation, the Gordon and Betty Moore Foundation, the James and Nelly Kilroy Foundation and the Barzan Foundation. The Keck Array project was originally led by Andrew Lange.
The Keck Array consists of five polarimeters, each very similar to the BICEP2 design, but using a pulse tube refrigerator rather than a large liquid helium cryogenic storage dewar.
The first three started observations in the austral summer of 2010–11; another two started observing in 2012. All of the receivers observed at 150 GHz until 2013, when two of them were converted to observe at 100 GHz. Each polarimeter consists of a refracting telescope (to minimise systematics) cooled by a pulse tube cooler to 4 K, and a focal-plane array of 512 transition edge sensors cooled to 250 mK, giving a total of 2560 detectors, or 1280 dual-polarization pixels.
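The detector bookkeeping in the paragraph above is simple multiplication; a quick check in Python:

```python
receivers = 5             # Keck Array polarimeters
tes_per_receiver = 512    # transition edge sensors per focal plane
detectors = receivers * tes_per_receiver
pixels = detectors // 2   # two orthogonal-polarization TESs form one pixel
print(detectors, pixels)  # 2560 1280
```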
In October 2018, the first results from the Keck Array (combined with BICEP2 data) were announced, using observations up to and including the 2015 season. These yielded an upper limit on cosmological B-modes of (95% confidence level), which reduces to in combination with Planck data.
In October 2021, new results were announced giving r < 0.036 (at 95% confidence level), based on the BICEP/Keck 2018 observing season combined with Planck and WMAP data.
BICEP3
Once the Keck Array was completed in 2012, it was no longer cost-effective to continue operating BICEP2. However, using the same technique as the Keck Array to eliminate the large liquid helium dewar, a much larger telescope was installed on the original BICEP telescope mount.
BICEP3 consists of a single telescope with the same 2560 detectors (observing at 95 GHz) as the five-telescope Keck Array, but a 68 cm aperture, providing roughly twice the optical throughput of the entire Keck Array. One consequence of the large focal plane is a larger 28° field of view, which will necessarily mean scanning some foreground-contaminated portions of the sky. It was installed (with initial configuration) at the pole in January 2015. It was upgraded for the 2015–2016 austral summer season to the full 2560-detector configuration. BICEP3 is also a prototype for the BICEP Array.
BICEP Array
The Keck Array is being succeeded by the BICEP Array, which consists of four BICEP3-like telescopes on a common mount, operating at 30/40, 95, 150 and 220/270 GHz. Installation began between the 2017 and 2018 observing seasons. It is scheduled to be fully installed by the 2020 observing season.
According to the project website: "BICEP Array will measure the polarized sky in five frequency bands to reach an ultimate sensitivity to the amplitude of IGW [inflationary gravitational waves] of σ(r) < 0.005" and "This measurement will be a definitive test of slow-roll models of inflation, which generally predict a gravitational-wave signal above approximately 0.01."
See also
Cosmology
Inflation (cosmology)
Atacama Cosmology Telescope
South Pole Telescope
Cosmology Large Angular Scale Surveyor
POLARBEAR
LiteBIRD, space-based CMB B-mode polarization search project
Spider, balloon-based CMB B-mode polarization project
References
External links
BICEP2 winter-over (2009–2012) Steffen Richter (9 winters at the South Pole).
Keck winter-over (2010-current) Robert Schwarz (12 winters at the South Pole).
Radio telescopes
Physics experiments
Cosmic microwave background experiments
Astronomical experiments in the Antarctic
Inflation (cosmology) | BICEP and Keck Array | Physics | 1,988 |
51,090,247 | https://en.wikipedia.org/wiki/Lomas | Lomas (Spanish for "hills"), also called fog oases and mist oases, are areas of fog-watered vegetation in the coastal desert of Peru and northern Chile. About 100 lomas near the Pacific Ocean are identified between 5°S and 30°S latitude, a north–south distance of about . Lomas range in size from a small vegetated area to more than and their flora includes many endemic species. Apart from river valleys and the lomas the coastal desert is almost without vegetation. Scholars have described individual lomas as "an island of vegetation in a virtual ocean of desert."
In a nearly rainless desert, the lomas owe their existence to the moist, dense fog and mist that rolls in from the Pacific. The fog is called garúa in Peru and camanchaca in Chile.
Environment
According to the Köppen climate classification system, the coastal desert of Peru and the Atacama Desert of Chile feature a rare desert climate, abbreviated "BWn" on climate maps, with the n denoting frequent fog. Temperatures are mild year-round and precipitation is nearly non-existent, averaging to per year in most locations. Many years have no precipitation at all. The Atacama Desert of Chile is commonly known as the driest non-polar place in the world.
Arica, Chile, in the middle portion of the coastal desert, went a record 173 months without measurable precipitation in the early 20th century.
Occasional rainfall is caused by El Niño. For example, in March 2015, the desert in Chile received about in one day which caused flooding. In a phenomenon called the flowering desert, after the rare rains the desert briefly blooms with flowers. Normally, with the nearly non-existent precipitation, the coastal desert is almost devoid of vegetation except in lomas and along rivers which originate in the Andes and cross the desert to the Pacific.
The moisture for the vegetation in the lomas comes from fog which rolls in from the nearby Pacific Ocean and envelops mountains that come down near the sea. The cold waters of the Humboldt Current run offshore. During the austral winter, thick stratus clouds, the garúa, creep inland to an altitude of most days from May until November. During this season the vegetation in the lomas is lush and green and many species of flowers bloom. In the austral summer, from December to April, the weather is mostly sunny and the lomas become drier. The moisturizing impact of the fog is increased by the mild temperatures throughout the year and the high average humidity of the coastal deserts. For example, Lima, Peru, located at 12°S latitude, has average monthly temperatures ranging from to , very cool for locations in the tropics. Lima's average humidity is 84 percent, more than double the average humidity in most deserts.
Lomas comprise less than two percent of the coastal desert areas of Chile and Peru. Peru has more than 40 lomas totaling in area less than out of a total desert area of . Chile has almost 50 lomas with an area of less than out of a total desert area of .
Climate change
Teetering on a narrow edge of survival, the lomas are sensitive to climate change. Radio-carbon dating has indicated that, prior to 3800 BCE, the Peruvian desert north of Lima (12° S latitude) received more seasonal precipitation and was mostly vegetated. Lomas—isolated fog oases—existed only south of Lima. This is evidenced by the uniformity of plant species in present-day lomas north of Lima while lomas south of Lima have more endemic plant species, indicating geographic isolation. The cause of the climatic change was probably the duration and strength of El Niño events.
Destruction
Lomas have been impacted, and in some cases destroyed, by centuries of unregulated grazing, wood-cutting, and mining. In Chile, the Huasco (28°26′ S) and Copiapó (27°22′ S) river valleys once supported dense stands of trees. In the 18th century, the city of Copiapó was known as San Francisco de la Selva (Saint Francis of the Forest) for its extensive forests.
As the branches of trees and bushes trap the fog and create more moisture for other plants, their absence reduces the viability for all the plant life in the lomas.
In many locations the lomas were over-exploited for agriculture and grazing. One example is that, in prehistoric times, north of Ilo, Peru, far from any other source of water, four lomas-fed springs permitted about of irrigated agriculture plus grazing for llamas and alpacas. Hundreds of people of the Chiribaya culture benefited from this unlikely agriculture in a rainless land. Later, during the 17th century, Spanish colonists pastured 200 mules in these lomas. As late as 1951, a few tara trees still lived although the lomas were by then nearly devoid of all vegetation and population.
Preservation
In Peru, the Reserva Nacional de Lachay (National Preserve of Lachay) (11°22′S) protects north of Lima.
The Lomas de Atiquipa (15°48′S) is the largest and the best preserved lomas forest in Peru, covering more than with some 350 plant species, including 44 endemics. The National University of Saint Augustine in Arequipa has partnered with Peruvian conservation groups and the Nature Conservancy to preserve and restore the environment of the lomas. Included in the project is the installation of fog-catching nets to capture water, thereby helping the 80 families who live within the area to expand agriculture, primarily of olives. Similar methods have been used for the conservation of lomas in Lima.
In Chile, the Pan de Azúcar (26°09′S) and Llanos de Challe (28°10′S) National Parks and the La Chimba National Reserve (23°32′S, 70°21′W) preserve lomas. The richest diversity of lomas flora in Chile, however, is near the village of Paposo (25°00′S). The fog oases near Paposo occur at elevations of to , with altitudes from to having the most abundant growth of vegetation. The Paposo area has been declared a Zone of Ecological Protection by the Government of Chile.
References
Deserts of Chile
Deserts of Peru
Oases of Chile
Atacama Desert
Ecoregions of Chile
Ecoregions of Peru
Deserts and xeric shrublands
Fog | Lomas | Physics | 1,327 |
610,989 | https://en.wikipedia.org/wiki/Diorama | A diorama is a replica of a scene, typically a three-dimensional model either full-sized or miniature. Sometimes it is enclosed in a glass showcase for a museum. Dioramas are often built by hobbyists as part of related hobbies such as military vehicle modeling, miniature figure modeling, or aircraft modeling.
In the United States around 1950 and onward, natural history dioramas in museums became less fashionable, leading to many being removed, dismantled, or destroyed.
Etymology
Artists Louis Daguerre and Charles Marie Bouton coined the name "diorama" for a theatrical system that used variable lighting to give a translucent painting the illusion of depth and movement. It derives from Greek δια- (through) + ὅραμα (visible image) = "see-through image." The first use in reference to museum displays is recorded in 1902, although such displays existed before.
Modern
The current, popular understanding of the term "diorama" denotes a partially three-dimensional, full-size replica or scale model of a landscape typically showing historical events, nature scenes, or cityscapes, for purposes of education or entertainment.
One of the first uses of dioramas in a museum was in Stockholm, Sweden, where the Biological Museum opened in 1893 with several dioramas spread over three floors. Dioramas were also implemented by the Grigore Antipa National Museum of Natural History in Bucharest, Romania, and served as a source of inspiration for many important museums in the world, such as the American Museum of Natural History in New York and the Great Oceanographic Museum in Berlin (see the reference below).
Miniature
Miniature dioramas are typically much smaller, and use scale models and landscaping to create historical or fictional scenes. Such a scale model-based diorama is used, for example, in Chicago's Museum of Science and Industry to display railroading. This diorama employs a common model railroading scale of 1:87 (HO scale). Hobbyist dioramas often use scales such as 1:35 or 1:48.
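Converting real-world dimensions to any of these scales is a single division; a small Python helper using the scales mentioned above (the 20 m railway carriage is an invented example):

```python
def model_size_mm(real_size_m: float, scale_denominator: int) -> float:
    """Convert a real-world size in metres to a model size in millimetres."""
    return real_size_m * 1000.0 / scale_denominator

# A 20 m railway carriage at the scales mentioned above.
for denom in (87, 48, 35):  # HO scale 1:87, then 1:48 and 1:35
    print(f"1:{denom}: {model_size_mm(20.0, denom):.0f} mm")
```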
An early and exceptionally large example was created between 1830 and 1838 by a British Army officer, William Siborne, and represents the Battle of Waterloo at about 7.45 pm on 18 June 1815. The diorama measures and used around 70,000 model soldiers in its construction. It is now part of the collection of the National Army Museum in London.
Sheperd Paine, a prominent hobbyist, popularized the modern miniature diorama beginning in the 1970s.
Full-size
Modern museum dioramas may be seen in most major natural-history museums. Typically, these displays simulate a tilted plane effect to represent what would otherwise be a level surface, incorporating a painted background of distant objects. The displays often use false perspective, carefully modifying the scale of objects placed on the plane to reinforce an illusion through depth perception, in which objects of identical real-world size placed farther from the observer appear smaller than those closer. Often the distant painted background or sky will be painted upon a continuous curved surface so that the viewer is not distracted by corners, seams, or edges. All of these techniques are means of presenting a realistic-appearing view of a large scene in a compact space. A photograph or single-eye view of such a diorama can be especially convincing, since in this case there is no distraction by the binocular perception of depth.
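The false-perspective technique can be reduced to similar triangles: an object that should appear at an implied distance D, but is physically placed at distance d from the viewer, must be scaled by d/D to subtend the same visual angle. A sketch with invented numbers:

```python
def forced_perspective_scale(placed_distance_m: float, implied_distance_m: float) -> float:
    """Scale factor so an object at placed_distance subtends the same
    visual angle as a full-size object at implied_distance (similar triangles)."""
    return placed_distance_m / implied_distance_m

# A tree meant to read as 100 m away, physically placed 4 m from the
# viewer, is built at 1/25 of full size.
print(forced_perspective_scale(4.0, 100.0))  # 0.04
```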
Uses
Miniature dioramas may be used to represent scenes from historic events. A typical example of this type is the dioramas to be seen at Norway's Resistance Museum in Oslo, Norway.
Landscapes built around model railways can also be considered dioramas, even though they often have to compromise scale accuracy for better operating characteristics.
Hobbyists also build dioramas of historical or quasi-historical events using a variety of materials, including plastic models of military vehicles, ships or other equipment, along with scale figures and landscaping.
In the 19th and early 20th centuries, building dioramas of sailing ships was a popular handcraft of mariners. Building a diorama instead of a normal model had the advantage that the model was protected inside the framework and could easily be stowed below the bunk or behind the sea chest. Nowadays, such antique sailing-ship dioramas are valuable collectors' items.
One of the largest dioramas ever created was a model of the entire State of California built for the Panama–Pacific International Exposition of 1915 and that for a long time was installed in San Francisco's Ferry Building.
Dioramas are widely used in the American educational system, mostly in elementary and middle schools. They are often made to represent historical events, ecological biomes, or cultural scenes, or to visually depict literature. They are usually made from a shoebox and contain a trompe-l'œil background contrasted with two- or three-dimensional models in the foreground. In California elementary schools, a popular assignment has fourth graders making a Spanish mission diorama to learn about the California Spanish missions.
Burmese-Chinese brothers Aw Boon Haw and Aw Boon Par, the developers of Tiger Balm, opened Haw Par Villa in 1937 in Singapore, where statues and dioramas were commissioned to teach traditional Chinese values. Today, the site contains over 150 giant dioramas depicting scenes from Chinese Literature, folklore, legends, history, philosophy and statuary of key Chinese religions, Taoism, Buddhism and Confucianism. The best-known attraction in Haw Par Villa is the Ten Courts of Hell, which features gruesome depictions of Hell in Chinese mythology and in Buddhism. Other major attractions include dioramas of scenes from Journey to the West, Fengshen Bang, The Twenty-four Filial Exemplars and the 12 animals in the Chinese zodiac. The park was a major local attraction during the 1970s and 1980s; it is estimated that the park then welcomed at least 1 million annual visitors, and is considered as part of Singapore's cultural heritage.
Historic
Daguerre and Bouton
The Diorama was a popular entertainment that originated in Paris in 1822. An alternative to the also popular "Panorama" (panoramic painting), the Diorama was a theatrical experience viewed by an audience in a highly specialized theatre. As many as 350 patrons would file in to view a landscape painting that would change its appearance both subtly and dramatically. Most would stand, though limited seating was provided. The show lasted 10 to 15 minutes, after which time the entire audience (on a massive turntable) would rotate to view a second painting. Later models of the Diorama theater even held a third painting.
The proscenium was 7.3 meters (24 feet) wide by 6.4 meters (21 feet) high. Each scene was hand-painted on linen, which was made transparent in selected areas. A series of these multi-layered linen panels were arranged in a deep, truncated tunnel, then illuminated by sunlight re-directed via skylights, screens, shutters, and colored blinds. Depending on the direction and intensity of the skillfully manipulated light, the scene would appear to change. The effect was so subtle and finely rendered that both critics and the public were astounded, believing they were looking at a natural scene.
The inventors and proprietors of the Diorama were Charles-Marie Bouton (1781–1853), a Troubadour-style painter who also worked at the Panorama under Pierre Prévost, and Louis Jacques Mandé Daguerre (1787–1851), formerly a decorator, manufacturer of mirrors, painter of panoramas, and designer and painter of theatrical stage illusions. Daguerre would later co-invent the daguerreotype, the first widely used method of photography.
A second diorama in Regent's Park in London was opened by an association of British men (having bought Daguerre's tableaux) in 1823, a year after the debut of Daguerre's Paris original. The building was designed by Augustus Charles Pugin. Bouton operated the Regent's Park diorama from 1830 to 1840, when it was taken over by his protégé, the painter Charles-Caïus Renoux.
The Regent's Park diorama was a popular sensation, and spawned immediate imitations. British artists like Clarkson Stanfield and David Roberts produced ever-more elaborate (moving) dioramas through the 1830s; sound effects and even living performers were added. Some "typical diorama effects included moonlit nights, winter snow turning into a summer meadow, rainbows after a storm, illuminated fountains," waterfalls, thunder and lightning, and ringing bells. A diorama painted by Daguerre is currently housed in the church of the French town Bry-sur-Marne, where he lived and died.
Daguerre diorama exhibitions (R.D. Wood, 1993)
Exhibition venues : Paris (Pa.1822-28) : London (Lo.1823-32) : Liverpool (Li.1827-32) : Manchester (Ma.1825-27) : Dublin (Du.1826-28) : Edinburgh (Ed.1828-36)
The Valley of Sarnen :: (Pa.1822-23) : (Lo.1823-24) : (Li.1827-28) : (Ma.1825) : (Du.1826-27) : (Ed. 1828-29 & 1831)
The Harbour of Brest :: (Pa.1823) : (Lo.1824-25 & 1837) : (Li.1825-26) : (Ma.1826-27) : (Ed. 1834–35)
The Holyrood Chapel :: (Pa.1823-24) : (Lo.1825) : (Li.1827-28) : (Ma.1827) : (Du.1828) : (Ed.1829-30)
The Roslin Chapel :: (Pa.1824-25) : (Lo.1826-27) : (Li.1828-29) : (Du.1827-28) : (Ed.1835)
The Ruins in a Fog :: (Pa.1825-26) : (Lo.1827-28) : (Ed.1832-33)
The Village of Unterseen :: (Pa.1826-27) : (Lo.1828-29) : (Li.1832) : (Ed.1833-34 & 1838)
The Village of Thiers :: (Pa.1827-28) : (Lo.1829-30) : (Ed. 1838–39)
The Mont St. Godard :: (Pa.1828-29) : (Lo.1830-32) : (Ed.1835-36)
Gottstein
Until 1968, Britain boasted a large collection of dioramas. These were originally housed in the Royal United Services Institute Museum (formerly the Banqueting House) in Whitehall. When the museum closed, the various exhibits and their 15 known dioramas were distributed to smaller museums throughout England and elsewhere, some ending up in Canada. These dioramas were the brainchild of the wealthy furrier Otto Gottstein (1892–1951) of Leipzig, a Jewish immigrant from Hitler's Germany, who was an avid collector and designer of flat model figures called flats. Gottstein's influence is first seen at the 1930 Leipzig International Exhibition, alongside the dioramas of Hahnemann of Kiel, Biebel of Berlin and Muller of Erfurt, all displaying their own figures and those commissioned from makers such as Ludwig Frank, in large diorama form.
In 1933, Gottstein left Germany, and in 1935 founded the British Model Soldier Society. Gottstein persuaded designer and painter friends in both Germany and France to help in the construction of dioramas depicting notable events in English history. But due to the war, many of the figures arrived in England incomplete. The task of turning Gottstein's ideas into reality fell to his English friends and those friends who had managed to escape from the Continent. Dennis (Denny) C. Stokes, a talented painter and diorama maker in his own right, was responsible for the painting of the backgrounds of all the dioramas, creating a unity seen throughout the whole series. Denny Stokes was given the overall supervision of the fifteen dioramas.
The Landing of the Romans under Julius Caesar in 55 B.C.
The Battle of Hastings
The Storming of Acre (figures by Muller)
The Battle of Crecy (figures by Muller)
The Field of the Cloth of Gold
Queen Elizabeth reviewing her troops at Tilbury
The Battle of Marston Moor
The Battle of Blenheim (painted by Douchkine)
The Battle of Plessey
The Battle of Quebec (engraved by Krunert of Vienna)
The Old Guard at Waterloo
The Charge of the Light Brigade
The Battle of Ulundi (figures by Ochel and Petrocochino/Paul Armont)
The Battle of Fleurs
The D-Day landings
Krunert, Schirmer, Frank, Frauendorf, Maier, Franz Rieche, and Oesterrich were also involved in the manufacture and design of figures for the various dioramas. Krunert (a Viennese), like Gottstein an exile in London, was given the job of engraving for The Battle of Quebec. The Death of Wolfe was found to be inaccurate and had to be redesigned. The names of the vast majority of painters employed by Gottstein are mostly unknown, most lived and worked on the continent, among them Gustave Kenmow, Leopold Rieche, L. Dunekate, M. Alexandre, A. Ochel, Honey Ray, and, perhaps Gottstein's top painter, Vladimir Douchkine (a Russian émigré who lived in Paris). Douchkine was responsible for painting two figures of the Duke of Marlborough on horseback for The Blenheim Diorama, one of which was used, the other, Gottstein being the true collector, was never released.
Denny Stokes painted all the backgrounds of all the dioramas. Herbert Norris, the historical costume designer whom J. F. Lovel-Barnes introduced to Gottstein, was responsible for the costume design of the Ancient Britons, the Normans and Saxons, some of the figures of The Field of the Cloth of Gold, and the Elizabethan figures for Queen Elizabeth at Tilbury. J. F. Lovel-Barnes was responsible for The Battle of Blenheim, selecting the figures and arranging the scene. During World War II, when flat figures became unavailable, Gottstein completed his ideas using Greenwood and Ball's 20 mm figures. In time, a fifteenth diorama was added using these 20 mm figures, representing the D-Day landings. When all the dioramas were completed, they were displayed along one wall in the Royal United Services Institute Museum. When the museum was closed, the fifteen dioramas were distributed to various museums and institutions. The greatest number are to be found at the Glenbow Museum (130 9th Avenue S.E., Calgary, Alberta, Canada): The Landing of the Romans under Julius Caesar in 55 BC, The Battle of Crecy, The Battle of Blenheim, The Old Guard at Waterloo and The Charge of the Light Brigade at Balaclava.
The state of these dioramas is one of debate; John Garratt (The World of Model Soldiers) claimed in 1968, that the dioramas "appear to have been partially broken up and individual figures have been sold to collectors". According to the Glenbow Institute (Barry Agnew, curator) "the figures are still in reasonable condition, but the plaster groundwork has suffered considerable deterioration". There are no photographs available of the dioramas. The Battle of Hastings diorama was to be found in the Old Town Museum, Hastings, and is still in reasonable condition. It shows the Norman cavalry charging up Senlac Hill toward the Saxon lines.
The Storming of Acre is in the Museum of Artillery at the Rotunda, Woolwich. John Garratt, in Encyclopedia of Model Soldiers, states that The Field of the Cloth of Gold was in the possession of the Royal Military School of Music, Kneller Hall; according to the curator, the diorama had not been in his possession since 1980, nor is it listed in their Accession Book, so the whereabouts of this diorama is unknown.
The Battle of Ulundi is housed in the Staffordshire Regiment Museum at Whittington near Lichfield in Staffordshire, UK
Wong
San Francisco, California artist Frank Wong (born 22 September 1932) created dioramas that depict the San Francisco Chinatown of his youth during the 1930s and 1940s. In 2004, Wong donated seven miniatures of scenes of Chinatown, titled "The Chinatown Miniatures Collection", to the Chinese Historical Society of America (CHSA). The dioramas are on permanent display in CHSA's Main Gallery:
"The Moon Festival"
"Shoeshine Stand"
"Chinese New Year"
"Chinese Laundry"
"Christmas Scene"
"Single Room"
"Herb Store"
Documentary
San Francisco filmmaker James Chan is producing and directing a documentary about Wong and the "changing landscape of Chinatown" in San Francisco. The documentary is tentatively titled, "Frank Wong's Chinatown".
Other
Painters of the Romantic era like John Martin and Francis Danby were influenced to create large and highly dramatic pictures by the sensational dioramas and panoramas of their day. In one case, the connection between life and diorama art became intensely circular. On 1 February 1829, John Martin's brother Jonathan, known as "Mad Martin," set fire to the roof of York Minster. Clarkson Stanfield created a diorama re-enactment of the event, which premiered on 20 April of the same year; it employed a "safe fire" via chemical reaction as a special effect. On 27 May, the "safe" fire proved to be less safe than planned: it set a real fire in the painted cloths of the imitation fire, which burned down the theater and all of its dioramas.
Nonetheless, dioramas remained popular in England, Scotland, and Ireland through most of the 19th century, lasting until 1880.
A small scale version of the diorama called the Polyrama Panoptique could display images in the home and was marketed from the 1820s.
Natural history
Natural history dioramas seek to imitate nature and, since their conception in the late 19th century, aim to "nurture a reverence for nature [with its] beauty and grandeur". They have also been described as a means to visually preserve nature as different environments change due to human involvement. They were extremely popular during the first half of the 20th century, both in the US and UK, later on giving way to television, film, and new perspectives on science.
Like historical dioramas, natural history dioramas are a mix of two- and three-dimensional elements. What sets natural history dioramas apart from other categories is the use of taxidermy in addition to the foreground replicas and painted background. The use of taxidermy means that natural history dioramas derive not only from Daguerre's work, but also from that of taxidermists, who were used to preparing specimens for either science or spectacle. It was only with the dioramas' precursors (and, later on, dioramas) that both these objectives merged. Popular diorama precursors were produced by Charles Willson Peale, an artist with an interest in taxidermy, during the early 19th century. To present his specimens, Peale "painted skies and landscapes on the back of cases displaying his taxidermy specimens". By the late 19th century, the British Museum held an exhibition featuring taxidermy birds set on models of plants.
The first habitat diorama created for a museum was constructed by taxidermist Carl Akeley for the Milwaukee Public Museum in 1889, where it is still held. Akeley set taxidermy muskrats in a three-dimensional re-creation of their wetland habitat with a realistic painted background. With the support of curator Frank M. Chapman, Akeley designed the popular habitat dioramas featured at the American Museum of Natural History. Combining art with science, these exhibitions were intended to educate the public about the growing need for habitat conservation. The modern AMNH Exhibitions Lab is charged with the creation of all dioramas and otherwise immersive environments in the museum.
A predecessor of Akeley, the naturalist and taxidermist Martha Maxwell, created a famous habitat diorama for the first World's Fair in 1876. The complex diorama featured taxidermied animals in realistic action poses, running water, and live prairie dogs. It is speculated that this display was the first of its kind outside of a museum. Maxwell's pioneering diorama work is said to have influenced major figures in taxidermy history who entered the field later, such as Akeley and William Temple Hornaday.
Soon, the concern for accuracy came. Groups of scientists, taxidermists, and artists would go on expeditions to ensure accurate backgrounds and collect specimens, though some would be donated by game hunters. Natural history dioramas reached the peak of their grandeur with the opening of the Akeley Hall of African Mammals in 1936, which featured large animals, such as elephants, surrounded by even larger scenery. Nowadays, various institutions lay different claims to notable dioramas. The Milwaukee Public Museum still displays the world's first diorama, created by Akeley; the American Museum of Natural History, in New York, has what might be the world's largest diorama: a life-size replica of a blue whale; the Biological Museum in Stockholm, Sweden is known for its three dioramas, all created in 1893, and all in original condition; the Powell-Cotton Museum, in Kent, UK, is known for having the world's oldest, unchanged, room-sized diorama, built in 1896.
Construction
Natural history dioramas typically consist of three parts:
The painted background
The foreground
Taxidermy specimens
Preparations for the background begin in the field, where an artist takes photographs and sketches reference pieces. Once back at the museum, the artist has to depict the scenery with as much realism as possible. The challenge lies in the fact that the wall used is curved: this allows the background to surround the display without seams joining different panels. At times the wall also curves upward to meet the light above and form a sky. Because the wall is curved, whatever the artist paints will be distorted by perspective; it is the artist's job to paint in a way that minimises this distortion.
The foreground is created to mimic the ground, plants and other accessories to scenery. The ground, hills, rocks, and large trees are created with wood, wire mesh, and plaster. Smaller trees are either used in their entirety or replicated using casts. Grasses and shrubs can be preserved in solution or dried to then be added to the diorama. Ground debris, such as leaf litter, is collected on site and soaked in wallpaper paste for preservation and presentation in the diorama. Water is simulated using glass or plexiglass with ripples carved on the surface. For a diorama to be successful, the foreground and background must merge, so both artists have to work together.
Taxidermy specimens are usually the centrepiece of dioramas. Since they must entertain as well as educate, specimens are set in lifelike poses, so as to convey a narrative of an animal's life. Smaller animals are usually made with rubber moulds and painted. Larger animals are prepared by first making a clay sculpture of the animal. This sculpture is made over the actual, posed skeleton of the animal, with reference to moulds and measurements taken in the field. A papier-mâché mannequin is prepared from the clay sculpture, and the animal's tanned skin is sewn onto the mannequin. Glass eyes substitute for the real ones.
If an animal is large enough, the scaffolding that holds the specimen needs to be incorporated into the foreground design and construction.
Toy examples
Lego
Lego dioramas are dioramas built from Lego pieces. They range from small vignettes to large, table-sized displays, and are sometimes constructed as a collaboration between two or more builders.
Playmobil
Playmobil dioramas are dioramas that are made of Playmobil pieces.
See also
Armor Modeling and Preservation Society
Cosmorama
Cyclorama
Model airport
Moving panorama
Myriorama
Nativity scene
Model figure
Tableau vivant
Toy
Toy soldier
Notes
References
Dioramas Muzeul National de Istorie Naturala Grigore Antipa
External links
R. D. Wood's Essays on the early history of photography and the Diorama
The world's largest collection of antique sailing ship dioramas
World War II Dioramas in 1:35 scale
Audiovisual introductions in 1822
Scale modeling
Figurines
Visual arts genres
Landscape art by medium
1820s neologisms | Diorama | Physics | 5,196 |
488,640 | https://en.wikipedia.org/wiki/List%20of%20engineering%20societies | An engineering society is a professional organization for engineers of various disciplines. Some are umbrella type organizations which accept many different disciplines, while others are discipline-specific. Many award professional designations, such as European Engineer, professional engineer, chartered engineer, incorporated engineer or similar. There are also many student-run engineering societies, commonly at universities or technical colleges.
Africa
Ghana
Ghana Institution of Engineers
Nigeria
Nigerian Society of Engineers
Council for the Regulation of Engineering in Nigeria
South Africa
South African Institute of Electrical Engineers
Engineering Council of South Africa
Zimbabwe
Zimbabwe Institution of Engineers
Americas
Canada
In Canada, the term "engineering society" sometimes refers to organizations of engineering students as opposed to professional societies of engineers. The Canadian Federation of Engineering Students, whose membership consists of most of the engineering student societies from across Canada (see below), is the national association of undergraduate engineering student societies in Canada.
Canada also has many traditions related to the calling of an engineer.
The Engineering Institute of Canada (French: l'Institut Canadien des ingénieurs) has the following member societies:
Institution of Mechanical Engineers (Canadian Branch of the IMechE)
Canadian Maritime Section of the Marine Technology Society
Canadian Nuclear Society
Canadian Society for Chemical Engineering
Canadian Society for Civil Engineering
Ontario
Professional Engineers Ontario
Engineering Society of Queen's University
Lassonde Engineering Society
United States
Asia
Association of Southeast Asian Nations
ASEAN Academy of Engineering and Technology
Azerbaijan
Caspian Engineers Society
Bangladesh
Bangladesh Computer Society
Institution of Engineers, Bangladesh
Professional Engineers of Bangladesh
China
Chinese Academy of Engineering
Chinese Academy of Sciences
China Association for Science and Technology
Chinese Society for Electrical Engineering
Hong Kong
Hong Kong Institution of Engineers
International Association of Engineers
India
Aeronautical Society of India
Computer Society of India
Engineering Council of India
Indian Institute of Chemical Engineers
Indian Institution of Industrial Engineering
Indian Society for Technical Education
Indian Science Congress Association
Institution of Electronics and Telecommunication Engineers
Institution of Engineers (India)
Institution of Mechanical Engineers (India)
Society of EMC Engineers (India)
Japan
Japan Society of Civil Engineers
Union of Japanese Scientists and Engineers
Jordan
Jordanian Engineers Association
Malaysia
Board of Engineers Malaysia
Pakistan
National Technology Council (Pakistan)
Pakistan Engineering Council
Philippines
In the Philippines, the Professional Regulation Commission (PRC) is a three-member commission attached to the office of the president of the Philippines. Its mandate is to regulate and supervise the practice of professionals (except lawyers) who constitute the highly skilled manpower of the country. As the agency in charge of the professional sector, the PRC plays a strategic role in developing the corps of professionals for industry, commerce, governance and the economy.
Associations Accredited by the Professional Regulation Commission
Institute of Electronics Engineers of the Philippines
Philippine Institute of Civil Engineers
Society of Naval Architects and Marine Engineers
Saudi Arabia
Saudi Council of Engineers
Sri Lanka
Institution of Engineers, Sri Lanka
Institution of Incorporated Engineers, Sri Lanka
Europe
European Association for Structural Dynamics
European Federation of National Engineering Associations
European Society for Engineering Education
Azerbaijan
Caspian Engineers Society
France
Association Française de Mécanique
Germany
Verein Deutscher Ingenieure
Greece
Technical Chamber of Greece (Τεχνικό Επιμελητήριο Ελλάδας)
Ireland
Institution of Engineers of Ireland
Institute of Physics and Engineering in Medicine
Lithuania
Linpra
Portugal
Ordem dos Engenheiros
Romania
General Association of Engineers of Romania
Russia
Russian Union of Engineers
Turkey
Chamber of Computer Engineers of Turkey
Chamber of Electrical Engineers of Turkey
Union of chambers of Turkish engineers and architects
United Kingdom
In the United Kingdom, the Engineering Council is the regulatory body for the engineering profession. The Engineering Council was incorporated by Royal Charter in 1981 and controls the award of the chartered engineer, incorporated engineer, engineering technician, and information and communications technology technician titles through licences issued to thirty-six recognised institutions. There are also 19 professional affiliate institutions, not licensed, but with close associations to the Engineering Council.
The Royal Academy of Engineering is the national academy for engineering.
Professional institutions licensed by the Engineering Council
Professional affiliate bodies of the Engineering Council
Association for Project Management
Chartered Association of Building Engineers
Chartered Institution of Civil Engineering Surveyors
Chartered Quality Institute
Institute of Mathematics and its Applications
Institute of Nanotechnology
International Council on Systems Engineering (UK Chapter)
Professional engineering bodies not affiliated to the Engineering Council
Cleveland Institution of Engineers
Institution of Engineers and Shipbuilders in Scotland
Institute of the Motor Industry
Society of Professional Engineers UK
Women's Engineering Society
Former bodies merged or defunct
Institution of Electrical Engineers
Institution of Incorporated Engineers
Institution of Nuclear Engineers
Society of Engineers UK
Oceania
Australia
Association of Professional Engineers, Scientists and Managers, Australia
Engineers Australia
New Zealand
Engineering New Zealand Te Ao Rangahau
University of Canterbury Engineering Society
International
Audio Engineering Society
International Association of Engineers
International Council of Academies of Engineering and Technological Sciences
International Council on Systems Engineering
International Federation of Engineering Education Societies
International Geodetic Student Organisation
International Society of Automation
International Society for Optical Engineering
Institute of Electrical and Electronics Engineers
National Society of Black Engineers
Society of Automotive Engineers
Society of Petroleum Engineers
Society of Professional Engineers UK
Society of Women Engineers
World Federation of Engineering Organizations
See also
Engineer
Engineering
Fields of engineering
Learned society
Standards organization
The Ritual of the Calling of an Engineer
References
Lists of professional associations
Engineering societies
Engineering-related lists | List of engineering societies | Engineering | 1,012 |
13,184,838 | https://en.wikipedia.org/wiki/Bosscha%20Observatory | Bosscha Observatory is the oldest modern observatory in Indonesia, and one of the oldest in Asia. The observatory is located in Lembang, West Bandung Regency, West Java, approximately north of Bandung. It is situated on a hilly six hectares of land, on a plateau above mean sea level. The IAU observatory code for Bosscha is 299.
History
During the first meeting of the Nederlandsch-Indische Sterrekundige Vereeniging (Dutch-Indies Astronomical Society) in the 1920s, it was agreed that an observatory was needed to study astronomy in the Dutch East Indies. Of all locations in the Indonesian archipelago, a tea plantation in Malabar, a few kilometers north of Bandung in West Java, was selected. It is on the hilly north side of the city, with an unobstructed view of the sky and close access to the city, which was planned to become the new capital of the Dutch colony, replacing Batavia (present-day Jakarta). The observatory is named after the tea plantation owner Karel Albert Rudolf Bosscha, son of the physicist Johannes Bosscha and a major force in the development of science and technology in the Dutch East Indies, who granted six hectares of his property for the new observatory.
Construction of the observatory began in 1923 and was completed in 1928. Since then, continuous observations of the sky have been made. The first international publication from Bosscha appeared in 1922. Observations at Bosscha were halted during World War II, and after the war a major reconstruction was necessary. On 17 October 1951, the Dutch-Indies Astronomical Society handed over operation of the observatory to the government of Indonesia. In 1959, operation of the observatory was given to the Institut Teknologi Bandung, and it has since been an integral part of the research and formal education of astronomy in Indonesia.
Facilities
Five large telescopes were installed in Bosscha:
The Zeiss double refractor
This telescope is mainly used to observe visual binary stars, conduct photometric studies of eclipsing binaries, image lunar craters, observe planets (Mars, Saturn and Jupiter), and observe comet details and other heavenly bodies. The telescope has two objective lenses with a diameter of each and a focal length of .
The Schmidt telescope (nicknamed the Bima Sakti, or "Milky Way" telescope)
This telescope is used to study galactic structure, stellar spectra, asteroids and supernovae, and to photograph heavenly bodies. The main lens diameter is , and the correcting bi-concave and convex lens is , with a focal length of . It is also equipped with a spectral prism with a prime angle of 6.10 degrees for stellar spectra, a wedge sensitometer and a film recorder.
The Bamberg refractor (not to be confused with the Bamberg-Refraktor in Berlin)
This telescope is used to determine stellar magnitudes and distances, for photometric studies of eclipsing stars, for solar imaging, and for other tasks. It is equipped with a photoelectric photometer; the lens has a diameter of and a focal length of .
The Cassegrain GOTO
This was a gift from the Japanese government. This computer-controlled telescope can automatically point at objects from a database, and it was the first digital telescope at Bosscha. The telescope is also equipped with a photometer and a spectrometer-spectrograph.
The Unitron refractor
This telescope is used for observing the hilal (young crescent moon), lunar eclipses, solar eclipses and sunspots, as well as other objects. The lens diameter is and the focal length is .
See also
List of astronomical observatories
References
External links
Timau, SE Asia's largest telescope under construction in Timor, NTT, at similar elevation, due 2019.
Astronomy institutes and departments
Astronomical observatories in Indonesia
Buildings and structures in West Java
Bandung Institute of Technology | Bosscha Observatory | Astronomy | 768 |
24,510,877 | https://en.wikipedia.org/wiki/Intra-flow%20interference | Intra-flow interference is interference between intermediate routers sharing the same flow path.
Application
In wireless routing, the routing metrics WCETT, MIC and iAWARE incorporate intra-flow interference into their path cost.
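For context, WCETT (Weighted Cumulative Expected Transmission Time) captures intra-flow interference with a bottleneck term: hops of the same flow that share a radio channel cannot transmit simultaneously, so their expected transmission times (ETTs) are summed per channel and the busiest channel is penalized:

WCETT(path) = (1 − β) · Σ ETT_i + β · max_j X_j, where X_j is the sum of the ETTs of the hops on channel j, and 0 ≤ β ≤ 1.

A minimal Python sketch follows; the ETT values and channel numbers are illustrative assumptions, not measurements.

from collections import defaultdict

def wcett(hops, beta=0.5):
    # hops: list of (ett_seconds, channel) pairs, one per hop of the path
    total_ett = sum(ett for ett, _ in hops)        # end-to-end latency term
    per_channel = defaultdict(float)
    for ett, channel in hops:
        per_channel[channel] += ett                # same-channel hops add up
    bottleneck = max(per_channel.values())         # most-loaded channel
    return (1 - beta) * total_ett + beta * bottleneck

# A path whose hops share channel 1 scores worse (higher cost) than a
# channel-diverse path with identical per-hop ETTs.
print(round(wcett([(0.02, 1), (0.02, 1), (0.02, 6)]), 3))    # 0.05
print(round(wcett([(0.02, 1), (0.02, 6), (0.02, 11)]), 3))   # 0.04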
See also
Collision domain
Inter-flow interference
Interference (communication)
References
External links
Wi-Fi | Intra-flow interference | Technology | 74 |
58,971,397 | https://en.wikipedia.org/wiki/Parts%20kit | A parts kit is a collection of weapon (notably firearm) parts that, according to the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), "is designed to or may readily be assembled, completed, converted, or restored to expel a projectile by the action of an explosive." For example, the kit may omit the receiver or include only an incomplete receiver. Under current U.S. law, kits that include finished receivers must be serialized and their buyers must receive a background check, but kits that include "unfinished" receivers are entirely unregulated for purchase.
Receivers
US parts kit regulation is distinct from that of other countries, where a firearm's pressure-bearing parts, such as bolts, barrels, and gas pistons, are the commonly regulated components. In the United States, a serialized receiver can be purchased, or manufactured from an incomplete state, to create a firearm.
The National Firearms Act (NFA) restricts the possession of automatic firearms, so most parts kits end up used with a semi-automatic receiver. In addition, under US gun law, a receiver that is legally a machine gun cannot legally become semi-automatic. There is no federal restriction on the purchase and import of machine gun parts kits (minus the barrel), however.
Parts kits are available for many firearms including the AR-15 and AKM variants.
References
Firearm components | Parts kit | Technology | 279 |
23,510,734 | https://en.wikipedia.org/wiki/Aula%20regia | An aula regia (Latin for "royal hall"), also referred to as a palas hall, is a name given to the great hall in an imperial (or governor's) palace in ancient Roman architecture and in the derived medieval audience halls of emperors, kings or bishops as part of their palaces (for example the German Kaiserpfalz, or imperial palace). In the Middle Ages the term was also used as a synonym for the Pfalz (palace) itself.
Architecturally, the medieval aulas followed the model of the ancient Roman audience halls in imperial and governors' palaces, such as the Aula regia in the Flavian Palace on the Palatine Hill in Rome, completed in 92 AD, or the Aula Palatina in Trier, Germany, completed in 311 under Constantine the Great. This emperor's mother, Helena, lived in Rome in the Sessorium Palace; she had its smaller, hall-shaped aula converted into the church of Santa Croce in Gerusalemme to house the relics she had brought with her from Jerusalem, while of the palace's larger civil basilica, built in the style of a three-aisled columned basilica, only the apse remains as a free-standing ruin.
The aulas usually followed the single-nave building type, not that of the Basilica with a higher central nave flanked by two or more lower longitudinal aisles which was more commonly used for market halls in the Roman era. Both ancient types of halls became models for Christian church architecture.
The monumental aulae regiae served as venues for court ceremonies at audiences and receptions, and in the Middle Ages also for Hoftage (the irregular gatherings of the powerful of the Holy Roman Empire), as well as for coronation meals, wedding feasts and other banquets. The ruler's throne can be assumed to have stood in the apse (which became the chancel in churches); the entrance is opposite the throne. The Carolingian hall buildings, unlike the ancient ones, were usually not accessed along the longitudinal axis despite the apses that had been adopted, but, as in the traditional Franconian and Middle German house, via the transverse axis, on the long side of the building, as in the aulae of the Palace of Aachen or the Palace of Goslar.
An example of a surviving aula regia of the early Middle Ages is the church of Santa María del Naranco near Oviedo, built around 850 as an aula regia for Ramiro I. Most imperial palaces, royal palaces or bishop's palaces of the early and high Middle Ages contained such an audience hall, for example the aula regia in the Palace of Aachen: it later became part of the medieval Town Hall of Aachen. The royal hall of the Imperial Palace Ingelheim (c. 780) has been digitally reconstructed.
References
Medieval architecture
Rooms
Latin words and phrases | Aula regia | Engineering | 592 |
250,882 | https://en.wikipedia.org/wiki/Cat%27s%20Eye%20Nebula | The Cat's Eye Nebula (also known as NGC 6543 and Caldwell 6) is a planetary nebula in the northern constellation of Draco, discovered by William Herschel on February 15, 1786. It was the first planetary nebula whose spectrum was investigated, by the English amateur astronomer William Huggins, demonstrating that planetary nebulae were gaseous and not stellar in nature. Structurally, high-resolution images from the Hubble Space Telescope have revealed knots, jets, bubbles and complex arcs, illuminated by the central hot planetary nebula nucleus (PNN).
It is a well-studied object that has been observed from radio to X-ray wavelengths. At the centre of the Cat's Eye Nebula is a dying Wolf–Rayet star, of the sort seen in the Webb Telescope's image of WR 124. The Cat's Eye Nebula's central star shines at magnitude +11.4. Hubble Space Telescope images show a dartboard-like pattern of concentric rings emanating outwards from the centre.
General information
NGC 6543 is a high northern declination deep-sky object. It has a combined magnitude of 8.1, with high surface brightness. Its small bright inner nebula subtends an average of 16.1 arcsec, with the outer prominent condensations at about 25 arcsec. Deep images reveal an extended halo about 300 arcsec or 5 arcminutes across, which was ejected by the central progenitor star during its red giant phase.
NGC 6543 is 4.4 minutes of arc from the current position of the north ecliptic pole, less than of the 45 arcminutes between Polaris and the current location of the Earth's northern axis of rotation. It is a convenient and accurate marker for the axis of rotation of the Earth's ecliptic, around which the celestial North Pole rotates. It is also a good marker for the nearby "invariable" axis of the solar system, which is the center of the circles which every planet's north pole, and the north pole of every planet's orbit, make in the sky. Since motion in the sky of the ecliptic pole is very slow compared to the motion of the Earth's north pole, its position as an ecliptic pole station marker is essentially permanent on the time-scale of human history, as opposed to the pole star, which changes every few thousand years.
Observations show that the bright nebulosity has temperatures between and , and densities averaging about particles per cubic centimetre. Its outer halo has a higher temperature, around , but a much lower density. The velocity of the fast stellar wind is about ; spectroscopic analysis shows that the current rate of mass loss averages solar masses per year, equivalent to twenty trillion tons per second (20 Eg/s).
The surface temperature of the central PNN is about , and it is 10,000 times as luminous as the sun. Its stellar classification is O7 + [WR]. Calculations suggest the PNN has over one solar mass, down from a theoretical initial 5 solar masses. The central Wolf–Rayet star has a radius of (452,000 km). The Cat's Eye Nebula lies, according to some sources, about three thousand light-years from Earth.
Observations
The Cat's Eye was the first planetary nebula to be observed with a spectroscope, by William Huggins on August 29, 1864. Huggins' observations revealed that the nebula's spectrum was non-continuous and made of a few bright emission lines, the first indication that planetary nebulae consist of tenuous ionised gas. Spectroscopic observations at these wavelengths are used in abundance determinations, while images at these wavelengths have been used to reveal the intricate structure of the nebula.
Infrared observations
Observations of NGC 6543 at far-infrared wavelengths (about 60 μm) reveal the presence of stellar dust at low temperatures. The dust is believed to have formed during the last phases of the progenitor star's life. It absorbs light from the central star and re-radiates it at infrared wavelengths. The spectrum of the infrared dust emission implies that the dust temperature is about 85 K, while the mass of the dust is estimated at solar masses.
Infrared emission also reveals the presence of un-ionised material such as molecular hydrogen (H2) and argon. In many planetary nebulae, molecular emission is greatest at larger distances from the star, where more material is un-ionised, but molecular hydrogen emission in NGC 6543 seems to be bright at the inner edge of its outer halo. This may be due to shock waves exciting the H2 as ejecta moving at different speeds collide. The overall appearance of the Cat's Eye Nebula in the infrared (wavelengths 2–8 μm) is similar to its appearance in visible light.
Optical and ultraviolet observations
The Hubble Space Telescope image produced here is in false colour, designed to highlight regions of high and low ionisation. Three images were taken, in filters isolating the light emitted by singly ionised hydrogen at 656.3 nm, singly ionised nitrogen at 658.4 nm and doubly ionised oxygen at 500.7 nm. The images were combined as red, green and blue channels respectively, although their true colours are red, red and green. The image reveals two "caps" of less ionised material at the edge of the nebula.
X-ray observations
In 2001, observations at X-ray wavelengths by the Chandra X-ray Observatory revealed the presence of extremely hot gas within NGC 6543 with the temperature of . It is thought that the very hot gas results from the violent interaction of a fast stellar wind with material previously ejected. This interaction has hollowed out the inner bubble of the nebula. Chandra observations have also revealed a point source at the position of the central star. The spectrum of this source extends to the hard part of the X-ray spectrum, to 0.5–. A star with the photospheric temperature of about would not be expected to emit strongly in hard X-rays, and so their presence is something of a mystery. It may suggest the presence of a high temperature accretion disk within a binary star system. The hard X-ray data remain intriguing more than ten years later: the Cat's Eye was included in a 2012 Chandra survey of 21 central stars of planetary nebulae (CSPNe) in the solar neighborhood, which found: "All but one of the X-ray point sources detected at CSPNe display X-ray spectra that are harder than expected from hot (~) central star photospheres, possibly indicating a high frequency of binary companions to CSPNe. Other potential explanations include self-shocking winds or PN mass fallback."
Distance
Distances to planetary nebulae like NGC 6543 are generally very uncertain and not well known. Some recent Hubble Space Telescope observations of NGC 6543, taken several years apart, determine its distance from the angular expansion rate of 3.457 milliarcseconds per year. Assuming a line-of-sight expansion velocity of 16.4 km·s−1, this implies that NGC 6543 is parsecs ( or light-years) away from Earth. Several other distance references, like that quoted in SIMBAD in 2014 based on Stanghellini, L., et al. (2008), suggest the distance is parsecs ( light-years).
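The arithmetic behind such an expansion-parallax estimate is simple: distance equals the assumed transverse expansion velocity divided by the angular expansion rate expressed in radians per year. A minimal Python sketch using only the figures quoted above (the conversion constants are standard, and the result is consistent with the roughly three-thousand-light-year figure mentioned earlier):

ARCSEC_PER_RAD = 206265          # arcseconds per radian
KM_PER_PC = 3.0857e13            # kilometres per parsec
SEC_PER_YR = 3.156e7             # seconds per year

v = 16.4 * SEC_PER_YR                      # expansion velocity, km/yr
mu = 3.457e-3 / ARCSEC_PER_RAD             # expansion rate, rad/yr
d_pc = v / mu / KM_PER_PC                  # distance in parsecs
print(round(d_pc), round(d_pc * 3.2616))   # -> 1001 pc, 3264 ly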
Age
The angular expansion of the nebula can also be used to estimate its age. If it has been expanding at a constant rate of 10 milliarcseconds a year, then it would take to reach a diameter of 20 arcseconds. This may be an upper limit to the age, because ejected material will be slowed when it encounters material ejected from the star at earlier stages of its evolution, and the interstellar medium.
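Using only the figures quoted in this paragraph, the constant-rate upper limit follows from dividing the angular radius by the expansion rate (a rough estimate, on the stated constant-expansion assumption):

t ≈ (20 arcsec / 2) / (10 mas per year) = 10,000 mas / (10 mas per year) = 1,000 years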
Composition
Like most astronomical objects, NGC 6543 consists mostly of hydrogen and helium, with heavier elements present in small quantities. The exact composition may be determined by spectroscopic studies. Abundances are generally expressed relative to hydrogen, the most abundant element.
Different studies generally find varying values for elemental abundances. This is often because spectrographs attached to telescopes do not collect all the light from objects being observed, instead gathering light from a slit or small aperture. Therefore, different observations may sample different parts of the nebula.
However, results for NGC 6543 broadly agree that, relative to hydrogen, the helium abundance is about 0.12, carbon and nitrogen abundances are both about , and the oxygen abundance is about . These are fairly typical abundances for planetary nebulae, with the carbon, nitrogen and oxygen abundances all larger than the values found for the sun, due to the effects of nucleosynthesis enriching the star's atmosphere in heavy elements before it is ejected as a planetary nebula.
Deep spectroscopic analysis of NGC 6543 may indicate that the nebula contains a small amount of material which is highly enriched in heavy elements; this is discussed below.
Kinematics and morphology
The Cat's Eye Nebula is structurally a very complex nebula, and the mechanism or mechanisms that have given rise to its complicated morphology are not well understood. The central bright part of the nebula consists of the inner elongated bubble (inner ellipse) filled with hot gas. It, in turn, is nested into a pair of larger spherical bubbles conjoined together along their waist. The waist is observed as the second larger ellipse lying perpendicular to the bubble with hot gas.
The structure of the bright portion of the nebula is primarily caused by the interaction of a fast stellar wind being emitted by the central PNN with the visible material ejected during the formation of the nebula. This interaction causes the emission of X-rays discussed above. The stellar wind, blowing with the velocity as high as , has 'hollowed out' the inner bubble of the nebula, and appears to have burst the bubble at both ends.
It is also suspected that the nebula's central PNN star of spectral class WR:+O7, HD 164963 / BD +66 1066 / PPM 20679, may be a binary star. The existence of an accretion disk caused by mass transfer between the two components of the system may give rise to astronomical jets, which would interact with previously ejected material. Over time, the direction of the jets would vary due to precession.
Outside the bright inner portion of the nebula, there are a series of concentric rings, thought to have been ejected before the formation of the planetary nebula, while the star was on the asymptotic giant branch of the Hertzsprung–Russell diagram. These rings are very evenly spaced, suggesting that the mechanism responsible for their formation ejected them at very regular intervals and at very similar speeds. The total mass of the rings is about 0.1 solar masses. The pulsations that formed the rings probably started 15,000 years ago and ceased about years ago, when the formation of the bright central part began (see above).
Further, a large faint halo extends to large distances from the star. The halo again predates the formation of the main nebula. The mass of the halo is estimated as 0.26–0.92 solar masses.
See also
List of largest nebulae
Notes
References
Cited sources
External links
Cat's Eye Nebula Release at ESA/Hubble
Cat's Eye Nebula images at ESA/Hubble
Chandra X-Ray Observatory Photo Album: NGC 6543
Astronomy Picture of the Day
The Cat's Eye Nebula October 31, 1999
Halo of the Cat's Eye 2010 May 9
The Cat's Eye Nebula 2016 July 3
Hubble Probes the Complex History of a Dying Star—HubbleSite article about the Cat's Eye Nebula.
NGC6543 The Cats Eye Nebula
Hubble's Color Toolbox: Cat's Eye Nebula—article showing image composite process used to produce an image of the nebula
Cat's Eye Nebula at Constellation Guide
SEDS – NGC 6543
Draco (constellation)
NGC objects
Planetary nebulae
Astronomical objects discovered in 1786
Discoveries by William Herschel | Cat's Eye Nebula | Astronomy | 2,487 |
71,748,142 | https://en.wikipedia.org/wiki/Fr%C3%A9d%C3%A9rique%20Battin-Leclerc | Frédérique Battin-Leclerc (born 1964) is a French chemist who studies combustion, particularly gas-phase combustion of hydrocarbons including biofuels, in order to develop cleaner-burning automotive fuels. She is a director of research for the French National Centre for Scientific Research (CNRS), affiliated with the Laboratoire Réactions et Génie des Procédés in Nancy, France.
Education and career
Battin-Leclerc was born in 1964. She earned an engineering degree from the École nationale supérieure des industries chimiques in Nancy in 1987, completed a Ph.D. at the National Polytechnic Institute of Lorraine in Nancy in 1991, and earned a habilitation at the National Polytechnic Institute of Lorraine in 1997.
She has been a researcher for the CNRS since 1991.
Recognition
Battin-Leclerc won the CNRS Silver Medal in 2010, and was named as a knight in the Ordre national du Mérite in 2012.
She was elected to the inaugural 2018 class of Fellows of The Combustion Institute, "for innovative research on the formulation of detailed chemical mechanisms for complex practical fuels".
She was the 2022 recipient of the Polanyi Medal of the Royal Society of Chemistry.
References
External links
Home page
1964 births
Living people
French chemists
French women chemists
Fellows of the Combustion Institute | Frédérique Battin-Leclerc | Chemistry | 271 |
65,947,405 | https://en.wikipedia.org/wiki/Chloroacetonitrile | Chloroacetonitrile is the organic compound with the formula ClCH2CN. A colorless liquid, it is derived from acetonitrile (CH3CN) by replacement of one H with Cl. In practice, it is produced by dehydration of chloroacetamide. The compound is an alkylating agent, and as such is handled cautiously.
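The dehydration mentioned above can be summarized schematically as follows (the dehydrating agent, which takes up the water, is omitted here):

ClCH2C(O)NH2 → ClCH2CN + H2O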
Chloroacetonitrile is also generated in situ by the reaction of acetonitrile with sulfur monochloride. A second chlorination gives dichloroacetonitrile, which undergoes cycloaddition with sulfur monochloride to give 4,5-dichloro-1,2,3-dithiazolium chloride:
Cl2CHCN + S2Cl2 → [S2NC2Cl2]Cl + HCl
See also
halogenation
References
Nitriles | Chloroacetonitrile | Chemistry | 200 |
1,686,413 | https://en.wikipedia.org/wiki/Hydrometallurgy | Hydrometallurgy is a technique within the field of extractive metallurgy, the obtaining of metals from their ores. Hydrometallurgy involves the use of aqueous solutions for the recovery of metals from ores, concentrates, and recycled or residual materials. Processing techniques that complement hydrometallurgy are pyrometallurgy, vapour metallurgy, and molten salt electrometallurgy. Hydrometallurgy is typically divided into three general areas:
Leaching
Solution concentration and purification
Metal or metal compound recovery
Leaching
Leaching involves the use of aqueous solutions to extract metal from metal-bearing materials which are brought into contact with them. In China in the 11th and 12th centuries, this technique was used to extract copper and accounted for much of total copper production. In the 17th century it was used for the same purpose in Germany and Spain.
The lixiviant solution conditions vary in terms of pH, oxidation-reduction potential, presence of chelating agents and temperature, to optimize the rate, extent and selectivity of dissolution of the desired metal component into the aqueous phase. By using chelating agents, one can selectively extract certain metals. These agents are typically amines or Schiff bases.
The five basic leaching reactor configurations are in-situ, heap, vat, tank and autoclave.
In-situ leaching
In-situ leaching is also called "solution mining". This process initially involves drilling of holes into the ore deposit. Explosives or hydraulic fracturing are used to create open pathways within the deposit for solution to penetrate into. Leaching solution is pumped into the deposit where it makes contact with the ore. The solution is then collected and processed. The Beverley uranium deposit is an example of in-situ leaching.
Heap leaching
In heap leaching processes, crushed (and sometimes agglomerated) ore is piled in a heap which is lined with an impervious layer. Leach solution is sprayed over the top of the heap, and allowed to percolate downward through the heap. The heap design usually incorporates collection sumps, which allow the "pregnant" leach solution (i.e. solution with dissolved valuable metals) to be pumped for further processing. An example is gold cyanidation, where pulverized ores are extracted with a solution of sodium cyanide, which, in the presence of air, dissolves the gold, leaving behind the nonprecious residue.
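The chemistry behind the gold cyanidation example just mentioned is conventionally summarized by Elsner's equation, in which dissolved oxygen oxidizes the gold into a soluble cyanide complex:

4 Au + 8 NaCN + O2 + 2 H2O → 4 Na[Au(CN)2] + 4 NaOH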
Vat leaching
Vat leaching involves contacting material, which has usually undergone size reduction and classification, with leach solution in large vats.
Tank leaching
Stirred tank, also called agitation leaching, involves contacting material, which has usually undergone size reduction and classification, with leach solution in agitated tanks. The agitation can enhance reaction kinetics by enhancing mass transfer. Tanks are often configured as reactors in series.
Autoclave leaching
Autoclave reactors are used for reactions at higher temperatures, which can enhance the rate of the reaction. Similarly, autoclaves enable the use of gaseous reagents in the system.
Solution concentration and purification
After leaching, the leach liquor must normally undergo concentration of the metal ions that are to be recovered. Additionally, undesirable metal ions sometimes require removal.
Precipitation is the selective removal of a compound of the targeted metal or removal of a major impurity by precipitation of one of its compounds. Copper is precipitated as its sulfide as a means to purify nickel leachates.
Cementation is the conversion of the metal ion to the metal by a redox reaction. A typical application involves addition of scrap iron to a solution of copper ions. Iron dissolves and copper metal is deposited.
Solvent extraction
Ion exchange
Gas reduction. Treating a solution of nickel and ammonia with hydrogen affords nickel metal as a powder.
Electrowinning is a particularly selective if expensive electrolysis process applied to the isolation of precious metals. Gold can be electroplated from its solutions.
Solvent extraction
In the solvent extraction a mixture of an extractant in a diluent is used to extract a metal from one phase to another. In solvent extraction this mixture is often referred to as the "organic" because the main constituent (diluent) is some type of oil.
The PLS (pregnant leach solution) is mixed to emulsification with the stripped organic and allowed to separate. The metal will be exchanged from the PLS to the organic. The resulting streams will be a loaded organic and a raffinate. When dealing with electrowinning, the loaded organic is then mixed to emulsification with a lean electrolyte and allowed to separate. The metal will be exchanged from the organic to the electrolyte. The resulting streams will be a stripped organic and a rich electrolyte. The organic stream is recycled through the solvent extraction process while the aqueous streams cycle through the leaching and electrowinning processes respectively.
Ion exchange
Chelating agents, natural zeolite, activated carbon, resins, and liquid organics impregnated with chelating agents are all used to exchange cations or anions with the solution. Selectivity and recovery are a function of the reagents used and the contaminants present.
Metal recovery
Metal recovery is the final step in a hydrometallurgical process, in which metals suitable for sale as raw materials are produced. Sometimes, however, further refining is needed to produce ultra-high purity metals. The main types of metal recovery processes are electrolysis, gaseous reduction, and precipitation. For example, a major target of hydrometallurgy is copper, which is conveniently obtained by electrolysis. Cu2+ ions are reduced to Cu metal at low potentials, leaving behind contaminating metal ions such as Fe2+ and Zn2+.
Electrolysis
Electrowinning and electrorefining respectively involve the recovery and purification of metals using electrodeposition of metals at the cathode, and either metal dissolution or a competing oxidation reaction at the anode.
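The throughput of electrowinning can be estimated with Faraday's law of electrolysis, m = (Q/F)·(M/n). A minimal Python sketch for copper follows; the cell current, run time and current efficiency are illustrative assumptions, not data from any particular plant:

F = 96485           # C/mol, Faraday constant
M_CU = 63.55        # g/mol, molar mass of copper
N = 2               # electrons per ion: Cu2+ + 2e- -> Cu

def copper_deposited_g(current_a, hours, efficiency=0.9):
    charge = current_a * hours * 3600      # coulombs passed
    moles_cu = charge / F / N              # moles of copper reduced
    return moles_cu * M_CU * efficiency    # grams, allowing for losses

print(round(copper_deposited_g(200, 24)))  # about 5.1 kg per cathode-day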
Precipitation
Precipitation in hydrometallurgy involves the chemical precipitation from aqueous solutions, either of metals and their compounds or of the contaminants. Precipitation will proceed when, through reagent addition, evaporation, pH change or temperature manipulation, the amount of a species present in the solution exceeds the maximum determined by its solubility.
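That solubility criterion amounts to comparing the ion product Q with the solubility product Ksp: precipitation is expected when Q exceeds Ksp, and pH manipulation shifts Q for hydroxides. A minimal Python sketch; the Ksp value and concentrations are assumed, order-of-magnitude numbers rather than process data:

def will_precipitate(metal_molar, oh_molar, ksp):
    # Ion product for a hydroxide M(OH)2: Q = [M2+][OH-]^2
    return metal_molar * oh_molar ** 2 > ksp

KSP = 2e-20  # assumed order of magnitude for a divalent metal hydroxide
print(will_precipitate(0.01, 1e-8, KSP))   # pH 6: True  (Q = 1e-18)
print(will_precipitate(0.01, 1e-9, KSP))   # pH 5: False (Q = 1e-20)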
References
External links
Hydrometallurgy, BioMineWiki
Chemical processes
Metallurgy
Metallurgical processes | Hydrometallurgy | Chemistry,Materials_science,Engineering | 1,334 |
31,184,781 | https://en.wikipedia.org/wiki/Social%20media%20use%20in%20the%20Wisconsin%20protests | Since the proliferation of the internet, social media has increasingly played a direct role in modern political issues like civil rights. An example of this is the use of social media during the 2011 Wisconsin protests over public unions that occurred between February and March 2011. Both sides of Wisconsin labor debate used social media networks to galvanize support, though social media use was particularly prevalent amongst the pro-labor side. Many organizations around the country also joined this debate by using social media to voice their own opinions about labor issues in Wisconsin. Through the use of social media, Wisconsin's labor protests escalated from a local state issue into one of national and even international significance.
Background
The 2011 Wisconsin protests began as the newly sworn-in Wisconsin Governor Scott Walker outlined his plans for substantial reform of labor unions in Assembly Bill 11. Introduced on February 11, 2011, the bill included legislation that would significantly restrict the power of labor unions by limiting their collective bargaining rights and restricting the subjects of union negotiations to basic wage talks. The bill also pushed public workers to pay more out-of-pocket for health insurance. These restrictions and cuts were intended to help alleviate Wisconsin's $3.6 billion deficit and better control the state budget. Despite these goals of fiscal responsibility, the people most active in the protests were those most directly impacted by the bill's labor reforms, particularly the limitations imposed on the bargaining rights of unions. These affected people included teachers, clerical workers, prison guards, government employees, and others who benefited from labor unions. The breadth of people who would face lower wages and pay cuts made Scott Walker's bill controversial in Wisconsin and led to the emergence of significant labor protests across the state.
Positions
The protest movement's spread through social media
The pro-union side used many forms of social media to sway public opinion and organize protests. Every segment of the population made use of social media's accessibility to contribute to the pro-labor protest movement. One group of college students, after a student meeting, decided to use social media to push for their teachers' rights. They formed a group called the "Wisconsin Students for Solidarity" and organized a student walkout using Facebook. Their student walkout, which started as a local Wisconsin affair, grew into a nationwide event called the "Nationwide Student Walkout" on Facebook. Students also used social media to coordinate the Valentine's Day protests, during which more than 1,000 people, including hundreds of UW-Madison students, utilized Twitter and Facebook for real-time mobilization and outreach.
In another example, during the first night of the teacher strikes in February 2011, social media was used to connect the protestors to a global community. A local pro-labor pizza restaurant, Ian's Pizza, used Facebook and Twitter to raise awareness and donations for the teachers striking in Wisconsin. Highlighting the speed at which information travels through social media, Ian's Pizza's use of Facebook and Twitter, together with its outreach to local news stations, resulted in donations from people in 14 different countries, including Korea, Finland, New Zealand, Egypt, Denmark, Australia, Canada, Germany, China, England, the Netherlands, Turkey, Switzerland and Italy, and from all 50 states in the United States. These donations sent thousands of free pizzas and slices to the protestors throughout the duration of the protests.
In further examples of how Facebook organizing was frequently utilized by the Wisconsin protestors, numerous Facebook event pages like "Recall Wisconsin Governor Scott Walker", "540,000 To See Scott Walker out of WI, January 2012", and "Recall Scott Walker", were established and shared on Facebook to organize pro-labor supporters and generate popular support. Similarly, satirical Twitter accounts were created like "Not Scott Walker", frequently writing humorous tweets during the protests to attract viral attention to the Wisconsin issue. Wisconsin union ironworkers also used Facebook to coordinate protests. In a quote from one of the union ironworkers, "[Facebook organizing] grew to a network, so that whenever Governor Walker traveled someplace, we would find people wherever he was going that would meet him and protest." In all of these examples, Facebook and other social media platforms were key to the popular success of the protests.
In a specific example, an attorney named Bill Mahler also decided to close his law firm early on March 11 in a show of support for the unions. Mahler used both his blog and Twitter account to put pressure on Governor Walker, and encourage other law firms to do the same. Showing the geographic range of supporters that the Wisconsin labor issue attracted, Mahler and his legal firm were not located in Wisconsin, but in Seattle, Washington. This fact accentuates how social media enabled the wide, rapid movement of information and ideas beyond where an issue is centralized. The widespread effectiveness of social media in organizing protestors also influenced the legislation of other states. After the Wisconsin protests drew international support through social media, Indiana Republicans dropped a similar bill to Assembly Bill 11, under pressure from Indiana Democrats and other union supporters.
The news also played a role in driving support for the protests on social media. The Associated Press published a story with private quotes from Governor Walker threatening to use the National Guard to forcefully shut down the protests. Left-leaning networks and labor groups shared this story on social media to galvanize support for the pro-labor movement in Wisconsin. News of the Arab Spring in Egypt was also used by pro-labor movements to support the Wisconsin protestors and draw parallels between the two protests in both literal and metaphorical ways. An Egyptian man posted on Facebook a photo of himself holding a sign saying, "EGYPT Supports Wisconsin Workers: One World, One Pain", during the Arab Spring. According to Andy Kroll, a writer for The Nation, the popularity of the Arab Spring on social media helped push young people around the world, including in Wisconsin, to similarly use Facebook and Twitter to organize and protest for a cause they believed in.
Social media use by Governor Walker and his supporters
On the other side of the protests, Governor Scott Walker used Twitter to articulate his ideas on labor, communicate with those who agreed with him, and allow people who disagreed with him to respond. Governor Scott Walker was one of the first politicians in Wisconsin to actively use social media to connect with both his followers and his opponents. Some writers have even characterized some of these tweets as active negotiation with the unions. Following the lead of the pro-labor protestors, Governor Walker's supporters used the same social media platforms to build support not only in Wisconsin, but across the entire nation. One website, "americansforprosperity.org," hosted a petition for Governor Walker's supporters to sign and directly show their popular support for his reforms. This site also had Twitter updates and a link to a Facebook page. Other major political figures also used the internet to voice their support for the governor. Former Minnesota governor Tim Pawlenty posted a video and started a petition on his website. Also, former Speaker of the House Newt Gingrich posted a public appeal on a conservative website in support of Governor Walker. While social media use was most prevalent amongst the protestors, supporters of Assembly Bill 11 also utilized social media to a significant degree.
Impact of social media on the 2011 Wisconsin protests
The 2011 Wisconsin protests were some of the first instances in which social media played a significant role in mobilizing protestors. According to the Milwaukee Journal Sentinel, it allowed for an unprecedented mobilization of people and ideas. Without the use of social media, it is possible that a movement as large as the 2011 Wisconsin protests would not have reached such an international scale, nor emerged so quickly.
Social media use also contributed to the overnight occupation of the Wisconsin state capitol by protestors. When Republican lawmakers were returning to Wisconsin during the night to vote on and pass Assembly Bill 11, pro-union news reporters and organizers used Twitter to quickly mobilize protestors to the capitol on a short notice. Since it was during the night, no news stations were reporting on the issue. However, social media in the form of Twitter and Facebook, supported by the wide proliferation of smartphones, enabled the protestors to quickly organize without mobilization support from traditional public media like the news.
Media
On Twitter, people used the hashtag "#wiunion" to tag tweets related to the protests.
References
Politics of Wisconsin
2011 in Wisconsin
Wisconsin | Social media use in the Wisconsin protests | Technology | 1,719 |
52,093,913 | https://en.wikipedia.org/wiki/Frog%20thermometer | The frog thermometer, or, as the Cimento academicians defined it, the botticino [small-toad] thermometer, contained small glass spheres of different densities immersed in alcohol. The device was used as a clinical thermometer, tied to the wrist or the arm of the patient with the head of the frog facing upward. Variations in body temperature were registered by the movement of the spheres: a rise in temperature increases the volume of the alcohol, which is reflected in the movement of the small spheres (first the less dense, then the more dense). Because of the spheres' sluggish motion, this thermometer was also called infingardo [slothful, slow]. The invention of this model is attributed to Ferdinand II de' Medici.
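The operating principle is the same as that of the later Galileo thermometer: warming lowers the alcohol's density, and each sphere sinks once the alcohol becomes less dense than it. A minimal Python sketch; the density and expansion-coefficient values are textbook-style assumptions, not measurements of the original instrument:

RHO_20 = 0.789     # g/cm3, alcohol (ethanol) density near 20 C, assumed
BETA = 1.1e-3      # 1/K, volumetric expansion coefficient, assumed

def alcohol_density(temp_c):
    # Thermal expansion thins the liquid: rho(T) = RHO_20 / (1 + BETA*(T - 20))
    return RHO_20 / (1 + BETA * (temp_c - 20))

spheres = [0.780, 0.784, 0.787]   # g/cm3, calibrated sphere densities
for t in (20, 25, 30):
    afloat = sum(rho < alcohol_density(t) for rho in spheres)
    print(t, "C:", afloat, "spheres still floating")   # 3, then 2, then 1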
See also
References
External links
Thermometers | Frog thermometer | Technology,Engineering | 175 |
61,222,807 | https://en.wikipedia.org/wiki/Professional%20wargaming | A wargame, generally, is a type of strategy game which realistically simulates warfare. A professional wargame, specifically, is a wargame that is used by military organizations to train officers in tactical and strategic decision-making, to test new tactics and strategies, or to predict trends in future conflicts. This is in contrast to recreational wargames, which are designed for fun and competition.
Overview
Definition
The exact definition of "wargame" varies from one writer to the next and one organization to the next. To prevent confusion, this section will establish the general definition employed by this article.
A wargame simulates an armed conflict, be it a battle, a campaign, or an entire war. "Business wargames" do not simulate armed conflict and are therefore outside the scope of this article.
A wargame is adversarial. There must be at least two opposing sides whose players react intelligently to each other's decisions.
A wargame must have human players.
A wargame does not involve the use of actual troops and armaments. This definition is used by the US Naval War College. Some writers use the term "live wargames" to refer to games that use actual troops in the field, but this article shall instead refer to these as field exercises.
A wargame is about tactical or strategic decision-making. A game that exercises only the player's technical skills, such as a combat flight simulator, is not a wargame.
Still, some professional wargamers feel that the term "game" trivializes what they see as a serious, professional tool. One of these was Georg von Reisswitz, the creator of Kriegsspiel and the father of professional wargaming, but he stuck with the word "game" because he could not think of a better term. In the US Army, many preferred the term "map maneuvers" (in contrast to "field maneuvers"). At the US Naval War College, some preferred the terms "chart maneuvers" (when simulating campaigns) and "board maneuvers" (when simulating battles), although the term "war game" was never officially proscribed.
Professional wargames vs commercial wargames
Professional wargames tend to have looser rules and simpler models than recreational wargames, with an umpire arbitrating situations based on personal knowledge. If the umpire is highly knowledgeable about warfare (perhaps they are a veteran), then such wargames can achieve a higher degree of realism than wargames with rigid rulesets. In a recreational wargame, such looseness would lead to concerns over fairness, but the point of a professional wargame is education, not competition. Having simple, loose rules also keeps the learning curve small, which is convenient since most officers have little or no wargaming experience.
Likewise, there is less concern regarding "balance" when determining each player's armaments and strategic advantages. In a commercial wargame, the rules are usually designed to ensure that the players' armies are of equal strength to ensure fairness, but professional wargames will tailor each side's capabilities to the scenario to be investigated, and warfare in the real world is rarely fair.
As professional wargames are used to prepare officers for actual warfare, there is naturally a strong emphasis on realism and current events. Historical wargames are wargames set in the distant past, such as World War II or the Napoleonic Wars; simulating these wars realistically may be of interest to historians, but is of little use to the military. Recreational wargames may take some creative liberties with reality, such as simplifying models to make them more enjoyable, or adding fictional armaments and units such as orcs and wizards, which makes them of little use to officers who must fight in the real world.
Military organizations are typically secretive about their current wargames, and this makes designing a professional wargame a challenge. Secrecy makes it harder to disseminate corrections if the wargame has already been delivered to the clients. Whereas a commercial wargame might have thousands or even millions of players, professional wargames tend to have small player bases, which makes it harder for the designers to acquire feedback. As a consequence, errors in wargame models tend to persist.
Although commercial wargame designers take consumer trends and player feedback into account, their products are usually designed and sold with a take-it-or-leave-it approach. Professional wargames, by contrast, are typically commissioned by the military that plans to use them. If a wargame is commissioned by several clients, then the designer will have to juggle their competing demands. This can lead to great complexity, high development costs, and a compromised product that satisfies nobody.
Commercial wargames are under more pressure to deliver an enjoyable experience for the players, who expect a user-friendly interface, a reasonable learning curve, exciting gameplay, and so forth. By contrast, military organizations tend to see wargaming as a tool and a chore, and players are often bluntly obliged to use whatever is provided to them.
Design concepts
Models
The term "model" can mean two things in wargaming. One is the conceptual models that describe the properties, capabilities, and behaviors of the things the wargame attempts to simulate (weapons, vehicles, troops, terrain, weather, etc.). The other meaning, from miniature wargaming (a form of recreational wargaming), is physical models, i.e. sculptures of soldiers, vehicles, and terrain; which generally serve an aesthetic purpose and have little if any consequence on the simulation. Professional wargames rarely use physical models because aesthetics aren't important to the military and the scale at which professional wargames typically play make physical models impractical. Therefore, this article will focus on conceptual models.
A wargame is about decision-making, not about learning the technical capabilities of a particular weapon or vehicle. Therefore, a well-designed model will not describe something beyond what a player needs to know to make effective decisions. Players should not be burdened with cumbersome calculations, because this slows down the game and distracts the players. If a player makes a bad decision, it should only be because of poor strategic thinking, not some forgotten rule or arithmetic error, otherwise the game will yield less reliable insights. If the wargame is computer-assisted, then sophisticated models are feasible because they can be written into the software and processed quickly by the computer. For manual wargames, simplicity is paramount.
Level of war
In a tactical-level wargame, the scope of the simulated conflict is a single battle. Kriegsspiel, the original professional wargame, is an example of a tactical-level wargame. The wargames of the Western Approaches Tactical Unit (see below) were also tactical-level, simulating submarine attacks on a merchant convoy.
In a strategic-level wargame, the scope of the simulated conflict is a campaign or even an entire war. An example is the "Chart Maneuvers" practiced by the US Naval War College during the 1920s and 1930s, which most often simulated a hypothetical war in the Pacific against Japan. Another example is the Sigma wargames played in the 1960s to test proposed strategies for fighting the Vietnam War. Battles are resolved through simple computation. The players concern themselves with higher-level, strategic concerns such as logistics and diplomacy.
Utility
General strengths and limitations
In comparison to field exercises, wargames save time and money. They can be organized quickly and cheaply as they do not require the mobilization of thousands of men, their armaments, and logistics systems.
Some wargames can be completed more quickly than the conflicts they simulate by compressing time. In a naval wargame, the players need not wait days for their fleets to sail across the ocean, they could just advance the time-frame to the next decision they must make. This is particularly advantageous for strategic-level games, in which the simulated conflict might last months. A tactical-level wargame that has very cumbersome computations might take longer to play out than the battle it represents (this problem afflicted the original Kriegsspiel).
Wargamers can experiment with assets that their military does not actually possess, such as alliances that their country does not have, armaments that they have yet to acquire, and even hypothetical technologies that have yet to be invented.
For example: After World War I, Germany was forced to downsize its armed forces and outright give up certain weapons such as planes, tanks, and submarines. This made it difficult, if not impossible, for German officers to develop their doctrines through field exercises. The Germans greatly expanded their use of wargaming to compensate. When Germany began openly rearming in 1934, its officers already had fairly well-developed theories on what armaments to buy and what organizational reforms to implement.
Wargames cannot be used to predict the progression and outcome of a war as one might predict the weather. Human behavior is too difficult to predict for that. Wargames cannot provoke the anxiety, anger, stress, fatigue, etc. that a commander will experience in actual combat and thus cannot foresee the effects of these emotions on his decision-making. That said, no training tool can replicate the emotional experience of war, so this is not a specific flaw. Another issue that can produce "wrong" predictions is that a commander may do things differently in the field precisely because he was dissatisfied with the decisions he made in the wargames.
Education
Wargames are a cost-effective way of giving officers the experience (or something resembling experience) of making decisions as a leader in an armed conflict. This is the oldest application of wargaming. The actual effectiveness of wargaming in this regard—turning a bad strategist into a good one—is difficult to measure because officers use many tools to hone their decision-making skills and the effect of wargaming is difficult to isolate.
In this context, wargames are used to help players understand the decision-making process of wartime command. Through practice, they help players master routine skills such as discussing ideas, sharing intel, and communicating orders. They train players to evaluate situations and make decisions faster, and to cope with incomplete, delayed, incorrect, or superfluous information. They also present an intellectual challenge that books and classrooms cannot provide: an enemy who reacts unpredictably and intelligently to the player's decisions.
Wargames can also help familiarize the players with the geography of areas where they might eventually have to fight in. This was an oft-cited justification for wargaming at the US Naval War College.
Research and planning
Wargames can be used to prepare grand strategic plans and develop doctrine with a low risk of the enemy becoming aware of these developments and adapting. A problem that any military faces when learning through hard experience (actual warfare) is that as it gets better at fighting the enemy, the enemy will adapt in turn, modifying their own armaments and tactics to maintain their edge. Live exercises have a similar weakness as the enemy can spy on them to learn what is being tested. But wargames can be done in good secrecy, so the enemy cannot know what ideas are being developed.
Wargames can help a military determine what armaments and infrastructure it should acquire (there is substantial historical evidence to support this particular assertion).
For instance: In the 1920s, American military planners believed that America could win a war with Japan quickly by simply sailing an armada across the Pacific and knocking out the Japanese navy in a few decisive battles. But when this strategy was tested in wargames, it routinely failed. Japan held off the assault until the American armada exhausted itself, and then counter-attacked. The wargames foretold that a war with Japan would instead be a prolonged war of attrition, and America would need advance bases in the western Pacific where its warships could get resupplied and repaired. Such an infrastructure would require making alliances with friendly countries such as Australia, New Zealand, and the British Empire.
Wargames can also be used to develop the potential of new technology. In order to wield a new technology optimally, it is not enough for a military to merely have it, but also develop good tactics and know how to organize around it. If the enemy isn't exploring the same issues in their own wargames, then one can gain a significant edge over the enemy when war breaks out by deploying a more mature doctrine.
An example is German submarine doctrine in the World Wars. In World War I, submarines were a new thing and nobody knew how best to use them, and Germany developed its submarine doctrine on the go. The German navy at the time did not use wargames and tested new ideas immediately against the British. Consequently, for every incremental innovation in submarine warfare that the Germans deployed, the British quickly developed a counter-measure and kept pace, and this limited the impact of submarines in World War I. During the inter-war years, the German navy experimented extensively with new submarine tactics in wargames (in tandem with field exercises) and developed the "wolf-pack" doctrine to defeat the anti-submarine counter-measures that had been developed during World War I (notably the convoy system). The British, by contrast, did not experiment with submarines in their own wargames because they thought that their established counter-measures were sufficient. The Germans entered the war with a whole bag of new tricks, and it took some time for the British to catch up.
History
The Reisswitzian wargame
Around the turn of the 19th century, a number of European inventors created wargames based on chess. These games used pieces that represented real army units (infantry, artillery, etc.) and the squares on the board were color-coded to represent different terrain types (rivers, marshes, mountains, etc.). Basing these games on chess made them attractive and accessible to chess players, but also made them too unrealistic to be taken seriously by the army. The grid forced the terrain into unnatural forms, such as rivers flowing in straight lines and bending at right angles; and only a single piece could occupy a square at a time, even if that square represented a square mile.
In 1824, a Prussian army officer named Georg von Reisswitz presented to the Prussian General Staff a wargame that he and his father had developed over the years. It was a highly realistic wargame designed strictly for use as a professional tool of training, and not for leisure. Instead of a chess-like grid, this game was played on accurate paper maps of the kind the Prussian army used. This allowed the game to model terrain naturally and simulate battles in real locations. The pieces could be moved across the map in a free-form manner, subject to terrain obstacles. The pieces, each of which represented some kind of army unit (an infantry battalion, a cavalry squadron, etc.), were little rectangular blocks made of lead. The pieces were painted either red or blue to indicate the faction it belonged to. The blue pieces were used to represent the Prussian army and red was used to represent some foreign enemy—since then it has been the convention in professional wargaming to use blue to represent the faction to which the players actually belong to. The game used dice to add a degree of randomness to combat. The scale of the map was 1:8000 and the pieces were made to the same proportions as the units they represented, such that each piece occupied the same relative space on the map as the corresponding unit did on the battlefield.
The game modeled the capabilities of the units realistically, using data gathered by the Prussian army during the Napoleonic Wars and various field exercises. Reisswitz's manual provided tables that listed how far each unit type could move in a round according to the terrain it was crossing and whether it was marching, running, galloping, etc.; accordingly, the umpire used a ruler to move the pieces across the map. The game used dice to determine combat results and casualties, and the casualties inflicted by firearms and artillery decreased over distance. Unlike chess pieces, units in Reisswitz's game could suffer partial losses before being defeated, which were tracked on a sheet of paper (recreational gamers might call this "hitpoint tracking"). The game also had some rules that modeled morale and exhaustion.
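As a toy illustration of those mechanics (dice-driven results, fire that weakens with range, and partial-loss bookkeeping rather than outright removal), consider the Python sketch below; the attenuation curve, casualty factor and ranges are invented for illustration and are not Reisswitz's actual tables:

import random

def resolve_fire(shooter_strength, range_paces, max_range=600):
    # Casualties fall off with distance and carry a dice-driven spread.
    if range_paces > max_range:
        return 0
    attenuation = 1 - range_paces / max_range   # fire weakens with range
    roll = random.randint(1, 6)                 # the umpire's die
    return int(shooter_strength * 0.05 * attenuation * roll / 6)

target = {"name": "blue battalion", "strength": 900}
for _ in range(3):                              # three rounds of fire
    target["strength"] -= resolve_fire(800, range_paces=300)
print(target)   # partial losses persist between rounds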
Reisswitz's game also used an umpire. The players did not directly control the pieces on the game map. Rather, they wrote orders for their virtual troops on pieces of paper, which they submitted to the umpire. The umpire then moved the pieces across the game map according to how he judged the virtual troops would interpret and carry out their orders. When the troops engaged the enemy on the map, it was umpire who rolled the dice, computed the effects, and removed defeated units from the map. The umpire also managed secret information so as to simulate the fog of war. The umpire placed pieces on the map only for those units which he judged both sides could see. He kept a mental track of where the hidden units were, and only placed their pieces on the map when he judged they came into view of the enemy.
Earlier wargames had fixed victory conditions, such as occupying the enemy's fortress. By contrast, Reisswitz's wargame was open-ended. The umpire decided what the victory conditions were, if there were to be any, and they typically resembled the goals an actual army in battle might aim for. The emphasis was on the experience of decision-making and strategic thinking, not on competition. As Reisswitz himself wrote: "The winning or losing, in the sense of a card or board game, does not come into it."
In the English-speaking world, Reisswitz's wargame and its variants are called Kriegsspiel, which is the German word for "wargame".
German professional wargaming (1824–1914)
Reisswitz showed his wargame to the Prussian king and his General Staff in 1824. They were greatly impressed. General Karl von Mueffling wrote: "It’s not a game at all! It's training for war. I shall recommend it enthusiastically to the whole army." The king decreed that every regiment should play Kriegsspiel, and by the end of the decade every regiment had purchased materials for it. By the 1850s it had become very popular in the army. Kriegsspiel was therefore the first wargame to be treated as a serious tool of training and research by a military organization.
Aside from official military venues, Kriegsspiel was also played in a number of private clubs around the country, which were mainly patronized by officers but also had civilian members, so Kriegsspiel was certainly being played in a recreational context. The first such club was the Berlin Wargame Association. In 1828, General von Moltke the Elder joined the Magdeburg Club and became its manager.
Over the years, other officers updated Reisswitz's game to reflect changes in technology and doctrine. A particularly noteworthy variant was free Kriegsspiel, developed in 1876 by General Julius von Verdy du Vernois. Vernois was frustrated by the cumbersome rules of traditional rigid Kriegsspiel. They took a lot of time to learn and prevented experienced officers from applying their own expertise. The computations also slowed down the game; sometimes, a session would take longer to play than the actual battle it represented. Vernois advocated dispensing with the rules altogether and allowing the umpire to determine the outcomes of player decisions as he saw fit. Dice, rulers, computations, etc. were optional. This rules-free variant, of course, depended more heavily on the competence and impartiality of the umpire. The relative merits and drawbacks of rules-heavy and freeform wargaming are still debated to this day.
Wargaming spreads around the world
Prussian wargaming attracted little attention outside Prussia before 1870. Prussia was considered a second-rate power and wargaming an unproven novelty. That changed in 1870, when Prussia defeated France in the Franco-Prussian War. Many credited Prussia's victory to its wargaming tradition: the Prussian army had no significant advantage in weaponry, numbers, or troop quality, but it was the only army in the world that practiced wargaming. Civilians and military forces around the world now took a keen interest in German wargames, which foreigners referred to simply as Kriegsspiel. The first Kriegsspiel manual in English, based on the system of Wilhelm von Tschischwitz, was published in 1872 for the British army and received a royal endorsement. The world's first recreational wargaming club was the University Kriegspiel [sic] Club, founded in 1873 at Oxford University in England. In the United States, Charles Adiel Lewis Totten published Strategos, the American War Game in 1880, and William R. Livermore published The American Kriegsspiel in 1882, both heavily inspired by Prussian wargames. In 1894, the US Naval War College made wargaming a regular tool of instruction.
Wargaming at the US Naval War College (1919–1941)
The US Naval War College is a staff college where American officers of all ranks go to receive postgraduate training. Since 1894, wargaming has been a regular tool of instruction there. Wargaming was brought to the Naval War College by William McCarty Little, a retired Navy lieutenant who had likely been inspired by reading The American Kriegsspiel by W.R. Livermore. Livermore was stationed nearby at Fort Adams, and he and Little cooperated to adapt the ideas behind Kriegsspiel to naval warfare.
After World War I, the Navy suffered severe budget cuts that prevented it from upgrading and expanding its fleet. This limited its ability to conduct naval exercises, and wargaming thus became a vital means of testing hypothetical strategies and tactics. Another problem was that by the time America entered World War II in 1941, none of the Navy's senior officers had any meaningful combat experience, because the Navy had not been involved in any war for over 20 years. However, almost all of them had participated in wargames at the Naval War College, so they had plenty of virtual combat experience. That America defeated Japan in World War II despite these shortcomings is evidence of the value of wargaming. After the war, Admiral Nimitz said that the wargames had predicted every tactic the Japanese used except for the kamikazes (a somewhat hyperbolic assertion).
The Naval War College organized two broad classes of wargames: "chart maneuvers", which were strategic-level games; and "board maneuvers", which were tactical-level games. The chart maneuvers were about fleet movements, scouting and screening operations, and supply lines. The board maneuvers simulated battles in detail, with the aid of model ships. Most of the wargames were played on the floors of lecture halls, as they needed more space than any table could provide.
The two most frequently played scenarios were a war with Japan and a war with Britain. Japan was code-named ORANGE, Britain was code-named RED, and America was code-named BLUE. Neither the students nor the staff at the Naval War College expected a war with Britain. It's possible that the US Navy didn't imagine getting into any sort of serious naval conflict in the Atlantic with anyone, and that it simulated wars against Britain simply because it saw the Royal Navy as its role model. A war with Japan, on the other hand, was a real concern, and as the years passed the wargames were increasingly played against ORANGE.
In the event of a war with Japan, the US Navy's grand strategy was to send an armada straight across the Pacific and quickly defeat the Japanese navy in one or two decisive battles. The wargamers at the College tested this strategy extensively, and it routinely failed. In 1933, the Navy's Research Department reviewed the wargames played from 1927 to 1933 and concluded that the fundamental problem was that the armada over-extended its supply lines: the BLUE armada would exhaust itself, and ORANGE would recover and counter-attack. After this, the wargamers at the College abandoned the old doctrine and instead developed a more progressive strategy, which involved building a logistics infrastructure in the western Pacific and making alliances with regional countries. By the mid-1930s, the wargames closely resembled what the Navy later experienced in the Pacific War.
The wargames also produced tactical innovations, most notably the "circular formation". In this formation, as it was used in World War II, an aircraft carrier was surrounded by concentric circles of cruisers and destroyers. This formation concentrated anti-aircraft fire, and was also easier to maneuver than a line of battle because all the ships could turn at once on a signal from the central ship. The circular formation was first proposed in September 1922 by Commander Roscoe C. MacFall. Initially, the wargamers at the College used a battleship as the central ship, but this was eventually supplanted by the aircraft carrier. Chester Nimitz, a fellow student at the College that same year, was impressed by what the circular formation could do, and he later played a pivotal role in making it Navy doctrine.
On the other hand, the wargamers at the Naval War College failed to develop good submarine doctrine. They didn't have a good understanding of what submarines could do. Unlike the German navy, the US Navy had no significant experience with submarine warfare. Most of the time, the players used submarines as a screening force that sailed ahead of the main formation. Players rarely used submarines in independent operations, and never to attack commercial shipping as German wargamers were doing at the time.
For a few years after the end of World War II, wargaming almost ceased in America. At the Naval War College, wargaming dropped to about 10% of its pre-war level.
German wargaming after World War I
The Treaty of Versailles greatly restricted the size of Germany's armed forces and outright banned certain weapons such as planes, tanks, and submarines. This made it difficult if not impossible for the German military to develop their doctrines through field exercises. The Germans greatly expanded their use of wargaming to compensate, and between 1919 and 1939, the German military used wargaming more heavily than any other in the world. By the time Germany began openly rearming in 1934, its officers already had fairly well-developed theories on what armaments to buy and what organizational reforms to implement.
German wargaming at this time was restricted to tactical and operational-level play. Hitler discouraged strategic-level games, as he was confident in his own ability to make strategic judgments. Over the course of the war, Germany fought well at the tactical and operational level but made many bad strategic decisions.
During World War I, the British learned to protect their ships from German submarines by moving them in convoys escorted by submarine-hunting ships. The convoy system proved effective against German submarines, which typically operated alone. During the inter-war years, the German navy developed the "wolf-pack" doctrine, by which German submarines would attack convoys in groups to confuse and overwhelm the escorts. These ideas were tested in a combination of wargames and naval exercises. Karl Doenitz, who would later command German submarine operations during World War II, organized a series of wargames held during the winter of 1938-39, and from the results he concluded that it would be best for a wolf-pack attack to be coordinated by a designated command submarine rather than by a commander onshore. He also concluded that Germany needed 300 submarines to effectively destroy British shipping, and that Germany's existing submarine fleet would at most inflict "pin-pricks".
After World War II, wargaming ceased in Germany, as in the other Axis powers. Germany did not even have an army until 1955, and thus had little need to wargame. When West Germany established its new army that year, so few of its officers had wargaming experience that the German War College asked the US Air Force to provide an officer who did.
British naval wargaming during World War II
In January 1942, the British Royal Navy established a naval tactical analysis unit called the Western Approaches Tactical Unit (WATU), which was tasked with developing ways to counter the German submarine "wolf-packs" that were devastating shipping convoys in the Atlantic. It was based in Liverpool, directed by Captain Gilbert Roberts, and staffed mainly by young women from the Women's Royal Naval Service. Their primary analytical tool was wargaming.
The staff at WATU used wargames to test various hypothetical submarine tactics against virtual convoys; if a tactic proved consistently effective and produced outcomes similar to what the actual convoys were reporting, WATU assumed that this was what the Germans were in fact doing. The staff at WATU would then design counter-measures and test them in wargames. WATU also operated week-long courses in which naval captains were instructed in anti-submarine tactics through wargames.
It's uncertain exactly how many German submarines were sunk thanks to WATU's tactics, but at the close of the war, several British admirals asserted that WATU had played a decisive role in Germany's defeat. Had the German submarine threat to merchant shipping not been thwarted, Britain would have been forced to capitulate to the Germans for lack of food and other necessary imports.
What makes WATU a remarkable episode in the history of wargaming is that it was the first time in which wargames were used to analyze scenarios that were occurring in an ongoing war and develop solutions that were deployed immediately in the field. This was made possible by communications technologies such as telephone and radio.
Soviet Union
The Soviets inherited their wargaming techniques from tsarist officers, who favored the rigid form of wargaming pioneered by Reisswitz. Interestingly, the Soviets typically played wargames not on flat maps, but on three-dimensional model battlefields. Soviet wargames typically comprised only a single turn. The players would describe their plan to the umpires, who would then adjudicate the battle all the way to conclusion. This meant the players could not react to what the enemy was doing. This approach was optimal for decision-support but poor for developing the players' thinking skills.
Immediately after the end of World War II, there was a precipitous drop in wargaming in armed forces all over the world. The exception was the Soviet Union. The Soviets actually expanded their wargaming and made them more rigorous. The Soviets launched a massive effort to compile data from the war on the Eastern Front to make their wargames more valid.
During the Cold War, the Soviets allowed officers from other communist countries to attend their military schools, and wargaming was part of the curriculum. Using techniques learned in the Soviet Union, North Vietnamese officers wargamed their attacks against South Vietnam and her allies, and were able to coordinate complicated attacks without the need for radio communications by memorizing timetables.
The Navy Electronic Warfare Simulator (1958)
The first computerized wargaming system was the Navy Electronic Warfare Simulator, which became operational in 1958 at the US Naval War College. The computer system, being from the pre-microchip era, spanned three floors. The game rooms were designed to resemble the command centers where the Navy coordinated its fleets. When the system was first made operational in 1958, the Navy discovered that it could not model recent advances in military technology. For instance, it could not model ships moving faster than 500 knots. The system had taken 13 years to develop and, like most computers from that era, was difficult to reprogram or upgrade (it predated punch-cards). A variety of improvisational gimmicks were required to run wargames for the contemporary era.
SIGMA war games (United States, 1962–1967)
Between 1962 and 1967, the US military conducted a series of strategic-level wargames known as the Sigma war games to test proposed strategies for fighting the Vietnam War.
The Sigma I-64 and II-64 games, conducted in 1964, were designed to test the proposed strategy of gradually escalating pressure on North Vietnam until it gave up out of economic self-interest. Graduated escalation was supposed to avoid accidentally provoking an intervention by China or the Soviet Union. It would also avoid making President Johnson look like a warmonger. This "graduated pressure" would primarily involve bombing North Vietnam and sending troops into South Vietnam.
The wargames predicted that this strategy would be ineffective. In the simulations, the bombings diminished neither North Vietnam's capacity nor its desire to support the Viet Cong. The Viet Cong did not require much in the way of supplies anyway, and they got most of their supplies from captured villages within South Vietnam. North Vietnam's economy was almost entirely agricultural, so the loss of what little industry it had caused little political turmoil. The North Vietnamese preferred to seek revenge, and so sent more troops into South Vietnam. This forced America into a protracted ground war, which led to an erosion of public support that eventually forced America's withdrawal.
The findings of the 1964 wargames were ignored by policymakers. One reason was that Secretary of Defense Robert McNamara did not appreciate the methodology of the games, which relied on subjective evaluations by the umpires (even though these men were seasoned officers and diplomats). McNamara preferred mathematical and statistical analysis. He therefore did not bring the findings to President Johnson's attention. Another reason was that Johnson's strategists did not like the proposed alternatives. Escalating the pressure too much could have drawn the Soviet Union or China into the war, and abandoning the war would have humiliated America.
The Johnson administration went on to apply their strategy of graduated pressure in Vietnam, and the outcome of the war proved very similar to what the wargames had foretold. In their post-mortems of the Vietnam War, numerous historians have cited the dismissal of the Sigma wargames as one of many important failures in planning that led to America's defeat.
References
Footnotes
Bibliography
(translation by Bill Leeson, 1989)
Sigma II-64 final report (1964), Joint Chiefs of Staff
Wargames
Military exercises and wargames
Combat modeling | Professional wargaming | Mathematics | 7,066 |
3,573,306 | https://en.wikipedia.org/wiki/Vibrational%20partition%20function | The vibrational partition function traditionally refers to the component of the canonical partition function resulting from the vibrational degrees of freedom of a system. The vibrational partition function is only well-defined in model systems where the vibrational motion is relatively uncoupled from the system's other degrees of freedom.
Definition
For a system (such as a molecule or solid) with uncoupled vibrational modes the vibrational partition function is defined by

$Q_{\mathrm{vib}}(T) = \prod_j \sum_n e^{-E_{j,n}/k_B T}$

where $T$ is the absolute temperature of the system, $k_B$ is the Boltzmann constant, and $E_{j,n}$ is the energy of the jth mode when it has vibrational quantum number $n$. For an isolated molecule of N atoms, the number of vibrational modes (i.e. values of j) is $3N-5$ for linear molecules and $3N-6$ for non-linear ones. In crystals, the vibrational normal modes are commonly known as phonons.
Approximations
Quantum harmonic oscillator
The most common approximation to the vibrational partition function uses a model in which the vibrational eigenmodes or normal modes of the system are considered to be a set of uncoupled quantum harmonic oscillators. It is a first-order approximation to the partition function which allows one to calculate the contribution of the vibrational degrees of freedom of molecules toward their thermodynamic variables. A quantum harmonic oscillator has an energy spectrum characterized by:

$E_{j,n} = \hbar \omega_j \left(n + \tfrac{1}{2}\right)$

where j runs over vibrational modes, $n$ is the vibrational quantum number in the jth mode, $\hbar$ is the Planck constant, h, divided by $2\pi$, and $\omega_j$ is the angular frequency of the jth mode. Using this approximation we can derive a closed-form expression for the vibrational partition function:

$Q_{\mathrm{vib}}(T) = \prod_j \frac{e^{-\hbar\omega_j/2k_B T}}{1 - e^{-\hbar\omega_j/k_B T}} = e^{-E_{\mathrm{ZP}}/k_B T} \prod_j \frac{1}{1 - e^{-\hbar\omega_j/k_B T}}$

where $E_{\mathrm{ZP}} = \tfrac{1}{2}\sum_j \hbar\omega_j$ is the total vibrational zero-point energy of the system.

Often the wavenumber $\tilde{\nu}_j$, with units of cm−1, is given instead of the angular frequency of a vibrational mode (and is also often misnamed frequency). One can convert to angular frequency using $\omega_j = 2\pi c \tilde{\nu}_j$, where c is the speed of light in vacuum. In terms of the vibrational wavenumbers we can write the partition function as

$Q_{\mathrm{vib}}(T) = \prod_j \frac{e^{-hc\tilde{\nu}_j/2k_B T}}{1 - e^{-hc\tilde{\nu}_j/k_B T}}$

It is convenient to define a characteristic vibrational temperature

$\Theta_{v,j} = \frac{\hbar\omega_j}{k_B}$

where $\Theta_{v,j}$ is experimentally determined for each vibrational mode by taking a spectrum or by calculation. By taking the zero-point energy as the reference point to which other energies are measured, the expression for the partition function becomes

$Q_{\mathrm{vib}}(T) = \prod_j \frac{1}{1 - e^{-\Theta_{v,j}/T}}$
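For concreteness, here is a short Python sketch evaluating the zero-point-referenced form of the partition function above. The characteristic temperatures are approximate textbook values for the three normal modes of water vapour; treat them as illustrative inputs rather than authoritative data.

```python
import numpy as np

# Approximate characteristic vibrational temperatures (Theta = h*c*nu/k_B)
# for the three normal modes of H2O, in kelvin (illustrative textbook values).
theta_v = np.array([2294.0, 5262.0, 5404.0])

def q_vib(T, theta):
    """Vibrational partition function with the zero-point energy as the
    energy reference: q = prod_j 1 / (1 - exp(-Theta_j / T))."""
    return np.prod(1.0 / (1.0 - np.exp(-theta / T)))

print(q_vib(300.0, theta_v))    # ~1.0005: vibrations are frozen out at room T
print(q_vib(3000.0, theta_v))   # ~2.7: vibrational states become accessible
```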
References
See also
Partition function (mathematics)
Partition functions | Vibrational partition function | Physics | 473 |
14,145,348 | https://en.wikipedia.org/wiki/Corticotropin-releasing%20hormone%20receptor%201 | Corticotropin-releasing hormone receptor 1 (CRHR1) is a protein, also known as CRF1, the latter now being the IUPHAR-recommended name. In humans, CRF1 is encoded by the CRHR1 gene at region 17q21.31, beside the microtubule-associated protein tau gene MAPT.
Structure
The human CRHR1 gene contains 14 exons over 20 kb of DNA, and its full gene product is a peptide composed of 444 amino acids. Excision of exon 6 yields the mRNA for the primary functional CRF1, which is a peptide composed of 415 amino acids, arranged in seven hydrophobic alpha-helices.
The CRHR1 gene is alternatively spliced into a series of variants. These variants are generated through deletion of one of the 14 exons, which in some cases causes a frame-shift in the open reading frame, and they encode corresponding isoforms of CRF1. Though these isoforms have not been identified in native tissues, the mutations in the spliced mRNA variants suggest the existence of alternate CRF receptors, with differences in intracellular loops or deletions in the N-terminus or transmembrane domains. Such structural changes suggest that the alternate CRF1 receptors differ in their capacity and efficiency in binding CRF and its agonists. Though the functions of these CRF1 receptors are as yet unknown, they are suspected to be biologically significant.
CRF1 is 70% homologous with the second human CRF receptor family, CRF2; the greatest divergence between the two lies at the N-terminus of the protein.
Mechanism of activation
CRF1 is activated through the binding of CRF or a CRF-agonist. The ligand binding and subsequent receptor conformational change depend on three different sites in the second and third extracellular domains of CRF1.
In the majority of tissues, CRF1 is coupled to a stimulatory G-protein that activates the adenylyl cyclase signaling pathway, and ligand-binding triggers an increase in cAMP levels. However, the signal can be transmitted along multiple signal transduction cascades, according to the structure of the receptor and the region of its expression. Alternate signaling pathways activated by CRF1 include PKC and MAPK. This wide variety of cascades suggests that CRF1 mediates tissue-specific responses to CRF and CRF-agonists.
Tissue distribution
CRF1 is expressed widely throughout both the central and peripheral nervous systems. In the central nervous system, CRF1 is particularly found in the cortex, cerebellum, amygdala, hippocampus, olfactory bulb, ventral tegmental area, brainstem areas, paraventricular hypothalamus, and pituitary. In the pituitary, CRF1 stimulation triggers the activation of the POMC gene, which in turn causes the release of ACTH and β-endorphins from the anterior pituitary. In the peripheral nervous system, CRF1 is expressed at low levels in a wide variety of tissues, including the skin, spleen, heart, liver, adipose tissue, placenta, ovary, testis, and adrenal gland.
In CRF1 knockout mice, and mice treated with a CRF1 antagonist, there is a decrease in anxious behavior and a blunted stress response, suggesting that CRF1 mechanisms are anxiogenic. However, the effect of CRF1 appears to be regionally specific and cell-type specific, likely due to the wide variety of cascades and signaling pathways activated by the binding of CRF or CRF-agonists. In mice, offspring born to CRF1 -/- knockout mothers typically die within a few days of birth from lung dysplasia, likely due to low glucocorticoid levels. In the central nervous system, CRF1 activation mediates fear learning and consolidation in the extended amygdala, stress-related modulation of memory formation in the hippocampus, and brainstem regulation of arousal.
Function
The corticotropin-releasing hormone receptor binds corticotropin-releasing hormone, a potent mediator of endocrine, autonomic, behavioral, and immune responses to stress.
CRF1 receptors in mice mediate ethanol enhancement of GABAergic synaptic transmission.
Postpartum function
Postpartum CRF1 knockout mice spend less time nursing and less time licking and grooming their offspring than their wildtype counterparts during the first few days postpartum. These pups weighed less as a result. This pattern of maternal behavior indicates that CRF1 may be needed for early postpartum mothers to display typical mothering behaviors. Maternal aggression is attenuated by increases in CRF and urocortin 2, which bind to CRF1.
Evolution
Corticotrophin-releasing hormone (CRH) evolved ~ in an organism that subsequently gave rise to both chordates and arthropods. Its binding site was a single CRH-like receptor. In vertebrates this gene was duplicated, leading to the extant CRH1 and CRH2 forms. Additionally, four paralogous ligands developed, including CRH, urotensin-1/urocortin, urocortin II and urocortin III.
Clinical significance
Variations in the CRHR1 gene are associated with enhanced response to inhaled corticosteroid therapy in asthma.
CRF1 triggers cells to release hormones that are linked to stress and anxiety. Hence CRF1 receptor antagonists are being actively studied as possible treatments for depression and anxiety.
Variations in CRHR1 are associated with persistent pulmonary hypertension of the newborn.
Interactions
Corticotropin-releasing hormone receptor 1 has been shown to interact with Corticotropin-releasing hormone and urocortin.
See also
Corticotropin-releasing hormone
Corticotropin-releasing hormone receptor
Corticotropin-releasing hormone antagonist
Antalarmin
Pexacerfont
Verucerfont
References
Further reading
External links
G protein-coupled receptors
Corticotropin-releasing hormone | Corticotropin-releasing hormone receptor 1 | Chemistry | 1,293 |
3,150,389 | https://en.wikipedia.org/wiki/Tissue%20factor%20pathway%20inhibitor | Tissue factor pathway inhibitor (or TFPI) is a single-chain polypeptide which can reversibly inhibit factor Xa (Xa). While Xa is inhibited, the Xa-TFPI complex can subsequently also inhibit the FVIIa-tissue factor complex.
TFPI contributes significantly to the inhibition of Xa in vivo, despite being present at concentrations of only 2.5 nM.
Genetics
The gene for TFPI is located on chromosome 2q31-q32.1, and has nine exons which span 70 kb. A similar gene, termed TFPI2, has been identified on chromosome 7, at locus 7q21.3; in addition to TFPI activity, its product also has retinal pigment epithelial cell growth-promoting properties.
Protein structure
TFPI has a relative molecular mass of 34,000 to 40,000 depending on the degree of proteolysis of the C-terminal region.
TFPI consists of a highly negatively charged amino-terminus, three tandemly linked Kunitz domains, and a highly positively charged carboxy-terminus. With its Kunitz domains, TFPI exhibits significant homology with human inter-alpha-trypsin inhibitor and bovine basic pancreatic trypsin inhibitor.
Interactions
Tissue factor pathway inhibitor has been shown to interact with Factor X.
See also
Hemostasis
References
External links
(TFPI1), (TFPI2)
Further reading
Coagulation system | Tissue factor pathway inhibitor | Chemistry | 311 |
936,085 | https://en.wikipedia.org/wiki/Green%20chemistry | Green chemistry, similar to sustainable chemistry or circular chemistry, is an area of chemistry and chemical engineering focused on the design of products and processes that minimize or eliminate the use and generation of hazardous substances. While environmental chemistry focuses on the effects of polluting chemicals on nature, green chemistry focuses on the environmental impact of chemistry, including lowering consumption of nonrenewable resources and technological approaches for preventing pollution.
The overarching goals of green chemistry—namely, more resource-efficient and inherently safer design of molecules, materials, products, and processes—can be pursued in a wide range of contexts.
History
Green chemistry emerged from a variety of existing ideas and research efforts (such as atom economy and catalysis) in the period leading up to the 1990s, in the context of increasing attention to problems of chemical pollution and resource depletion. The development of green chemistry in Europe and the United States was linked to a shift in environmental problem-solving strategies: a movement from command and control regulation and mandated lowering of industrial emissions at the "end of the pipe," toward the active prevention of pollution through the innovative design of production technologies themselves. The set of concepts now recognized as green chemistry coalesced in the mid- to late-1990s, along with broader adoption of the term (which prevailed over competing terms such as "clean" and "sustainable" chemistry).
In the United States, the Environmental Protection Agency played a significant early role in fostering green chemistry through its pollution prevention programs, funding, and professional coordination. At the same time in the United Kingdom, researchers at the University of York contributed to the establishment of the Green Chemistry Network within the Royal Society of Chemistry, and the launch of the journal Green Chemistry.
Principles
In 1998, Paul Anastas (who then directed the Green Chemistry Program at the US EPA) and John C. Warner (then of Polaroid Corporation) published a set of principles to guide the practice of green chemistry. The twelve principles address a range of ways to lower the environmental and health impacts of chemical production, and also indicate research priorities for the development of green chemistry technologies.
The principles cover such concepts as:
the design of processes to maximize the amount of raw material that ends up in the product;
the use of renewable material feedstocks and energy sources;
the use of safe, environmentally benign substances, including solvents, whenever possible;
the design of energy efficient processes;
avoiding the production of waste, which is viewed as the ideal form of waste management.
The twelve principles of green chemistry are:
Prevention: Preventing waste is better than treating or cleaning up waste after it is created.
Atom economy: Synthetic methods should try to maximize the incorporation of all materials used in the process into the final product. This means that less waste will be generated as a result.
Less hazardous chemical syntheses: Synthetic methods should avoid using or generating substances toxic to humans and/or the environment.
Designing safer chemicals: Chemical products should be designed to achieve their desired function while being as non-toxic as possible.
Safer solvents and auxiliaries: Auxiliary substances should be avoided wherever possible, and as non-hazardous as possible when they must be used.
Design for energy efficiency: Energy requirements should be minimized, and processes should be conducted at ambient temperature and pressure whenever possible.
Use of renewable feedstocks: Whenever it is practical to do so, renewable feedstocks or raw materials are preferable to non-renewable ones.
Reduce derivatives: Unnecessary generation of derivatives—such as the use of protecting groups—should be minimized or avoided if possible; such steps require additional reagents and may generate additional waste.
Catalysis: Catalytic reagents that can be used in small quantities to repeat a reaction are superior to stoichiometric reagents (ones that are consumed in a reaction).
Design for degradation: Chemical products should be designed so that they do not pollute the environment; when their function is complete, they should break down into non-harmful products.
Real-time analysis for pollution prevention: Analytical methodologies need to be further developed to permit real-time, in-process monitoring and control before hazardous substances form.
Inherently safer chemistry for accident prevention: Whenever possible, the substances in a process, and the forms of those substances, should be chosen to minimize risks such as explosions, fires, and accidental releases.
Trends
Attempts are being made not only to quantify the greenness of a chemical process but also to factor in other variables such as chemical yield, the price of reaction components, safety in handling chemicals, hardware demands, energy profile, and ease of product workup and purification. In one quantitative study, the reduction of nitrobenzene to aniline receives 64 points out of 100, marking it as an acceptable synthesis overall, whereas a synthesis of an amide using HMDS is described as merely adequate, with a combined 32 points.
Green chemistry is increasingly seen as a powerful tool that researchers must use to evaluate the environmental impact of nanotechnology. As nanomaterials are developed, the environmental and human health impacts of both the products themselves and the processes to make them must be considered to ensure their long-term economic viability. Nanomaterial technology is seeing growing use in practice, but its potential nanotoxicity is often overlooked; further consideration is therefore needed of the legal, ethical, safety, and regulatory issues associated with nanomaterials.
Examples
Green solvents
The major application of solvents in human activities is in paints and coatings (46% of usage). Smaller volume applications include cleaning, de-greasing, adhesives, and chemical synthesis. Traditional solvents are often toxic or chlorinated. Green solvents, on the other hand, are generally less harmful to health and the environment and preferably more sustainable. Ideally, solvents would be derived from renewable resources and biodegrade to an innocuous, often naturally occurring, product. However, the manufacture of solvents from biomass can be more harmful to the environment than making the same solvents from fossil fuels. Thus the environmental impact of solvent manufacture must be considered when a solvent is being selected for a product or process. Another factor to consider is the fate of the solvent after use. If the solvent is being used in an enclosed situation where solvent collection and recycling is feasible, then the energy cost and environmental harm associated with recycling should be considered; in such a situation water, which is energy-intensive to purify, may not be the greenest choice. On the other hand, a solvent contained in a consumer product is likely to be released into the environment upon use, and therefore the environmental impact of the solvent itself is more important than the energy cost and impact of solvent recycling; in such a case water is very likely to be a green choice. In short, the impact of the entire lifetime of the solvent, from cradle to grave (or cradle to cradle if recycled), must be considered. Thus the most comprehensive definition of a green solvent is the following: "a green solvent is the solvent that makes a product or process have the least environmental impact over its entire life cycle."
By definition, then, a solvent might be green for one application (because it results in less environmental harm than any other solvent that could be used for that application) and yet not be a green solvent for a different application. A classic example is water, which is a very green solvent for consumer products such as toilet bowl cleaner but is not a green solvent for the manufacture of polytetrafluoroethylene. For the production of that polymer, the use of water as solvent requires the addition of perfluorinated surfactants which are highly persistent. Instead, supercritical carbon dioxide seems to be the greenest solvent for that application because it performs well without any surfactant. In summary, no solvent can be declared to be a "green solvent" unless the declaration is limited to a specific application.
Synthetic techniques
Novel or enhanced synthetic techniques can often provide improved environmental performance or enable better adherence to the principles of green chemistry. For example, the 2005 Nobel Prize for Chemistry was awarded to Yves Chauvin, Robert H. Grubbs and Richard R. Schrock, for the development of the metathesis method in organic synthesis, with explicit reference to its contribution to green chemistry and "smarter production." A 2005 review identified three key developments in green chemistry in the field of organic synthesis: use of supercritical carbon dioxide as green solvent, aqueous hydrogen peroxide for clean oxidations and the use of hydrogen in asymmetric synthesis. Some further examples of applied green chemistry are supercritical water oxidation, on water reactions, and dry media reactions.
Bioengineering is also seen as a promising technique for achieving green chemistry goals. A number of important process chemicals can be synthesized in engineered organisms, such as shikimate, a Tamiflu precursor which is fermented by Roche in bacteria. Click chemistry is often cited as a style of chemical synthesis that is consistent with the goals of green chemistry. The concept of 'green pharmacy' has recently been articulated based on similar principles.
Carbon dioxide as blowing agent
In 1996, Dow Chemical won the Greener Reaction Conditions award for its 100% carbon dioxide blowing agent for polystyrene foam production. Polystyrene foam is a common material used in packing and food transportation; seven hundred million pounds are produced each year in the United States alone. Traditionally, CFC and other ozone-depleting chemicals were used in the production process of the foam sheets, presenting a serious environmental hazard. Flammable, explosive, and in some cases toxic hydrocarbons have also been used as CFC replacements, but they present their own problems. Dow Chemical discovered that supercritical carbon dioxide works equally well as a blowing agent, without the need for hazardous substances, allowing the polystyrene to be more easily recycled. The CO2 used in the process is reused from other industries, so the net carbon released from the process is zero.
Hydrazine
Addressing principle #2 is the peroxide process for producing hydrazine without cogenerating salt. Hydrazine is traditionally produced by the Olin Raschig process from sodium hypochlorite (the active ingredient in many bleaches) and ammonia. The net reaction produces one equivalent of sodium chloride for every equivalent of the targeted product hydrazine:
NaOCl + 2 NH3 → H2N-NH2 + NaCl + H2O
In the greener peroxide process hydrogen peroxide is employed as the oxidant and the side product is water. The net conversion follows:
2 NH3 + H2O2 → H2N-NH2 + 2 H2O
Addressing principle #5, this process does not require auxiliary extracting solvents: methyl ethyl ketone is used as a carrier for the hydrazine, and the intermediate ketazine phase separates from the reaction mixture, facilitating workup without the need for an extracting solvent.
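The advantage of the peroxide route can be made concrete by computing the atom economy of each reaction (principle #2), i.e. the molar mass of the desired product as a fraction of the total molar mass of the reactants. The short Python sketch below uses standard molar masses; the roughly 29% versus 47% figures follow directly from the two equations above.

```python
# Atom economy (green chemistry principle #2): percentage of the reactants'
# mass that ends up in the desired product. Molar masses in g/mol.
def atom_economy(product, reactants):
    return 100.0 * product / sum(reactants)

N2H4, NaOCl, NH3, H2O2 = 32.05, 74.44, 17.03, 34.01

# Olin Raschig process: NaOCl + 2 NH3 -> N2H4 + NaCl + H2O
print(f"Raschig:  {atom_economy(N2H4, [NaOCl, NH3, NH3]):.1f} %")   # ~29.5 %

# Peroxide process: 2 NH3 + H2O2 -> N2H4 + 2 H2O
print(f"Peroxide: {atom_economy(N2H4, [NH3, NH3, H2O2]):.1f} %")    # ~47.1 %
```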
1,3-Propanediol
Addressing principle #7 is a green route to 1,3-propanediol, which is traditionally generated from petrochemical precursors. It can be produced from renewable precursors via the bioseparation of 1,3-propanediol using a genetically modified strain of E. coli. This diol is used to make new polyesters for the manufacture of carpets.
Lactide
In 2002, Cargill Dow (now NatureWorks) won the Greener Reaction Conditions Award for their improved method for polymerization of polylactic acid. Unfortunately, lactide-based polymers do not perform well and the project was discontinued by Dow soon after the award. Lactic acid is produced by fermenting corn and converted to lactide, the cyclic dimer ester of lactic acid, using an efficient, tin-catalyzed cyclization. The L,L-lactide enantiomer is isolated by distillation and polymerized in the melt to make a crystallizable polymer, which has some applications including textiles and apparel, cutlery, and food packaging. Wal-Mart has announced that it is using, or will use, PLA for its produce packaging. The NatureWorks PLA process substitutes renewable materials for petroleum feedstocks, doesn't require the use of hazardous organic solvents typical of other PLA processes, and results in a high-quality polymer that is recyclable and compostable.
Carpet tile backings
In 2003 Shaw Industries selected a combination of polyolefin resins as the base polymer of choice for EcoWorx due to the low toxicity of its feedstocks, superior adhesion properties, dimensional stability, and its ability to be recycled. The EcoWorx compound also had to be designed to be compatible with nylon carpet fiber. Although EcoWorx may be recovered from any fiber type, nylon-6 provides a significant advantage. Polyolefins are compatible with known nylon-6 depolymerization methods. PVC interferes with those processes. Nylon-6 chemistry is well-known and not addressed in first-generation production. From its inception, EcoWorx met all of the design criteria necessary to satisfy the needs of the marketplace from a performance, health, and environmental standpoint. Research indicated that separation of the fiber and backing through elutriation, grinding, and air separation proved to be the best way to recover the face and backing components, but an infrastructure for returning postconsumer EcoWorx to the elutriation process was necessary. Research also indicated that the postconsumer carpet tile had a positive economic value at the end of its useful life. EcoWorx is recognized by MBDC as a certified cradle-to-cradle design.
Transesterification of fats
In 2005, Archer Daniels Midland (ADM) and Novozymes won the Greener Synthetic Pathways Award for their enzyme interesterification process. In response to the U.S. Food and Drug Administration (FDA) mandated labeling of trans-fats on nutritional information by January 1, 2006, Novozymes and ADM worked together to develop a clean, enzymatic process for the interesterification of oils and fats by interchanging saturated and unsaturated fatty acids. The result is commercially viable products without trans-fats. In addition to the human health benefits of eliminating trans-fats, the process has reduced the use of toxic chemicals and water, prevents vast amounts of byproducts, and reduces the amount of fats and oils wasted.
Bio-succinic acid
In 2011, the Outstanding Green Chemistry Accomplishments by a Small Business Award went to BioAmber Inc. for integrated production and downstream applications of bio-based succinic acid. Succinic acid is a platform chemical that is an important starting material in the formulations of everyday products. Traditionally, succinic acid is produced from petroleum-based feedstocks. BioAmber developed a process and technology that produces succinic acid from the fermentation of renewable feedstocks at a lower cost and lower energy expenditure than the petroleum equivalent, while sequestering CO2 rather than emitting it. However, lower oil prices drove the company into bankruptcy, and bio-sourced succinic acid is now barely made.
Laboratory chemicals
Several laboratory chemicals are controversial from the perspective of Green chemistry. The Massachusetts Institute of Technology created a "Green" Alternatives Wizard to help identify alternatives. Ethidium bromide, xylene, mercury, and formaldehyde have been identified as "worst offenders" which have alternatives. Solvents in particular make a large contribution to the environmental impact of chemical manufacturing and there is a growing focus on introducing Greener solvents into the earliest stage of development of these processes: laboratory-scale reaction and purification methods. In the Pharmaceutical Industry, both GSK and Pfizer have published Solvent Selection Guides for their Drug Discovery chemists.
Legislation
The EU
In 2007, the EU put into place the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) program, which requires companies to provide data showing that their products are safe. This regulation (1907/2006) not only ensures the assessment of chemicals' hazards and of the risks arising during their use but also includes measures for banning, restricting, or authorising the use of specific substances. ECHA, the EU Chemicals Agency in Helsinki, implements the regulation, whereas enforcement lies with the EU member states.
United States
The United States formed the Environmental Protection Agency (EPA) in 1970 to protect human and environmental health by creating and enforcing environmental regulation. Green chemistry builds on the EPA’s goals by encouraging chemists and engineers to design chemicals, processes, and products that avoid the creation of toxins and waste.
The U.S. law that governs the majority of industrial chemicals (excluding pesticides, foods, and pharmaceuticals) is the Toxic Substances Control Act (TSCA) of 1976. Examining the role of regulatory programs in shaping the development of green chemistry in the United States, analysts have revealed structural flaws and long-standing weaknesses in TSCA; for example, a 2006 report to the California Legislature concludes that TSCA has produced a domestic chemicals market that discounts the hazardous properties of chemicals relative to their function, price, and performance. Scholars have argued that such market conditions represent a key barrier to the scientific, technical, and commercial success of green chemistry in the U.S., and fundamental policy changes are needed to correct these weaknesses.
Passed in 1990, the Pollution Prevention Act helped foster new approaches for dealing with pollution by preventing environmental problems before they happen.
Green chemistry grew in popularity in the United States after the Pollution Prevention Act of 1990 was passed. This Act declared that pollution should be lowered by improving designs and products rather than through treatment and disposal. These regulations encouraged chemists to reimagine pollution and research ways to limit the toxins in the atmosphere. In 1991, the EPA Office of Pollution Prevention and Toxics created a research grant program encouraging the research and redesign of chemical products and processes to limit their impact on the environment and human health. The EPA hosts the Green Chemistry Challenge each year to incentivize the economic and environmental benefits of developing and utilizing green chemistry.
In 2008, the State of California approved two laws aiming to encourage green chemistry, launching the California Green Chemistry Initiative. One of these statutes required California's Department of Toxic Substances Control (DTSC) to develop new regulations to prioritize "chemicals of concern" and promote the substitution of hazardous chemicals with safer alternatives. The resulting regulations took effect in 2013, initiating DTSC's Safer Consumer Products Program.
Scientific journals specialized in green chemistry
Green Chemistry (RSC)
Green Chemistry Letters and Reviews (Open Access) (Taylor & Francis)
ChemSusChem (Wiley)
ACS Sustainable Chemistry & Engineering (ACS)
Contested definition
There are ambiguities in the definition of green chemistry and how it is understood among broader science, policy, and business communities. Even within chemistry, researchers have used the term "green chemistry" to describe a range of work independently of the framework put forward by Anastas and Warner (i.e., the 12 principles). While not all uses of the term are legitimate (see greenwashing), many are, and the authoritative status of any single definition is uncertain. More broadly, the idea of green chemistry can easily be linked (or confused) with related concepts like green engineering, environmental design, or sustainability in general. Green chemistry's complexity and multifaceted nature make it difficult to devise clear and simple metrics. As a result, "what is green" is often open to debate.
Awards
Several scientific societies have created awards to encourage research in green chemistry.
Australia's Green Chemistry Challenge Awards overseen by The Royal Australian Chemical Institute (RACI).
The Canadian Green Chemistry Medal.
In Italy, Green Chemistry activities center around an inter-university consortium known as INCA.
In Japan, The Green & Sustainable Chemistry Network oversees the GSC awards program.
In the United Kingdom, the Green Chemical Technology Awards are given by Crystal Faraday.
In the US, the Presidential Green Chemistry Challenge Awards recognize individuals and businesses.
See also
Bioremediation – a technique that generally falls outside the scope of green chemistry
Environmental engineering science
Green Chemistry (journal) – published by the Royal Society of Chemistry
Green chemistry metrics
Green computing – a similar initiative in the area of computing
Green engineering
Substitution of dangerous chemicals
Sustainable engineering
References
Environmental chemistry
Chemistry
Waste minimisation | Green chemistry | Chemistry,Engineering,Environmental_science | 4,174 |
9,466,823 | https://en.wikipedia.org/wiki/Wildlife%20of%20India | India is one of the most biodiverse regions and is home to a large variety of wildlife. It is one of the 17 megadiverse countries and includes three of the world's 36 biodiversity hotspots – the Western Ghats, the Eastern Himalayas, and the Indo-Burma hotspot.
About 24.6% of the total land area is covered by forests. The country has various ecosystems, ranging from the high-altitude Himalayas and the tropical evergreen forests of the Western Ghats to desert in the north-west and the coastal plains and mangroves of the peninsular region. India lies within the Indomalayan realm and is home to about 7.6% of mammal, 14.7% of amphibian, 6% of bird, 6.2% of reptilian, and 6.2% of flowering plant species.
Human encroachment, deforestation and poaching are significant challenges that threaten the existence of certain fauna and flora. Government of India established a system of national parks and protected areas in 1935, which have been subsequently expanded to nearly 1022 protected areas by 2023. India has enacted the Wildlife Protection Act of 1972 and special projects such as Project Tiger, Project Elephant and Project Dolphin for protection of critical species.
Fauna
India has an estimated 92,873 species of fauna, roughly 7.5% of the species recorded worldwide. Insects form the major category, with 63,423 recorded species. India is also home to 423 species of mammals, 1,233 of birds, 526 of reptiles, 342 of amphibians and 3,022 of fish, apart from other groups; these represent 7.6% of mammal, 14.7% of amphibian, 6% of bird, and 6.2% of reptilian species worldwide. Among Indian species, only 12.6% of mammals and 4.5% of birds are endemic, contrasting with 45.8% of reptiles and 55.8% of amphibians.
The Indian subcontinent was formerly an island landmass (Insular India) that split away from Gondwana around 125 million years ago, during the Early Cretaceous. Late Cretaceous Insular Indian faunas were very similar to those found on Madagascar, due to their shared connection until around 90 million years ago. The Cretaceous-Paleogene extinction event around 66 million years ago caused the extinction of many animals native to Insular India, such as its titanosaurian and abelisaurid dinosaurs. During the early Cenozoic era, around 55-50 million years ago, the Indian subcontinent collided with Laurasia, allowing animals from Asia to migrate into the Indian subcontinent. Some elements of India's modern fauna, such as the frog family Nasikabatrachidae and the caecilian family Chikilidae, are suggested to have been present in India prior to its collision with Asia.
Four species of megafauna (large animals) native to India became extinct during the Late Pleistocene, around 10,000-50,000 years ago, as part of a global wave of megafauna extinctions; these include the very large elephant Palaeoloxodon namadicus (possibly the largest land mammal to have ever lived), the elephant relative Stegodon, the hippopotamus Hexaprotodon, and the equine Equus namadicus. These extinctions are thought to have occurred after the arrival of modern humans on the Indian subcontinent. Ostriches were also formerly native to India, but likewise became extinct during the Late Pleistocene.
India is home to several well-known large animals, including the Indian elephant, Indian rhinoceros, and gaur. It is the only country where both of the big cats, the tiger and the lion, exist in the wild. Members of the cat family include the Bengal tiger, Asiatic lion, Indian leopard, snow leopard, and clouded leopard. Representative and endemic species include the blackbuck, nilgai, bharal, barasingha, Nilgiri tahr, and Nilgiri langur.
There are about 31 species of aquatic mammals including dolphins, whales, porpoises, and dugong. Reptiles include the gharial, the only living members of Gavialis and saltwater crocodiles. Birds include peafowl, pheasants, geese, ducks, mynas, parakeets, pigeons, cranes, hornbills, and sunbirds. Endemic bird species include great Indian hornbill, great Indian bustard, nicobar pigeon, ruddy shelduck, Himalayan monal, and Himalayan quail.
Flora
About 24.6% of the total land area is covered by forests. There are various ecoregions, ranging from the high-altitude Himalayas and the tropical evergreen forests of the Western Ghats to desert in the north-west and the coastal plains and mangroves of the peninsular region. India's climate has become progressively drier since the late Miocene, reducing forest cover in northern India in favour of grassland.
There are about 29,015 species of plants including 17,926 species of flowering plants. This is about 9.1% of the total plant species identified worldwide and 6,842 species are endemic to India. Other plant species include 7,244 algae, 2,504 bryophytes, 1,267 pteridophytes and 74 gymnosperms. One-third of the fungal diversity of the world exists in India with over 27,000 recorded species, making it the largest biotic community after insects.
Conservation
India harbors 172 (2.9%) IUCN-designated threatened species. These include 39 species of mammals, 72 species of birds, 17 species of reptiles, three species of amphibians, two species of fish, and a number of insects including butterflies, moths, and beetles.
Human encroachment, deforestation and poaching are significant challenges that threaten the existence of certain fauna and flora. Government of India established a system of national parks and protected areas in 1935, which have been subsequently expanded to nearly 1022 protected areas by 2023. Various laws have been enacted such as Indian Forest Act, 1927 and Wildlife Protection Act of 1972 and special projects such as Project Tiger, Project Elephant and Project Dolphin have been initiated for the protection of forests, wildlife and critical species.
As of 2023, there are 1022 protected areas including 106 national parks, 573 wildlife sanctuaries, 220 conservation reserves and 123 community reserves. In addition, there are 55 tiger reserves, 18 biosphere reserves and 32 elephant reserves.
National symbols
See also
List of birds of India
List of mammals of India
List of reptiles of South Asia
Wildlife population of India
References
Further reading
Saravanan, Velayutham. Environmental History of Modern India: Land, Population, Technology and Development (Bloomsbury Publishing India, 2022) online review
External links
Official website of: Government of India, Ministry of Environment & Forests
"Legislations on Environment, Forests, and Wildlife" from the Official website of: Government of India, Ministry of Environment & Forests
"India's Forest Conservation Legislation: Acts, Rules, Guidelines", from the official website of the Government of India, Ministry of Environment & Forests
Wildlife Legislations, including - "The Indian Wildlife (Protection) Act" from the Official website of: Government of India, Ministry of Environment & Forests
India
Biota of India | Wildlife of India | Biology | 1,483 |
21,714,312 | https://en.wikipedia.org/wiki/Scavenger%20system | A scavenger system is a medical device used in hospitals. It is used to gather anaesthetic gases after they are exhaled by the patient or have left the patient's vicinity, and to transport them to the atmosphere, bypassing the closed environment of the operating room. Most often used to collect anaesthesia, it can also be used to collect any type of gas or aerosolized medicine that is intended only for the patient and should not be breathed in by other medical personnel.
In the operating room the anaesthetic gas scavenging system collects and removes waste gases from the patient breathing circuit and the patient ventilation circuit. In most jurisdictions, there is a legal requirement to scavenge waste gases to maintain the level of waste gases in the operating room below the legally acceptable limit. For example, in the UK the limits are typically 100ppm for nitrous oxide and 50ppm for halogenated volatile anaesthetic agents (except halothane which is 10ppm). Other jurisdictions have different requirements for local environmental contamination, for example, nitrous oxide maximum 25ppm and halogenated volatile gases maximum 2ppm. In addition to the legal requirement there is an occupational health requirement to maintain a safe workplace and limit exposure to potentially harmful gases.
The basic functional components of an anaesthetic gas scavenging system are as follows:
A collecting assembly / shroud with a relief valve by which the waste gas leaves the breathing or ventilation circuit.
A transfer system of tubing to conduct waste gases to the Scavenging Interface.
The scavenging interface, and
A disposal line to conduct the waste gas to a passive evacuation system, or a medical vacuum system via a station outlet.
References
Medical equipment | Scavenger system | Biology | 350 |
5,075,031 | https://en.wikipedia.org/wiki/Rack%20lift | A rack lift is a type of elevator which consists of a cage attached to vertical rails affixed to the walls of a tower or shaft and which is propelled up and down by means of an electric motor which drives a pinion gear that engages a rack gear which is also attached to the wall between the rails.
References
Elevators | Rack lift | Engineering | 66 |
36,607,703 | https://en.wikipedia.org/wiki/PyLadies | PyLadies is an international mentorship group which focuses on helping more women become active participants in the Python open-source community. It is part of the Python Software Foundation. It was started in Los Angeles in 2011. The mission of the group is to create a diverse Python community through outreach, education, conferences and social gatherings. PyLadies also provides funding for women to attend open source conferences. The aim of PyLadies is increasing the participation of women in computing. PyLadies became a multi-chapter organization with the founding of the Washington, D.C., chapter in August 2011.
History
The organization was created in Los Angeles in April 2011 by seven women: Audrey Roy Greenfeld, Christine Cheung, Esther Nam, Jessica Venticinque (Stanton at the time), Katharine Jarmul, Sandy Strong, and Sophia Viklund. Around 2012, the organization filed for nonprofit status.
As of March 2024, PyLadies has 129 chapters.
Organization
PyLadies has conducted outreach events for both beginners and experienced users. PyLadies has conducted hackathons, social nights and workshops for Python enthusiasts.
Each chapter is free to run themselves as they wish as long as they are focused on the goal of empowering women and other marginalized genders in tech. Women make up the majority of the group, but membership is not limited to women and the group is open to helping people who identify as other gender identities as well.
In the past, PyLadies has also collaborated with other organizations, for instance R-Ladies.
References
External links
PyLadies Website
Mentorships
Women in computing
Free and open-source software organizations
Organizations for women in science and technology
Software developer communities
Python (programming language) | PyLadies | Technology | 351 |
54,980,110 | https://en.wikipedia.org/wiki/Nanofilter%20Tanzania | NanoFilter is a water filter developed by the Tanzanian senior lecturer and chemical engineer Dr. Askwar Hilonga of The Nelson Mandela African Institution of Science and Technology (NM-AIST) in Arusha, Tanzania. Hilonga spent almost five years, beginning in 2010, refining the nanomaterials used in the filter. Bringing the idea to a finished product was difficult, in part because of the challenge of protecting it over those years, but Hilonga used an appropriate intellectual property rights (IPR) strategy to protect the invention, which went on to win the Africa Prize for Engineering Innovation, an instructive example for young scientists in developing countries.
NanoFilter Innovation
Dr. Askwar Hilonga grew up in a remote area as part of the local community, and witnessed the suffering experienced by the majority of people in Tanzanian villages, a problem common to many of the world's poorest countries. Water-borne diseases kill people every day where only unsafe water is available.
In the northern part of Tanzania in particular, the water people use often contains fluoride, while bacterial contamination remains widespread in village water supplies across the developing world.
The NanoFilter idea arose from a visit Hilonga made to his parents' village near Arusha, where people were still drinking dirty water. This alarmed him, as it meant his community would continue to suffer from water-borne disease.
How the NanoFilter works
It combines a slow sand filter with nanomaterials made from sodium silicate and silver, which remove contaminants such as copper, fluoride, and other chemicals, with the mixture tailored to the water problems of a particular geographical area. Water first passes through the sand and then through the nanomaterials.
The nanomaterials adsorb the contaminants, leaving the water clean and safe; the trapped contaminants can later be removed manually.
Conclusion
NanoFilter removes about 99.999% of bacteria, other microorganisms, and viruses, making the treated water safe for domestic use.
References
Innovation
Nanomaterials
World Intellectual Property Organization | Nanofilter Tanzania | Materials_science | 467 |
27,271,262 | https://en.wikipedia.org/wiki/C6H14N2O | The molecular formula C6H14N2O may refer to:
N-Acetylputrescine
Methyl-n-amylnitrosamine | C6H14N2O | Chemistry | 48 |
9,183,439 | https://en.wikipedia.org/wiki/Hawking%20%282004%20film%29 | Hawking is a 2004 biographical drama television film directed by Philip Martin and written by Peter Moffat. Starring Benedict Cumberbatch, it chronicles Stephen Hawking's early years as a PhD student at the University of Cambridge, following his search for the beginning of time and his struggle against motor neurone disease. It premiered in the UK in April 2004.
The film received positive reviews, with critics particularly lauding Cumberbatch's performance as Hawking. It received two British Academy Television Awards nominations: Best Single Drama and Best Actor (Cumberbatch). Cumberbatch won the Golden Nymph for Best Performance by an Actor in a TV Film or Miniseries.
Cumberbatch's portrayal of Hawking was the first portrayal of the physicist on screen not by himself.
Plot
At Stephen Hawking's 21st birthday party he meets a new friend, Jane Wilde. There is a strong attraction between the two and Jane is intrigued by Stephen's talk of stars and the universe, but realises that there is something very wrong with Stephen when he suddenly finds that he is unable to stand up. A stay in hospital results in a distressing diagnosis. Stephen has motor neurone disease and doctors don't expect him to survive for more than two years. Stephen returns to Cambridge where the new term has started without him. But he cannot hide from the reality of his condition through work because he can't find a subject for his PhD. While his colleagues throw themselves into academic and college life, Stephen's life seems to have been put on hold. He rejects the help of his supervisor Dennis Sciama and sinks into a depression. It is only Stephen's occasional meetings with Jane and her faith in him that seem to keep him afloat. The prevailing theory in cosmology at the time is Steady State, which argues that the universe had no beginning – it has always existed, and always will – and Steady State is dominated by Professor Fred Hoyle, a plain-speaking Yorkshireman, and one of the first science TV pundits.
Stephen gets an early glimpse of a paper by Hoyle that is to be presented at a Royal Society lecture. He works through the calculations, identifies a mistake, and publicly confronts Hoyle after he has finished speaking. The row causes a stir in the department but, more importantly, it seems to give Stephen the confidence to get started on his own work. At almost the same time Stephen is introduced to a new way of thinking about his subject by another physicist, Roger Penrose. Topology is an approach that uses concepts of shape rather than equations to think about the nature of the universe, and this proves to be the perfect tool for Stephen, who is starting to find it very difficult to write. Penrose's great passion is the fate of dying stars. When a star comes to the end of its life, it begins to collapse in on itself. His calculations suggest something extraordinary. The collapse of the dying star appears to continue indefinitely, until the star is infinitely dense, forming a black hole in space. And at the heart of this black hole, Penrose shows, is something scientists call a singularity. It is this which leads Stephen to his PhD subject. He has always had a niggling scepticism about Steady State Theory, and now he can begin to see a way of explaining the revolutionary and highly controversial idea that the universe might have had a beginning. Sciama is sceptical but supportive – glad to see his student fired up and ready to work. Meanwhile, Stephen's condition continues to decline, he writes and walks with difficulty and his speech is starting to slur. But he now has a focus for his energies and, with the support of Jane, enters a new phase. He also commits to his relationship with her, asking her to marry him and in doing so exhibiting a defiant determination to survive.
With his mind fired up, Stephen begins to work away at the implications of Penrose's discovery and starts to home in on the idea of a singularity. With remarkable insight – a real Eureka moment – he asks himself: what would happen if you ran Penrose's maths backwards? Instead of something collapsing into nothingness, what if nothingness exploded into something? And what if you applied this not to a star but to the whole universe? Answer: the universe really could have originated in a big bang. At last, Stephen enters a period of feverish academic work. He applies Penrose's theorems for collapsing stars to the universe itself. Justifying Sciama's faith in him, he produces a PhD of real brilliance and profound implications. In theory, at least, the big bang could have happened. Two years after his initial diagnosis, Stephen is not only still very much alive, but has played a part in a great scientific breakthrough which revolutionises the way people think about the universe. Today, the scientific consensus is that the universe started with a big bang: billions of years ago, a cosmic explosion brought space and time into existence.
A secondary, interwoven storyline follows a different but connected scientific quest. Unbeknownst to Hawking, just as he was being diagnosed in 1963, two American scientists were embarking on their own scientific mission. Their research was to produce hard evidence to support Hawking's theoretical work. Arno Allan Penzias and Robert Woodrow Wilson are encountered in a hotel room in Stockholm in 1978. They are being interviewed about their discovery on the eve of receiving the Nobel Prize for Physics. They describe how, in the hills above New Jersey, they scanned the skies with a radio-telescope, and began to pick up a strange radio signal from space. In time, the two scientists came to realise that they had detected the left-over heat of the first, ancient explosion that had created the universe. They had found the physical proof of the big bang.
Cast
Benedict Cumberbatch as Stephen Hawking
Michael Brandon as Arno Allan Penzias
Tom Hodgkins as Robert Woodrow Wilson
Lisa Dillon as Jane Wilde
Phoebe Nicholls as Isobel Hawking
Adam Godley as Frank Hawking
Peter Firth as Fred Hoyle
Tom Ward as Roger Penrose
John Sessions as Dennis Sciama
Matthew Marsh as Dr. John Holloway
Alice Eve as Martha Guthrie
Rohan Siva as Jayant Narlikar
Reception
Accolades
Hawking received two nominations at the 2005 British Academy Television Awards: Best Single Drama and Best Actor (Cumberbatch). At the Monte-Carlo Television Festival, Benedict Cumberbatch won the Golden Nymph for Best Performance by an Actor in a TV film or miniseries.
Notes
References
External links
2004 television films
2004 films
2000s British films
BBC television dramas
British drama television films
British docudrama films
Cultural depictions of Stephen Hawking
Films about people with paraplegia or tetraplegia
Films directed by Philip Martin (director)
Science docudramas | Hawking (2004 film) | Astronomy | 1,400 |
22,111,939 | https://en.wikipedia.org/wiki/2-Methyl-2-butene | 2-Methyl-2-butene, also known as 2m2b, 2-methylbut-2-ene, beta-isoamylene, or trimethylethylene, is an alkene hydrocarbon with the molecular formula C5H10.
It is used as a free radical scavenger in trichloromethane (chloroform) and dichloromethane (methylene chloride). It is also used to scavenge hypochlorous acid (HOCl) in the Pinnick oxidation.
John Snow, the English physician, experimented with it in the 1840s as an anesthetic, but stopped using it for unknown reasons.
It is a flammable material and an irritant, and it poses both health and environmental hazards.
See also
Pentene
References
Hydrocarbons
Alkenes | 2-Methyl-2-butene | Chemistry | 181 |
30,871,821 | https://en.wikipedia.org/wiki/South%20magnetic%20pole | The south magnetic pole, also known as the magnetic south pole, is the point on Earth's Southern Hemisphere where the geomagnetic field lines are directed perpendicular to the nominal surface. The Geomagnetic South Pole, a related point, is the south pole of an ideal dipole model of the Earth's magnetic field that most closely fits the Earth's actual magnetic field.
For historical reasons, the "end" of a freely hanging magnet that points (roughly) north is itself called the "north pole" of the magnet, and the other end, pointing south, is called the magnet's "south pole". Because opposite poles attract, Earth's south magnetic pole is physically a magnetic north pole.
The south magnetic pole is constantly shifting due to changes in Earth's magnetic field.
As of 2005 it was calculated to lie at 64.53°S 137.86°E, placing it off the coast of Antarctica, between Adélie Land and Wilkes Land. In 2015 its estimated position was 64.28°S 136.59°E. That point lies outside the Antarctic Circle. Due to polar drift, the pole is moving northwest by about 10 to 15 km per year. Its current distance from the actual Geographic South Pole is approximately 2,860 km. The nearest permanent science station is Dumont d'Urville Station. While the north magnetic pole began wandering very quickly in the mid-1990s, the movement of the south magnetic pole did not show a matching change of speed.
Expeditions
Early unsuccessful attempts to reach the magnetic south pole included those of French explorer Jules Dumont d'Urville (1837–1840), American Charles Wilkes (expedition of 1838–1842) and Briton James Clark Ross (expedition of 1839–1843).
The first calculation of the magnetic inclination to locate the magnetic South Pole was made on 23 January 1838 by the hydrographer Clément Adrien Vincendon-Dumoulin, a member of the Dumont d'Urville expedition in Antarctica and Oceania on the corvettes Astrolabe and Zélée in 1837–1840, which discovered Adélie Land.
On 16 January 1909 three men (Douglas Mawson, Edgeworth David, and Alistair Mackay) from Sir Ernest Shackleton's Nimrod Expedition claimed to have found the south magnetic pole, which was at that time located on land. They planted a flagpole at the spot and claimed it for the British Empire. However, there is now some doubt as to whether their location was correct. The approximate position of the pole on 16 January 1909 was .
Fits to global data sets
The south magnetic pole has also been estimated by fits to global sets of data such as the World Magnetic Model (WMM) and the International Geomagnetic Reference Field (IGRF). For earlier years back to about 1600, the model GUFM1 is used, based on a compilation of data from ship logs.
South geomagnetic pole
Earth's geomagnetic field can be approximated by a tilted dipole (like a bar magnet) placed at the center of Earth. The south geomagnetic pole is the point where the axis of this best-fitting tilted dipole intersects Earth's surface in the southern hemisphere. As of 2005 it was calculated to be located at 79.74°S 108.22°E, near the Vostok Station. Because the field is not an exact dipole, the south geomagnetic pole does not coincide with the south magnetic pole. Furthermore, the south geomagnetic pole wanders for the same reason its northern geomagnetic counterpart wanders.
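The dipole fit itself reduces to a short calculation: the degree-1 Gauss coefficients (g10, g11, h11) of a spherical-harmonic field model determine the axis of the best-fitting tilted dipole, and the geomagnetic poles are where that axis meets the surface. A Python sketch follows; the coefficients are approximately the IGRF-13 epoch-2020 values (in nanoteslas) and are shown for illustration only.

import math

# Degree-1 Gauss coefficients, approximately IGRF-13 epoch 2020 (nT).
g10, g11, h11 = -29404.8, -1450.9, 4652.5

b0 = math.sqrt(g10**2 + g11**2 + h11**2)

# Unit vector toward the north geomagnetic pole (opposite the dipole moment).
x, y, z = -g11 / b0, -h11 / b0, -g10 / b0

lat_north = math.degrees(math.asin(z))
lon_north = math.degrees(math.atan2(y, x))

# The south geomagnetic pole is the antipode of the north geomagnetic pole.
lat_south = -lat_north
lon_south = lon_north - 180.0 if lon_north > 0 else lon_north + 180.0

print(f"north geomagnetic pole: {lat_north:.1f} deg lat, {lon_north:.1f} deg lon")
print(f"south geomagnetic pole: {lat_south:.1f} deg lat, {lon_south:.1f} deg lon")
# Prints roughly 80.6 N, -72.7 E and 80.6 S, 107.3 E, the latter near Vostok Station.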
See also
North magnetic pole
Polar alignment
References
External links
Australian Antarctic Division
Polar regions of the Earth
East Antarctica
Geography of Antarctica
Geomagnetism
Orientation (geometry) | South magnetic pole | Physics,Mathematics | 721 |
6,854,663 | https://en.wikipedia.org/wiki/KinetX | KinetX, Inc. (also known as KinetX Aerospace) is a privately held Tempe, Arizona based aerospace engineering, technology, software development and business consulting firm specializing in spaceflight systems. KinetX's main area of expertise is in the areas of interplanetary navigation, satellite systems engineering, and ground system software development.
KinetX is the first and only private company to ever provide navigation services for NASA interplanetary missions. Their Space Navigation and Flight Dynamics (SNAFD) division, based in Simi Valley, California, has provided mission navigation for the MESSENGER mission to Mercury, the New Horizons mission to Pluto and the Kuiper Belt, and the OSIRIS-REx asteroid sample-return mission. They are also providing mission navigation for the Emirates Mars Mission and NASA's upcoming Lucy mission to the Trojan asteroids.
Company history
KinetX, Inc. was founded in 1992 and was approached shortly thereafter by Motorola for assistance in developing and implementing the Iridium satellite constellation ground system. In early 1993, several members of KinetX began working on the systems engineering for the Iridium command and control system. KinetX later provided engineering support and software development for companies such as Lockheed Martin, Boeing, General Dynamics, Aerojet, Spectrum Astro, and TRW.
The Space Navigation and Flight Dynamics (SNAFD) division was founded in 2001 by Dr. Bobby Williams and has since successfully navigated multiple interplanetary NASA missions, making KinetX the first privately held company to do so.
References
External links
Aerospace companies of the United States
Private spaceflight companies
Companies based in Tempe, Arizona
Technology companies established in 1992
1992 establishments in Arizona | KinetX | Astronomy | 344 |
32,927,433 | https://en.wikipedia.org/wiki/Neutron%20stimulated%20emission%20computed%20tomography | Neutron stimulated emission computed tomography (NSECT) uses induced gamma emission through neutron inelastic scattering to generate images of the spatial distribution of elements in a sample.
Clinical Applications
NSECT has been shown to be effective in detecting liver iron overload disorders and breast cancer. Due to its sensitivity in measuring elemental concentrations, NSECT is currently being developed for cancer staging, among other medical applications.
NSECT mechanism
A given atomic nucleus, defined by its proton and neutron numbers, is a quantized system with a set of characteristic higher energy levels that it can occupy as a nuclear isomer. When the nucleus in its ground state is struck by a fast neutron with kinetic energy greater than that of its first excited state, it can undergo an isomeric transition to one of its excited states by receiving the necessary energy from the fast neutron through inelastic scatter. Promptly (on the order of picoseconds, on average) after excitation, the excited nuclear isomer de-excites (either directly or through a series of cascades) to the ground state, emitting a characteristic gamma ray for each decay transition with energy equal to the difference in the energy levels involved (see induced gamma emission). After irradiating the sample with neutrons, the measured number of emitted gamma rays of energy characteristic to the nucleus of interest is directly proportional to the number of such nuclei along the incident neutron beam trajectory. After repeating the measurement for neutron beam incidence at positions around the sample, an image of the distribution of the nuclei in the sample can be reconstructed as done in tomography.
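The reconstruction step can be illustrated with a toy calculation: each beam position contributes one linear equation (gamma counts proportional to the number of target nuclei the beam crosses), and the element map is recovered by solving the resulting system, for example with algebraic reconstruction (ART/Kaczmarz) iterations. The Python sketch below is a generic two-angle demonstration under stated assumptions, not the actual NSECT processing chain.

import numpy as np

# Toy phantom: number of target nuclei per voxel on a 4x4 grid.
rng = np.random.default_rng(0)
true_map = rng.integers(0, 5, size=(4, 4)).astype(float)

rows, cols = true_map.shape
beams = []  # one indicator row per beam line: 1 where the beam crosses a voxel
for r in range(rows):
    m = np.zeros((rows, cols)); m[r, :] = 1.0; beams.append(m.ravel())
for c in range(cols):
    m = np.zeros((rows, cols)); m[:, c] = 1.0; beams.append(m.ravel())
A = np.array(beams)
counts = A @ true_map.ravel()  # ideal, noise-free gamma counts per beam

# Kaczmarz (ART) sweeps: project the estimate onto each beam's constraint.
# With only two view angles the system is underdetermined, so ART returns
# a map consistent with the counts rather than the unique true map;
# real scans use many beam positions and angles.
est = np.zeros(rows * cols)
for _ in range(200):
    for a, y in zip(A, counts):
        est += (y - a @ est) / (a @ a) * a

print(np.allclose(A @ est, counts))  # True: the estimate reproduces all counts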
References
Further reading
NSECT at Ravin Advanced Imaging Laboratories, Duke University
Floyd CE, Bender JE, Sharma AC, Kapadia A, Xia J, and Harrawood B, Tourassi GD, Lo JY, Crowell A, and Howell C. "Introduction to neutron stimulated emission computed tomography," Physics in Medicine and Biology. 51:3375. 2006.
Sharma AC, Harrawood BP, Bender JE, Tourassi GD, and Kapadia AJ. "Neutron stimulated emission computed tomography: a Monte Carlo simulation approach,"Physics in Medicine and Biology. 52:6117. 2007.
Floyd CE, Kapadia, AJ, et al. "Neutron-stimulated emission computed tomography of a multi-element phantom," Physics in Medicine and Biology. 53:2313. 2008.
Medical imaging
Neutron scattering | Neutron stimulated emission computed tomography | Chemistry | 497 |
514,037 | https://en.wikipedia.org/wiki/Quebec%20Bridge | The Quebec Bridge () is a road, rail, and pedestrian bridge across the lower Saint Lawrence River between Sainte-Foy (a former suburb that in 2002 became the arrondissement Sainte-Foy–Sillery–Cap-Rouge in Quebec City) and Lévis, in Quebec, Canada. The project failed twice during its construction, in 1907 and 1916, at a cost of 88 lives, with many more workers injured. The bridge eventually opened in 1919.
The Quebec Bridge is a riveted steel truss structure, 987 m (3,239 ft) long, 29 m (94 ft) wide, and 104 m (340 ft) high. Cantilever arms 177 m (580 ft) long support a 195 m (640 ft) central structure, for a total span of 549 m (1,800 ft), still the longest cantilever bridge span in the world. (It was the all-categories longest span in the world until the Ambassador Bridge was completed in 1929.) It is the easternmost (farthest downstream) complete crossing of the Saint Lawrence River.
The bridge accommodates three highway lanes (there were none until 1929, when one was added; another was added in 1949 and a third in 1993), one rail line (two until 1949), and a pedestrian walkway (originally two). At one time, it also carried a streetcar line. Since 1993, it has been owned by the Canadian National Railway.
On May 15, 2024, the Quebec Bridge was purchased by the federal government for a symbolic $1.
The Quebec Bridge was designated a National Historic Site in 1995.
Background
Before the Quebec Bridge was built, the only way to travel from the south shore of the St. Lawrence in Lévis to the north shore at Quebec City was to take a ferry or to use the wintertime ice bridge. The construction of a bridge over the St. Lawrence River at Quebec was considered as early as 1852. It was further discussed in 1867, 1882, and 1884. After a period of political instability during which Canada had four prime ministers in five years, Wilfrid Laurier, the Member of Parliament for the federal riding of Quebec East, was elected on a Liberal platform in 1896 and led the push to build the Quebec Bridge until he left office in 1911.
A March 1897 article in the Quebec Morning Chronicle noted:
The bridge question has again been revived after many years of slumber, and business men in Quebec seem hopeful that something will come of it, though the placing of a subsidy on the statute book is but a small part of the work to be accomplished, as some of its enthusiastic promoters will, ere long, discover. Both Federal and Provincial Governments seem disposed to contribute towards the cost, and the City of Quebec will also be expected to do its share. Many of our people have objected to any contribution being given by the city unless the bridge is built opposite the town, and the CHRONICLE like every other good citizen of Quebec would prefer to see it constructed at Diamond Harbor, and has contended in the interests of the city for this site as long as there seemed to be any possibility of securing it there. It would still do so if it appeared that our people could have it at that site. A bridge at Diamond Harbor would, it estimated, cost at least eight millions. It would be very nice to have, with its double track, electric car track, and roads for vehicles and pedestrians, and would no doubt create a goodly traffic between the two towns, and be one of the show works of the continent.
First design and collapse of August 29, 1907
The Quebec Bridge was included in the National Transcontinental Railway project, undertaken by the federal government. The Quebec Bridge Company was first incorporated by Act of Parliament under the government of Sir John A. Macdonald in 1887, later revived in 1891, and revived for good in 1897 by the government of Wilfrid Laurier, who granted them an extension of time in 1900.
In 1903, the bond issue was increased to $6,000,000 and power to grant preference shares was authorised, along with a name change to the Quebec Bridge and Railway Company (QBRC). An Act of Parliament the same year was necessary to guarantee the bonds by the public purse. Laurier was the MP for Quebec East riding, while the president of the QBRC, Simon-Napoleon Parent, was Quebec City's mayor from 1894 to 1906 and simultaneously served as Premier of Quebec from 1900 to 1905.
Edward A. Hoare was selected as Chief Engineer for the Company throughout this time, while Collingwood Schreiber was the Chief Engineer of the Department of Railways and Canals in Ottawa. Hoare had never worked on a cantilever bridge structure longer than . Schreiber was assisted until July 9, 1903 by Department bridge engineer R.C. Douglas, at which time Douglas was deposed for his opposition to the calculations that were submitted by the contractors. Schreiber subsequently requested the support of another qualified bridge engineer, but was effectively overruled by the Cabinet on August 15, 1903. Thereafter, QBRC consulting engineer Theodore Cooper was completely in charge of the works. On July 1, 1905, Schreiber was demoted and replaced as deputy minister and chief engineer by Matthew J Butler.
By 1904, the southern half of the structure was taking shape. However, preliminary calculations made early in the planning stages were never properly checked when the design was completed. The bridge's own weight was far in excess of its carrying capacity; the dead load was too heavy. All went well until the bridge was nearing completion in the summer of 1907, when the QBRC site engineering team under Norman McLure began noticing increasing distortions of key structural members already in place.
McLure became increasingly concerned and wrote repeatedly to QBRC consulting engineer Theodore Cooper, who at first replied that the problems were minor. The Phoenix Bridge Company officials claimed that the beams must already have been bent before they were installed, but by August 27 it had become clear to McLure that this was wrong. A more experienced engineer might have telegraphed Cooper, but McLure wrote him a letter, and went to New York to meet with him two days later. Cooper agreed that the issue was serious, and promptly telegraphed to the Phoenix Bridge Company: "Add no more load to bridge till after due consideration of facts." The two engineers went to the Phoenix offices.
But Cooper's message did not reach Quebec before it was too late. Near quitting time on the afternoon of August 29, after four years of construction, the south arm and part of the central section of the bridge collapsed into the St. Lawrence River in 15 seconds. Of the 86 workers on the bridge that day, 75 were killed and the rest were injured, making it the world's worst bridge construction disaster. Of these victims, 33 (some sources say 35) were Mohawk steelworkers from the Kahnawake reserve near Montreal; they were buried at Kahnawake under crosses made of steel beams.
On August 30, 1907, a Royal Commission of inquiry into the disaster was provisionally appointed by the Deputy Minister in charge of the Department of Railways and Canals (Butler), with the concurrence of the Minister. The Royal Commission, which was granted by Edward VII by advice of his Governor General, Albert Grey, on August 31, 1907, consisted of three members, who were all engineers of good standing: Henry Holgate, of Montreal, JGG Kerry, of Campbellford, Ontario, also an instructor at McGill University, and Professor John Galbraith, then dean of the Faculty of Applied Science and Engineering at the University of Toronto. The Commission document conferred upon the commissioners full powers to summon witnesses and documents, and to express "any opinion they may see to express thereon".
The Commissioners presented their Report in full on February 20, 1908, issued 15 conclusions, and included the hindsight work of consulting bridge engineer C.C. Schneider, of Philadelphia (a fulfillment of the 1903 request of Schreiber, supra).
The Commissioners attributed responsibility for the failure to two men, consulting engineer Theodore Cooper and Peter L. Szlapka, Chief Designing Engineer for Phoenix Bridge Company:
(c) The design of the chords that failed was made by Mr. P.L. Szlapka, the designing engineer of the Phoenix Bridge Company
(d) This design was examined and officially approved by Mr. Theodore Cooper, consulting engineer of the Quebec Bridge and Railway Company.
(e) The failure cannot be attributed directly to any cause other than errors in judgment on the part of these two engineers.
Cooper escaped penal sanction. It is presumed that Szlapka escaped as well. The Commissioners also found that:
(k) The failure on the part of the Quebec Bridge and Railway Company to appoint an experienced bridge engineer to the position of chief engineer was a mistake. This resulted in a loose and inefficient supervision of all parts of the work on the part of the Quebec Bridge and Railway Company.
The abortive construction of the Quebec Bridge spanned the careers of two Ministers of Railways and Canals, and one temporary replacement, who was on the job for five months immediately preceding the disaster. A popular myth is that the iron and the steel from the collapsed bridge, which could not be reused for construction, was used to forge the early Iron Rings that started to be worn by graduates of Canadian engineering schools in 1925.
Second design and collapse of September 11, 1916
After a Royal Commission of Inquiry into the collapse, construction started on a second bridge. Three engineers were appointed: H. E. Vautelet, a former engineer for the Canadian Pacific Railways, Maurice FitzMaurice from Britain, who worked on the construction of the Forth Bridge, and Ralph Modjeski from Chicago, Illinois. Vautelet was President and Chief Engineer. The new design was again for a bridge with a single long cantilever span but with a more massive structure.
On September 11, 1916, when the central span was being raised into position, it fell into the river, killing 13 workers. The chief engineer had been made aware of the problem six weeks before the collapse, having been alerted by Frants Lichtenberg, the engineer responsible for the construction of the centre section, who was also working as an inspector for the federal government of Canada at the time. Fears of German sabotage were reported because the Great War had begun, but it became apparent that the central span had collapsed because of the failure of a casting in the erection equipment.
Re-construction began almost immediately after the accident, and the government granted special permission for the bridge builders to acquire the needed steel. It was in high demand because of the War effort. The fallen central span still lies at the bottom of the river. After the bridge was completed in 1917, special passes were required for those wanting to cross the bridge. Armed soldiers, and later Dominion Police, guarded the structure and checked passes until the end of the War.
Completion
Construction was ultimately completed in September 1917 at a total cost of $23 million and the lives of 88 bridgeworkers. On October 17, 1917, the first train crossed the bridge from Quebec to Lévis, and on December 3, 1917, the Quebec Bridge officially opened for rail traffic, after almost two decades of construction. Its centre span of 549 m (1,800 ft) remains the longest cantilevered bridge span in the world and is considered a major engineering feat. The Quebec Bridge was declared an International Historic Civil Engineering Landmark in 1987 by the Canadian and American societies of civil engineers.
Post-completion history
The bridge was built and designed primarily as a railway bridge, but the streetcar lines (used by Quebec Railway, Light & Power Company) and one of the two railway tracks were converted into automobile and pedestrian/cycling lanes in subsequent years. In 1970, the Pierre Laporte Suspension Bridge opened just upstream to accommodate freeway traffic on Autoroute 73.
On November 24, 1995, the bridge was declared a National Historic Site.
The bridge has been featured on two commemorative postage stamps, one issued by the Post Office Department in 1929, and another by Canada Post in 1995.
The bridge was built as part of the National Transcontinental Railway, which was merged into the Canadian Government Railways and later became part of the Canadian National Railway (CN). The Canadian Government Railways company was maintained by the federal government until 1993, when a Privy Council order dated July 22 authorized the sale of Canadian Government Railways to the Crown corporation CN for one Canadian dollar. On that date, the Quebec Bridge also came under complete ownership of CN. CN was privatized in November 1995, making the bridge privately owned.
Despite its private ownership, CN received federal and provincial funding to undertake repairs and maintenance on the structure. Its railway designation is mile 0.2 subdivision Bridge.
Aftermath of the collapse
The disaster showed the power an engineer could have in a project that was improperly supervised. As one result, Galbraith and others formed around 1925 what are now recognized as organizations of Professional Engineers (P.Engs). Professional Engineers are under different rules and regulations based on the organization to which they belong. General guidelines include that an engineer must pass an ethical examination, be able to show good character through the use of character witnesses, and have applicable engineering experience (in Canada that constitutes a minimum of four years' practice under a certified Professional Engineer). Moreover, engineers must be registered in the province in which they work. These engineering organizations are regulated by the respective provinces and the title "Professional Engineer" (or "Ingénieur" in Quebec) is reserved only to members who belong to this organization.
On August 29, 2006, a year-long commemoration was begun in the Kahnawake Reserve for the lives of the 33 Mohawk men who died in 1907. One year later, on August 29, 2007, memorial services were held to dedicate a concrete structure displaying the victims' names on the Lévis side of the bridge, and to unveil a steel replica of the bridge in Kahnawake.
Corrosion and maintenance
In 2015, the Quebec Bridge was included in a list of the 10 most endangered historic sites in Canada by the National Trust of Canada because of long-overdue paint and repair work.
In May 2016, Jean-Yves Duclos, the Canadian federal cabinet minister in charge of the Quebec region, revealed that a lease agreement between the CN and the federal government indicated that the CN would not be required to pay more than $10 million towards the paint work until the lease expires in 2053. The Canadian government is now proposing to invest $75 million to paint the bridge and is asking the Quebec provincial government to step in and invest an estimated additional $275 million to complete the work. The mayor of Quebec City, Regis Labeaume, accused the federal government of breaching a promise made during the 2015 electoral campaign to act upon the maintenance of the bridge.
Government repurchase
On May 10, 2024, the Canadian Government and CN announced an agreement for the sale of the bridge for the symbolic sum of $1. The government committed to spending $1 billion over 25 years on repairs and maintenance. CN and the Quebec government will share ownership of the rails and roadway that cross the bridge.
See also
List of crossings of the Saint Lawrence River
List of bridges in Canada
High Steel, a 1966 documentary on Mohawk high steel workers that also documents the 1907 collapse
References
External links
Pont de Québec timeline
The Collapse of the Quebec City Bridge
Ritual of the Calling of an Engineer
The Iron Ring (archive)
Photo Centre Span Collapse
Article on "Bridge Collapse Cases/Quebec Bridge" at MatDL
3D model
Canadian National Railway bridges in Canada
Railway bridges in Quebec
Bridges in Quebec City
Bridges completed in 1917
Cantilever bridges in Canada
Truss bridges in Canada
Road-rail bridges
Bridge disasters in Canada
Bridge disasters caused by engineering error
Bridge disasters caused by construction error
Historic Civil Engineering Landmarks
Bridges over the Saint Lawrence River
National Historic Sites in Quebec
Transport in Lévis, Quebec
Buildings and structures in Lévis, Quebec
Road bridges in Quebec
1919 establishments in Quebec
Former toll bridges in Canada
Steel bridges in Canada
1907 disasters in Canada
1916 disasters in Canada | Quebec Bridge | Engineering | 3,250 |
265,986 | https://en.wikipedia.org/wiki/Nickel%20hydride | Nickel hydride is either an inorganic compound of the formula NiH or any of a variety of coordination complexes. It was discovered by Polish chemist Bogdan Baranowski in 1958.
Binary nickel hydrides and related materials
"The existence of definite hydrides of nickel and platinum is in doubt". This observation does not preclude the existence of nonstoichiometric hydrides. Indeed, nickel is a widely used hydrogenation catalyst. Experimental studies on nickel hydrides are rare and principally theoretical.
Hydrogen hardens nickel (as it does most metals), inhibiting dislocations in the nickel crystal lattice from sliding past one another. Varying the amount of alloying hydrogen and the form of its presence in the nickel hydride (precipitated phase) controls qualities such as the hardness, ductility, and tensile strength of the resulting nickel hydride. Nickel hydride with increased hydrogen content can be made harder and stronger than nickel, but such nickel hydride is also less ductile than nickel. Loss of ductility occurs because cracks maintain sharp points (the hydrogen suppresses elastic deformation) and because voids form under tension as the hydride decomposes. Hydrogen embrittlement can be a problem for nickel used in high-temperature turbine applications.
In the narrow range of stoichiometries adopted by nickel hydride, distinct structures are claimed. At room temperature, the most stable form of nickel is the face-centred cubic (FCC) structure, α-nickel. It is a relatively soft metallic material that can dissolve only a very small concentration of hydrogen, no more than 0.002 wt% even at high temperatures, and only about 0.00005% at room temperature. The solid-solution phase with dissolved hydrogen that maintains the same structure as the original nickel is termed the α-phase. At 25 °C, 6 kbar of hydrogen pressure is needed to form the β-nickel hydride phase, and the hydrogen desorbs at pressures below 3.4 kbar.
Surface
Hydrogen dissociates on nickel surfaces. The dissociation energies on the Ni(111), Ni(100), and Ni(110) crystal faces are respectively 46, 52, and 36 kJ/mol. The hydrogen desorbs from each of these surfaces at distinct temperatures: 320–380, 220–360, and 230–430 K.
High pressure phases
Crystallographically distinct phases of nickel hydride are produced with hydrogen gas at 600 MPa, or electrolytically. The crystal form is face-centred cubic, termed β-nickel hydride. Hydrogen-to-nickel atomic ratios are up to one, with hydrogen occupying octahedral sites. The density of the β-hydride is 7.74 g/cm3. It is grey. At a current density of 1 amp per square decimetre, in 0.5 mol/litre sulfuric acid with thiourea, a surface layer of nickel is converted to nickel hydride. This surface is replete with cracks up to millimetres long. The direction of cracking is in the {001} plane of the original nickel crystals. The lattice constant of nickel hydride is 3.731 Å, which is 5.7% greater than that of nickel.
The near-stoichiometric NiH is unstable and loses hydrogen at pressures below 340 MPa.
Molecular nickel hydrides
A large number of nickel hydride complexes are known. Illustrative is the complex trans-NiHCl(P(C6H11)3)2, which contains two tricyclohexylphosphine ligands.
References
See also
Solid solution
Lattice energy
Metal hydrides
Nickel compounds
Nickel alloys | Nickel hydride | Chemistry | 757 |
39,144,557 | https://en.wikipedia.org/wiki/Potassium%20osmate | Potassium osmate is the inorganic compound with the formula K2[OsO2(OH)4]. This diamagnetic purple salt contains osmium in the +6 (VI) oxidation state. It dissolves in water to give a red solution, in ethanol to give a pink solution, and in methanol to give a blue solution. The salt gained attention as a catalyst for the asymmetric dihydroxylation of olefins.
Structure
The complex anion is octahedral. As in related d2 dioxo complexes, the two oxo ligands are mutually trans. The Os=O and Os-OH distances are 1.75(2) and 1.99(2) Å, respectively. It is a relatively rare example of a metal oxo complex that obeys the 18-electron rule.
Preparation
The compound was first reported by Edmond Frémy in 1844. Potassium osmate is prepared by reducing osmium tetroxide with ethanol:
2 OsO4 + C2H5OH + 5 KOH → CH3CO2K + 2 K2[OsO2(OH)4]
Alkaline oxidative fusion of osmium metal also affords this salt.
See also
Sodium hexachloroosmate
References
Osmium compounds
Oxides
Potassium compounds
Transition metal oxyanions | Potassium osmate | Chemistry | 276 |
16,255,496 | https://en.wikipedia.org/wiki/Zirconyl%20chloride | Zirconyl chloride is the inorganic compound with the formula of [Zr4(OH)8(H2O)16]Cl8(H2O)12, more commonly written ZrOCl2·8H2O, and referred to as zirconyl chloride octahydrate. It is a white solid and is the most common water-soluble derivative of zirconium. A compound with the formula ZrOCl2 has not been characterized.
Production and structure
The salt is produced by hydrolysis of zirconium tetrachloride or by treating zirconium oxide with hydrochloric acid. It adopts a tetrameric structure based on the cation [Zr4(OH)8(H2O)16]8+, which features four pairs of hydroxide bridging ligands linking four Zr(IV) centers. The chloride anions are not ligands, consistent with the high oxophilicity of Zr(IV). The salt crystallizes as tetragonal crystals.
See also
Zirconyl acetate
References
External links
MSDS Data
MSDS data Sigma-Aldrich
Zirconium(IV) compounds
Chlorides
Metal halides
Oxychlorides | Zirconyl chloride | Chemistry | 253 |
5,365,799 | https://en.wikipedia.org/wiki/Elk%20State%20Park | Elk State Park is a Pennsylvania state park in Jones Township, Elk County and Sergeant Township, McKean County, Pennsylvania, in the United States. East Branch Clarion River Lake is a man-made lake within the park. The lake and streams in the park are stocked with cold- and warm-water fish. The park's woods are open to hunting.
Recreation
East Branch Clarion River Lake
East Branch Clarion River Lake was constructed by the U.S. Army Corps of Engineers by damming the East Branch of the Clarion River. Construction of the rolled earth, impervious core dam was authorized by the Flood Control Act of 1944. The lake is one of sixteen flood control projects administered by the Pittsburgh District of the Army Corps of Engineers. East Branch Clarion River Lake helps to provide flood protection for the Clarion River valley and the lower portions of the Allegheny River and the upper portions of the Ohio River.
The dam is located upstream from the confluence of the East and West branches of the Clarion River. It was constructed in 1952 for $9 million and controls the drainage area of the East Branch. It is estimated that East Branch Clarion River Lake has prevented $81 million in damage. The dam was especially important in curtailing damage during the 1972 floods caused by Hurricane Agnes.
East Branch Clarion River Lake also serves recreational purposes. Controlled releases of water during the dry summer months help to improve water quality and quantity for industrial and domestic uses. These releases of the lake waters also improve navigation on the rivers and enhance aquatic life.
East Branch Clarion River Lake is a destination for both fishermen and recreational boaters. The lake is home to cold-water fishing for walleye, smallmouth bass, muskellunge, and brook, lake, rainbow, and brown trout. The creeks of the park are stocked by the Pennsylvania Fish and Boat Commission. There is a native brook trout population in some of the smaller streams of the park. There is no limit on the power of the boats. All boats are required to have current registration with any state. Ice fishing and ice boating are common winter activities on East Branch Clarion River Lake.
Hunting
The park's woods are open to hunting. Hunters are expected to follow the rules and regulations of the Pennsylvania Game Commission. The common game species are black bears, squirrels, white-tailed deer, and turkeys. The hunting of groundhogs is prohibited. Hunters also use the park to gain access to the nearby State Game Lands and Elk State Forest.
Nearby state parks
The following state parks are in the vicinity of Elk State Park:
Bendigo State Park (Elk County)
Bucktail State Park Natural Area (Cameron and Clinton Counties)
Kinzua Bridge State Park (McKean County)
Parker Dam State Park (Clearfield County)
Sinnemahoning State Park (Cameron and Potter Counties)
Sizerville State Park (Cameron and Potter Counties)
References
External links
State parks of Pennsylvania
United States Army Corps of Engineers
Protected areas established in 1963
Parks in Elk County, Pennsylvania
Parks in McKean County, Pennsylvania
1963 establishments in Pennsylvania
Protected areas of Elk County, Pennsylvania
Protected areas of McKean County, Pennsylvania | Elk State Park | Engineering | 628 |
62,684,129 | https://en.wikipedia.org/wiki/Jennifer%20Dionne | Jennifer (Jen) Dionne is an American scientist and pioneer of nanophotonics. She is currently a full professor of materials science and engineering at Stanford University and, by courtesy, of radiology, and also a Chan Zuckerberg Biohub Investigator. She is Deputy Director of Q-NEXT, a National Quantum Information Science Research Center funded by the DOE. From 2020 to 2024, she served as Stanford's inaugural Vice Provost of Shared Facilities, where she advanced funding, infrastructure, education, and staff support within shared facilities. During this time, she was also Director of the Department of Energy's "Photonics at Thermodynamic Limits" Energy Frontier Research Center (EFRC), which strives to create thermodynamic engines driven by light. She is also an editor of the ACS journal Nano Letters. Dionne's research develops photonic materials and methods to observe and control chemical and biological processes as they unfold with nanometer-scale resolution, emphasizing critical challenges in global health and sustainability.
Early life and education
Dionne was born October 28, 1981, in Warwick, Rhode Island, to Sandra Dionne (Draper), an intensive care unit nurse, and George Dionne, a cabinet maker. She grew up figure skating, but also enjoyed science and math. As a student at Bay View Academy, she was selected to be a student ambassador to Australia. She also participated in the Washington University Summer Scholars Program and the Harvard University Secondary School Program.
She attended Washington University in St. Louis, where she received bachelor's degrees in physics and in systems science and mathematics in 2003. There, she served on the mission control of Steve Fossett's first attempted solo hot air balloon circumnavigation. She also worked as student lead of the Crow Observatory.
She then received her master's and doctoral degrees in applied physics from Caltech in 2009, advised by Harry Atwater. At Caltech, she was named an Everhart Lecturer and awarded the Francis and Milton Clauser Prize for Best Ph.D. Thesis, recognizing her work developing the first negative-refractive-index material at visible wavelengths and nanoscale Si-based photonic modulators. Before starting her faculty position at Stanford, she spent a year as a postdoctoral fellow in chemistry at Berkeley and Lawrence Berkeley National Lab, advised by Paul Alivisatos.
Career
Dionne began as an assistant professor at Stanford in March, 2010. In 2016, she was promoted to associate professor, and became an affiliate faculty of the Wu Tsai Neuroscience Institute, Bio-X, and the Precourt Institute for Energy. In 2019, she joined the department of radiology as a courtesy faculty. In 2019–2021, she was director of the TomKat Center for Sustainable Energy, and initiated their graduate student fellowship. In 2020, she became a senior fellow of the Precourt Institute and was appointed senior associate vice provost for shared facilities/research platforms. In her vice provost role, she helped Stanford to modernize shared research facilities across the schools of engineering, medicine, humanities and sciences, earth sciences, and SLAC. She initiated the Community for Shared Research Platforms (c-ShaRP), which has enabled improved education, instrumentation, organization, staffing, and translational efforts in the shared facilities.
In her research, Dionne is a pioneer in manipulating light at the atomic and molecular scale. Under Dionne's leadership, her lab helped to establish the field of quantum plasmonics. She also made critical contributions to the field of plasmon photocatalysis, including developing combined optical and environmental electron microscopy to image chemical transformations with near-atomic-scale resolution. Her work in plasmon catalysis could enable sustainable materials manufacturing, overturning the traditional trade-offs in thermal catalysis between selectivity and activity. Her group is also credited with developing the first high-quality-factor phase-gradient metasurfaces for resonant beam-shaping and beam-steering. Dionne uses this platform to detect pathogens, and view the intricacies of molecular-to-cellular structure, binding, and dynamics.
Awards
In 2011, MIT Technology Review Top Innovator under 35
In 2012, Washington University in St. Louis Outstanding Young Alum Award
In 2013, Oprah's 50 Things That Will Make You Say "Wow!"
In 2014, the Presidential Early Career Award for Scientists and Engineers given by President Barack Obama
In 2015, the Sloan Research Fellowship
In 2015, the Dreyfus Teacher-Scholar Award
In 2016, the Adolph Lomb Medal from Optica/the Optical Society of America
In 2017, the Moore Inventor's Fellowship
In 2019, the NIH Director's New Innovator Award
In 2019, the Alan T. Waterman Award for top US Scientist under 40, National Science Foundation
In 2021, a Fellow of The Optical Society
Patents
Patents include:
Metal Oxide Si field effect plasmonic modulator
Quantum converting nanoparticles as electrical field sensors
Method and structure for plasmonic optical trapping of nanoscale particles
Slot waveguide for color display
Direct detection of nucleic acids and proteins
Multiplexed nanophotonic microarray biosensor
A method for compact and low-cost vibrational spectroscopy platforms
References
Washington University in St. Louis physicists
Washington University in St. Louis alumni
California Institute of Technology alumni
Stanford University faculty
American women physicists
Year of birth missing (living people)
Living people
21st-century American women scientists
Women in optics
American optical engineers
American optical physicists
American women engineers
American materials scientists
21st-century American engineers
21st-century American physicists
Metamaterials scientists
American nanotechnologists
Recipients of the Presidential Early Career Award for Scientists and Engineers | Jennifer Dionne | Materials_science | 1,156 |
3,987,703 | https://en.wikipedia.org/wiki/Book%20embedding | In graph theory, a book embedding is a generalization of planar embedding of a graph to embeddings in a book, a collection of half-planes all having the same line as their boundary. Usually, the vertices of the graph are required to lie on this boundary line, called the spine, and the edges are required to stay within a single half-plane. The book thickness of a graph is the smallest possible number of half-planes for any book embedding of the graph. Book thickness is also called pagenumber, stacknumber or fixed outerthickness. Book embeddings have also been used to define several other graph invariants including the pagewidth and book crossing number.
Every graph with n vertices has book thickness at most ⌈n/2⌉, and this formula gives the exact book thickness for complete graphs. The graphs with book thickness one are the outerplanar graphs. The graphs with book thickness at most two are the subhamiltonian graphs, which are always planar; more generally, every planar graph has book thickness at most four. It is NP-hard to determine the exact book thickness of a given graph, with or without knowing a fixed vertex ordering along the spine of the book. Testing the existence of a three-page book embedding of a graph, given a fixed ordering of the vertices along the spine of the embedding, has unknown computational complexity: it is neither known to be solvable in polynomial time nor known to be NP-hard.
One of the original motivations for studying book embeddings involved applications in VLSI design, in which the vertices of a book embedding represent components of a circuit and the wires represent connections between them. Book embedding also has applications in graph drawing, where two of the standard visualization styles for graphs, arc diagrams and circular layouts, can be constructed using book embeddings.
In transportation planning, the different sources and destinations of foot and vehicle traffic that meet and interact at a traffic light can be modeled mathematically as the vertices of a graph, with edges connecting different source-destination pairs. A book embedding of this graph can be used to design a schedule that lets all the traffic move across the intersection with as few signal phases as possible. In bioinformatics problems involving the folding structure of RNA, single-page book embeddings represent classical forms of nucleic acid secondary structure, and two-page book embeddings represent pseudoknots. Other applications of book embeddings include abstract algebra and knot theory.
History
The notion of a book, as a topological space, was defined by C. A. Persinger and Gail Atneosen in the 1960s. As part of this work, Atneosen already considered embeddings of graphs in books. The embeddings he studied used the same definition as embeddings of graphs into any other topological space: vertices are represented by distinct points, edges are represented by curves, and the only way that two edges can intersect is for them to meet at a common endpoint.
In the early 1970s, Paul C. Kainen and L. Taylor Ollmann developed a more restricted type of embedding that came to be used in most subsequent research. In their formulation, the graph's vertices must be placed along the spine of the book, and each edge must lie in a single page.
Important milestones in the later development of book embeddings include the proof by Mihalis Yannakakis in the late 1980s that planar graphs have book thickness at most four, and the discovery in the late 1990s of close connections between book embeddings and bioinformatics.
Definitions
A book is a particular kind of topological space, also called a fan of half-planes. It consists of a single line ℓ, called the spine or back of the book, together with a collection of one or more half-planes, called the pages or leaves of the book, each having the spine as its boundary. Books with a finite number k of pages can be embedded into three-dimensional space, for instance by choosing ℓ to be the z-axis of a Cartesian coordinate system and choosing the pages to be the half-planes whose dihedral angle with respect to the xz-plane is an integer multiple of 2π/k.
A book drawing of a finite graph G onto a book B is a drawing of G on B such that every vertex of G is drawn as a point on the spine of B, and every edge of G is drawn as a curve that lies within a single page of B. The k-page book crossing number of G is the minimum number of crossings in a k-page book drawing.
A book embedding of G onto B is a book drawing that forms a graph embedding of G into B. That is, it is a book drawing of G on B that does not have any edge crossings.
Every finite graph has a book embedding onto a book with a large enough number of pages. For instance, it is always possible to embed each edge of the graph on its own separate page.
The book thickness, pagenumber, or stack number of G is the minimum number of pages required for a book embedding of G.
Another parameter that measures the quality of a book embedding, beyond its number of pages, is its pagewidth. This is defined analogously to cutwidth as the maximum number of edges that can be crossed by a ray perpendicular to the spine within a single page. Equivalently (for book embeddings in which each edge is drawn as a monotonic curve), it is the maximum size of a subset of edges within a single page such that the intervals defined on the spine by pairs of endpoints of the edges all intersect each other.
It is crucial for these definitions that edges are only allowed to stay within a single page of the book. As Atneosen already observed, if edges may instead pass from one page to another across the spine of the book, then every graph may be embedded into a three-page book. For such a three-page topological book embedding in which spine crossings are allowed, every graph can be embedded with at most a logarithmic number of spine crossings per edge, and some graphs need this many spine crossings.
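These definitions can be made concrete for the case of a fixed spine order: two edges must be placed on different pages exactly when their endpoint intervals interleave along the spine, so a one-page embedding exists when no two edges interleave, and a two-page embedding exists when the "conflict graph" of interleaving pairs is two-colorable. The following Python sketch tests the two-page case for a given order; it is an illustration, not a book-thickness solver, since the vertex order stays fixed.

from itertools import combinations

def interleave(e, f):
    """Edges (as sorted spine-position pairs) cross iff they strictly interleave."""
    (a, b), (c, d) = sorted(e), sorted(f)
    if (a, b) > (c, d):
        (a, b), (c, d) = (c, d), (a, b)
    return a < c < b < d

def two_page_assignment(order, edges):
    """Given a fixed spine order, greedily 2-color the conflict graph of
    interleaving edge pairs, or return None if more than two pages are needed."""
    pos = {v: i for i, v in enumerate(order)}
    es = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    conflicts = {i: [] for i in range(len(es))}
    for i, j in combinations(range(len(es)), 2):
        if interleave(es[i], es[j]):
            conflicts[i].append(j)
            conflicts[j].append(i)
    page = {}
    for start in range(len(es)):
        if start in page:
            continue
        page[start] = 0
        stack = [start]
        while stack:
            i = stack.pop()
            for j in conflicts[i]:
                if j not in page:
                    page[j] = 1 - page[i]
                    stack.append(j)
                elif page[j] == page[i]:
                    return None  # odd conflict cycle: this order needs more than 2 pages
    return [page[i] for i in range(len(es))]

# K4 with spine order 0,1,2,3: only the "diagonals" (0,2) and (1,3) interleave,
# so they are forced onto different pages; two pages suffice.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(two_page_assignment([0, 1, 2, 3], k4))  # prints [0, 0, 0, 0, 1, 0]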
Specific graphs
The book thickness of the complete graph K5 is three: as a non-planar graph its book thickness is greater than two, but a book embedding with three pages exists. More generally, the book thickness of every complete graph Kn with n ≥ 4 vertices is exactly ⌈n/2⌉. This result also gives an upper bound on the maximum possible book thickness of any n-vertex graph.
The two-page crossing number of the complete graph Kn is

\frac{1}{4}\left\lfloor\frac{n}{2}\right\rfloor\left\lfloor\frac{n-1}{2}\right\rfloor\left\lfloor\frac{n-2}{2}\right\rfloor\left\lfloor\frac{n-3}{2}\right\rfloor,

matching a still-unproven conjecture of Anthony Hill on what the unrestricted crossing number of this graph should be. That is, if Hill's conjecture is correct, then the drawing of this graph that minimizes the number of crossings is a two-page drawing.
The book thickness of the complete bipartite graph K_{a,b} is at most min(a, b). To construct a drawing with this book thickness, for each vertex on the smaller side of the bipartition, one can place the edges incident with that vertex on their own page. This bound is not always tight; for instance, K_{4,4} has book thickness three, not four. However, when the two sides of the graph are very unbalanced, with b > a(a − 1), the book thickness of K_{a,b} is exactly ⌈a/2⌉.
For the Turán graph (a complete multipartite graph formed from n independent sets of k vertices per independent set, with an edge between every two vertices from different independent sets), the book thickness is known to lie between explicit lower and upper bounds, and when n is odd the upper bound can be improved.
The book thickness of binary de Bruijn graphs, shuffle-exchange graphs, and cube-connected cycles (when these graphs are large enough to be nonplanar) is exactly three.
Properties
Planarity and outerplanarity
The book thickness of a given graph G is at most one if and only if G is an outerplanar graph. An outerplanar graph is a graph that has a planar embedding in which all vertices belong to the outer face of the embedding. For such a graph, placing the vertices in the same order along the spine as they appear in the outer face provides a one-page book embedding of the given graph. (An articulation point of the graph will necessarily appear more than once in the cyclic ordering of vertices around the outer face, but only one of those copies should be included in the book embedding.) Conversely, a one-page book embedding is automatically an outerplanar embedding. For, if a graph is embedded on a single page, and another half-plane is attached to the spine to extend its page to a complete plane, then the outer face of the embedding includes the entire added half-plane, and all vertices lie on this outer face.
Every two-page book embedding is a special case of a planar embedding, because the union of two pages of a book is a space topologically equivalent to the whole plane. Therefore, every graph with book thickness two is automatically a planar graph. More precisely, the book thickness of a graph G is at most two if and only if G is a subgraph of a planar graph that has a Hamiltonian cycle. If a graph is given a two-page embedding, it can be augmented to a planar Hamiltonian graph by adding (into any page) extra edges between any two consecutive vertices along the spine that are not already adjacent, and between the first and last spine vertices. The Goldner–Harary graph provides an example of a planar graph that does not have book thickness two: it is a maximal planar graph, so it is not possible to add any edges to it while preserving planarity, and it does not have a Hamiltonian cycle. Because of this characterization by Hamiltonian cycles, graphs that have two-page book embeddings are also known as subhamiltonian graphs.
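For a known Hamiltonian cycle this characterization is effective: taking the cycle as the spine order, the cycle edges cross nothing, and the remaining chord edges admit a two-page split exactly when their crossing (conflict) graph is bipartite. A sketch of this test in Python (illustrative names; it returns the two pages of chords, or None on failure):

```python
from collections import deque

def two_pages_from_hamiltonian_cycle(cycle, chords):
    """cycle: vertices in Hamiltonian-cycle order (used as the spine);
    chords: the non-cycle edges.  2-colour the chord conflict graph by
    BFS; a proper colouring assigns each chord to page 0 or page 1."""
    pos = {v: i for i, v in enumerate(cycle)}
    iv = [tuple(sorted((pos[u], pos[v]))) for u, v in chords]
    cross = lambda e, f: e[0] < f[0] < e[1] < f[1] or f[0] < e[0] < f[1] < e[1]
    colour = [None] * len(iv)
    for s in range(len(iv)):
        if colour[s] is not None:
            continue
        colour[s] = 0
        queue = deque([s])
        while queue:
            i = queue.popleft()
            for j in range(len(iv)):
                if j != i and cross(iv[i], iv[j]):
                    if colour[j] is None:
                        colour[j] = 1 - colour[i]
                        queue.append(j)
                    elif colour[j] == colour[i]:
                        return None   # odd cycle of crossings: no two-page split
    return ([c for c, k in zip(chords, colour) if k == 0],
            [c for c, k in zip(chords, colour) if k == 1])

# K4 with Hamiltonian cycle 0-1-2-3: its two diagonals cross, so they
# land on opposite pages (the cycle edges may join either page).
print(two_pages_from_hamiltonian_cycle([0, 1, 2, 3], [(0, 2), (1, 3)]))
```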
All planar graphs whose maximum degree is at most four have book thickness at most two. Planar 3-trees have book thickness at most three. More generally, all planar graphs have book thickness at most four. Mihalis Yannakakis claimed in 1986 that there exist planar graphs with book thickness exactly four; however, a detailed proof announced in his conference paper was omitted from the subsequent journal paper, and the claim remained unresolved until 2020, when Bekos et al. presented planar graphs of treewidth 4 that require four pages in any book embedding.
Behavior under subdivisions
Subdividing every edge of a graph into two-edge paths, by adding new vertices within each edge, may sometimes increase its book thickness. For instance, the diamond graph has book thickness one (it is outerplanar) but its subdivision has book thickness two (it is planar and subhamiltonian but not outerplanar). However, this subdivision process can also sometimes significantly reduce the book thickness of the subdivided graph. For instance, the book thickness of the complete graph Kn is proportional to its number of vertices, but subdividing each of its edges into a two-edge path produces a subdivision whose book thickness is much smaller, only O(√n). Despite the existence of examples such as this one, Blankenship and Oporowski conjectured that a subdivision's book thickness cannot be too much smaller than that of the original graph. Specifically, they conjectured that there exists a function f such that, for every graph G and for the graph H formed by replacing every edge in G by a two-edge path, if the book thickness of H is t then the book thickness of G is at most f(t). Their conjecture turned out to be false: graphs formed by Cartesian products of stars and triangular tilings have unbounded book thickness, but subdividing their edges into six-edge paths reduces their book thickness to three.
Relation to other graph invariants
Book thickness is related to thickness, the number of planar graphs needed to cover the edges of the given graph. A graph has thickness k if it can be drawn in the plane, and its edges colored with k colors, in such a way that edges of the same color as each other do not cross. Analogously, a graph has book thickness k if it can be drawn in a half plane, with its vertices on the boundary of the half plane, with its edges colored with k colors with no crossing between two edges of the same color. In this formulation of book thickness, the colors of the edges correspond to the pages of the book embedding. However, thickness and book thickness can be very different from each other: there exist graphs (subdivisions of complete graphs) that have unbounded book thickness, despite having thickness two.
Graphs of treewidth w have book thickness at most w + 1, and this bound is tight for w ≥ 3. Graphs with m edges have book thickness O(√m), and graphs of genus g have book thickness O(√g). More generally, it has been stated that every minor-closed graph family has bounded book thickness. However, the proof of this claim rests on a previous claim that graphs embedded on non-orientable surfaces have bounded book thickness, for which a detailed proof has not been supplied. The 1-planar graphs, which are not closed under minors, also have bounded book thickness, but some 1-planar graphs have book thickness at least four.
Every shallow minor of a graph of bounded book thickness is a sparse graph, whose ratio of edges to vertices is bounded by a constant that depends only on the depth of the minor and on the book thickness. That is, in the terminology of Nešetřil and Ossona de Mendez, the graphs of bounded book thickness have bounded expansion. However, even graphs of bounded degree, a much stronger requirement than having bounded expansion, can have unbounded book thickness.
Because graphs of book thickness two are planar graphs, they obey the planar separator theorem: they have separators, subsets of vertices whose removal splits the graph into pieces with at most 2n/3 vertices each, with only O(√n) vertices in the separator, where n is the number of vertices in the graph. However, there exist graphs of book thickness three that do not have separators of sublinear size.
The edges within a single page of a book embedding behave in some ways like a stack data structure. This can be formalized by considering an arbitrary sequence of push and pop operations on a stack, and forming a graph in which the stack operations correspond to the vertices of the graph, placed in sequence order along the spine of a book embedding. Then, if one draws an edge from each pop operation that pops an object x from the stack, to the previous push operation that pushed x, the resulting graph will automatically have a one-page embedding. For this reason, the page number of a graph has also been called its stack number. In the same way, one may consider an arbitrary sequence of enqueue and dequeue operations of a queue data structure, and form a graph that has these operations as its vertices, placed in order on the spine of a single page, with an edge between each enqueue operation and the corresponding dequeue. Then, in this graph, each two edges will either cross or cover disjoint intervals on the spine. By analogy, researchers have defined a queue embedding of a graph to be an embedding in a topological book such that each vertex lies on the spine, each edge lies in a single page, and each two edges in the same page either cross or cover disjoint intervals on the spine. The minimum number of pages needed for a queue embedding of a graph is called its queue number.
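This correspondence is easy to make executable. In the Python sketch below (illustrative names), operation times serve as spine positions, each pop or dequeue is matched to its push or enqueue, and the resulting chords are checked for the stack property (no two chords cross) and the queue property (no two chords nest):

```python
from collections import deque

def stack_edges(ops):
    """ops: sequence of "push"/"pop".  Pair each pop with its push;
    operation times are the spine positions."""
    stack, edges = [], []
    for t, op in enumerate(ops):
        if op == "push":
            stack.append(t)
        else:
            edges.append((stack.pop(), t))
    return edges

def queue_edges(ops):
    """ops: sequence of "enq"/"deq".  Pair each dequeue with its enqueue."""
    q, edges = deque(), []
    for t, op in enumerate(ops):
        if op == "enq":
            q.append(t)
        else:
            edges.append((q.popleft(), t))
    return edges

cross = lambda e, f: e[0] < f[0] < e[1] < f[1] or f[0] < e[0] < f[1] < e[1]
nest = lambda e, f: e[0] < f[0] < f[1] < e[1] or f[0] < e[0] < e[1] < f[1]

S = stack_edges(["push", "push", "pop", "push", "pop", "pop"])
Q = queue_edges(["enq", "enq", "deq", "enq", "deq", "deq"])
assert not any(cross(e, f) for e in S for f in S)   # stack chords never cross
assert not any(nest(e, f) for e in Q for f in Q)    # queue chords never nest
```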
Computational complexity
Finding the book thickness of a graph is NP-hard. This follows from the fact that finding Hamiltonian cycles in maximal planar graphs is NP-complete. In a maximal planar graph, the book thickness is two if and only if a Hamiltonian cycle exists. Therefore, it is also NP-complete to test whether the book thickness of a given maximal planar graph is two.
If an ordering of the vertices of a graph along the spine of an embedding is fixed, then a two-page embedding (if it exists) can be found in linear time, as an instance of planarity testing for a graph formed by augmenting the given graph with a cycle connecting the vertices in their spine ordering. Unger claimed in 1992 that finding three-page embeddings with a fixed spine ordering can also be performed in polynomial time, although his writeup of this result omits many details. However, for graphs that require four or more pages, the problem of finding an embedding with the minimum possible number of pages remains NP-hard, via an equivalence to the NP-hard problem of coloring circle graphs, the intersection graphs of chords of a circle. Given a graph G with a fixed spine ordering for its vertices, drawing these vertices in the same order around a circle and drawing the edges of G as line segments produces a collection of chords representing G. One can then form a circle graph that has the chords of this diagram as vertices and crossing pairs of chords as edges. A coloring of the circle graph represents a partition of the edges of G into subsets that can be drawn without crossing on a single page. Therefore, an optimal coloring is equivalent to an optimal book embedding. Since circle graph coloring with four or more colors is NP-hard, and since any circle graph can be formed in this way from some book embedding problem, it follows that optimal book embedding is also NP-hard. For a fixed vertex ordering on the spine of a two-page book drawing, it is also NP-hard to minimize the number of crossings when this number is nonzero.
If the spine ordering is unknown but a partition of the edges into two pages is given, then it is possible to find a 2-page embedding (if it exists) in linear time by an algorithm based on SPQR trees. However, it is NP-complete to find a 2-page embedding when neither the spine ordering nor the edge partition is known. Finding the book crossing number of a graph is also NP-hard, because of the NP-completeness of the special case of testing whether the 2-page crossing number is zero.
As a consequence of bounded expansion, the subgraph isomorphism problem, of finding whether a pattern graph of bounded size exists as a subgraph of a larger graph, can be solved in linear time when the larger graph has bounded book thickness. The same is true for detecting whether the pattern graph is an induced subgraph of the larger graph, or whether it has a graph homomorphism to the larger graph. For the same reason, the problem of testing whether a graph of bounded book thickness obeys a given formula of first order logic is fixed-parameter tractable.
Bekos, Kaufmann, and Zielke describe a system for finding optimal book embeddings by transforming the problem into an instance of the Boolean satisfiability problem and applying a SAT solver to the resulting problem. They state that their system is capable of finding an optimal embedding for 400-vertex maximal planar graphs in approximately 20 minutes.
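A brute-force counterpart of such SAT-based systems, feasible only for very small graphs, simply enumerates spine orders and page assignments. The Python sketch below (illustrative names) is included only to make the search space concrete; a SAT solver explores the same space far more efficiently.

```python
from itertools import permutations, product

def book_thickness(vertices, edges, max_pages=4):
    """Smallest k admitting a k-page embedding (exhaustive; tiny inputs only)."""
    def cross(e, f):
        (a, b), (c, d) = e, f
        return a < c < b < d or c < a < d < b
    m = len(edges)
    for k in range(1, max_pages + 1):
        for order in permutations(vertices):
            pos = {v: i for i, v in enumerate(order)}
            iv = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
            for pages in product(range(k), repeat=m):
                if all(not cross(iv[i], iv[j])
                       for i in range(m) for j in range(i + 1, m)
                       if pages[i] == pages[j]):
                    return k, order, pages
    return None

V = list(range(5))
E = [(u, v) for u in V for v in V if u < v]     # K5
print(book_thickness(V, E)[0])                  # 3, matching ceil(5/2)
```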
Applications
Fault-tolerant multiprocessing
One of the main motivations for studying book embedding, cited by Chung, Leighton, and Rosenberg, involves an application in VLSI design, to the organization of fault-tolerant multiprocessors. In the DIOGENES system developed by these authors, the CPUs of a multiprocessor system are arranged into a logical sequence corresponding to the spine of a book (although this sequence may not necessarily be placed along a line in the physical layout of this system). Communication links connecting these processors are grouped into "bundles" which correspond to the pages of a book and act like stacks: connecting one of the processors to the start of a new communications link pushes all the previous links upward in the bundle, and connecting another processor to the end of a communication link connects it to the one at the bottom of the bundle and pops all the other ones down. Because of this stack behavior, a single bundle can handle a set of communications links that form the edges of a single page in a book embedding. By organizing the links in this way, a wide variety of different network topologies can be implemented, regardless of which processors have become faulty, as long as enough non-faulty processors remain to implement the network. The network topologies that can be implemented by this system are exactly the ones that have book thickness at most equal to the number of bundles that have been made available.
Book embedding may also be used to model the placement of wires connecting VLSI components into the layers of a circuit.
Stack sorting
Another application, also cited by Chung, Leighton, and Rosenberg, concerns sorting permutations using stacks.
An influential result of Donald Knuth showed that a system that processes a data stream by pushing incoming elements onto a stack and then, at appropriately chosen times, popping them from the stack onto an output stream can sort the data if and only if its initial order is described by a permutation that avoids the permutation pattern 231. Since then, there has been much work on similar problems of sorting data streams by more general systems of stacks and queues. In the system considered by Chung, Leighton, and Rosenberg, each element from an input data stream must be pushed onto one of several stacks. Then, once all of the data has been pushed in this way, the items are popped from these stacks (in an appropriate order) onto an output stream. As Chung et al. observe, a given permutation can be sorted by this system if and only if a certain graph, derived from the permutation, has a book embedding with the vertices in a certain fixed order along the spine and with a number of pages that is at most equal to the number of stacks.
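The single-stack case is simple to simulate: push each incoming element and pop greedily whenever the stack top is the next value needed in sorted order. A short Python sketch (illustrative names) that, by Knuth's result, succeeds exactly on the 231-avoiding permutations of 1, ..., n:

```python
def stack_sortable(perm):
    """Greedy one-stack sorter: pop whenever the top is the next value
    needed.  Succeeds iff `perm` (a permutation of 1..n) avoids 231."""
    stack, need = [], 1
    for x in perm:
        stack.append(x)
        while stack and stack[-1] == need:
            stack.pop()
            need += 1
    while stack and stack[-1] == need:
        stack.pop()
        need += 1
    return not stack            # empty stack means fully sorted output

assert stack_sortable([2, 1, 3])          # avoids 231
assert not stack_sortable([2, 3, 1])      # is the pattern 231 itself
```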
Traffic control
A book embedding may also be used to describe the phases of a traffic signal at a controlled intersection.
At an intersection, the incoming and outgoing lanes of traffic (including the ends of pedestrian crosswalks and bicycle lanes as well as lanes for motor vehicles) may be represented as the vertices of a graph, placed on the spine of a book embedding in their clockwise order around the junction. The paths through the intersection taken by traffic to get from an incoming lane to an outgoing lane may be represented as the edges of an undirected graph. For instance, this graph might have an edge from an incoming to an outgoing lane of traffic that both belong to the same segment of road, representing a U-turn from that segment back to that segment, only if U-turns are allowed at the junction. For a given subset of these edges, the subset represents a collection of paths that can all be traversed without interference from each other if and only if the subset does not include any pair of edges that would cross if the two edges were placed in a single page of a book embedding. Thus, a book embedding of this graph describes a partition of the paths into non-interfering subsets, and the book thickness of this graph (with its fixed embedding on the spine) gives the minimum number of distinct phases needed for a signalling schedule that includes all possible traffic paths through the junction.
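Under this model, computing a feasible (not necessarily minimum) signalling schedule amounts to greedily coloring the crossing-conflict graph. In the Python sketch below, identifiers and the example junction are invented for illustration; lanes are numbered clockwise around the junction and each path is a pair of lane endpoints.

```python
def greedy_phases(paths):
    """Group traffic paths into phases so that no phase contains two
    paths whose endpoints interleave in the clockwise lane order, i.e.
    two edges that would cross on one page of the book embedding."""
    def interleave(e, f):
        (a, b), (c, d) = sorted(e), sorted(f)
        return a < c < b < d or c < a < d < b
    phases = []
    for p in paths:
        for phase in phases:
            if not any(interleave(p, q) for q in phase):
                phase.append(p)
                break
        else:
            phases.append([p])
    return phases

# Six lane endpoints 0..5 around a junction; five paths through it.
# (Shared-endpoint conflicts could be added as an extra condition.)
print(greedy_phases([(0, 3), (1, 4), (2, 5), (0, 2), (3, 5)]))
```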
Graph drawing
Book embedding has also been frequently applied in the visualization of network data. Two of the standard layouts in graph drawing, arc diagrams and circular layouts, can be viewed as book embeddings, and book embedding has also been applied in the construction of clustered layouts, simultaneous embeddings, and three-dimensional graph drawings.
An arc diagram or linear embedding places vertices of a graph along a line, and draws the edges of the graph as semicircles either above or below this line, sometimes also allowing edges to be drawn on segments of the line. This drawing style corresponds to a book embedding with either one page (if all semicircles are above the line) or two pages (if both sides of the line are used), and was originally introduced as a way of studying the crossing numbers of graphs. Planar graphs that do not have two-page book embeddings may also be drawn in a similar way, by allowing their edges to be represented by multiple semicircles above and below the line. Such a drawing is not a book embedding by the usual definition, but has been called a topological book embedding. For every planar graph, it is always possible to find such an embedding in which each edge crosses the spine at most once.
In another drawing style, the circular layout, the vertices of a graph are placed on a circle and the edges are drawn either inside or outside the circle. Again, a placement of the edges within the circle (for instance as straight line segments) corresponds to a one-page book drawing, while a placement both inside and outside the circle corresponds to a two-page book drawing.
For one-page drawings of either style, it is important to keep the number of crossings small as a way of reducing the visual clutter of the drawing. Minimizing the number of crossings is NP-complete, but may be approximated with an approximation ratio of O(log² n), where n is the number of vertices. Minimizing the one-page or two-page crossing number is fixed-parameter tractable when parameterized by the cyclomatic number of the given graph, or by a combination of the crossing number and the treewidth of the graph. Heuristic methods for reducing the crossing complexity have also been devised, based e.g. on a careful vertex insertion order and on local optimization.
Two-page book embeddings with a fixed partition of the edges into pages can be interpreted as a form of clustered planarity, in which the given graph must be drawn in such a way that parts of the graph (the two subsets of edges) are placed in the drawing in a way that reflects their clustering. Two-page book embedding has also been used to find simultaneous embeddings of graphs, in which two graphs are given on the same vertex set and one must find a placement for the vertices in which both graphs are drawn planarly with straight edges.
Book embeddings with more than two pages have also been used to construct three-dimensional drawings of graphs. In particular, constructions for book embeddings that keep the degree of each vertex within each page low have been used as part of a method for embedding graphs into a three-dimensional grid of low volume.
RNA folding
In the study of how RNA molecules fold to form their structure, the standard form of nucleic acid secondary structure can be described diagrammatically as a chain of bases (the RNA sequence itself), drawn along a line, together with a collection of arcs above the line describing the basepairs of the structure. That is, although these structures actually have a complicated three-dimensional shape, their connectivity (when a secondary structure exists) can be described by a more abstract structure, a one-page book embedding. However, not all RNA folds behave in this simple way. Haslinger and Stadler have proposed a so-called "bi-secondary structure" for certain RNA pseudoknots that takes the form of a two-page book embedding: the RNA sequence is again drawn along a line, but the basepairs are drawn as arcs both above and below this line. In order to form a bi-secondary structure, a graph must have maximum degree at most three: each base can only participate in one arc of the diagram, in addition to the two links to its neighbors in the base sequence. Advantages of this formulation include the facts that it excludes structures that are actually knotted in space, and that it matches most known RNA pseudoknots.
Because the spine ordering is known in advance for this application, testing for the existence of a bi-secondary structure for a given basepairing is straightforward. The problem of assigning edges to the two pages in a compatible way can be formulated as either an instance of 2-satisfiability, or as a problem of testing the bipartiteness of the circle graph whose vertices are the basepairs and whose edges describe crossings between basepairs. Alternatively and more efficiently, as Haslinger and Stadler show, a bi-secondary structure exists if and only if the diagram graph of the input (a graph formed by connecting the bases into a cycle in their sequence order and adding the given basepairs as edges) is a planar graph. This characterization allows bi-secondary structures to be recognized in linear time as an instance of planarity testing.
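Because the spine order is fixed by the sequence itself, the bipartiteness test mentioned above takes only a few lines. A Python sketch (illustrative names) that also enforces the condition that each base occurs in at most one pair:

```python
def is_bisecondary(pairs):
    """pairs: base pairs (i, j) by sequence position.  True iff the arcs
    can be split above/below the line with no same-side crossings, i.e.
    the crossing graph of the arcs is bipartite."""
    used = set()
    for i, j in pairs:
        if i in used or j in used:
            return False            # each base may join at most one pair
        used.update((i, j))
    arcs = [tuple(sorted(p)) for p in pairs]
    cross = lambda e, f: e[0] < f[0] < e[1] < f[1] or f[0] < e[0] < f[1] < e[1]
    side = {}
    for s in range(len(arcs)):
        if s in side:
            continue
        side[s], todo = 0, [s]
        while todo:                 # DFS 2-colouring of the crossing graph
            i = todo.pop()
            for j in range(len(arcs)):
                if j != i and cross(arcs[i], arcs[j]):
                    if j not in side:
                        side[j] = 1 - side[i]
                        todo.append(j)
                    elif side[j] == side[i]:
                        return False    # odd crossing cycle: truly tertiary
    return True

print(is_bisecondary([(0, 7), (1, 6), (2, 9), (3, 8)]))   # True: a simple pseudoknot
```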
The connection between secondary structures and book embeddings has been used as part of a proof of the NP-hardness of certain problems in RNA secondary structure comparison. And if an RNA structure is tertiary rather than bi-secondary (that is, if it requires more than two pages in its diagram), then determining the page number is again NP-hard.
Computational complexity theory
Book embeddings have been used to study the computational complexity of the reachability problem in directed graphs. As has been observed, reachability for two-page directed graphs may be solved in unambiguous logarithmic space (the analogue, for logarithmic space complexity, of the class UP of unambiguous polynomial-time problems). However, reachability for three-page directed graphs requires the full power of nondeterministic logarithmic space. Thus, book embeddings seem intimately connected with the distinction between these two complexity classes.
The existence of expander graphs with constant page number is the key step in proving that there is no subquadratic-time simulation of two-tape non-deterministic Turing machines by one-tape non-deterministic Turing machines.
Other areas of mathematics
Researchers have studied applications of book thickness in abstract algebra, using graphs defined from the zero divisors of a finite local ring by making a vertex for each zero divisor and an edge for each pair of values whose product is zero.
In a multi-paper sequence, Dynnikov has studied the topological book embeddings of knots and links, showing that these embeddings can be described by a combinatorial sequence of symbols and that the topological equivalence of two links can be demonstrated by a sequence of local changes to the embeddings.
References
Topological graph theory
NP-complete problems | Book embedding | Mathematics | 6,227 |
2,880,000 | https://en.wikipedia.org/wiki/Theta%20Arietis | Theta Arietis, Latinised from θ Arietis, is the Bayer designation for a binary star system in the northern constellation of Aries. It is faintly visible to the naked eye with an apparent visual magnitude of 5.58. With an annual parallax shift of 7.61 mas, the distance to this star is an estimated 429 light-years, with a margin of error of about 10 light-years. It is drifting further away with a radial velocity of +6 km/s.
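The distance estimate follows directly from the quoted parallax by the standard conversion; a worked version (values rounded):

```latex
d = \frac{1000}{p\,[\mathrm{mas}]}\ \mathrm{pc}
  = \frac{1000}{7.61}\ \mathrm{pc}
  \approx 131\ \mathrm{pc}
  \approx 429\ \text{light-years}
```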
The primary, component A, is a white-hued, A-type main-sequence star with a stellar classification of A1 Vn. It is spinning at a rapid pace, as shown by its projected rotational velocity of 186 km/s. This rapid rotation causes the "nebulous" appearance of the absorption lines indicated by the 'n' suffix in the classification. In 2005, C. Neiner and associates classified it as a Be star because it displays emission features in the hydrogen Balmer lines.
In 2016, a solar-mass companion was reported in close orbit around this star, based on observations using adaptive optics with the Gemini North Telescope.
References
External links
Aladin previewer
Aladin sky atlas
HR 669
Image Theta Arietis
A-type main-sequence stars
Be stars
Binary stars
Aries (constellation)
Arietis, Theta
BD+19 0340
Arietis, 22
014191
010732
0669 | Theta Arietis | Astronomy | 290 |
34,302,118 | https://en.wikipedia.org/wiki/Balanced%20module | In the subfield of abstract algebra known as module theory, a right R module M is called a balanced module (or is said to have the double centralizer property) if every endomorphism of the abelian group M which commutes with all R-endomorphisms of M is given by multiplication by a ring element. Explicitly, for any additive endomorphism f, if fg = gf for every R endomorphism g, then there exists an r in R such that f(x) = xr for all x in M. For a module that is not balanced, some such f fails to be expressible this way.
In the language of centralizers, a balanced module is one satisfying the conclusion of the double centralizer theorem, that is, the only endomorphisms of the group M commuting with all the R endomorphisms of M are the ones induced by right multiplication by ring elements.
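In symbols (a standard formulation, stated here for convenience): writing E = End_R(M) for the endomorphism ring, so that M is a left E-module, the module M is balanced exactly when every E-linear endomorphism of M is right multiplication by some ring element:

```latex
E = \operatorname{End}_R(M), \qquad
\rho \colon R \to \operatorname{End}_E(M), \qquad \rho(r)(x) = xr,
```

with M balanced if and only if every element of End_E(M) lies in the image of ρ.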
A ring is called balanced if every right R module is balanced. It turns out that being balanced is a left-right symmetric condition on rings, and so there is no need to prefix it with "left" or "right".
The study of balanced modules and rings is an outgrowth of the study of QF-1 rings by C. J. Nesbitt and R. M. Thrall. This study was continued in V. P. Camillo's dissertation, and later the theory was fully developed by V. Dlab and C. M. Ringel, whose paper gives a particularly broad view with many examples. In addition to these references, K. Morita and H. Tachikawa have also contributed published and unpublished results. A partial list of authors contributing to the theory of balanced modules and rings can be found in the references.
Examples and properties
Examples
Semisimple rings are balanced.
Every nonzero right ideal over a simple ring is balanced.
Every faithful module over a quasi-Frobenius ring is balanced.
The double centralizer theorem for right Artinian rings states that any simple right R module is balanced.
The literature contains numerous constructions of nonbalanced modules.
It has been established that uniserial rings are balanced. Conversely, a balanced ring which is finitely generated as a module over its center is uniserial.
Among commutative Artinian rings, the balanced rings are exactly the quasi-Frobenius rings.
Properties
Being "balanced" is a categorical property for modules, that is, it is preserved by Morita equivalence. Explicitly, if F(–) is a Morita equivalence from the category of R modules to the category of S modules, and if M is balanced, then F(M) is balanced.
The structure of balanced rings has been completely determined, and is outlined in the literature.
In view of the last point, the property of being a balanced ring is a Morita invariant property.
The question of which rings have all finitely generated right R modules balanced has already been answered. This condition turns out to be equivalent to the ring R being balanced.
Notes
References
Module theory
Ring theory | Balanced module | Mathematics | 629 |
154,711 | https://en.wikipedia.org/wiki/Aerospace | Aerospace is a term used to collectively refer to the atmosphere and outer space. Aerospace activity is very diverse, with a multitude of commercial, industrial, and military applications. Aerospace engineering consists of aeronautics and astronautics. Aerospace organizations research, design, manufacture, operate, maintain, and repair both aircraft and spacecraft.
The boundary between the atmosphere and outer space is commonly placed at 100 km (62 mi) above the ground, on the physical grounds that at this altitude the air density becomes too low for a lifting body to generate meaningful lift force without exceeding orbital velocity.
Overview
In most industrial countries, the aerospace industry is a co-operation of the public and private sectors. For example, several states have a civilian space program funded by the government, such as National Aeronautics and Space Administration in the United States, European Space Agency in Europe, the Canadian Space Agency in Canada, Indian Space Research Organisation in India, Japan Aerospace Exploration Agency in Japan, Roscosmos State Corporation for Space Activities in Russia, China National Space Administration in China, SUPARCO in Pakistan, Iranian Space Agency in Iran, and Korea Aerospace Research Institute in South Korea.
Along with these public space programs, many companies produce technical tools and components such as spacecraft and satellites. Some known companies involved in space programs include Boeing, Cobham, Airbus, SpaceX, Lockheed Martin, RTX Corporation, MDA and Northrop Grumman. These companies are also involved in other areas of aerospace, such as the construction of aircraft.
History
Modern aerospace began with the engineer Sir George Cayley in 1799. Cayley proposed an aircraft with a "fixed wing and a horizontal and vertical tail," defining characteristics of the modern aeroplane.
The 19th century saw the creation of the Aeronautical Society of Great Britain (1866), followed in the early 20th century by the American Rocket Society and the Institute of Aeronautical Sciences, all of which made aeronautics a more serious scientific discipline. Airmen like Otto Lilienthal, who introduced cambered airfoils in 1891, used gliders to analyze aerodynamic forces. The Wright brothers were interested in Lilienthal's work and read several of his publications. They also found inspiration in Octave Chanute, an airman and the author of Progress in Flying Machines (1894). It was the preliminary work of Cayley, Lilienthal, Chanute, and other early aerospace engineers that brought about the first powered sustained flight at Kitty Hawk, North Carolina, on December 17, 1903, by the Wright brothers.
War and science fiction inspired scientists and engineers like Konstantin Tsiolkovsky and Wernher von Braun to achieve flight beyond the atmosphere. During World War II, Wernher von Braun led the development of the V-2 rocket.
The launch of Sputnik 1 in October 1957 started the Space Age, and on July 20, 1969 Apollo 11 achieved the first crewed Moon landing. In April 1981, the Space Shuttle Columbia launched, the start of regular crewed access to orbital space. A sustained human presence in orbital space started with "Mir" in 1986 and is continued by the "International Space Station". Space commercialization and space tourism are more recent features of aerospace.
Manufacturing
Aerospace manufacturing is a high-technology industry that produces "aircraft, guided missiles, space vehicles, aircraft engines, propulsion units, and related parts". Most of the industry is geared toward governmental work. For each original equipment manufacturer (OEM), the US government has assigned a Commercial and Government Entity (CAGE) code. These codes help to identify each manufacturer, repair facilities, and other critical aftermarket vendors in the aerospace industry.
In the United States, the Department of Defense and the National Aeronautics and Space Administration (NASA) are the two largest consumers of aerospace technology and products. Others include the very large airline industry. The aerospace industry employed 472,000 wage and salary workers in 2006. Most of those jobs were in Washington state and in California, with Missouri, New York and Texas also being important. The leading aerospace manufacturers in the U.S. are Boeing, United Technologies Corporation, SpaceX, Northrop Grumman and Lockheed Martin. As talented American employees age and retire, these manufacturers face an expanding labor shortfall. In order to supply the industrial sector with fresh workers, apprenticeship programs like the Aerospace Joint Apprenticeship Council (AJAC) collaborate with community colleges and aerospace firms in Washington state.
Important locations of the civilian aerospace industry worldwide include Washington state (Boeing), California (Boeing, Lockheed Martin, etc.) and Montreal, Quebec, Canada (Bombardier, Pratt & Whitney Canada) in North America; Toulouse, France (Airbus SE) and Hamburg, Germany (Airbus SE) in Europe; as well as São José dos Campos, Brazil (Embraer), Querétaro, Mexico (Bombardier Aerospace, General Electric Aviation) and Mexicali, Mexico (United Technologies Corporation, Gulfstream Aerospace) in Latin America.
In the European Union, aerospace companies such as Airbus SE, Safran, Thales, Dassault Aviation, Leonardo and Saab AB account for a large share of the global aerospace industry and research effort, with the European Space Agency as one of the largest consumers of aerospace technology and products.
In India, Bangalore is a major center of the aerospace industry, where Hindustan Aeronautics Limited, the National Aerospace Laboratories and the Indian Space Research Organisation are headquartered. The Indian Space Research Organisation (ISRO) launched India's first Moon orbiter, Chandrayaan-1, in October 2008.
In Russia, large aerospace companies like Oboronprom and the United Aircraft Building Corporation (encompassing Mikoyan, Sukhoi, Ilyushin, Tupolev, Yakovlev, and Irkut which includes Beriev) are among the major global players in this industry. The historic Soviet Union was also the home of a major aerospace industry.
The United Kingdom formerly attempted to maintain its own large aerospace industry, making its own airliners and warplanes, but it has largely turned its lot over to cooperative efforts with continental companies, and it has turned into a large import customer, too, from countries such as the United States. However, the United Kingdom has a very active aerospace sector, with major companies such as BAE Systems, supplying fully assembled aircraft, aircraft components, sub-assemblies and sub-systems to other manufacturers, both in Europe and all over the world.
Canada has formerly manufactured some of its own designs for jet warplanes, etc. (e.g. the CF-100 fighter), but for some decades, it has relied on imports from the United States and Europe to fill these needs. However Canada still manufactures some military aircraft although they are generally not combat capable. Another notable example was the late 1950s development of the Avro Canada CF-105 Arrow, a supersonic fighter-interceptor whose 1959 cancellation was considered highly controversial.
France has continued to make its own warplanes for its air force and navy, and Sweden continues to make its own warplanes for the Swedish Air Force—especially in support of its position as a neutral country. (See Saab AB.) Other European countries either team up in making fighters (such as the Panavia Tornado and the Eurofighter Typhoon), or else to import them from the United States.
Pakistan has a developing aerospace engineering industry. The National Engineering and Scientific Commission, Khan Research Laboratories and Pakistan Aeronautical Complex are among the premier organizations involved in research and development in this sector. Pakistan has the capability of designing and manufacturing guided rockets, missiles and space vehicles. The city of Kamra is home to the Pakistan Aeronautical Complex which contains several factories. This facility is responsible for manufacturing the MFI-17, MFI-395, K-8 and JF-17 Thunder aircraft. Pakistan also has the capability to design and manufacture both armed and unarmed unmanned aerial vehicles.
In the People's Republic of China, Beijing, Xi'an, Chengdu, Shanghai, Shenyang and Nanchang are major research and manufacture centers of the aerospace industry. China has developed an extensive capability to design, test and produce military aircraft, missiles and space vehicles. Despite the cancellation in 1983 of the experimental Shanghai Y-10, China is still developing its civil aerospace industry.
The aircraft parts industry was born out of the sale of second-hand or used aircraft parts from the aerospace manufacture sector. Within the United States there is a specific process that parts brokers or resellers must follow. This includes leveraging a certified repair station to overhaul and "tag" a part. This certification guarantees that a part was repaired or overhauled to meet OEM specifications. Once a part is overhauled its value is determined from the supply and demand of the aerospace market. When an airline has an aircraft on the ground, the part that the airline requires to get the plane back into service becomes invaluable. This can drive the market for specific parts. There are several online marketplaces that assist with the commodity selling of aircraft parts.
In the aerospace and defense industry, much consolidation occurred at the end of the 20th century and in the early 21st century. Between 1988 and 2011, more than 6,068 mergers and acquisitions with a total known value of US$678 billion were announced worldwide. The largest transactions have been:
The acquisition of Rockwell Collins by United Technologies Corporation for US$30.0 billion in 2018
The acquisition of Goodrich Corporation by United Technologies Corporation for US$16.2 billion in 2011
The merger of Allied Signal with Honeywell in a stock swap valued at US$15.6 billion in 1999
The merger of Boeing with McDonnell valued at US$13.4 billion in 1996
The acquisition of Marconi Electronic Systems, a subsidiary of GEC, by British Aerospace for US$12.9 billion in 1999 (now called: BAE Systems)
The acquisition of Hughes Aircraft by Raytheon for US$9.5 billion in 1997
Technology
Multiple technologies and innovations are used in aerospace, many of them pioneered around World War II:
Patented by Short Brothers, folding wings optimise aircraft carrier storage, ranging from a simple fold to the entire rotating wing of the V-22 and the wingtip fold of the Boeing 777X for airport compatibility.
To improve low-speed performance, a de Havilland DH4 was modified by Handley Page to a monoplane with high-lift devices: full-span leading-edge slats and trailing-edge flaps; in 1924, Fowler flaps that extend backward and downward were invented in the US, and used on the Lockheed Model 10 Electra while in 1943 forward-hinged leading-edge Krueger flaps were invented in Germany and later used on the Boeing 707.
The 1927 large Propeller Research Tunnel at NACA Langley confirmed that the landing gear was a major source of drag, in 1930 the Boeing Monomail featured a retractable gear.
The flush rivet displaced the domed rivet in the 1930s and pneumatic rivet guns work in combination with a heavy reaction bucking bar; not depending on plastic deformation, specialist rivets were developed to improve fatigue life as shear fasteners like the Hi-Lok, threaded pins tightened until a collar breaks off with enough torque.
First flown in 1935, the Queen Bee was a radio-controlled target drone derived from the Tiger Moth for Flak training; the Ryan Firebee was a jet-powered target drone developed into long-range reconnaissance UAVs: the Ryan Model 147 Fire Fly and Lightning Bug; the Israeli IAI Scout and Tadiran Mastiff launched a line of battlefield UAVs including the IAI Searcher; developed from the General Atomics Gnat long-endurance UAV for the CIA, the MQ-1 Predator led to the armed MQ-9 Reaper.
At the end of World War I, piston engine power could be boosted by compressing intake air with a compressor, also compensating for decreasing air density with altitude, improved with 1930s turbochargers for the Boeing B-17 and the first pressurized airliners.
The 1937 Hindenburg disaster ended the era of passenger airships but the US Navy used airships for anti-submarine warfare and airborne early warning into the 1960s, while small airships continue to be used for aerial advertising, sightseeing flights, surveillance and research, and the Airlander 10 or the Lockheed Martin LMH-1 continue to be developed.
As US airlines were interested in high-altitude flying in the mid-1930s, the Lockheed XC-35 with a pressurized cabin was tested in 1937 and the Boeing 307 Stratoliner would be the first pressurized airliner to enter commercial service.
In 1933, Plexiglas, a transparent acrylic plastic, was introduced in Germany; shortly before World War II it was first used for aircraft windshields, as it is lighter than glass, and the bubble canopy improved fighter pilots' visibility.
In January 1930, Royal Air Force pilot and engineer Frank Whittle filed a patent for a gas turbine aircraft engine with an inlet, compressor, combustor, turbine and nozzle, while an independent turbojet was developed by researcher Hans von Ohain in Germany; both engines ran within weeks in early 1937 and the Heinkel HeS 3-propelled Heinkel He 178 experimental aircraft made its first flight on Aug 27, 1939 while the Whittle W.1-powered Gloster E.28/39 prototype flew on May 15, 1941.
In 1935, Britain demonstrated aircraft radio detection and ranging and in 1940 the RAF introduced the first VHF airborne radars on Bristol Blenheims, then higher-resolution microwave-frequency radar with a cavity magnetron on Bristol Beaufighters in 1941, and in 1959 the radar-homing Hughes AIM-4 Falcon became the first US guided missile on the Convair F-106.
In the early 1940s, British Hurricane and Spitfire pilots wore g-suits to prevent G-LOC due to blood pooling in the lower body in high g situations; Mayo Clinic researchers developed air-filled bladders to replace water-filled bladders and in 1943 the US military began using pressure suits from the David Clark Company.
The modern ejection seat was developed during World War II: a seat on rails ejected by rockets before deploying a parachute. In the late 1960s the USAF studied enhancing it into a turbojet-powered autogyro with 50 nmi of range, the Kaman KSA-100 SAVER.
In 1942, numerical control machining was conceived by machinist John T. Parsons to cut complex structures from solid blocks of alloy, rather than assembling them, improving quality, reducing weight, and saving time and cost to produce bulkheads or wing skins.
In World War II, the German V-2 combined gyroscopes, an accelerometer and a primitive computer for real-time inertial navigation allowing dead reckoning without reference to landmarks or guide stars, leading to packaged IMUs for spacecraft and aircraft.
The UK Miles M.52 supersonic aircraft was to have an afterburner, augmenting a turbojet thrust by burning additional fuel in the nozzle, but was cancelled in 1946.
In 1935, German aerodynamicist Adolf Busemann proposed using swept wings to reduce high-speed drag and the Messerschmitt P.1101 fighter prototype was 80% complete by the end of World War II; the later US North American F-86 and Boeing B-47 flew in 1947, as the Soviet MiG-15, and the British de Havilland Comet in 1949.
In 1951, the Avro Jetliner featured an ice protection system from Goodyear through electro-thermal resistances in the wing and tail leading edges; jet aircraft use hot engine bleed air and lighter aircraft use pneumatic deicing boots or weep anti-icing fluid on propellers, wing and tail leading edges.
In 1954, Bell Labs developed the first transistorized airborne digital computer, Tradic for the US Boeing B-52 and in the 1960s Raytheon built the MIT-developed Apollo Guidance Computer; the MIL-STD-1553 avionics digital bus was defined in 1973 then first used in the General Dynamics F-16, while the civil ARINC 429 was first used in the Boeing 757/B767 and Airbus A310 in the early 1980s.
After World War II, the initial promoter of photovoltaic power for spacecraft, Hans K. Ziegler, was brought to the US under Operation Paperclip along with Wernher von Braun; Vanguard 1 was its first application in 1958, later scaled up in space-deployable structures like the solar arrays of the International Space Station.
For boarding an airliner, jet bridges are more accessible, comfortable, and efficient than stairs.
In the 1950s, to improve thrust and fuel efficiency, the jet engine airflow was divided into a core stream and a bypass stream with a lower velocity for better propulsive efficiency: the first was the Rolls-Royce Conway with a 0.3 BPR on the Boeing 707 in 1960, followed by the Pratt & Whitney JT3D with a 1.5 BPR and, derived from the J79, the General Electric CJ805 powered the Convair 990 with a 28% lower cruise fuel burn; bypass ratio improved to the 9.3 BPR Rolls-Royce Trent XWB, the 10:1 BPR GE9X and the Pratt & Whitney GTF with high-pressure ratio cores.
Functional safety
Functional safety relates to a part of the general safety of a system or a piece of equipment. It implies that the system or equipment can be operated properly and without causing any danger, risk, damage or injury.
Functional safety is crucial in the aerospace industry, which allows no compromises or negligence. In this respect, supervisory bodies, such as the European Aviation Safety Agency (EASA), regulate the aerospace market with strict certification standards. This is meant to reach and ensure the highest possible level of safety. The standards AS 9100 in America, EN 9100 on the European market or JISQ 9100 in Asia particularly address the aerospace and aviation industry. These are standards applying to the functional safety of aerospace vehicles. Some companies are therefore specialized in the certification, inspection, verification and testing of the vehicles and spare parts to ensure and attest compliance with the appropriate regulations.
Spinoffs
Spinoffs refer to any technology that is a direct result of coding or products created by NASA and redesigned for an alternate purpose. These technological advancements are one of the primary results of the aerospace industry, with $5.2 billion worth of revenue generated by spinoff technology, including computers and cellular devices. These spinoffs have applications in a variety of different fields including medicine, transportation, energy, consumer goods, public safety and more. NASA publishes an annual report called "Spinoffs", regarding many of the specific products and benefits to the aforementioned areas in an effort to highlight some of the ways funding is put to use. For example, in the most recent edition of this publication, "Spinoffs 2015", endoscopes are featured as one of the medical derivations of aerospace achievement. This device enables more precise and subsequently cost-effective neurosurgery by reducing complications through a minimally invasive procedure that abbreviates hospitalization. "These NASA technologies are not only giving companies and entrepreneurs a competitive edge in their own industries, but are also helping to shape budding industries, such as commercial lunar landers," said Daniel Lockney.
See also
Aerodynamics
Aeronautics
Aerospace engineering
Aircraft
Astronautics
NewSpace
Space agencies (List of)
Space exploration
Spacecraft
Wiktionary: Aviation, aerospace, and aeronautical terms
References
Further reading
Blockley, Richard, and Wei Shyy. Encyclopedia of aerospace engineering (American Institute of Aeronautics and Astronautics, Inc., 2010).
Brunton, Steven L., et al. "Data-driven aerospace engineering: reframing the industry with machine learning." AIAA Journal 59.8 (2021): 2820–2847. online
Davis, Jeffrey R., Robert Johnson, and Jan Stepanek, eds. Fundamentals of aerospace medicine (Lippincott Williams & Wilkins, 2008) online.
Mouritz, Adrian P. Introduction to aerospace materials (Elsevier, 2012) online.
Petrescu, Relly Victoria, et al. "Modern propulsions for aerospace-a review." Journal of Aircraft and Spacecraft Technology 1.1 (2017).
Phero, Graham C., and Kessler Sterne. "The aerospace revolution: development, intellectual property, and value." (2022). online
Wills, Jocelyn. Tug of War: Surveillance Capitalism, Military Contracting, and the Rise of the Security State (McGill-Queen's University Press, 2017), scholarly history of MDA in Canada. online book review
External links | Aerospace | Physics | 4,230 |
71,008,486 | https://en.wikipedia.org/wiki/Manipur%20State%20Museum | The Manipur State Museum () is an institution displaying a collection of artistic, cultural, historical and scientific artefacts and relics in Imphal, Manipur, India. It has galleries housing materials of natural history, ethnology and archeology.
Overview
The Manipur State Museum () houses ornaments, textiles, and agricultural equipment of Ancient Manipur, Medieval Manipur and Modern Manipur. The museum conveys an all-encompassing picture of the history of the life of the Manipuri people.
History
The Manipur State Museum () was inaugurated by Indira Gandhi, the then prime minister of India on 23 September 1969. It has been expanded to a multipurpose museum. It has many sections and subsections. One prominent section is the ethnological gallery. This gallery was formally reopened by Ved Marwah, the then Governor of Manipur, on 20 January 2001.
Collections
The most famous piece on display is a Hiyang Hiren, a boat used by royalty. It is 78 feet in length and is displayed in an open gallery.
Other collection include coins, manuscripts, instruments, pottery, dresses, paintings and ornaments of Ancient Manipur, Medieval Manipur and Modern Manipur.
The museum has published a record of more than 500 species of rare orchids, of which only 472 have been identified. Several experts have opined that nowhere else in India does one come across such a variety of orchid species as in Manipur.
The royal Howdah (), presently on display in the Manipur State Museum, was personally used by Sir Meidingngu Churachand Singh KCSI (1891-1941 AD), CBE, the King of Manipur.
Exhibits
The Museum exhibits mainly cultural themes and awareness programs. Some of the exhibits include tribal ornaments, Meitei ornaments, headgears, agricultural implements, domestic implements, hunting tools, smoking pipes and lighters, terracotta pottery, gold and silver utensils, polo saddlery, traditional water pipe, Meitei textiles, Meitei time measuring device, ancient gold mask, caskets, riderless horse statues, arms and armory, basketry, tribal costumes, etc.
Time-measuring implements like the "Tanyei Pung" and the "Tanyei Chei" testify to the knowledge of the ancient Meiteis in the civilization of Ancient Manipur.
The costumes exhibited are important to study the social structure of Manipur.
The royal Howdah () of Sir Churachand Singh KCSI (1891-1941 AD), CBE, the then King of Manipur, is also displayed in the Manipur State Museum.
The Manipur State Museum also organises workshops on traditional Manipuri sculpture and souvenir making.
See also
Imphal Peace Museum
INA War Museum
Kakching Garden
Keibul Lamjao National Park - world's only floating national park in Manipur, India
Khonghampat Orchidarium
Loktak Folklore Museum
Manipur Zoological Garden
Phumdi - Floating biomasses in Manipur, India
Sekta Archaeological Living Museum
Yangoupokpi-Lokchao Wildlife Sanctuary
References
External links
Manipur State Museum artnculturemanipur.gov.in
Manipur State Museum www.museumsofindia.org
Meitei architecture
Monuments and memorials in Imphal
Monuments and memorials to Meitei royalty
Museums in Manipur
Public art in India
Tourist attractions in Manipur | Manipur State Museum | Engineering | 686 |
2,467,521 | https://en.wikipedia.org/wiki/Skraup%20reaction | The Skraup synthesis is a chemical reaction used to synthesize quinolines. It is named after the Czech chemist Zdenko Hans Skraup (1850-1910). In the archetypal Skraup reaction, aniline is heated with sulfuric acid, glycerol, and an oxidizing agent such as nitrobenzene to yield quinoline.
In this example, nitrobenzene serves as both the solvent and the oxidizing agent. The reaction, which otherwise has a reputation for being violent, is typically conducted in the presence of ferrous sulfate. Arsenic acid may be used instead of nitrobenzene, and is often preferred because the resulting reaction is less violent.
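As a rough overall balance (a simplified sketch: [O] denotes the stoichiometric oxygen supplied by the oxidizing agent, and the intermediate condensation and ring-closure steps are not shown):

```latex
\mathrm{C_6H_5NH_2} \;+\; \mathrm{C_3H_8O_3}
\;\xrightarrow{\ \mathrm{H_2SO_4},\ [\mathrm{O}]\ }\;
\mathrm{C_9H_7N} \;+\; 4\,\mathrm{H_2O}
```

That is, aniline and glycerol give quinoline and water.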
Mechanism
See also
Bischler-Napieralski reaction
Doebner-Miller reaction
References
Condensation reactions
Quinoline forming reactions
Name reactions | Skraup reaction | Chemistry | 181 |
325,789 | https://en.wikipedia.org/wiki/Architectural%20state | Architectural state is the collection of information in a computer system that defines the state of a program during execution. Architectural state includes main memory, architectural registers, and the program counter. Architectural state is defined by the instruction set architecture and can be manipulated by the programmer using instructions. A core dump is a file recording the architectural state of a computer program at some point in time, such as when it has crashed.
Examples of architectural state include:
Main Memory (Primary storage)
Control registers
Instruction flag registers (such as EFLAGS in x86)
Interrupt mask registers
Memory management unit registers
Status registers
General purpose registers (such as AX, BX, CX, DX, etc. in x86)
Address registers
Counter registers
Index registers
Stack registers
String registers
Architectural state is not microarchitectural state. Microarchitectural state is hidden machine state used for implementing the microarchitecture. Examples of microarchitectural state include pipeline registers, cache tags, and branch predictor state. While microarchitectural state can change to suit the needs of each processor implementation in a processor family, binary compatibility among processors in a processor family requires a common architectural state.
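The distinction can be made concrete with a toy machine model. In the Python sketch below, the miniature instruction set and field names are invented for illustration; the dataclass holds exactly the architectural state (the only state an instruction may observe or modify, and the only state a core dump needs to record), while caches and pipeline latches would live elsewhere:

```python
from dataclasses import dataclass, field

@dataclass
class ArchState:
    """Programmer-visible state defined by the (toy) ISA."""
    pc: int = 0
    regs: dict = field(default_factory=lambda: {"AX": 0, "BX": 0})
    flags: int = 0
    memory: dict = field(default_factory=dict)   # sparse main memory

def step(state: ArchState, program):
    """Execute one instruction; by definition, its only effects are
    updates to fields of ArchState, never to hidden machine state."""
    op, *args = program[state.pc]
    if op == "mov":                    # mov reg, immediate
        state.regs[args[0]] = args[1]
    elif op == "add":                  # add dst, src
        state.regs[args[0]] += state.regs[args[1]]
    elif op == "store":                # store addr, reg
        state.memory[args[0]] = state.regs[args[1]]
    state.pc += 1

prog = [("mov", "AX", 2), ("mov", "BX", 3), ("add", "AX", "BX"),
        ("store", 0x100, "AX")]
state = ArchState()
for _ in prog:
    step(state, prog)
print(state)   # a 'core dump' of the final architectural state
```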
Architectural state naturally does not include state-less elements of a computer such as busses and computation units (e.g., the ALU).
References
Central processing unit | Architectural state | Technology | 279 |
3,899,074 | https://en.wikipedia.org/wiki/Prayer%20flag | A Tibetan prayer flag is a colorful rectangular cloth, often found strung along trails and peaks high in the Himalayas. They are used to bless the surrounding countryside and for other purposes.
Prayer flags are believed to have originated within the religious tradition of Bon. In Bon, shamanistic Bonpo used primary-colored plain flags in Tibet. Traditional prayer flags include woodblock-printed text and images.
History
Nepal Sutras, originally written on cloth banners, were transmitted to other regions of the world as prayer flags. Legend ascribes the origin of the prayer flag to the Gautama Buddha, whose prayers were written on battle flags used by the devas against their adversaries, the asuras. The legend may have given the Indian Bhikṣu a reason for carrying the heavenly banner as a way of signifying his commitment to ahimsa. This knowledge was carried into Tibet by 800 CE, and the actual flags were introduced no later than 1040 CE, where they were further modified. The Indian monk Atisha (980–1054 CE) introduced the Indian practice of printing on cloth prayer flags to Tibet and Nepal.
During the Cultural Revolution, prayer flags were discouraged but not entirely eliminated. Many traditional designs may have been lost. Currently, different styles of prayer flags can be seen all across the Tibetan region.
Lung ta/Darchog styles
There are two kinds of prayer flags: horizontal ones, called Lung ta (Wylie: rlung-rta, meaning "Wind Horse" in Tibetan), and vertical ones, called Darchog (Wylie: dar-lcog, meaning "flagstaff").
Lung ta (horizontal) prayer flags are of square or rectangular shape, and are connected along their top edges to a long string or thread. They are commonly hung on a diagonal line from high to low between two objects (e.g., a rock and the top of a pole) in high places such as the tops of temples, monasteries, stupas, and mountain passes.
Darchog (vertical) prayer flags are usually large single rectangles attached to poles along their vertical edge. Darchog are commonly planted in the ground, mountains, cairns, and on rooftops, and are iconographically and symbolically related to the Dhvaja.
Color and order
Traditionally, prayer flags come in sets of five. The five colors represent the five elements and the Five Pure Lights. Different elements are associated with different colors for specific traditions, purposes and sadhana. Blue symbolizes the sky and space, white symbolizes the air and wind, red symbolizes fire, green symbolizes water, and yellow symbolizes earth. According to Traditional Tibetan medicine, health and harmony are produced through the balance of the five elements.
Symbols and prayers
The center of a prayer flag traditionally features a lung ta (powerful or strong horse) bearing three flaming jewels (specifically ratna) on its back. The ta is a symbol of speed and the transformation of bad fortune to good fortune. The three flaming jewels symbolize the Buddha, the Dharma (Buddhist teachings) and the Sangha (Buddhist community)—the three cornerstones of Tibetan philosophical tradition.
Surrounding the lung ta are various versions of approximately 400 traditional mantras, each dedicated to a particular deity. These writings include mantras from three of the great Buddhist Bodhisattvas: Padmasambhava (Guru Rinpoche), Avalokiteśvara (Chenrezig, the bodhisattva of compassion and the patron of the Tibetan people) and Manjusri.
In addition to mantras, prayers for a long life of good fortune are often included for the person who mounts the flags.
Images or the names of four powerful animals, also known as the Four Dignities, adorn each corner of a flag: the dragon, the garuda, the tiger, and the snow lion. The mantra Om mani padme hum is based on four symbolic terms: om (which symbolizes one's impure body, speech and mind), mani (which means jewel and symbolizes the factors of method: the altruistic intention to become enlightened, compassion and love), padme (which means lotus and symbolizes wisdom), and hum (the seed syllable of Akshobhya: the immovable, the unfluctuating, that which cannot be disturbed by anything).
Wishes are also written on them.
Symbolism and tradition
Traditionally, prayer flags are used to promote peace, compassion, strength, and wisdom. The flags do not carry prayers to gods, which is a common misconception; rather, the Tibetans believe the prayers and mantras will be blown by the wind to spread the good will and compassion into all pervading space. Therefore, prayer flags are thought to bring benefit to all.
By hanging flags in high places the Lung ta will carry the blessings depicted on the flags to all beings. As wind passes over the surface of the flags, which are sensitive to the slightest movement of the wind, the air is purified and sanctified by the mantras.
The prayers of a flag become a permanent part of the universe as the images fade from exposure to the elements. Just as life moves on and is replaced by new life, Tibetans renew their hopes for the world by continually mounting new flags alongside the old. This act symbolizes a welcoming of life's changes and an acknowledgment that all beings are part of a greater ongoing cycle.
According to traditional belief, because the symbols and mantras on prayer flags are sacred, they should be treated with respect. They should not be placed on the ground or used on clothing. Old prayer flags should be burned.
Timing of hanging and taking down
Some believe that if the flags are hung on inauspicious astrological dates, they may bring negative results for as long as they are flying. The best time to put up new prayer flags is in the morning on sunny, windy days.
In Tibet, old prayer flags are replaced with new ones annually on the Tibetan New Year.
See also
Buddhist prayer beads
Bunting (textile)
Namkha
Phurba
Stupa
Tibetan prayer wheel
Notes
References
Barker, Diane (2003). Tibetan Prayer Flags. Connections Book Publishing. .
Beer, Robert (2004). Encyclopedia of Tibetan Symbols and Motifs. Serindia Publications. .
Wise, Tad (2002). Blessings on the Wind: The Mystery & Meaning of Tibetan Prayer Flags. Chronicle Books. .
External links
Bon
Indian inventions
Rainbow flags
Religious objects
Tantric practices
Tibetan Buddhist practices
Tibetan Buddhist ritual implements | Prayer flag | Physics | 1,334 |
48,870,431 | https://en.wikipedia.org/wiki/Hyperuniformity | Hyperuniform materials are characterized by an anomalous suppression of density fluctuations at large scales. More precisely, the vanishing of density fluctuations in the long-wavelength limit (as for crystals) distinguishes hyperuniform systems from typical gases, liquids, or amorphous solids. Examples of hyperuniformity include all perfect crystals, perfect quasicrystals, and exotic amorphous states of matter.
Quantitatively, a many-particle system is said to be hyperuniform if the variance of the number of points within a spherical observation window grows more slowly than the volume of the observation window. This definition is equivalent to a vanishing of the structure factor in the long-wavelength limit, and it has been extended to include heterogeneous materials as well as scalar, vector, and tensor fields. Disordered hyperuniform systems were shown to be poised at an "inverted" critical point. They can be obtained via equilibrium or nonequilibrium routes, and are found in both classical physical and quantum-mechanical systems. Hence, the concept of hyperuniformity now connects a broad range of topics in physics, mathematics, biology, and materials science.
The concept of hyperuniformity generalizes the traditional notion of long-range order and thus defines an exotic state of matter. A disordered hyperuniform many-particle system can be statistically isotropic like a liquid, with no Bragg peaks and no conventional type of long-range order. Nevertheless, at large scales, hyperuniform systems resemble crystals in their suppression of large-scale density fluctuations. This unique combination is known to endow disordered hyperuniform materials with novel physical properties that are, e.g., both nearly optimal and direction-independent (in contrast to those of crystals, which are anisotropic).
History
The term hyperuniformity (also independently called super-homogeneity in the context of cosmology) was coined and studied by Salvatore Torquato and Frank Stillinger in a 2003 paper, in which they showed that, among other things, hyperuniformity provides a unified framework to classify and structurally characterize crystals, quasicrystals, and exotic disordered varieties. In that sense, hyperuniformity is a long-range property that can be viewed as generalizing the traditional notion of long-range order (e.g., translational / orientational order of crystals or orientational order of quasicrystals) to also encompass exotic disordered systems.
Hyperuniformity was first introduced for point processes and later generalized to two-phase materials (or porous media) and random scalar or vector fields. It has been observed in theoretical models, simulations, and experiments; see the list of examples below.
Definition
A many-particle system in $d$-dimensional Euclidean space $\mathbb{R}^d$ is said to be hyperuniform if the number of points $N(R)$ in a spherical observation window with radius $R$ has a variance $\sigma^2(R)$ that scales slower than the volume of the observation window:

$$\lim_{R \to \infty} \frac{\sigma^2(R)}{R^d} = 0.$$

This definition is (essentially) equivalent to the vanishing of the structure factor at the origin:

$$\lim_{|\mathbf{k}| \to 0} S(\mathbf{k}) = 0$$

for wave vectors $\mathbf{k}$.
Similarly, a two-phase medium consisting of a solid and a void phase is said to be hyperuniform if the volume of the solid phase inside the spherical observation window has a variance that scales slower than the volume of the observation window. This definition is, in turn, equivalent to a vanishing of the spectral density at the origin.
An essential feature of hyperuniform systems is the scaling of their number variance for large radii or, equivalently, of their structure factor for small wave numbers. If we consider hyperuniform systems that are characterized by a power-law behavior of the structure factor close to the origin,

$$S(\mathbf{k}) \sim |\mathbf{k}|^{\alpha},$$

with a constant $\alpha > 0$, then there are three distinct scaling behaviors that define three classes of hyperuniformity:

$$\sigma^2(R) \sim \begin{cases} R^{d-1} & \text{for } \alpha > 1 \text{ (class I)} \\ R^{d-1} \ln R & \text{for } \alpha = 1 \text{ (class II)} \\ R^{d-\alpha} & \text{for } 0 < \alpha < 1 \text{ (class III)} \end{cases}$$

Examples are known for all three classes of hyperuniformity.
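As a hedged numerical illustration of the definition, the NumPy sketch below estimates $S(\mathbf{k})$ for two point patterns in a periodic box: a randomly perturbed square lattice (hyperuniform) and a Poisson pattern (not hyperuniform). The script is illustrative only, not taken from the literature; the box size, perturbation amplitude, and small-wavevector cutoff are arbitrary choices.

```python
import numpy as np

def structure_factor(points, L, kmax=12):
    """Estimate S(k) = |sum_j exp(-i k.r_j)|^2 / N on the reciprocal
    lattice k = (2*pi/L)(n1, n2) of a periodic box [0, L)^2.
    A hyperuniform pattern has S(k) -> 0 as |k| -> 0."""
    N = len(points)
    ks, Ss = [], []
    for n1 in range(-kmax, kmax + 1):
        for n2 in range(-kmax, kmax + 1):
            if n1 == n2 == 0:
                continue
            k = (2 * np.pi / L) * np.array([n1, n2])
            rho = np.exp(-1j * points @ k).sum()   # collective density mode
            ks.append(np.linalg.norm(k))
            Ss.append(abs(rho) ** 2 / N)
    return np.array(ks), np.array(Ss)

rng = np.random.default_rng(0)
n = L = 32
grid = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1).reshape(-1, 2).astype(float)
perturbed = (grid + rng.uniform(-0.3, 0.3, grid.shape)) % L   # class I hyperuniform
poisson = rng.uniform(0, L, (n * n, 2))                       # ideal gas: S(k) ~ 1

for name, pts in [("perturbed lattice", perturbed), ("Poisson", poisson)]:
    k, S = structure_factor(pts, L)
    print(f"{name}: mean S(k) for |k| < 1 is {S[k < 1].mean():.3f}")
```

For the perturbed lattice the small-wavevector average should come out close to zero, while the Poisson pattern hovers near 1, matching the definition above.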
Examples
Examples of disordered hyperuniform systems in physics are disordered ground states, jammed disordered sphere packings, amorphous ices, amorphous speckle patterns, certain fermionic systems, random self-organization, perturbed lattices, and avian photoreceptor cells.
In mathematics, disordered hyperuniformity has been studied in the context of probability theory, geometry, and number theory, where the prime numbers have been found to be effectively limit periodic and hyperuniform in a certain scaling limit. Further examples include certain random walks and stable matchings of point processes.
Ordered hyperuniformity
Examples of ordered, hyperuniform systems include all crystals, all quasicrystals, and limit-periodic sets. While weakly correlated noise typically preserves hyperuniformity, correlated excitations at finite temperature tend to destroy hyperuniformity.
Hyperuniformity was also reported for fermionic quantum matter in correlated electron systems as a result of cramming.
Disordered hyperuniformity
Torquato (2014) gives an illustrative example of the hidden order found in a "shaken box of marbles", whose contents fall into an arrangement called a maximally random jammed packing. Such hidden order may eventually be used for self-organizing colloids or optics with the ability to transmit light with an efficiency like a crystal but with a highly flexible design.
It has been found that disordered hyperuniform systems possess unique optical properties. For example, disordered hyperuniform photonic networks have been found to exhibit complete photonic band gaps that are comparable in size to those of photonic crystals but with the added advantage of isotropy, which enables free-form waveguides not possible with crystal structures. Moreover, in stealthy hyperuniform systems, light of any wavelength longer than a value specific to the material is able to propagate forward without loss (due to the correlated disorder) even for high particle density.
By contrast, in conditions where light is propagated through an uncorrelated, disordered material of the same density, the material would appear opaque due to multiple scattering. “Stealthy” hyperuniform materials can be theoretically designed for light of any wavelength, and the applications of the concept cover a wide variety of fields of wave physics and materials engineering.
Disordered hyperuniformity was recently discovered in amorphous 2D materials, including amorphous silica as well as amorphous graphene, where it was shown to enhance electronic transport in the material. It was shown that Stone–Wales topological defects, which transform two pairs of neighboring hexagons into a pair of pentagons and a pair of heptagons by flipping a bond, preserve the hyperuniformity of the parent honeycomb lattice.
Disordered hyperuniformity in biology
Disordered hyperuniformity was found in the photoreceptor cell patterns in the eyes of chickens. This is thought to be the case because the light-sensitive cells in chicken or other bird eyes cannot easily attain an optimal crystalline arrangement but instead form a disordered configuration that is as uniform as possible. Indeed, it is the remarkable property of "multihyperuniformity" of the avian cone patterns that enables birds to achieve acute color sensing.
It may also emerge in the mysterious biological patterns known as fairy circles: circles and patterns of circles that emerge in arid places. It is believed such vegetation patterns can optimize the efficient use of water, which is crucial for the survival of the plants.
A universal hyperuniform organization was observed in the looped vein networks of tree leaves, including Ficus religiosa, Ficus caulocarpa, Ficus microcarpa, Smilax indica, Populus rotundifolia, and Yulania denudata. It was shown that the hyperuniform network optimizes the diffusive transport of water and nutrients from the veins to the leaf cells. The hyperuniform vein network organization was believed to result from a regulation of growth factor uptake during vein network development.
Making disordered, but highly uniform, materials
The challenge of creating disordered hyperuniform materials is partly attributed to the inevitable presence of imperfections, such as defects and thermal fluctuations. For example, the fluctuation-compressibility relation dictates that any compressible one-component fluid in thermal equilibrium cannot be strictly hyperuniform at finite temperature.
Recently Chremos & Douglas (2018) proposed a design rule for the practical creation of hyperuniform materials at the molecular level. Specifically, effective hyperuniformity as measured by the hyperuniformity index is achieved by specific parts of the molecules (e.g., the core of the star polymers or the backbone chains in the case of bottlebrush polymers). The combination of these features leads to molecular packings that are highly uniform at both small and large length scales.
Non-equilibrium hyperuniform fluids and length scales
Disordered hyperuniformity implies a long-ranged direct correlation function (the Ornstein–Zernike equation). In an equilibrium many-particle system, this requires delicately designed effectively long-ranged interactions, which are not necessary for the dynamic self-assembly of non-equilibrium hyperuniform states. In 2019, Ni and co-workers theoretically predicted a non-equilibrium strongly hyperuniform fluid phase that exists in systems of circularly swimming active hard spheres, which was confirmed experimentally in 2022.
This new hyperuniform fluid features a special length scale, i.e., the diameter of the circular trajectory of active particles, below which large density fluctuations are observed. Moreover, based on a generalized random organising model, Lei and Ni (2019) formulated a hydrodynamic theory for non-equilibrium hyperuniform fluids, in which the length scale above which the system is hyperuniform is controlled by the inertia of the particles. The theory generalizes the mechanism of fluidic hyperuniformity as the damping of a stochastic harmonic oscillator, which indicates that the suppressed long-wavelength density fluctuations can manifest as either an acoustic (resonance) mode or a diffusive (overdamped) mode. In the Lei–Ni reactive hard-sphere model, it was found that the discontinuous absorbing transition of a metastable hyperuniform fluid into an immobile absorbing state does not follow the kinetic pathway of nucleation and growth, and the transition rate decreases with increasing system size. This challenges the common understanding of metastability in discontinuous phase transitions and suggests that non-equilibrium hyperuniform fluids are fundamentally different from conventional equilibrium fluids.
See also
Crystal
Quasicrystal
Amorphous solid
State of matter
References
External links
Liquids
Concepts in physics
Materials science
Statistical mechanics | Hyperuniformity | Physics,Chemistry,Materials_science,Engineering | 2,184 |
3,772,370 | https://en.wikipedia.org/wiki/United%20Air%20Lines%20Flight%20266 | United Air Lines Flight 266 was a scheduled passenger flight from Los Angeles International Airport, California, to General Mitchell International Airport, Milwaukee, Wisconsin, via Stapleton International Airport, Denver, Colorado. On January 18, 1969, at approximately 18:21 PST, the Boeing 727 operating the flight crashed into Santa Monica Bay, in the Pacific Ocean west of Los Angeles International Airport, four minutes after takeoff, killing all 38 on board.
Aircraft and crew
The Boeing 727-22C aircraft, registration N7434U, was almost new and had been delivered to United Airlines only four months earlier. It had less than 1,100 hours of operating time.
The crew of Flight 266 consisted of Captain Leonard Leverson, 49, a veteran pilot who had been with United Airlines for 22 years and had almost 13,700 flying hours to his credit; First Officer Walter Schlemmer, 33, who had approximately 7,500 hours; and Flight Engineer Keith Ostrander, 29, who had 634 hours. Between them the crew had more than 4,300 hours of flight time on the Boeing 727.
Accident
The aircraft had been flying with a nonfunctional #3 generator for several days leading up to the accident. Per standard procedure, the crew placed masking tape over the switches and warning lights for the generator.
Approximately two minutes after takeoff, the crew reported a fire warning on engine #1 and shut it off. The crew radioed to departure control that they only had one functioning generator and needed to come back to the airport, but it turned out to be their last communication, with subsequent attempts to contact Flight 266 proving unsuccessful. Shortly after engine #1 shut down, the #2 generator also ceased operating for reasons unknown.
With the loss of all power to the lights and flight attitude instruments, flying at night in instrument conditions, the pilots quickly became spatially disoriented and unable to know which inputs to the flight controls were necessary to keep the plane flying normally. Consequently, the crew lost control of the aircraft and crashed into the ocean in a steep nose-down angle, killing everyone on board.
At the time, rescuers speculated that an explosion had occurred aboard the plane, a Boeing 727. Three and a half hours after the crash, three bodies had been found in the ocean along with parts of the fuselage and a United States mail bag carrying letters with that day's postmark. Hope was dim for survivors because the aircraft was configured for domestic flights and did not carry liferafts or lifejackets. A Coast Guard spokesman said it looked "very doubtful that there could be anybody alive."
Investigation
The National Transportation Safety Board (NTSB) was unable to determine why the #2 generator had failed after it had become the plane's sole power source, nor why the "standby electrical system either was not activated or failed to function."
Several witnesses saw Flight 266 take off and reported seeing sparks emanating from either engine #1 or the rear of the fuselage, while others claimed an engine was on fire. Salvage operations were conducted to recover the wreckage of the aircraft, but not much useful information was gleaned, as the cockpit instruments were not recovered. The wreckage lay in deep water and had been severely fragmented; however, the relatively small area over which it was spread indicated an extremely steep, nose-down angle at impact. There was little in the way of identifiable human remains at the wreckage site: only two passengers were identified, and only one intact body was found. The #2 and #3 engines suffered severe rotational damage from high RPM speeds at impact, but the #1 engine had almost no damage because it had been powered off. No evidence of any fire or heat damage was found on the engines, thus disproving the witnesses' claims. The small portion of the electrical system that was recovered did not provide any relevant information. The CVR took nearly six weeks to locate and recover. NTSB investigators could not explain the sparking seen by witnesses on the ground and theorized that it might have been caused by debris being sucked into the engine, a transient compressor stall, or an electrical system problem that led to the eventual power failure. They also were unable to explain the engine #1 fire warning in the absence of a fire, but this may have resulted from electrical system problems or a cracked duct that allowed hot engine air to set off the temperature sensors. The sensors from the #1 and #2 engines were recovered and exhibited no signs of malfunction. Some tests indicated that it was indeed possible for the #2 generator to fail from an overload condition as a result of the operating load being suddenly shifted onto it following the #1 generator's shutdown, and this was maintained as a possible cause of the failure.
N7434U had recently been fitted with a generator control panel that had been passed around several different UAL aircraft because of several malfunctions. After being installed in N7434U the month prior to the ill-fated flight, generator #3 once again caused operating problems and was swapped with a different unit. Since that generator was subsequently tested and found to have no mechanical issues, the control panel was identified as the problem after it caused further malfunctions with the replacement generator. Busy operating schedules and limited aircraft availability meant that repair work on N7434U was put on hold, with nothing that could be done in the meantime except to disable the #3 generator. The NTSB investigators believed that the inoperative #3 generator probably was not responsible for the #2 generator's in-flight failure since it was assumed to be isolated from the rest of the electrical system.
The flight control system would not have been affected by the loss of electrical power, since it relied on hydraulic and mechanical lines, so it was concluded that the loss of control was the result of the crew's inability to see around the cockpit. It was theorized that the non-activation of the backup electrical system might have been for one of several reasons:
The aircraft's battery, which powered the backup electrical system, could have been inadvertently disconnected by the flight engineer following the shutdown of engine 1, as he made sure that the galley power switch (which was similar in shape and adjacent to the battery switch) was turned off (in accordance with procedures for operating with only one functional generator).
The battery, or its charging circuitry, could have malfunctioned, rendering it unable to power the backup electrical system.
The flight engineer could have mistakenly set the aircraft's essential power switch to the APU position, rather than the standby (backup) position; the switch has to pass through a gate when turning from the APU position to the standby position, and the flight engineer, turning the switch until he encountered resistance, may have assumed that this meant that the switch had reached the end of its travel and was now in the standby position, when it had actually hit the detent between the APU and standby positions. The 727's APU is inoperative in flight.
The flight engineer could simply have neglected to switch the aircraft to the backup electrical system; the United Airlines procedures for the loss of all generators did not, at the time, explicitly tell the crew to switch to backup power (instead focusing on regaining at least one generator), and it is possible that the flight engineer repeatedly tried to bring a generator back online instead of immediately switching the aircraft to the backup system.
The CVR and FDR both lost power just after the crew informed ATC of the fire warning on engine #1. At an unknown later point, both resumed operation for a short period of time. The FDR came back online for 15 seconds, the CVR nine seconds, during which time it recorded the crew discussing their inability to see where the plane was. No sounds of the plane impacting the water could be heard when this second portion of the recording ceased.
At the time, a battery-powered backup source for critical flight instruments was not required on commercial aircraft. The accident prompted the Federal Aviation Administration to require all transport-category aircraft to carry backup instrumentation, powered by a source independent of the generators.
Probable cause
The NTSB determined the probable cause to be the crew's loss of attitude orientation during a night instrument departure after the total loss of electrical power disabled their attitude instruments; the Board was unable to determine why all generator power was lost, or why the standby electrical system either was not activated or failed to function.
Aftermath
On January 13, 1969, just five days before the crash of United Flight 266, Scandinavian Airlines System Flight 933, a DC-8 on final approach to Los Angeles International also crashed into Santa Monica Bay. The jet broke in half on impact, killing 15. Thirty people survived in a portion of the fuselage that remained afloat.
Until 2013, United used the "Flight 266" designation on its San Francisco–Chicago (O'Hare) route.
References
External links
NTSB Aircraft Accident Report (Alternate)
Airliner accidents and incidents caused by mechanical failure
1969 in Los Angeles
Aviation accidents and incidents in the United States in 1969
Accidents and incidents involving the Boeing 727
266
Airliner accidents and incidents in California
Los Angeles International Airport
January 1969 events in the United States | United Air Lines Flight 266 | Materials_science | 1,824 |
21,284,823 | https://en.wikipedia.org/wiki/Plug%20computer | A plug computer is a small-form-factor computer whose chassis contains the AC power plug, and thus plugs directly into the wall. Alternatively, the computer may resemble an AC adapter or a similarly small device. Plug computers are often configured for use in the home or office as compact computers.
Description
Plug computers consist of a high-performance, low-power system-on-a-chip processor, with several I/O hardware ports (USB ports, Ethernet connectors, etc.). Most versions do not have provisions for connecting a display and are best suited to running media servers, back-up services, or file sharing and remote access functions, thus acting as a bridge between in-home protocols (such as Digital Living Network Alliance (DLNA) and Server Message Block (SMB)) and cloud-based services. There are, however, plug computer offerings that have analog VGA monitor and/or HDMI connectors, which, along with multiple USB ports, permit the use of a display, keyboard, and mouse, thus making them full-fledged, low-power alternatives to desktop and laptop computers. They typically run any of a number of Linux distributions.
Plug computers typically consume little power and are inexpensive.
History
A number of other devices of this type began to appear at the 2009 Consumer Electronics Show.
On January 6, 2009 CTERA Networks launched a device called CloudPlug that provides online backup at local disk speeds and overlays a file sharing service. The device also transforms any external USB hard drive into a network-attached storage device.
On January 7, 2009, Cloud Engines unveiled the Pogoplug network access server.
On January 8, 2009, Axentra announced availability of their HipServ platform.
On February 23, 2009, Marvell Technology Group announced its plans to build a mini-industry around plug computers.
On August 19, 2009, CodeLathe announced availability of their TonidoPlug network access server.
On November 13, 2009 QuadAxis launched its plug computing device product line and development platform, featuring the QuadPlug and QuadPC and running QuadMix, a modified Linux.
On January 5, 2010, Iomega announced their iConnect network access server.
On January 7, 2010 Pbxnsip launched its plug computing device the sipJack running pbxnsip: an IP Communications platform.
See also
Classes of computers
Computer appliance
CuBox, a plug computer
GuruPlug, a plug computer
DreamPlug, a plug computer
FreedomBox, an operating system
Personal web server
Print server
Raspberry Pi, a single-board computer
SheevaPlug, a plug computer
Stick PC, a computer attached to and powered by a USB or HDMI plug
References
External links
Cloud computing
Classes of computers
Cloud clients
Home servers
Server appliance | Plug computer | Technology | 574 |
58,536,963 | https://en.wikipedia.org/wiki/Bartels%E2%80%93Stewart%20algorithm | In numerical linear algebra, the Bartels–Stewart algorithm is used to numerically solve the Sylvester matrix equation $AX - XB = C$. Developed by R.H. Bartels and G.W. Stewart in 1971, it was the first numerically stable method that could be systematically applied to solve such equations. The algorithm works by using the real Schur decompositions of $A$ and $B$ to transform $AX - XB = C$ into a triangular system that can then be solved using forward or backward substitution. In 1979, G. Golub, C. Van Loan and S. Nash introduced an improved version of the algorithm, known as the Hessenberg–Schur algorithm. It remains a standard approach for solving Sylvester equations when the matrices are of small to moderate size.
The algorithm
Let $X, C \in \mathbb{R}^{m \times n}$, and assume that the eigenvalues of $A \in \mathbb{R}^{m \times m}$ are distinct from the eigenvalues of $B \in \mathbb{R}^{n \times n}$. Then, the matrix equation $AX - XB = C$ has a unique solution. The Bartels–Stewart algorithm computes $X$ by applying the following steps:
1. Compute the real Schur decompositions

$$R = U^{\mathsf{T}} A U, \qquad S = V^{\mathsf{T}} B^{\mathsf{T}} V.$$

The matrices $R$ and $S$ are block upper triangular, with diagonal blocks of size $1 \times 1$ or $2 \times 2$.
2. Set $F = U^{\mathsf{T}} C V$.
3. Solve the simplified system $RY - YS^{\mathsf{T}} = F$, where $Y = U^{\mathsf{T}} X V$. This can be done using forward substitution on the blocks. Specifically, if $s_{k-1,k} = 0$, then

$$(R - s_{kk} I) y_k = f_k + \sum_{j=k+1}^{n} s_{kj} y_j,$$

where $y_k$ is the $k$th column of $Y$. When $s_{k-1,k} \neq 0$, columns $y_{k-1}$ and $y_k$ should be concatenated and solved for simultaneously.
4. Set $X = U Y V^{\mathsf{T}}$.
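The following Python sketch mirrors these steps using the complex Schur form, which makes every diagonal block $1 \times 1$ and so avoids the $2 \times 2$-block bookkeeping above; it is an illustration under that simplification, not the reference implementation. (SciPy's scipy.linalg.solve_sylvester provides a production Bartels–Stewart solver; note that it solves $AX + XB = Q$, so $-B$ must be passed to match the sign convention used here.)

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def bartels_stewart(A, B, C):
    """Solve AX - XB = C via Schur decompositions (complex-form sketch)."""
    # Step 1: R = U^H A U and S = V^H B V, both upper triangular.
    R, U = schur(A, output="complex")
    S, V = schur(B, output="complex")
    F = U.conj().T @ C @ V              # step 2: transformed right-hand side
    m, n = F.shape
    Y = np.zeros((m, n), dtype=complex)
    for k in range(n):                  # step 3: substitution, column by column
        rhs = F[:, k] + Y[:, :k] @ S[:k, k]
        Y[:, k] = solve_triangular(R - S[k, k] * np.eye(m), rhs)
    X = U @ Y @ V.conj().T              # step 4: transform back
    return X.real if all(np.isrealobj(M) for M in (A, B, C)) else X

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((5, 3))
X = bartels_stewart(A, B, C)
print(np.allclose(A @ X - X @ B, C))    # expected: True
```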
Computational cost
Using the QR algorithm, the real Schur decompositions in step 1 require approximately $10(m^3 + n^3)$ flops, so that the overall computational cost of the algorithm is $\mathcal{O}(m^3 + n^3)$.
Simplifications and special cases
In the special case where $B = -A^{\mathsf{T}}$ and $C$ is symmetric, the solution $X$ will also be symmetric. This symmetry can be exploited so that $Y$ is found more efficiently in step 3 of the algorithm.
The Hessenberg–Schur algorithm
The Hessenberg–Schur algorithm replaces the decomposition $R = U^{\mathsf{T}} A U$ in step 1 with the decomposition $H = Q^{\mathsf{T}} A Q$, where $H$ is an upper-Hessenberg matrix. This leads to a system of the form $HY - YS^{\mathsf{T}} = F$ that can be solved using forward substitution. The advantage of this approach is that $H = Q^{\mathsf{T}} A Q$ can be found using Householder reflections at a cost of $\tfrac{5}{3} m^3$ flops, compared to the $10 m^3$ flops required to compute the real Schur decomposition of $A$.
Software and implementation
The subroutines required for the Hessenberg-Schur variant of the Bartels–Stewart algorithm are implemented in the SLICOT library. These are used in the MATLAB control system toolbox.
Alternative approaches
For large systems, the cost of the Bartels–Stewart algorithm can be prohibitive. When and are sparse or structured, so that linear solves and matrix vector multiplies involving them are efficient, iterative algorithms can potentially perform better. These include projection-based methods, which use Krylov subspace iterations, methods based on the alternating direction implicit (ADI) iteration, and hybridizations that involve both projection and ADI. Iterative methods can also be used to directly construct low rank approximations to when solving .
References
Algorithms
Control theory
Matrices
Numerical linear algebra | Bartels–Stewart algorithm | Mathematics | 580 |
22,119,853 | https://en.wikipedia.org/wiki/National%20apportionment%20of%20MP%20seats%20in%20the%20Riksdag | The electoral system in Sweden is proportional. Of the 349 seats in the national diet, the unicameral Riksdag, 310 are fixed constituency seats (fasta valkretsmandat) allocated to constituencies in relation to the number of people entitled to vote (röstberättigade) in each constituency. The remaining 39 leveling seats (utjämningsmandat) are used to correct the deviations from proportional national distribution that may arise when allocating the fixed constituency seats. The system is constrained so that only a party that has received at least four per cent of the votes in the whole country participates in the distribution of seats; however, a party that has received at least twelve per cent of the votes in a constituency participates in the distribution of the fixed constituency seats in that constituency.
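Sweden distributes seats among parties with the adjusted odd-number method (modified Sainte-Laguë). The Python sketch below illustrates the mechanics under stated assumptions: a first divisor of 1.2 (the value used since 2018; it was 1.4 before), a hard four-per-cent national threshold, and invented party names and vote totals.

```python
def sainte_lague(votes, seats, first_divisor=1.2, threshold=0.04):
    """Adjusted odd-number method: a party's comparison number is
    votes / first_divisor for its first seat, then votes / (2s + 1)."""
    total = sum(votes.values())
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    alloc = dict.fromkeys(votes, 0)

    def comparison(party):
        s = alloc[party]                  # seats won so far
        return eligible[party] / (first_divisor if s == 0 else 2 * s + 1)

    for _ in range(seats):
        winner = max(eligible, key=comparison)
        alloc[winner] += 1
    return alloc

# Invented example: party D falls under the 4% threshold and wins nothing.
votes = {"A": 430_000, "B": 310_000, "C": 150_000, "D": 35_000}
print(sainte_lague(votes, seats=11))      # {'A': 5, 'B': 4, 'C': 2, 'D': 0}
```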
Apportionment of fixed constituency seats
Notes
Riksdag
Numbering in politics | National apportionment of MP seats in the Riksdag | Mathematics | 167 |
2,282,614 | https://en.wikipedia.org/wiki/Antonius%20van%20den%20Broek | Antonius Johannes van den Broek (4 May 1870 – 25 October 1926) was a Dutch mathematical economist and amateur physicist, notable for being the first to realize that the position of an element in the periodic table (now called atomic number) corresponds to the charge of its atomic nucleus. This hypothesis was published in 1911 and inspired the experimental work of Henry Moseley, who found good experimental evidence for it by 1913.
Life
Van den Broek was the son of a civil law notary and trained to be a lawyer himself. He studied at Leiden University and at the Sorbonne in Paris, obtaining a degree in 1895 in Leiden. From 1895 to 1900 he held a lawyer's office in The Hague, after which he studied mathematical economy in Vienna and Berlin. However, from 1903 on, his main interest was physics. Much of the time between 1903 and 1911 he lived in France and Germany, and most of his papers he wrote between 1913 and 1916 while living in Gorssel. He married Elisabeth Margaretha Mauve in 1906, with whom he had five children.
Major contribution to science
The idea of the direct correlation of the charge of the atom nucleus and the periodic table was contained in his paper published in Nature on 20 July 1911, just one month after Ernest Rutherford published the results of his experiments that showed the existence of a small charged nucleus in an atom (see Rutherford model). However, Rutherford's original paper noted only that the charge on the nucleus was large, on the order of about half of the atomic weight of the atom, in whole number units of hydrogen mass. Rutherford on this basis made the tentative suggestion that atomic nuclei are composed of numbers of helium nuclei, each with a charge corresponding to half of its atomic weight. This consideration would make the nuclear charge nearly equal to atomic number in smaller atoms, with some deviation from this rule for the largest atoms, such as gold. For example, Rutherford found the charge on gold to be about 100 units and thought perhaps that it might be exactly 98 (which would be close to half its atomic weight). But gold's place in the periodic table (and thus its atomic number) was known to be 79.
Thus Rutherford did not make the proposal that the number of charges in the nucleus of an atom might be exactly equal to its place on the periodic table (atomic number). This hypothesis was put forward by Van den Broek. At that time, the number marking an element's place in the periodic table (its atomic number) was not thought by most physicists to be a physical property. It was not until Henry Moseley, working with the Bohr model of the atom with the explicit idea of testing Van den Broek's hypothesis, that it was realized that atomic number is indeed a purely physical property (the charge of the nucleus) which could be measured, and that Van den Broek's original guess had been correct, or very close to being correct. Moseley's work found (see Moseley's law) that the X-ray frequencies were best described by the Bohr equation together with an effective nuclear charge of Z − 1, where Z is the atomic number.
Henry Moseley, in his paper on atomic number and X-ray emission, mentions only the models of Rutherford and Van den Broek.
References
H. A. M. Snelders (1979) BROEK, Antonius Johannes van den (1870-1926), Biografisch Woordenboek van Nederland 1, The Hague. (in Dutch)
E. R. Scerri (2007) The Periodic Table, Its Story and Its Significance, Oxford University Press
E.R. Scerri (2016) A Tale of Seven Scientists and A New Philosophy of Science, chapter 3, Oxford University Press
External links
1870 births
1926 deaths
20th-century Dutch lawyers
20th-century Dutch physicists
People involved with the periodic table
Leiden University alumni
People from Zoetermeer
University of Paris alumni
Dutch expatriates in France | Antonius van den Broek | Chemistry | 808 |
7,912,244 | https://en.wikipedia.org/wiki/Fuzzy%20backup | A fuzzy backup is a secondary (or backup) copy of data file(s) or directories that were in one state when the backup started, but in a different state by the time the backup completed. This may result in the backup copy being unusable because of the data inconsistencies. Although the backup process might have seemed successful, the resultant copies of the files or directories could be useless because a restore would yield inconsistent and unusable data.
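A minimal Python sketch of the failure mode, using an in-memory dictionary to stand in for two files on disk (the file names, values, and consistency rule are all invented for illustration):

```python
# A toy "application state" spread across two files: the pair is consistent
# only when the two numbers sum to 100.
files = {"a.txt": "50", "b.txt": "50"}

def app_update(x):
    # The application rewrites both files; between the two writes the
    # on-disk state is transiently inconsistent.
    files["a.txt"] = str(x)
    files["b.txt"] = str(100 - x)

backup = {}
backup["a.txt"] = files["a.txt"]   # the backup reads the first file...
app_update(70)                     # ...the application updates meanwhile...
backup["b.txt"] = files["b.txt"]   # ...so the second file is from a newer state.

print(backup, "consistent:", int(backup["a.txt"]) + int(backup["b.txt"]) == 100)
# -> {'a.txt': '50', 'b.txt': '30'} consistent: False  (a fuzzy backup)
```

Real backup tools avoid this by quiescing the application, using filesystem or volume snapshots, or re-copying files whose modification time changed during the run.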
References
"Fuzzy Backups"
IBM Tivoli Storage Manager and Open Files Backup
Computer data | Fuzzy backup | Technology | 115 |
73,228,040 | https://en.wikipedia.org/wiki/Potassium%20stearate | Potassium stearate is a metal-organic compound, a salt of potassium and stearic acid, with the chemical formula CH3(CH2)16COOK. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Synthesis
Potassium stearate may be prepared by saturating a hot alcoholic solution of stearic acid with alcoholic potash.
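The preparation is a simple acid–base neutralization; written out (a standard textbook equation, not taken from the source):

    CH3(CH2)16COOH + KOH → CH3(CH2)16COOK + H2O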
Physical properties
The compound forms colorless crystals.
It is slightly soluble in cold water, soluble in hot water and ethanol, and insoluble in ether, chloroform, and carbon disulfide. It is a component of liquid soap.
Uses
The compound is primarily used as an emulsifier in cosmetics and in food products. It is also used as a cleansing ingredient and lubricant.
Hazards
Causes skin irritation and serious eye irritation.
References
Stearates
Potassium compounds | Potassium stearate | Chemistry | 167 |
8,920,916 | https://en.wikipedia.org/wiki/John%20Marburger | John Harmen "Jack" Marburger III (February 8, 1941 – July 28, 2011) was an American physicist who directed the Office of Science and Technology Policy in the administration of President George W. Bush, serving as the Science Advisor to the President. His tenure was marred by controversy regarding his defense of the administration against allegations from over two dozen Nobel Laureates, amongst others, that scientific evidence was being suppressed or ignored in policy decisions, including those relating to stem cell research and global warming. However, he has also been credited with keeping the political effects of the September 11 attacks from harming science research—by ensuring that tighter visa controls did not hinder the movement of those engaged in scientific research—and with increasing awareness of the relationship between science and government. He also served as the President of Stony Brook University from 1980 until 1994, and director of Brookhaven National Laboratory from 1998 until 2001.
Early life
Marburger was born in Staten Island, New York, to Virginia Smith and John H. Marburger Jr., and grew up in Severna Park, Maryland. He attended Princeton University, graduating in 1962 with a B.A. in physics, followed by a Ph.D. in applied physics from Stanford University in 1967.
After completing his education, he served as a professor of physics and electrical engineering at the University of Southern California beginning in 1966, specializing in the theoretical physics of nonlinear optics and quantum optics, and co-founded the Center for Laser Studies at that institution. He rose to become chairman of the physics department in 1972, and then dean of the College of Letters, Arts and Sciences in 1976. He was engaged as a public speaker on science, including hosting a series of educational television programs on CBS. He was also outspoken on campus issues, and was designated the university's spokesperson during a scandal over preferential treatment of athletes.
Stony Brook University
In 1980, Marburger left USC to become the third president of Stony Brook University in Long Island, New York. At the time, state budget cuts were afflicting the university, and he returned it to growth with increases in the university's science research funding from the federal government.
From 1988 to 1994, Marburger chaired Universities Research Association, the organization that operated Fermilab and oversaw construction of the ill-fated Superconducting Super Collider, an experience that is credited with convincing him of the influence government had in how science is carried out. During this time he also served as a trustee of Princeton University. He stepped down as President of Stony Brook University in 1994, and began doing research again as a member of the faculty.
Chair of Shoreham commission
In 1983, he was picked by New York Governor Mario Cuomo to chair a scientific fact-finding commission on the Shoreham Nuclear Power Plant, a job that required him to find common ground between the many viewpoints represented on the commission. The commission eventually recommended the closure of the plant, a course he personally disagreed with. Cuomo had formed the commission in mid-May 1983 to provide him with recommendations regarding the plant's safety, the adequacy of emergency plans, and the economics of operating the plant. The commission's consensus recommendations included unanimous findings that no emergency evacuation of the plant could be conducted without the cooperation of Suffolk County, which was refusing to approve an evacuation plan; that the construction of the plant would have been prevented if it had been started after new Nuclear Regulatory Commission regulations were put into effect after the Three Mile Island accident in 1979; and that operating the plant would not reduce utility costs. Marburger himself at the time emphasized that the governor had not been seeking a consensus but rather encouraged multiple viewpoints to be reflected, and characterized the consensus conclusions as not the only important section of the report.
Marburger characterized his participation as a learning experience, and the experience was credited with profoundly changing his view on the relationship between the scientific community and the public. He had never been to a public hearing prior to his participation in the Shoreham commission, and he said that he had initially expected that the issues could be resolved by examining scientific data and establishing failure probabilities. However, he quickly became aware of the importance of the public participation process itself, stating that it was "one of the rare opportunities for the public to feel they were being heard and taken seriously." Marburger's conduct on the committee was praised by activists on both sides of the debate, with his focus on listening to all viewpoints and his ability to not take disagreements personally being especially noted.
Brookhaven National Laboratory
In January 1998, Marburger became president of Brookhaven Science Associates, which subsequently won a bid to operate Brookhaven National Laboratory for the federal government, and he became the director of the lab. He took office after a highly publicized scandal in which tritium leaked from the lab's High Flux Beam Reactor, leading to calls by activists to shut down the lab. Rather than directly oppose the activists, Marburger created policies that improved the environmental management of the lab as well as community involvement and transparency. Marburger also presided over the commissioning of the Relativistic Heavy Ion Collider, expanded the lab's program in medical imaging and neuroscience, and placed more emphasis on its technology transfer program.
The tritium leak, combined with other disclosures about improper handling and disposal of hazardous waste, had caused Secretary of Energy Federico Peña to fire the lab's previous manager, Associated Universities, Inc. Upon starting as the laboratory's director, Marburger noted the increased importance of health and environmental concerns since the beginning of the Cold War, stating that "getting the people at Brookhaven to understand that won't be simple, and there may be some disagreement on how we should do it, but that's my job." Marburger set up a permanent community advisory council and met with local environmental groups to increase communication between them and the laboratory's management. By 2001, when Marburger left to join the Bush administration, local environmental groups credited him with having largely dissipated the distrust that had existed between the groups when he started.
In 2001 he was elected a Fellow of the American Physical Society for "his contributions to laser physics and for his scientific leadership as Director of Brookhaven National Laboratory".
Bush administration
In September 2001, Marburger became Director of the Office of Science and Technology Policy under George W. Bush. Marburger was a noted Democrat, a fact that Nature magazine stated was relevant to the decision by the administration to take the unusual step of withholding from Marburger the title of Assistant to the President that previous science advisors had been granted.
His tenure was marked by controversy as he defended the Bush administration from accusations that political influence on science was distorting scientific research in federal agencies and that scientific evidence was being suppressed or ignored in policy decisions, especially on the topics of abstinence-only birth control education, climate change policy, and stem cell research. Marburger defended the Bush Administration from these accusations, saying they were inaccurate or motivated by partisanship, especially on the issue of science funding levels. Marburger continued to be personally respected by many of his academic colleagues.
Marburger's tenure as Director was the longest in the history of that post. After the September 11 attacks, he helped to establish the DHS Directorate for Science and Technology within the new Department of Homeland Security. He has been called a central player opposing new restrictions of international scientific exchanges of people and ideas after the attacks. He later was responsible for reorienting the nation's space policy after the Space Shuttle Columbia disaster, and played an important part in the nation's re-entry into the International Thermonuclear Experimental Reactor program. Marburger was also known for his support of the emerging field of science of science policy, which seeks to analyze how science policy decisions affects a nation's ability to produce and benefit from innovation.
In February 2004, the Union of Concerned Scientists published a report accusing the Bush administration of manipulating science for political purposes, listing more than 20 alleged incidents of censoring scientific results or applying a litmus test in the appointment of supposedly scientific advisory panel members. In April 2004, Marburger published a statement rebutting the report and exposing errors and incomplete explanations in it, and stating that "even when the science is clear—and often it is not—it is but one input into the policy process," but "in this Administration, science strongly informs policy." The Union of Concerned Scientists issued a revised version of their report after Marburger's statement was published. Marburger also called the report's conclusions illusory and the result of focusing on unrelated incidents within a vast government apparatus, and attributed the controversy as being related to the upcoming elections. It was noted that Marburger enjoyed close personal relationships with President Bush, White House Chief of Staff Andrew Card and Office of Management and Budget Director Joshua Bolten, attesting to his active involvement within the administration.
Marburger responded to criticism of his support for Bush administration policies in 2004, stating "No one will know my personal positions on issues as long as I am in this job. I am here to make sure that the science input to policy making is sound and that the executive branch functions properly with respect to its science and technology missions." On the topic of stem cell research, he in 2004 said that stem cells "offer great promise for addressing incurable diseases and afflictions. But I can't tell you when a fertilized egg becomes sacred. That's not my job. That's not a science issue. And so whatever I think about reproductive technology or choice, or whatever, is irrelevant to my job as a science adviser." However, in February 2005, in a speech at the annual conference of the National Association of Science Writers, he stated, "Intelligent design is not a scientific theory.... I don't regard intelligent design as a scientific topic". Also In 2005, he told The New York Times that "global warming exists, and we have to do something about it."
Sherwood Boehlert, the Republican chair of the House Committee on Science during most of Marburger's tenure, said that "the challenge he faced was serving a president who didn't really want much scientific advice, and who let politics dictate the direction of his science policy... and he was in the unenviable position of being someone who had earned the respect of his scientific colleagues while having to be identified with policies that were not science-based." On the other hand, Robert P. Crease, a colleague of Marburger at Stony Brook University, characterized him as someone who "[went] to the White House as a scientist, not an advocate. He refused to weigh in on high-profile, politically controversial issues, but instead set about fixing broken connections in the unwieldy machinery by which the government approves and funds scientific projects.... Some bitterly criticized him for collaborating with the Bush administration. But he left the office running better than when he entered."
Later life
Marburger returned to Stony Brook University as a faculty member in 2009, and co-edited the book The Science of Science Policy: A Handbook, which was published in 2011. He also served as Vice President for Research but stepped down on July 1, 2011. Marburger died Thursday, July 28, 2011, at his home in Port Jefferson, New York, after four years of treatment for non-Hodgkin's lymphoma. He was survived by his wife, two sons, and a grandson. His final publication, a book on quantum physics for laypeople called Constructing Reality: Quantum Theory and Particle Physics, was published shortly after his death.
References
External links
|-
1941 births
2011 deaths
20th-century American writers
21st-century American non-fiction writers
American nuclear physicists
American science writers
United States biotechnology law
Brookhaven National Laboratory staff
California Democrats
Deaths from lymphoma in New York (state)
Deaths from non-Hodgkin lymphoma
Energy policy of the United States
Fellows of the American Physical Society
Fermilab
George W. Bush administration personnel
NASA oversight
New York (state) Democrats
Nonlinear optics
Nuclear energy in the United States
People from Severna Park, Maryland
Scientists from Los Angeles
People from Port Jefferson, New York
People from Staten Island
Princeton University alumni
Quantum optics
American quantum physicists
Space policy
Stanford University alumni
Stem cell research
Presidents of Stony Brook University
American theoretical physicists
United States Department of Homeland Security officials
University of Southern California faculty
Scientists from New York (state)
Directors of the Office of Science and Technology Policy | John Marburger | Physics,Chemistry,Biology | 2,582 |
16,721,466 | https://en.wikipedia.org/wiki/Tropane%20alkaloid | Tropane alkaloids are a class of bicyclic alkaloids and secondary metabolites that contain a tropane (azabicyclo[3.2.1]octane) ring in their chemical structure. Tropane alkaloids occur naturally in many members of the plant family Solanaceae. Certain tropane alkaloids such as cocaine and scopolamine are notorious for their psychoactive effects, related usage and cultural associations. Particular tropane alkaloids such as these have pharmacological properties and can act as anticholinergics or stimulants.
Classification
Anticholinergics
Anticholinergic drugs and deliriants:
Atropine, racemic hyoscyamine, from the deadly nightshade (Atropa belladonna)
Hyoscyamine, the levo-isomer of atropine, from henbane (Hyoscyamus niger), mandrake (Mandragora officinarum) and the sorcerers' tree (Latua pubiflora).
Scopolamine, from henbane and Datura species (Jimson weed)
All three acetylcholine-inhibiting chemicals can also be found in the leaves, stems, and flowers in varying, unknown amounts in Brugmansia (angel trumpets), a relative of Datura. The same is also true of many other plants belonging to subfamily Solanoideae of the Solanaceae, the alkaloids being concentrated particularly in the leaves and seeds. However, the concentration of alkaloids can vary greatly, even from leaf to leaf and seed to seed.
Stimulants
Stimulants and cocaine-related alkaloids:
Cocaine, from coca plant (Erythroxylum coca)
Ecgonine, a precursor and metabolite of cocaine
Benzoylecgonine, a metabolite of cocaine
Hydroxytropacocaine, from coca plant (Erythroxylum coca)
Methylecgonine cinnamate, from coca plant (Erythroxylum coca)
Others
Catuabines, found in catuaba, an infusion or dry extract made from Erythroxylum vaccinifolium
Scopine
Synthetic analogs of tropane alkaloids also exist, such as the phenyltropanes. Not being of natural origin, they are by definition not considered alkaloids.
Biosynthesis
The biosynthesis of the tropane alkaloids has attracted intense interest because of their high physiological activity as well as the presence of the bicyclic tropane core.
References
Deliriants | Tropane alkaloid | Chemistry | 549 |
1,267,110 | https://en.wikipedia.org/wiki/Radical%20substitution | In organic chemistry, a radical-substitution reaction is a substitution reaction involving free radicals as a reactive intermediate.
The reaction always involves at least two steps, and possibly a third.
In the first step, called initiation, a free radical is created by homolysis. Homolysis can be brought about by heat or ultraviolet light, but also by radical initiators such as organic peroxides or azo compounds; UV light, for example, splits one diatomic molecule into two free radicals. If the newly created radical goes on to react further, the steps in which new radicals are formed and then react in turn are collectively known as propagation, because each such step produces a new radical able to participate in secondary reactions. The final step, called termination, occurs when a radical recombines with another radical species; once the radicals are consumed, the chain stops.
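A standard worked example (the free-radical chlorination of methane, a textbook case rather than one drawn from this article) shows all three stages:

    Initiation:    Cl2 → 2 Cl•              (UV light)
    Propagation:   Cl• + CH4 → HCl + CH3•
                   CH3• + Cl2 → CH3Cl + Cl•
    Termination:   Cl• + Cl• → Cl2
                   CH3• + Cl• → CH3Cl
                   CH3• + CH3• → CH3CH3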
Radical substitution reactions
In free radical halogenation reactions, radical substitution takes place with halogen reagents and alkane substrates. Another important class of radical substitutions involves aryl radicals. One example is the hydroxylation of benzene by Fenton's reagent. Many oxidation and reduction reactions in organic chemistry have free radical intermediates, for example the oxidation of aldehydes to carboxylic acids with chromic acid. Coupling reactions can also be considered radical substitutions. Certain aromatic substitutions take place by radical-nucleophilic aromatic substitution. Auto-oxidation is a process responsible for the deterioration of paints and food, as well as the production of certain lab hazards such as diethyl ether peroxide.
More radical substitutions are listed below:
The Barton–McCombie deoxygenation involves substitution of a hydroxyl group for a proton.
The Wohl–Ziegler reaction involves allylic bromination of alkenes.
The Hunsdiecker reaction converts silver salts of carboxylic acids to alkyl halides.
The Dowd–Beckwith reaction involves ring expansion of cyclic β-keto esters.
The Barton reaction involves synthesis of nitrosoalcohols from nitrites.
The Minisci reaction involves generation of an alkyl radical from a carboxylic acid and a silver salt, and subsequent substitution at an aromatic compound
References
Free radical reactions
Reaction mechanisms | Radical substitution | Chemistry | 483 |
13,063,435 | https://en.wikipedia.org/wiki/Ferguson%20Electronics | Ferguson Electronics (formerly known as Ferguson Radio Corporation) is an electronics company specializing in small electronics items such as radios and set top boxes.
History
Ferguson is one of the older electronics companies, alongside Ultra, Dynatron, Pye and Bush in the United Kingdom. It was originally an American–Canadian pre-War company making radio sets for the U.K. market based upon contemporary American models. After World War II, it became Ferguson Radio Corporation, making radio receivers and, later, televisions. Later still, it became part of the British Radio Corporation. It was taken over by Thorn Electrical Industries in the late 1950s, but the Ferguson name continued to be used by Thorn, and its successor Thorn EMI.
Throughout the company's early history, Ferguson products were very popular across its wide customer base. By the early 1960s its product range included a comprehensive selection of audio and TV equipment, from small battery-operated portable transistor radios to solid-oak, 6-foot-wide, hydraulic-lid radiograms sporting fully automatic stackable Garrard turntables, multi-channel radios and 2-foot-wide stereo speakers, which were commonplace in many UK households. Open-reel tape recorders and hi-fis followed.
Sales held well, with 1980s new introductions including personal cassette players, CD players and video recorders.
The 1980s saw much competition from foreign brands such as JVC, Tandy, Hitachi and Sanyo. This took its toll on the Ferguson brand and in 1987 it was sold off to the French electronics company Thomson. Thomson group itself subsequently withdrew from the competitive European consumer electronics market. In the UK, the Ferguson brand was licensed initially to DSI (Dixons and Currys). DSI ceased using it in 2006 and competitor Comet took up the licence and used it until 2012. Comet used the brand on Freeview and Freesat set-top boxes, DVD players and DAB radios. Although Comet went into administration in November 2012, it had discontinued using the Ferguson brand earlier in the year.
Today
In 2017 Ferguson in the UK was relaunched by a British television manufacturer Cello Electronics. Cello has licensed the name from Technicolor (Thomson) to be used for a new range of televisions manufactured in County Durham.
UK Trademarks
There are currently several Ferguson trademarks registered for class 09, audio visual equipment, in the UK:
EU010787471, registered by Ferguson Sp. z o.o. on 04/04/2012 but opposed; UK00000652009 and EU003131927, registered by Thomson Multimedia (now Technicolor), dating back to 26/09/1946.
References
External links
List of Ferguson televisions
Ferguson UK Website
British brands
Electronics companies of the United Kingdom
Electronics industry in London
Manufacturing companies of the United Kingdom
Radio manufacturers | Ferguson Electronics | Engineering | 565 |
27,314,842 | https://en.wikipedia.org/wiki/Bublitz%20Case%20Company | The Bublitz Case Company was a manufacturer of musical instrument cases in Elkhorn, Wisconsin. Assets of the Bublitz Case Company were bought by G. Leblanc Corporation, a manufacturer of musical instruments in Kenosha, Wisconsin.
The Bublitz Case Company manufactured over a hundred models of cases for clarinets, oboes, bassoons, flutes, piccolos, trumpets, cornets, saxophones, trombones, French horns, baritones and tubas, as well as a case for television test instrumentation. In 1948 the company produced eighteen thousand cases. During the 1950s and 1960s Bublitz manufactured over twenty-five thousand cases annually.
Early beginnings
William Frank "Bill" Bublitz (4 May 1900 - 3 July 1962), of Elkhorn, Wisconsin, the son of a nearby farmer, began making violins in 1912 when he was only twelve years old. On July 9, 1921, the violin maker filed for a patent on his newly invented "Cramping Form for Violin Bouts" and received the patent on May 9, 1922. "A general line of musical instruments was announced by William F. Bublitz, who opened this week (February 17, 1923) in Elkhorn, Wis." "The Wisconsin inventor initially specialized in making violins especially adapted for juveniles. These violins were about three-quarters the regular size which were particularly suited for a boy or girl who found it difficult to handle adult violins. Mr. Bublitz inherited his love for music from his grandfather, Carl Bublitz. The latter was a violinist in Leipzig, Germany. The younger Bublitz was skilled as a violinist, but modestly said that his knowledge was acquired to enable him to better perfect the instruments that he makes." Although the business was originally located in Elkhorn, it later moved to Burlington, until a fire destroyed the business.
Founding
William Frank "Bill" Bublitz then founded the case company after making violins as a young skilled wood carver and teaching students to play the violin in nearby Burlington, Wisconsin. A few of his violins still exist today. After his business burned down, Bill began work in the case department of the Frank Holton Company. Later he managed the Elkhorn Case Company until he began his own case manufacturing business. Two men and three women were employed in Bill's factory when he closed the business to enter military service during World War II. After his discharge from the Army in 1945, Bill resumed his manufacturing business. In 1947, Bill's brother, Robert Earnest "Bob" Bublitz (24 November 1916 - 30 November 1973), who prior to the war had managed the plating department at the Frank Holton Company, joined Bill. The Bublitz Case Company was initially located in the rear of a garage behind the home of Bill's mother, Tina, at 209 West Page Street.
Expansion
As the business grew, the factory was expanded three times over adjacent lots behind Bill's and his brother Bob's homes. The business usually employed between fifteen and twenty-five employees. The cases were sold all over the United States and in Europe. The primary customers were musical instrument manufacturers such as Leblanc, W.T. Armstrong Company, Getzen, Allied, Gemeinhardt and Buffet. Additionally, the cases were sold to large wholesalers and jobbers, as well as a TV test instrument manufacturer.
Personnel
Bill was the general manager and in charge of the wood shop, lining and packing departments. Bob managed the covering and hardware installation departments. Both Bill and Bob worked alongside their employees in every phase of the work. Bill made most of the specialized machinery and molds for manufacturing the cases. He also designed and constructed the forms from which trombone, French horn, baritone, trumpet and cornet cases were molded from layers of basswood veneer.
Design and materials
Basswood was used for the side frames, and the tops and bottoms were made from either lauan mahogany, birch or molded basswood. The cases were usually covered with a black leatherette, and the insides were lined with a blue or red plush. The collegiate models of clarinet, trumpet, cornet and saxophone cases were sewn with a leather binding, and metal corner protectors were riveted in place for durability. French horn, trombone, tuba and baritone cases had a steel banding installed on the top to make the case rugged and prevent sheet music from falling out of the case. Quality craftsmanship was always a concern: Bill or Bob personally inspected every case that left the factory. The factory was one of the first in Southeastern Wisconsin to be completely air conditioned.
Leadership changes
Following Bill's death in 1962, Robert Bublitz became the general manager and was joined by another brother, Gustave Julius William "Gus" Bublitz (22 June 1906 - 5 July 1976), who had worked in Holton's trombone department. Bob and Gus purchased the shares in the company inherited by Mrs. William Bublitz. Upon incorporation, Gus became president; Robert Joseph Bublitz, vice president; and Robert Ernest Bublitz, general manager, secretary and treasurer.
Changes in ownership
During the summer of 1966, due to working capital and inventory shortages brought on by the purchase of Mrs. Bublitz's interest, the business was sold to one of its primary long-term customers, Leblanc. Two years later the business was moved to the second floor of the Holton Company for better work flow. Bob continued as the general manager of the company until he resigned in 1972. Later the name of the business unit was changed to Leblanc's Case Division. In 2005 the manufacturing equipment was crated and shipped to China. During the fall of 2008 the Holton factory was closed and the manufacturing equipment was moved to Eastlake, Ohio, home town of the H. N. White Company, manufacturer of the King line of musical instruments. By 2010, many of the companies that made up the musical instrument manufacturing industry, including C.G. Conn and Selmer, had merged into one musical instrument manufacturing conglomerate, Steinway Musical Instruments, Inc. (NYSE: LVB until taken private in 2013).
Elkhart, Indiana, once the capital of musical instrument manufacturing, went from over sixty companies in the 1960s to just three in 2010. The musical instrument industry has changed drastically because of consolidation, offshoring of manufacturing to take advantage of lower labor costs, inexpensive imports and reduced school budgets for music.
References
External links
National Music Museum, The University of South Dakota, 414 East Clark Street, Vermillion, SD 57069
http://orgs.usd.edu/nmm/
Musical instrument parts and accessories
Manufacturing companies based in Wisconsin | Bublitz Case Company | Technology | 1,365 |
38,268,733 | https://en.wikipedia.org/wiki/Quantum%20carpet | In quantum mechanics, a quantum carpet is a regular, art-like pattern traced out by the evolution of the wave function, or of the probability density, over the Cartesian product of the particle's position coordinate and time (that is, in spacetime), resembling carpet art. It is the result of self-interference of the wave function during its interaction with reflecting boundaries. For example, in the infinite potential well, after the spread of an initially localized Gaussian wave packet centered in the well, various pieces of the wave function start to overlap and interfere with each other after reflection from the boundaries. The geometry of a quantum carpet is mainly determined by quantum fractional revivals.
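A minimal numerical sketch of such a carpet can be produced by expanding a Gaussian packet in the eigenstates of the infinite well and evolving the phases. The Python sketch below assumes units with ħ = m = 1; the well width, packet width and grid resolutions are illustrative choices, not values from any specific study:

```python
import numpy as np

# Infinite square well on [0, L]; units chosen so that hbar = m = 1.
L = 1.0
N_STATES = 60                                    # eigenstates kept in the expansion
x = np.linspace(0.0, L, 400)
t = np.linspace(0.0, 4.0 * L**2 / np.pi, 400)    # spans one full revival period

def eigenstate(n, x):
    """Normalized eigenfunction sqrt(2/L) * sin(n pi x / L)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Initial state: Gaussian packet centered in the well (illustrative width).
sigma = 0.05
psi0 = np.exp(-((x - L / 2) ** 2) / (2 * sigma ** 2))
psi0 = psi0 / np.sqrt(np.trapz(np.abs(psi0) ** 2, x))

# Expansion coefficients c_n = <phi_n | psi0> by numerical quadrature.
ns = np.arange(1, N_STATES + 1)
phis = np.array([eigenstate(n, x) for n in ns])  # shape (N_STATES, len(x))
cs = np.trapz(phis * psi0, x, axis=1)            # shape (N_STATES,)

# Eigenenergies E_n = n^2 pi^2 / (2 L^2); each mode evolves as exp(-i E_n t).
Es = ns ** 2 * np.pi ** 2 / (2 * L ** 2)
phases = np.exp(-1j * np.outer(t, Es))           # shape (len(t), N_STATES)
psi_xt = phases @ (cs[:, None] * phis)           # shape (len(t), len(x))

# The probability density over (t, x) is the quantum carpet itself;
# plotting it (e.g. with matplotlib's imshow) reveals the canals and
# ridges produced by self-interference after reflection from the walls.
carpet = np.abs(psi_xt) ** 2
```

Plotting `carpet` as an image over the x–t plane reproduces the characteristic pattern of canals and ridges described below.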
Quantum carpets demonstrate many principles of quantum mechanics, including wave-particle duality, quantum revivals, and decoherence. Thus, they illustrate certain aspects of theoretical physics.
In 1995, Michael Berry created the first quantum carpet, which described the momentum of an excited atom. Today, physicists use quantum carpets to demonstrate complex theoretical principles.
Quantum carpets that demonstrate theoretical principles
Wave-particle duality
Quantum carpets demonstrate wave-particle duality by showing interference within wave packets.
Wave-particle duality is difficult to comprehend, but quantum carpets provide an opportunity to visualize this property. Consider the graph of the probability distribution of an excited electron in a confined space (a particle in a box), where brightness of color corresponds to momentum. Lines of dull color (ghost terms, or canals) appear across the quantum carpet. In these canals, the momentum of the electron is very small. Destructive interference, when the trough of one wave overlaps with the crest of another, causes these ghost terms. In contrast, some areas of the graph display bright color. Constructive interference, when the crests of two waves overlap to form a larger wave, causes these bright colors. Thus, quantum carpets provide visual evidence of interference within electrons and other wave packets. Interference is a property of waves, not particles, so interference within these wave packets shows that they have properties of waves in addition to properties of particles. Therefore, quantum carpets display wave-particle duality.
Quantum revivals
Quantum carpets demonstrate quantum revivals by showing the periodic expansions and contractions of wave packets.
When the momentum of a wave packet is graphed on a quantum carpet, it displays an intricate pattern. When the temporal evolution of this wave packet is graphed, the wave packet expands and the initial pattern is lost. However, after a certain period of time, the waveform contracts, returns to its original state, and the initial pattern is restored. This continues to occur with periodic regularity. Quantum revivals, the periodic expansions and contractions of wave packets, are responsible for the restoration of the pattern. Although quantum revivals are mathematically complex, they are simple and easy to visualize on quantum carpets as patterns expanding and reforming. Thus, quantum carpets provide clear visual evidence of quantum revivals.
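For the particle in an infinite square well of width L, the full revival period mentioned above even has a simple closed form (a standard result, quoted here for orientation):

```latex
T_{\mathrm{rev}} = \frac{4 m L^{2}}{\pi \hbar}
```

After this time every eigenstate phase, which evolves as $e^{-iE_n t/\hbar}$ with $E_n = n^2 \pi^2 \hbar^2 / (2 m L^2)$, has advanced by a multiple of $2\pi$, so the initial pattern reappears exactly.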
Decoherence
Quantum carpets demonstrate decoherence by showing a loss of coherence over time.
When the temporal evolution of an electron, photon, or atom is graphed on a quantum carpet, there is initially a distinct pattern. This distinct pattern shows coherence; that is, the wave can be split into two pieces and recombined to form a new wave. However, this pattern fades with time and eventually devolves into nothing. When the pattern fades, coherence is lost, and it is impossible to split the wave in two and recombine it. This loss of coherence is called decoherence. A set of complex mathematical equations models decoherence; on a quantum carpet, however, a simple loss of pattern shows it. Thus, quantum carpets are a tool to visualize and simplify decoherence.
History
While performing an experiment on optics, English physicist Henry Fox Talbot inadvertently discovered the key to quantum carpets. In this experiment, a wave struck a diffraction grating, and Talbot noticed that the pattern of the grating repeated itself at regular distances beyond it. This phenomenon became known as the Talbot effect. The bands of light that Talbot discovered were never graphed on an axis, and thus he never created a true quantum carpet; however, the bands of light were similar to the images on a quantum carpet. More than a century later, physicists graphed the Talbot effect, creating the first quantum carpet. Since then, scientists have turned to quantum carpets as visual evidence for quantum theory.
References
Quantum mechanics | Quantum carpet | Physics | 906 |
35,369,474 | https://en.wikipedia.org/wiki/Emanuels%20Gr%C4%ABnbergs | Emanuels Donats Frīdrihs Jānis Grinbergs (1911–1982, westernized as Emanuel Grinberg) was a Latvian mathematician, known for Grinberg's theorem on the Hamiltonicity of planar graphs.
Biography
Grinbergs was born on January 25, 1911, in St. Petersburg, the son of a Lutheran bishop from Latvia. Latvia became independent from Russia in 1918, and on the death of his father in 1923, Grinbergs' family returned to Riga, taking Grinbergs with them.
In 1927, he won a high school mathematics competition, the prize for which was to study in Lille, France. He then studied mathematics at the University of Latvia beginning in 1930. On graduating in 1934, he won a prize that again funded study in France; he did graduate studies in 1935 and 1936 at the École Normale Supérieure in Paris, during which he published his first paper, in geometry. He returned to the University of Latvia as a privatdozent in 1937, and joined the faculty as a dozent in 1940. His lectures at that time covered subjects including geometry, probability theory, and group theory. While there, he defended a thesis in geometry at the University of Latvia in 1943, entitled On Oscillations, Superoscillations and Characteristic Points.
In the meantime, the Soviet Union had annexed Latvia in 1940, and the army of Nazi Germany had occupied it and incorporated it into the Reichskommissariat Ostland. Grinbergs was drafted into the Latvian Legion, part of the German military, in 1944. After the war, because of his service as a German soldier, he was held prisoner in a camp in Kutaisi, Georgia, until 1946; he lost his university position, and his doctorate (awarded during the German occupation) was annulled.
Grinbergs returned to Latvia, where he became a factory worker in the Radiotehnika radio factory, while continuing to be interrogated regularly by the KGB. He developed mathematical models of electrical circuits, which he wrote up as a second thesis, Problems of analysis and synthesis of simple linear circuits, his defense of which earned him a candidate degree.
In 1954, Grinbergs was allowed to return to the University of Latvia faculty. In 1956, he joined the Institute of Physics of the Latvian Academy of Sciences, and in 1960, he began working at the Computer Center of the University of Latvia, where he remained for the rest of his career, eventually becoming Chief Scientist there.
Research
Grinbergs' initial research interests were in geometry, and later shifted to graph theory. With professors Arins and Daube at the University of Latvia, Grinbergs was one of the first to work in applied mathematics and computer science in Latvia.
Grinbergs and his collaborators wrote many papers on the design of electrical circuits and electronic filters, stemming from his radio work. He earned the State Prize of the Latvian SSR in 1980 for his research on nonlinear electronic circuit theory.
Another early line of research by Grinbergs at the Computer Center concerned the automated design of ship hulls, and the computations with spline curves and surfaces needed in this design. The goal of this research was to calculate patterns for cutting and then bending flat steel plates so that they could be welded together to form ship hulls without the need for additional machining after the bending step; the methods developed by Grinbergs were later used throughout the Soviet Union.
In graph theory, Grinbergs is best known for Grinberg's theorem, a necessary condition for a planar graph to have a Hamiltonian cycle that has been frequently used to find non-Hamiltonian planar graphs with other special properties. His researches in graph theory also concerned graph coloring, graph isomorphism, cycles in directed graphs, and a counterexample to a conjecture of András Ádám on the number of cycles in tournaments.
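The theorem is compact enough to state here in its standard form: if a planar graph embedded in the plane has a Hamiltonian cycle C, and f_k and g_k denote the numbers of k-sided faces lying inside and outside C respectively, then

```latex
\sum_{k \ge 3} (k - 2)\,(f_k - g_k) = 0 .
```

A planar graph whose face sizes make this balance impossible (for example, one in which every face but one has a size congruent to 2 modulo 3) therefore has no Hamiltonian cycle, which is how the condition is used to construct non-Hamiltonian planar graphs.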
Other topics in Grinbergs' research include Steiner triple systems, magnetohydrodynamics, operations research, and the mathematical modeling of hydrocarbon exploration.
References
1911 births
1982 deaths
University of Latvia alumni
Academic staff of the University of Latvia
Graph theorists
Latvian Waffen-SS personnel
20th-century Latvian mathematicians
Soviet mathematicians | Emanuels Grīnbergs | Mathematics | 854 |
926,962 | https://en.wikipedia.org/wiki/Podium | A podium (plural: podiums or podia) is a platform used to raise something a short distance above its surroundings. In architecture a building can rest on a large podium. Podiums can also be used to raise people; for instance, the conductor of an orchestra stands on a podium, as do many public speakers. Common parlance has shown an increasing use of podium in North American English to describe a lectern.
In sports, a type of podium can be used to honor the top three competitors in events. In the modern Olympics a tri-level podium is used. Traditionally, the highest platform is in the center for the gold medalist. To their right is a lower platform for the silver medalist, and to the left of the gold medalist is a lower platform for the bronze medalist. At the 2016 Rio Summer Olympics, the Silver and Bronze podium places were of equal elevation. In many sports, results in the top three of a competition are often referred to as podiums or podium finishes. In some individual sports, podiums is an official statistic, referring to the number of top three results an athlete has achieved over the course of a season or career. The word may also be used, chiefly in the United States, as a verb, "to podium", meaning to attain a podium place.
Etymology
The word podium derives from Latin, which in turn borrowed it from Ancient Greek podion, a diminutive of pous ("foot", with the stem pod-).
Use at modern Olympics
Podiums were first used at the 1930 British Empire Games (now Commonwealth Games) in Hamilton, Ontario and subsequently during the 1932 Winter Olympics in Lake Placid and the 1932 Summer Olympics in Los Angeles.
Podiuming has become a slang term for finishing a contest within the first three places. The use of the word podium as a verb instead of a noun is controversial. The New York Times wrote on the subject of the correct use of the word podium during its Winter Olympic coverage in 2010.
In motorsport
The winner stands in the middle, with the second placed driver to their right and the third place driver to their left. Also present are the dignitaries selected by the race organisers who will present the trophies.
In some motorsport events, including Formula One, a representative of the team that won the race will also be present at the podium, with a fourth podium step, trophy and champagne. In many forms of motorsport, the three top-placed drivers in a race stand on a podium for the trophy ceremony. In an international series, the national anthem of the winning driver, and the winning team or constructor may be played over a public address system and the flags of the drivers' countries are hoisted above them. The recordings are short versions of the national anthems, ensuring the podium ceremony stays within its allocated time. Should a driver experience problems with his car on a slow lap in Formula One, that driver is transported to the pit lane via road car by the Formula One Administration security officer.
Following the presentation of the trophies, the drivers will often spray Champagne over each other and their team members watching below, a tradition started by Dan Gurney following the 1967 24 Hours of Le Mans race. The drivers will generally refrain from spraying champagne if a fatality or major accident occurs during the event. Also, in countries where alcohol sponsorship or drinking is prohibited, alcoholic beverages may be replaced by other drinks, for example rose water.
The term has become common parlance in the media, where a driver may be said to "be heading for a podium finish" or "just missing out on a podium" when he is heading for, or just misses out on a top three finish. The NASCAR Cup Series, the highest level of stock car racing in the United States, does not use a podium in post-game events or statistics. Instead, the winning team celebrates in victory lane, and top-five and top-ten finishes are recognized statistically. Those finishing second to fifth are required to stop in a media bullpen located on pit lane for interviews. The INDYCAR NTT IndyCar Series does not use a podium at either the Indianapolis 500 or at Texas Motor Speedway. The Indy 500 has a long tradition of the winning driver and team celebrating in victory lane, while Texas Motor Speedway president Eddie Gossage has stated that victory lane should be reserved for the winner of the race. The series uses a podium at all other races, particularly road course events.
In architecture
Architectural podiums consist of a projecting base or pedestal at ground level, and they have been used since ancient times. Originally sometimes only meters tall, architectural podiums have become more prominent in buildings over time, as illustrated in the gallery.
See also
Dais
Lectern
Pedestal
Pulpit
Rostrum
Soapbox
References
Architectural elements
Public speaking | Podium | Technology,Engineering | 966 |
36,722,822 | https://en.wikipedia.org/wiki/Mill%20test%20report%20%28metals%20industry%29 | A mill test report (MTR), often also called a certified mill test report, certified material test report, mill test certificate (MTC), inspection certificate, certificate of test, or a host of other names, is a quality assurance document used in the metals industry that certifies a material's chemical and physical properties and states that a product made of metal (steel, aluminum, brass or other alloys) complies with the specific standards of an international standards organization (such as ANSI, ASME, etc.).
Mill here refers to an industry which manufactures and processes raw materials.
Steel
An MTC provides traceability and assurance to the end user about the quality of the steel used and the process used to produce it.
Typically a European MTC will be produced to EN 10204. High-quality steels for pressure-vessel or structural purposes will be declared to type 2.1 or 2.2, or certified to type 3.1 or 3.2.
The MTC will specify the type of certificate, the grade of steel and any addenda. It will also specify the results of chemical and physical examination to allow the purchaser or end user to compare the plate to the requirements of the relevant standards.
What is MTC for steel?
In steel industry
There are mainly two types of MTC in the steel industry; for steel plates or steel pipes there must be a specific inspection scope or list:
MTC EN 10204 3.1: MTC 3.1 is issued by the manufacturer, in which they declare that the products supplied are in compliance with the requirements of the order and in which they supply test results. This is the most common MTC in the steel industry, used when the customer has no extra requirement for third-party inspection (TPI) or witnessing of production and testing.
MTC EN 10204 3.2: MTC 3.2 refers to the report prepared by both the manufacturer's authorized inspection representative (independent of the manufacturing department) and either the purchaser's authorized inspection representative or the inspector designated by official regulations, in which they declare that the products supplied are in compliance with the requirements of the order and in which test results are supplied.
References
Metallurgy
Quality control | Mill test report (metals industry) | Chemistry,Materials_science,Engineering | 472 |
1,028,836 | https://en.wikipedia.org/wiki/Root%20nodule | Root nodules are found on the roots of plants, primarily legumes, that form a symbiosis with nitrogen-fixing bacteria. Under nitrogen-limiting conditions, capable plants form a symbiotic relationship with a host-specific strain of bacteria known as rhizobia. This process has evolved multiple times within the legumes, as well as in other species found within the Rosid clade. Legume crops include beans, peas, and soybeans.
Within legume root nodules, nitrogen gas (N2) from the atmosphere is converted into ammonia (NH3), which is then assimilated into amino acids (the building blocks of proteins), nucleotides (the building blocks of DNA and RNA, as well as the important energy molecule ATP), and other cellular constituents such as vitamins, flavones, and hormones. Their ability to fix gaseous nitrogen makes legumes ideal agricultural organisms, as their requirement for nitrogen fertilizer is reduced. Indeed, high nitrogen content blocks nodule development, as there is no benefit to the plant in forming the symbiosis. The energy for splitting the nitrogen gas in the nodule comes from sugar that is translocated from the leaf (a product of photosynthesis). Malate, a breakdown product of sucrose, is the direct carbon source for the bacteroid. Nitrogen fixation in the nodule is very oxygen sensitive. Legume nodules harbor an iron-containing protein called leghaemoglobin, closely related to animal myoglobin, to facilitate the diffusion of oxygen gas used in respiration.
Symbiosis
Leguminous family
Plants that contribute to N2 fixation include the legume family – Fabaceae – with taxa such as kudzu, clovers, soybeans, alfalfa, lupines, peanuts, and rooibos. They contain symbiotic bacteria called rhizobia within the nodules, producing nitrogen compounds that help the plant to grow and compete with other plants. When the plant dies, the fixed nitrogen is released, making it available to other plants, and this helps to fertilize the soil. The great majority of legumes have this association, but a few genera (e.g., Styphnolobium) do not. In many traditional farming practices, fields are rotated through various types of crops, which usually includes one consisting mainly or entirely of a leguminous crop such as clover, in order to take advantage of this.
Non-leguminous
Although by far the majority of plants able to form nitrogen-fixing root nodules are in the legume family Fabaceae, there are a few exceptions:
Actinorhizal plants such as alder and bayberry can form (less complex) nitrogen-fixing nodules, thanks to a symbiotic association with Frankia bacteria. These plants belong to 25 genera distributed among 8 plant families. According to a count in 1998, it includes about 200 species and accounts for roughly the same amount of nitrogen fixation as rhizobial symbioses. An important structural difference is that in these symbioses the bacteria are never released from the infection thread.
Parasponia, a tropical genus in the Cannabaceae is also able to interact with rhizobia and form nitrogen-fixing nodules. As related plants are actinorhizal, it is believed that the plant "switched partner" in its evolution.
The ability to fix nitrogen is far from universally present in these families. For instance, of 122 genera in the Rosaceae, only 4 genera are capable of fixing nitrogen. All these families belong to the orders Cucurbitales, Fagales, and Rosales, which together with the Fabales form a nitrogen-fixing clade (NFC) of eurosids. In this clade, Fabales were the first lineage to branch off; thus, the ability to fix nitrogen may be plesiomorphic and subsequently lost in most descendants of the original nitrogen-fixing plant; however, it may be that the basic genetic and physiological requirements were present in an incipient state in the last common ancestors of all these plants, but only evolved to full function in some of them.
Classification
Two main types of nodule have been described in legumes: determinate and indeterminate.
Determinate nodules are found on certain tribes of tropical legume such as those of the genera Glycine (soybean), Phaseolus (common bean), and Vigna, and on some temperate legumes such as Lotus. These determinate nodules lose meristematic activity shortly after initiation, thus growth is due to cell expansion, resulting in mature nodules which are spherical in shape. Another type of determinate nodule is found in a wide range of herbs, shrubs and trees, such as Arachis (peanut). These are always associated with the axils of lateral or adventitious roots and are formed following infection via cracks where these roots emerge, not via root hairs. Their internal structure is quite different from those of the soybean type of nodule.
Indeterminate nodules are found in the majority of legumes from all three sub-families, whether in temperate regions or in the tropics. They can be seen in Faboideae legumes such as Pisum (pea), Medicago (alfalfa), Trifolium (clover), and Vicia (vetch) and all mimosoid legumes such as acacias, the few nodulated caesalpinioid legumes such as partridge pea. They earned the name "indeterminate" because they maintain an active apical meristem that produces new cells for growth over the life of the nodule. This results in the nodule having a generally cylindrical shape, which may be extensively branched. Because they are actively growing, indeterminate nodules manifest zones which demarcate different stages of development/symbiosis:
Zone I—the active meristem. This is where new nodule tissue is formed which will later differentiate into the other zones of the nodule.
Zone II—the infection zone. This zone is permeated with infection threads full of bacteria. The plant cells are larger than in the previous zone and cell division is halted.
Interzone II–III—Here the bacteria have entered the plant cells, which contain amyloplasts. They elongate and begin terminally differentiating into symbiotic, nitrogen-fixing bacteroids.
Zone III—the nitrogen fixation zone. Each cell in this zone contains a large, central vacuole and the cytoplasm is filled with fully differentiated bacteroids which are actively fixing nitrogen. The plant provides these cells with leghemoglobin, resulting in a distinct pink color.
Zone IV—the senescent zone. Here plant cells and their bacteroid contents are being degraded. The breakdown of the heme component of leghemoglobin results in a visible greening at the base of the nodule.
This is the most widely studied type of nodule, but the details are quite different in nodules of peanut and relatives and some other important crops such as lupins, where the nodule is formed following direct infection of rhizobia through the epidermis and where infection threads are never formed. Nodules grow around the root, forming a collar-like structure. In these nodules and in the peanut type, the central infected tissue is uniform, lacking the uninfected cells seen in nodules of soybean and many indeterminate types such as peas and clovers.
Actinorhizal-type nodules are markedly different structures found in non-legumes. In this type, cells derived from the root cortex form the infected tissue, and the prenodule becomes part of the mature nodule. Despite this seemingly major difference, it is possible to produce such nodules in legumes by a single homeotic mutation.
Nodulation
Legumes release organic compounds as secondary metabolites called flavonoids from their roots, which attract the rhizobia to them and which also activate nod genes in the bacteria to produce nod factors and initiate nodule formation. These nod factors initiate root hair curling. The curling begins with the very tip of the root hair curling around the Rhizobium. Within the root tip, a small tube called the infection thread forms, which provides a pathway for the Rhizobium to travel into the root epidermal cells as the root hair continues to curl.
Partial curling can even be achieved by nod factor alone. This was demonstrated by the isolation of nod factors and their application to parts of the root hair. The root hairs curled in the direction of the application, demonstrating the action of a root hair attempting to curl around a bacterium. Even application on lateral roots caused curling. This demonstrated that it is the nod factor itself, not the bacterium that causes the stimulation of the curling.
When the nod factor is sensed by the root, a number of biochemical and morphological changes happen: cell division is triggered in the root to create the nodule, and the root hair growth is redirected to curl around the bacteria multiple times until it fully encapsulates one or more bacteria. The bacteria encapsulated divide multiple times, forming a microcolony. From this microcolony, the bacteria enter the developing nodule through the infection thread, which grows through the root hair into the basal part of the epidermis cell, and onwards into the root cortex; they are then surrounded by a plant-derived symbiosome membrane and differentiate into bacteroids that fix nitrogen.
Effective nodulation takes place approximately four weeks after crop planting, with the size and shape of the nodules dependent on the crop. Crops such as soybeans or peanuts will have larger nodules than forage legumes such as red clover or alfalfa, since their nitrogen needs are higher. The number of nodules and their internal color indicate the status of nitrogen fixation in the plant.
Nodulation is controlled by a variety of processes, both external (heat, acidic soils, drought, nitrate) and internal (autoregulation of nodulation, ethylene). Autoregulation of nodulation (AON) controls nodule numbers per plant through a systemic process involving the leaf. Leaf tissue senses the early nodulation events in the root through an unknown chemical signal, then restricts further nodule development in newly developing root tissue. The leucine-rich repeat (LRR) receptor kinases (NARK in soybean (Glycine max); HAR1 in Lotus japonicus; SUNN in Medicago truncatula) are essential for AON. Mutations leading to loss of function in these AON receptor kinases lead to supernodulation or hypernodulation. Root growth abnormalities often accompany the loss of AON receptor kinase activity, suggesting that nodule growth and root development are functionally linked. Investigations into the mechanisms of nodule formation showed that the ENOD40 gene, coding for a 12–13 amino acid protein, is up-regulated during nodule formation.
Connection to root structure
Root nodules apparently have evolved three times within the Fabaceae but are rare outside that family. The propensity of these plants to develop root nodules seems to relate to their root structure. In particular, a tendency to develop lateral roots in response to abscisic acid may enable the later evolution of root nodules.
Nodule-like structures
Some fungi produce nodular structures known as tuberculate ectomycorrhizae on the roots of their plant hosts. Suillus tomentosus, for example, produces these structures with its plant host lodgepole pine (Pinus contorta var. latifolia). These structures have, in turn, been shown to host nitrogen fixing bacteria, which contribute a significant amount of nitrogen and allow the pines to colonize nutrient-poor sites.
Gallery
See also
Root gall nematode
Rhizobium
Sinorhizobium
Bradyrhizobium
Neorhizobium
Pararhizobium
Common Symbiotic Signaling Pathway
References
External links
Legume root nodules at the Tree of Life Web project
Video and commentary on root nodules of White Clover
Plant organogenesis
Fabaceae
Nitrogen cycle
Plant roots
Symbiosis
Oligotrophs | Root nodule | Chemistry,Biology | 2,590 |
4,141,406 | https://en.wikipedia.org/wiki/Solar%20dynamo | The solar dynamo is a physical process that generates the Sun's magnetic field. It is explained with a variant of the dynamo theory. A naturally occurring electric generator in the Sun's interior produces electric currents and a magnetic field, following the laws of Ampère, Faraday and Ohm, as well as the laws of fluid dynamics, which together form the laws of magnetohydrodynamics. The detailed mechanism of the solar dynamo is not known and is the subject of current research.
Mechanism
A dynamo converts kinetic energy into electromagnetic energy. An electrically conducting fluid with shear or more complicated motion, such as turbulence, can temporarily amplify a magnetic field through Lenz's law: fluid motion relative to a magnetic field induces electric currents in the fluid that distort the initial field. If the fluid motion is sufficiently complicated, it can sustain its own magnetic field, with advective fluid amplification essentially balancing diffusive or ohmic decay. Such systems are called self-sustaining dynamos. The Sun is a self-sustaining dynamo that converts convective motion and differential rotation within the Sun into electromagnetic energy.
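The balance between advective amplification and ohmic decay described above is captured by the induction equation of magnetohydrodynamics (standard form, written here assuming a constant magnetic diffusivity η):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  \;=\; \nabla \times \left( \mathbf{v} \times \mathbf{B} \right)
  \;+\; \eta \, \nabla^{2} \mathbf{B}
```

The first term on the right describes stretching and amplification of the magnetic field B by the velocity field v; the second describes diffusive decay. A self-sustaining dynamo corresponds to a flow for which the first term offsets the second indefinitely.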
Currently, the geometry and width of the tachocline are hypothesized to play an important role in models of the solar dynamo by winding up the weaker poloidal field to create a much stronger toroidal field. However, recent radio observations of cooler stars and brown dwarfs, which do not have a radiative core and only have a convection zone, have demonstrated that they maintain large-scale, solar-strength magnetic fields and display solar-like activity despite the absence of tachoclines. This suggests that the convection zone alone may be responsible for the function of the solar dynamo.
Solar cycle
The most prominent time variation of the solar magnetic field is related to the quasi-periodic 11-year solar cycle, characterized by an increasing and decreasing number and size of sunspots. Sunspots are visible as dark patches on the Sun's photosphere and correspond to concentrations of magnetic field. At a typical solar minimum, few or no sunspots are visible. Those that do appear are at high solar latitudes. As the solar cycle progresses towards its maximum, sunspots tend to form closer to the solar equator, following Spörer's law.
The 11-year sunspot cycle is half of a 22-year Babcock–Leighton solar dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields. At solar-cycle maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convection zone forces emergence of the toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east–west with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon known as the Hale cycle.
During the solar cycle's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number. At solar minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare and the poloidal field is at maximum strength. During the next cycle, differential rotation converts magnetic energy back from the poloidal to the toroidal field, with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle corresponds to a change in the polarity of the Sun's large-scale magnetic field. Long minima of solar activity can be associated with the interaction between double dynamo waves of the solar magnetic field caused by the beating effect of the wave interference.
See also
Stellar magnetic field
Solar phenomena
Atmospheric dynamo
References
Dynamo
Magnetism in astronomy | Solar dynamo | Astronomy | 831 |
45,197,219 | https://en.wikipedia.org/wiki/Escalation%20archetype | The escalation archetype is one of possible types of system behaviour that are known as system archetypes.
The escalation archetype is common in situations of non-cooperative games, where each player makes their own decisions and these decisions determine the player's outcome. However, when both players try to maximize their outcome at the expense of the other, they can get into a loop in which each player tries harder and harder to surpass the opponent. While this can have favourable consequences, it can also lead to self-destructive behaviour.
Structure
Elements of archetype
The escalation archetype can be described using causal loop diagrams, which may consist of balancing and reinforcing loops.
Balancing loop
A balancing loop is a structure representing a negative feedback process. In such a structure, a change in the system leads to actions that usually eliminate the effect of that change, which means that the system tends to remain stable over time.
Reinforcing loop
A reinforcing loop is a structure representing a positive feedback process. This reinforcing feedback means that even a small change in the system can lead to huge disturbances; e.g., variable A is increased, which leads to an increase of variable B, which leads to another increase of A, and so there might be exponential growth over time.
Escalation archetype as balancing loops
The image below shows the escalation archetype as two balancing loops.
When X takes an action, it leads to a change in the results of X relative to the results of Y. Y then acts to equalize the situation, and the result again changes the balance and induces another action by X. As this repeats, the actions taken by X and Y become bigger and bigger as each tries to keep up with the other's actions and results.
Escalation archetype as reinforcing loop
The causal loop diagram below shows the escalation archetype as a single reinforcing loop. It can be read simply: more action by X creates bigger results for X. The bigger the results of X, the bigger the difference between X's and Y's results. A bigger difference means more action by Y, and more action by Y leads to bigger results for Y. Bigger results for Y lead to a smaller difference between X and Y, but the smaller this difference is, the bigger the next action of X will be, and it starts all over again.
The image below simplifies the reinforcing loop and shows the general idea of what happens. Increased activity of X increases the threat to Y, which leads to increased activity by Y. Increased activity by Y increases the threat to X, which creates further potential for the activity of X to grow.
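The loop can also be sketched numerically. In the toy model below, each actor repeatedly sets its result to exceed the other's last observed result by a fixed factor; the factor and starting values are illustrative assumptions, not part of the archetype's definition:

```python
# Toy simulation of the escalation archetype as a reinforcing loop:
# each actor tries to surpass the other's last observed result.

EDGE = 1.1          # each side aims to be 10% ahead (illustrative assumption)
STEPS = 12

x, y = 1.0, 1.0     # arbitrary starting results

for step in range(STEPS):
    x = EDGE * y    # X escalates to overtake Y
    y = EDGE * x    # Y responds and overtakes X in turn
    print(f"round {step}: X={x:.2f}  Y={y:.2f}")

# With EDGE > 1 both results grow exponentially (an upward spiral, as in
# an arms race); with EDGE < 1 they decay toward zero (a downward spiral);
# EDGE == 1 means the actors settle for parity, which breaks the loop.
```

In this sketch, switching the actors to cooperation corresponds to removing the incentive to exceed the other's result, i.e. driving EDGE to 1 or below.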
Examples
Arms race
A well-known example of the escalation archetype is the arms race, in which two (or more) parties compete to have the strongest army and weapons. An example is the race between the United States and the Soviet Union to produce nuclear weapons, which was an important factor in the Cold War. Over time, each party can temporarily gain a slight advantage, but then the other produces, or obtains in some other way, more weapons and gets the advantage on its side, temporarily. In the end, both parties have great military power that is very threatening not only to them but also to non-aligned parties.
Picking apples in an orchard
The escalation archetype can turn up in situations such as picking fruit in an orchard. Imagine a large apple orchard with a bountiful crop. The owner of such a plantation cannot pick the whole harvest himself because there are simply too many trees. Therefore, he employs fruit pickers to do the work for him. He tries to figure out a way to measure the work they do so that he can reward them fairly. As he suspects that workers might work slowly, he is hesitant to link the wage only to hours worked, and he comes up with an idea. He divides the workers into two groups and declares that the group which harvests more apples will get a bonus, in addition to their regular wage.
Both groups start harvesting apples with passion and energy. First, group X collects a pallet load a little sooner than the second group, Y. The Y-group therefore motivates its slower members to increase their pace. Now Y is a little better, so they not only catch up but even finish the second pallet load before the X-group. Then X comes up with the idea of assigning roles to their members – some will pick apples from the upper parts of the trees using ladders, some will collect those on the lower parts, others will load boxes, and one person will organise the work and help where necessary. This advantage enables the X-group to again get ahead of Y. While Y adapts to the model of X, they make some modifications to their procedures, and soon they are the leading group. This improvement of processes continues over several iterations until both parties are exceptionally effective at harvesting apples. The owner can be satisfied with the situation, as pallets are quickly being loaded with apples. Should everything continue this way, the orchard could be harvested in a few hours, with the X-group beating the Y-group by a tenth of a percent. The owner could reward only the winning team, or reward both teams, because they were almost equally hard-working.
However, because one group was always a little behind, the situation in the middle of the day is bad for the group that is slightly slower – let's say it is the Y-group. They can continue working at the same rate and finish second, with a loss of a tenth of a percent. Or they can come up with another innovation that would enable them to increase their output. They get the idea that harvesting the topmost apples is slower than harvesting the rest. Because of that, they decide to skip those apples and collect only the others. This way, the situation has escalated problematically. While Y could now win the competition, a considerable quantity of apples will be left on the trees. Or, if both groups are instructed not to leave a single apple in the orchard, they will have to stay much longer to finish those apples, and the owner will have higher costs for their wages.
The owner could, of course, set a condition that no team may leave a tree until it is fully harvested. That would help in some way to break the escalation archetype – unless the workers realize they are not punished for some other undesirable behaviour, for example being careless about the condition of the trees after the harvest.
As can be seen in this example, the escalation archetype might bring positive results (faster harvesting), but it is necessary to monitor the behaviour of the affected system to ensure long-term profitability.
The attention fight
To avoid naming real people in this example of the escalation archetype, the behaviour of the imaginary persons Amy (A), Betty (B), Cecil (C) and Daniel (D) is described.
Amy, Betty, Cecil and Daniel want to become famous celebrities with a lot of money, hordes of admirers and amazing children. They already have many friends, and everybody in town knows each of them. They all work hard to be the best in their fields of expertise, and soon they are known to the whole country. They know of each other and try to work harder to become more famous than one another. This is when the escalation archetype comes into play. They become the most famous celebrities in the country, and each of them realizes that to draw more attention than the others they need to be unique and very special. As A starts to work even harder than before, B notices that A's fame is growing faster than hers and starts to work harder than A. This is noticed by C, and he does what must be done - starts working more than anyone else. But there is also D, whose ambitions are no smaller; he wants to be the most famous celebrity, so he starts working even harder than anyone else. As A notices her effort is not sufficient, she does the obvious - starts to work more than before.
Now, this cycle could repeat continuously until Amy, Betty, Cecil or Daniel dies of exhaustion. In the meantime, some of them could start taking drugs in the belief that they could boost their productivity and ability to concentrate, or to get rid of the depression brought on by working all the time. Another solution they might consider feasible would be some way of eliminating their opponents - by false accusation or by pointing out their faults.
Or, if they found it impossible to be better by simply working more, they could try to figure out some way to attract attention by a qualitative change instead of a merely quantitative one. This way, A could say something shocking on TV; B could simply follow by saying something even more shocking or controversial. Then C would feel threatened, and so he would come up with the idea of making controversial photographs. Then D would try to surpass everyone and take some action that attracts the attention of the media and the public. They would escalate this to an extreme.
While, at the beginning, the competitiveness was beneficial for Amy, Betty, Cecil, Daniel and the public, in the long term many negative consequences result.
What could be a meaningful solution for these people? They could have set some limits for themselves beforehand - for example, how much time they are willing to work to achieve their desire to be a famous celebrity, and what behaviour is acceptable and what is not. If they are not able to do so, there has to be some mechanism from outside to stop them - e.g., family or friends giving them cautionary advice.
Competing children
The tendency of parents to compare their children creates a nourishing environment for the escalation archetype. Parents tend to compare their kids with other children and among their own kids. This creates pressure on the children, as they are expected to perform well.
Imagine a family with two kids named, for example, Lisa and Bartolomeo. Their parents are very much child-focused and hope their kids will get proper educations and live what they consider successful lives. They invest significant portions of both their family budget and their time in both children and hope that this investment will pay off in the form of Lisa and Bartolomeo being successful in school and later in life.
Lisa and Bartolomeo are ordinary children with some hobbies who study diligently but without extreme passion. They simply do what they have to do. Their results are good but not perfect. So their parents come and start the escalation archetype by telling the kids that the one with the better marks will go on a special holiday of their choice. As both Lisa and Bartolomeo like travelling a lot, they start to study hard. To the satisfaction of their parents, the children's marks soon improve, which is a very positive outcome. Yet a problem arises: as they both study really hard to keep pace with each other, something might go wrong.
For example, Bartolomeo may be a very creative and skillful painter but not so talented in usual subjects like math or languages. Sooner or later he will reach his limits. Then, to keep up his good marks, he will have to abandon his painting hobby, and his talent will be wasted. Or he will try to sustain the great marks by starting to cheat during exams.
However, even when no negative effect occurs, there will be a difficulty in how to reward the children after Bartolomeo finishes a very close second. Should their parents appreciate only Lisa's effort, or also acknowledge Bartolomeo's achievements? If they reward only Lisa, Bartolomeo could easily become demotivated, and then it would be impossible to make use of the positive effects of the escalation archetype. On the other hand, rewarding both could have the same effect on both children, as they realise that their extra effort does not necessarily mean a better outcome.
There is also an alternative version of how competition amongst children can be affected by the escalation archetype. When all parents motivate their children to improve in comparison to their peers, they will all study harder and harder while the differences amongst the participating kids remain relatively stable (and if teachers increase requirements, the children will merely retain their marks). Under such simple circumstances most children might benefit from the competition; nevertheless, children with weaker intellectual skills may become isolated when they are no longer able to keep up with the others. Conversely, in another scenario where all children are demotivated to study for some reason, their results get worse and worse (and if teachers decrease requirements, the children will retain their marks while being less educated), and the downward spiral works in such a way that the situation continually deteriorates.
Risks and opportunities
The dangers of systems with escalation archetypes are various. First, it might be difficult to identify the existence of the archetype at first sight. Second, the behaviour of the system might look desirable at first and therefore seem to require no immediate action. Another risk is the possibility of exponential growth within such a structure. Finally, the system might have different outcomes in the short term and the long term.
The escalation archetype also offers the opportunity to make a big change in the system with little input, or with a small action taken at the beginning (due to the fact that it behaves like a reinforcing loop).
Solution and optimization
To remove the downward or upward spiral effect of the escalation archetype, a change in the system is necessary to break the ongoing pattern. That change is typically a switch of the actors from non-cooperative to cooperative behaviour, so that they stop escalating their actions to keep up with the others and instead find a mutual solution and way forward.
See also
Attractiveness principle
Fixes that fail
Growth and underinvestment
Limits to growth
References
Causality
Conflict (process)
Systems theory | Escalation archetype | Physics,Biology | 2,818 |
2,261,519 | https://en.wikipedia.org/wiki/User%20interface%20design | User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that clearly communicate to the user what's important. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design). User-centered design is typically accomplished through the execution of modern design thinking which involves empathizing with the target audience, defining a problem statement, ideating potential solutions, prototyping wireframes, and testing prototypes in order to refine final interface mockups.
User interfaces are the points of interaction between users and designs.
Three types of user interfaces
Graphical user interfaces (GUIs)
Users interact with visual representations on a computer's screen. The desktop is an example of a GUI.
Interfaces controlled through voice
Users interact with these through their voices. Most smart assistants, such as Siri on smartphones or Alexa on Amazon devices, use voice control.
Interactive interfaces utilizing gestures
Users interact with 3D design environments through their bodies, e.g., in virtual reality (VR) games.
Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered on their expertise, whether it is software design, user research, web design, or industrial design.
Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design and typography are utilized to support its usability, influencing how the user performs certain interactions and improving the aesthetic appeal of the design; design aesthetics may enhance or detract from the ability of users to use the functions of the interface. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
UI design vs. UX design
Compared to UX design, UI design is more about the surface and overall look of a design. User interface design is a craft in which designers perform an important function in creating the user experience. UI design should keep users informed about what is happening, giving appropriate feedback in a timely manner. The visual look and feel of UI design sets the tone for the user experience. On the other hand, the term UX design refers to the entire process of creating a user experience.
Don Norman and Jakob Nielsen said:
Design thinking
User interface design requires a good understanding of user needs. It mainly focuses on the needs of the platform and its user expectations. There are several phases and processes in user interface design, some of which are in more demand than others, depending on the project. The modern design thinking framework was created in 2004 by David M. Kelley, the founder of Stanford's d.school, formally known as the Hasso Plattner Institute of Design. EDIPT is a common acronym used to describe Kelley's design thinking framework: it stands for empathize, define, ideate, prototype, and test. Notably, the EDIPT framework is non-linear; therefore a UI designer may jump from one stage to another when developing a user-centric solution. Iteration is a common practice in the design thinking process; successful solutions often require testing and tweaking to ensure that the product fulfills user needs.
EDIPT
Empathize
Conducting user research to better understand the needs and pain points of the target audience. UI designers should avoid developing solutions based on personal beliefs and instead seek to understand the unique perspectives of various users. Qualitative data is often gathered in the form of semi-structured interviews.
Common areas of interest include:
What would the user want the system to do?
How would the system fit in with the user's normal workflow or daily activities?
How technically savvy is the user and what similar systems does the user already use?
What interface aesthetics and functionality styles appeal to the user?
Define
Solidifying a problem statement that focuses on user needs and desires; effective problem statements are typically one sentence long and include the user, their specific need, and their desired outcome or goal.
Ideate
Brainstorming potential solutions to address the refined problem statement. The proposed solutions should ideally align with the stakeholders' feasibility and viability criteria while maintaining user desirability standards.
Prototype
Designing potential solutions of varying fidelity (low, mid, and high) while applying user experience principles and methodologies. Prototyping is an iterative process where UI designers should explore multiple design solutions rather than settling on the initial concept.
Test
Presenting the prototypes to the target audience to gather feedback and gain insights for improvement. Based on the results, UI designers may need to revisit earlier stages of the design process to enhance the prototype and user experience.
Usability testing
The Nielsen Norman Group, co-founded by Jakob Nielsen and Don Norman in 1998, promotes user experience and interface design education. Jakob Nielsen pioneered the interface usability movement and created the "10 Usability Heuristics for User Interface Design." Usability describes the quality of an interface in terms of ease of use; an interface with low usability will burden users and hinder them from achieving their goals, resulting in the dismissal of the interface. To enhance usability, user experience researchers may conduct usability testing, a process that evaluates how users interact with an interface. Usability testing can provide insight into user pain points by illustrating how efficiently a user can complete a task without error, highlighting areas for design improvement.
Usability inspection
Letting an evaluator inspect a user interface. This is generally considered cheaper to implement than usability testing (see below), and can be used early in the development process, since it can evaluate prototypes or specifications for the system, which usually cannot be tested on users. Some common usability inspection methods include the cognitive walkthrough, which focuses on how simply new users can accomplish tasks with the system; heuristic evaluation, in which a set of heuristics is used to identify usability problems in the UI design; and the pluralistic walkthrough, in which a selected group of people step through a task scenario and discuss usability issues.
Usability testing
Testing of the prototypes on an actual user—often using a technique called think aloud protocol where the user is asked to talk about their thoughts during the experience. User interface design testing allows the designer to understand the reception of the design from the viewer's standpoint, and thus facilitates creating successful applications.
Requirements
The dynamic characteristics of a system are described in terms of dialogue requirements, set out as seven principles in Part 10 of the ergonomics standard ISO 9241. This standard establishes a framework of ergonomic "principles" for dialogue techniques, with high-level definitions and illustrative applications and examples of the principles. The dialogue principles represent the dynamic aspects of the interface and can mostly be regarded as the "feel" of the interface.
Seven dialogue principles
Suitability for the task
The dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
Self-descriptiveness
The dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
Controllability
The dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
Conformity with user expectations
The dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions.
Error tolerance
The dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
Suitability for individualization
The dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
Suitability for learning
The dialogue is suitable for learning when it supports and guides the user in learning to use the system.
The concept of usability is defined in the ISO 9241 standard in terms of the effectiveness, efficiency, and satisfaction of the user.
Part 11 gives the following definition of usability:
Usability is measured by the extent to which the intended goals of use of the overall system are achieved (effectiveness).
The resources that have to be expended to achieve the intended goals (efficiency).
The extent to which the user finds the overall system acceptable (satisfaction).
Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally, into usability measures.
Part 12 of the ISO 9241 standard describes the presentation of information by seven attributes, covering the organization of information (arrangement, alignment, grouping, labels, location), the display of graphical objects, and the coding of information (abbreviation, colour, size, shape, visual cues). The "attributes of presented information" represent the static aspects of the interface and can generally be regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard. Each of the recommendations supports one or more of the seven attributes.
Seven presentation attributes
Clarity
The information content is conveyed quickly and accurately.
Discriminability
The displayed information can be distinguished accurately.
Conciseness
Users are not overloaded with extraneous information.
Consistency
The same information is presented in the same way throughout, in conformity with the user's expectations.
Detectability
The user's attention is directed towards the information required.
Legibility
Information is easy to read.
Comprehensibility
The meaning is clearly understandable, unambiguous, interpretable, and recognizable.
User guidance
Part 13 of the ISO 9241 standard describes that user guidance information should be readily distinguishable from other displayed information and should be specific to the current context of use.
User guidance can be given by the following five means:
Prompts indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
Feedback informing the user about their input in a timely, perceptible, and non-intrusive way.
Status information indicating the continuing state of the application, the system's hardware and software components, and the user's activities.
Error management including error prevention, error correction, user support for error management, and error messages.
On-line help for system-initiated and user-initiated requests with specific information for the current context of use.
Research
User interface design has been a topic of considerable research, including on its aesthetics. Standards have been developed as far back as the 1980s for defining the usability of software products.
The IFIP user interface reference model has become one of the structural bases for this research.
The model proposes four dimensions to structure the user interface:
The input/output dimension (the look)
The dialogue dimension (the feel)
The technical or functional dimension (the access to tools and services)
The organizational dimension (the communication and co-operation support)
This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability.
The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use. Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code.
Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's law, host very complex interfaces.
See also
Chief experience officer (CXO)
Cognitive dimensions
Discoverability
Experience design
Gender HCI
Human interface guidelines
Human-computer interaction
Icon design
Information architecture
Interaction design
Interaction design pattern
Interaction Flow Modeling Language (IFML)
Interaction technique
Knowledge visualization
Look and feel
Mobile interaction
Natural mapping (interface design)
New Interfaces for Musical Expression
Participatory design
Principles of user interface design
Process-centered design
Progressive disclosure
T Layout
User experience design
User-centered design
References
Usability
Design
Graphic design
Industrial design
Information architecture
Design | User interface design | Technology,Engineering | 2,590 |
39,292,006 | https://en.wikipedia.org/wiki/Yunnanxane | Yunnanxane is a bioactive taxane diterpenoid first isolated from Taxus wallichiana. Yunnanxane was later isolated from cell cultures of Taxus cuspidata and Taxus chinensis. Four homologous esters of yunnanxane have also been isolated from Taxus. Yunnanxane is reported to have anticancer activity in vitro.
See also
Taxusin
Hongdoushans
References
Acetate esters
Taxanes
Vinylidene compounds | Yunnanxane | Chemistry | 98 |
26,127,170 | https://en.wikipedia.org/wiki/Monobloc%20engine | A monobloc or en bloc engine is an internal-combustion piston engine some of whose major components (such as the cylinder head, cylinder block, or crankcase) are formed, usually by casting, as a single integral unit rather than being assembled later. This has the advantages of improved mechanical stiffness and more reliable sealing between the components.
Monobloc techniques date back to the beginnings of the internal combustion engine. Use of this term has changed over time, usually to address the most pressing mechanical problem affecting the engines of its day. There have been three distinct uses of the technique:
Cylinder head and cylinder
Cylinder block
Cylinder block and crankcase
In most cases, any use of the term describes single-unit construction that is opposed to the more common contemporary practice. Where the monobloc technique has later become the norm, the specific term fell from favour. It is now usual practice to use monobloc cylinders and crankcases, but a monobloc head (for a water-cooled inline engine at least) would be regarded as peculiar and obsolescent.
Cylinder head
The head gasket is the most highly stressed static seal in an engine, and was a source of considerable trouble in early years. The monobloc cylinder head forms both cylinder and head in one unit, thus averting the need for a seal.
Along with head gasket failure, one of the least reliable parts of the early petrol engine was the exhaust valve, which tended to fail by overheating. A monobloc head could provide good water cooling, thus reduced valve wear, as it could extend the water jacket uninterrupted around both head and cylinder. Engines with gaskets required a metal-to-metal contact face here, disrupting water flow.
The drawback to the monobloc head is that access to the inside of the combustion chamber (the upper volume of the cylinder) is difficult. Access through the cylinder bore is restricted for machining the valve seats, or for inserting angled valves. An even more serious restriction is de-coking and re-grinding valve seats, a regular task on older engines. Rather than removing the cylinder head from above, the mechanic must remove pistons, connecting rods and the crankshaft from beneath.
One solution to this for side-valve engines was to place a screwed plug directly above each valve, and to access the valves through this. The tapered threads of the screwed plug provided a reliable seal. For low-powered engines this was a popular solution for some years, but it was difficult to cool the plug, as the water jacket did not extend into it. As performance increased, it also became important to have better combustion chamber designs with less "dead space". One solution was to place the spark plug in the centre of this plug, which at least made use of the space. This placed the spark plug further from the combustion chamber, leading to long flame paths and slower ignition.
During World War I, development of the internal combustion engine greatly progressed. After the war, as civilian car production resumed, the monobloc cylinder head was required less frequently. Only high-performance cars such as the Leyland Eight of 1920 persisted with it. Bentley and Bugatti were other racing marques who notably adhered to them, through the 1920s and into the 1930s, most famously being used in the purpose-built American Offenhauser straight-four racing engines, first designed and built in the 1930s.
Aircraft engines at this time were beginning to use high supercharging pressures, increasing the stress on their head gaskets. Engines such as the Rolls-Royce Buzzard used monobloc heads for reliability.
The last engines to make widespread use of monobloc cylinder heads were large air-cooled aircraft radial engines, such as the Wasp Major. These have individual cylinder barrels, so access is less restricted than on an inline engine with a monobloc crankcase and cylinders, as most modern engines are. As they have high specific power and require great reliability, the advantages of the monobloc remained attractive.
General aviation engines such as Franklin, Continental, and Lycoming are still manufactured new and continue to use monobloc individual cylinders, although Franklin uses a removable sleeve. A combination of materials is used in their construction, such as steel for the cylinder barrels and aluminum alloys for the cylinder heads to save weight. Common rebuilding techniques include chrome plating the inside of the cylinder barrels in a "cracked" finish that mimics the "cross-hatched" finish normally created by typical cylinder honing. Older engines operated on unleaded automotive gasoline, as allowed by supplemental type certificates approved by the FAA, may require more frequent machining or replacement of valves and seats. Special tools are used to maintain valve seats in these cylinders. At every overhaul or rebuild, non-destructive testing should be performed to look for flaws that may have arisen from extreme use, engine damage from sudden propeller stoppage, or extended engine operation.
Historically, the difficulty of machining and maintaining a monobloc cylinder head was, and remains, a severe drawback. As head gaskets became able to handle greater heat and pressure, the technique went out of use. It is almost unknown today, but has found a few niche uses: the technique of monobloc cylinder heads was adopted by the Japanese model engine manufacturer Saito Seisakusho for their glow-fueled and spark-ignition model four-stroke engines for RC aircraft propulsion.
Monobloc cylinders also continue to be used on small two-stroke engines for power equipment used to maintain lawns and gardens, such as string trimmers, tillers, and leaf blowers.
Cylinder block
Casting technology at the dawn of the internal combustion engine could reliably cast either large castings, or castings with complex internal cores to allow for water jackets, but not both simultaneously. Most early engines, particularly those with more than four cylinders, had their cylinders cast as pairs or triplets of cylinders, then bolted to a single crankcase.
As casting techniques improved, the entire cylinder block of four, six or even eight cylinders could be cast as one. This was a simpler construction, thus less expensive to manufacture, and the communal water jacket permitted closer spacing between cylinders. This also improved the mechanical stiffness of the engine, against bending and the increasingly important torsional twist, as cylinder numbers and engine lengths increased. In the context of aircraft engines, the non-monobloc precursor to monobloc cylinders was a construction where the cylinders (or at least their liners) were cast as individuals, and the outer water jacket was applied later from copper or steel sheet. This complex construction was expensive, but lightweight, and so it was only widely used for aircraft.
V engines remained with a separate block casting for each bank. The complex ducting required for inlet manifolds between the banks were too complicated to cast otherwise. For economy, a few engines, such as the V12 Pierce-Arrow, were designed to use identical castings for each bank, left and right. Some rare engines, such as the Lancia 22½° narrow-angle V12 of 1919, did use a single block casting for both banks.
A monobloc engine was used in Cadillac's 1936 Series 60. It was designed to be the company's next-generation powerplant at reduced cost compared with the 353 and the Cadillac V16. The monobloc's cylinders and crankcase were cast as a single unit, and it used hydraulic valve lifters for durability. This design allowed the creation of the mid-priced Series 60 line.
Modern cylinders, except for air-cooled engines and some V engines, are now universally cast as a single cylinder block, and modern heads are nearly always separate components.
Crankcase
As casting improved and cylinder blocks became a monobloc, it also became possible to cast both cylinders and crankcase as one unit. The main reason for this was to improve stiffness of the engine construction, reducing vibration and permitting higher speeds.
Most engines, except some V engines, are now a monobloc of crankcase and cylinder block.
Modern engines - Combined block, head and crankcase
Light-duty consumer-grade Honda GC-family small engines use a headless monobloc design where the cylinder head, block, and half the crankcase share the same casting, termed 'uniblock' by Honda. One reason for this, apart from cost, is to produce an overall lower engine height. Being an air-cooled OHC design, this is possible thanks to current aluminum casting techniques and lack of complex hollow spaces for liquid cooling. The valves are vertical, so as to permit assembly in this confined space. On the other hand, performing basic repairs becomes so time-consuming that the engine can be considered disposable. Commercial-duty Honda GX-family engines (and their many popular knock-offs) have a more conventional design of a single crankcase and cylinder casting, with a separate cylinder head.
Honda produces many other head-block-crankcase monoblocs under a variety of different names, such as the GXV-series. They may all be externally identified by a gasket which bisects the crankcase on an approximately 45° angle.
References
Engine technology
Piston engine configurations | Monobloc engine | Technology | 1,896 |
26,550,209 | https://en.wikipedia.org/wiki/Java%20KeyStore | A Java KeyStore (JKS) is a repository of security certificates (either authorization certificates or public key certificates) plus corresponding private keys, used for instance in TLS encryption.
In IBM WebSphere Application Server and Oracle WebLogic Server, a file with extension jks serves as a keystore.
The Java Development Kit maintains a CA keystore file named cacerts in folder jre/lib/security. JDKs provide a tool named keytool to manipulate the keystore. keytool has no functionality to extract the private key out of the keystore, but this is possible with third-party tools like jksExportKey, CERTivity, Portecle and KeyStore Explorer.
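Keystores can also be read programmatically through the standard java.security.KeyStore API. The following is a minimal sketch that lists the entries in a JKS file; the file name example.jks and the password changeit are illustrative assumptions, not values required by the JDK.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class ListKeyStoreEntries {
    public static void main(String[] args) throws Exception {
        // Illustrative values only: substitute the real keystore path and password.
        char[] password = "changeit".toCharArray();
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("example.jks")) {
            // load() also verifies the keystore's integrity against the password.
            ks.load(in, password);
        }
        // Enumerate every alias and report what kind of entry it holds.
        for (String alias : Collections.list(ks.aliases())) {
            System.out.printf("%s (certificate entry: %b, key entry: %b)%n",
                    alias, ks.isCertificateEntry(alias), ks.isKeyEntry(alias));
        }
    }
}

An equivalent listing can be produced from the command line with keytool -list -keystore example.jks.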
See also
Java Secure Socket Extension
Keyring (cryptography)
Public key infrastructure
References
External links
Javadoc for KeyStore
Public-key cryptography
Java development tools | Java KeyStore | Technology | 175 |
5,818,198 | https://en.wikipedia.org/wiki/BMPR2 | Bone morphogenetic protein receptor type II or BMPR2 is a serine/threonine receptor kinase encoded by the BMPR2 gene. It binds bone morphogenetic proteins, members of the TGF beta superfamily of ligands, which are involved in paracrine signaling. BMPs are involved in a host of cellular functions including osteogenesis, cell growth and cell differentiation. Signaling in the BMP pathway begins with the binding of a BMP to the type II receptor. This causes the recruitment of a BMP type I receptor, which the type II receptor phosphorylates. The type I receptor phosphorylates an R-SMAD, a transcriptional regulator.
Function
Unlike the TGFβ type II receptor, which has a high affinity for TGF-β1, BMPR2 does not have a high affinity for BMP-2, BMP-7 and BMP-4, unless it is co-expressed with a type I BMP receptor.
On ligand binding, a receptor complex is formed, consisting of two type II and two type I transmembrane serine/threonine kinases. Type II receptors phosphorylate and activate type I receptors, which autophosphorylate, then bind and activate SMAD transcriptional regulators. They bind to BMP-7, BMP-2 and, less efficiently, BMP-4. Binding is weak but enhanced by the presence of type I receptors for BMPs. In TGF beta signaling, all of the receptors exist in homodimers before ligand binding. In the case of BMP receptors, only a small fraction of the receptors exist in homomeric forms before ligand binding. Once a ligand has bound to a receptor, the amount of homomeric receptor oligomers increases, suggesting that the equilibrium shifts towards the homodimeric form. The low affinity for ligands suggests that BMPR2 may differ from other type II TGF beta receptors in that the ligand may bind the type I receptor first.
Oocyte development
BMPR2 is expressed on both human and animal granulosa cells, and is a crucial receptor for bone morphogenetic protein 15 (BMP15) and growth differentiation factor 9 (GDF9). These two protein signaling molecules and their BMPR2-mediated effects play an important role in follicle development in preparation for ovulation. However, BMPR2 cannot bind BMP15 and GDF9 without the assistance of bone morphogenetic protein receptor 1B (BMPR1B) and transforming growth factor β receptor 1 (TGFβR1), respectively. There is evidence that the BMPR2 signaling pathway is disrupted in polycystic ovary syndrome, possibly by hyperandrogenism.
It appears that the hormones estrogen and follicle stimulating hormone (FSH) have roles in regulating expression of BMPR2 in granulosa cells. Experimental treatment in animal models with estradiol with or without FSH increased BMPR2 mRNA expression while treatment with FSH alone decreased BMPR2 expression. However, in human granulosa-like tumor cell line (KGN), treatment with FSH increased BMPR2 expression.
Clinical significance
At least 70 disease-causing mutations in this gene have been discovered. An inactivating mutation in the BMPR2 gene has been linked to pulmonary arterial hypertension.
BMPR2 functions to inhibit the proliferation of vascular smooth muscle tissue. It does so by promoting the survival of pulmonary arterial endothelial cells, thereby preventing arterial damage and adverse inflammatory responses. It also inhibits pulmonary arterial proliferation in response to growth factors, which prevents the closing of arteries by proliferating endothelial cells. When this gene is inhibited, vascular smooth muscle proliferates and can cause pulmonary hypertension, which, among other things, can lead to cor pulmonale, a condition in which the right side of the heart fails. The dysfunction of BMPR2 can also lead to elevated pulmonary arterial pressure due to an adverse response of the pulmonary circuit to injury.
It is especially important to screen for BMPR2 mutations in relatives of patients with idiopathic pulmonary hypertension, for these mutations are present in >70% of familial cases.
Studies have correlated BMPR2 with exercise-induced elevation of pulmonary arterial pressure, measured via tricuspid regurgitation velocity on echocardiography.
References
External links
BMPR2 gene variant database
GeneReviews/NCBI/NIH/UW entry on Heritable Pulmonary Arterial Hypertension
OMIM entries on Heritable Pulmonary Arterial Hypertension
Bone morphogenetic protein
Developmental genes and proteins
TS domain
S/T kinase
Receptors
EC 2.7.11 | BMPR2 | Chemistry,Biology | 996 |
26,607,957 | https://en.wikipedia.org/wiki/Collinsella-1%20RNA%20motif | The Collinsella-1 RNA motif denotes a particular conserved RNA structure discovered by bioinformatics. Of the six sequences belonging to this motif that were originally identified, five are from uncultivated bacteria residing in the human gut, while only the sixth is in a cultivated species, Collinsella aerofaciens. The evidence supporting the stem-loops designated as "P1" and "P2" is ambiguous.
See also
Acido-Lenti-1 RNA motif
Bacteroidales-1 RNA motif
Chloroflexi-1 RNA motif
Flavo-1 RNA motif
References
External links
Non-coding RNA | Collinsella-1 RNA motif | Chemistry | 129 |
66,216,612 | https://en.wikipedia.org/wiki/Orthosomnia | Orthosomnia is a medical term for an unhealthy obsession with getting perfect sleep. The term was coined by researchers from Rush University Medical College and Northwestern University's Feinberg School of Medicine in a case study published on February 15, 2017 in the Journal of Clinical Sleep Medicine, titled "Orthosomnia: Are Some Patients Taking the Quantified Self Too Far?", in which the researchers noticed that the three patients having their sleep tracked spent excessive time in bed in order to increase their "sleep numbers", which might have actually made their insomnia worse.
Dr. Sabra Abbott, an assistant professor of neurology at Northwestern University and one of the researchers involved in the study, first noticed orthosomnia when she and her colleagues started having "a number of patients coming in with a phenomenon that didn't necessarily meet the classical description of insomnia, but that was still keeping them up at night". Abbott also noticed that because the people suffering from orthosomnia became so dependent on their sleep tracking devices, "they were actually destroying their sleep" because they weren't measuring up to what their tracker considered a "good" amount of sleep.
See also
Orthorexia nervosa
References
Sleep disorders | Orthosomnia | Biology | 260 |
53,964,039 | https://en.wikipedia.org/wiki/Kilopondmetre | The kilopondmetre is an obsolete unit of torque and energy in the gravitational metric system. It is abbreviated kp·m or m·kp; older publications often use mkg and kgm as well.
Torque is a product of the length of a lever and the force applied to the lever. One kilopond is the force applied to one kilogram due to gravitational acceleration; this force is exactly 9.80665 N.
This means 1 kp·m = 9.80665 kg·m²/s² = 9.80665 N·m.
A related unit is the kgf·cm, which is sometimes found in technical datasheets.
With the kgf being the same as the kp, and one meter being 100 centimeters, one kp·m equals 100 kgf·cm.
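Because both conversion factors are exact, converting legacy torque figures is simple arithmetic. Below is a minimal sketch in Java; the sample value of 12 kp·m is an arbitrary placeholder rather than a figure from any particular datasheet.

public class KilopondMetre {
    // Exact by definition of standard gravity: 1 kp = 9.80665 N, so 1 kp·m = 9.80665 N·m.
    static final double NEWTON_METRES_PER_KPM = 9.80665;

    static double toNewtonMetres(double kpm) {
        return kpm * NEWTON_METRES_PER_KPM;
    }

    static double toKgfCm(double kpm) {
        // kgf and kp name the same unit, and 1 m = 100 cm.
        return kpm * 100.0;
    }

    public static void main(String[] args) {
        double torque = 12.0; // placeholder torque in kp·m
        System.out.printf("%.1f kp·m = %.3f N·m = %.0f kgf·cm%n",
                torque, toNewtonMetres(torque), toKgfCm(torque));
    }
}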
References
Units of torque
Units of energy
Non-SI metric units | Kilopondmetre | Mathematics | 188 |
45,465,104 | https://en.wikipedia.org/wiki/Penicillium%20erubescens | Penicillium erubescens is an anamorph species of the genus of Penicillium.
See also
List of Penicillium species
References
erubescens
Fungi described in 1968
Fungus species | Penicillium erubescens | Biology | 46 |
63,175,323 | https://en.wikipedia.org/wiki/Gaps%20in%20regulation%20of%20chemical%20agents | Despite the best efforts of government, health, and environmental agencies, improper use of hazardous chemicals is pervasive in commercial products and can yield devastating effects, from people developing brittle bones and severe congenital defects to swaths of wildlife lying dead by poisoned rivers.
Agriculture
Insecticides
Mevinphos
Mevinphos is a broad-spectrum insecticide used on a wide variety of crops, including apples, peaches, strawberries, nectarines, celery, and cucumbers. It belongs to the chemical group known as organophosphates, which have neurotoxic effects not only in insects but also in birds, fish, amphibians, and mammals. While not carcinogenic, mevinphos is potent via all means of exposure, including absorption, ingestion, and inhalation. Organophosphates inhibit acetylcholinesterase (AChE), an enzyme responsible for regulating levels of the muscle-stimulating neurotransmitter acetylcholine (ACh). This results in high levels of acetylcholine in the body, which causes nearly every muscle in the body to be stimulated without cessation. Symptoms of organophosphate poisoning include violent convulsions, vomiting, miosis, lachrymation, sweating, salivation, diarrhea, and potentially death.
Most nerve gases, including sarin, soman, and tabun, are organophosphates; their use in war is banned by the Geneva Protocol of 1925 and deemed a war crime.
From 1981 to 1984, 1,156 people consisting of field workers and agricultural officials in Salinas Valley, California were reported to have been exposed to mevinphos, developing insecticide-related illnesses. The exposure began on April 23, 1981, when the field was sprayed with mevinphos at 5:00 am that morning, despite a cancellation order having been given the day before. Later that morning at 7:00 am, 44 field workers began harvesting iceberg lettuce on the farm. Two hours later, many of these workers developed symptoms of dizziness, headaches, eye irritation, visual disturbances, and nausea.
Thirty-one farm workers, along with three agricultural officials who were in the field that morning, were sent to a local hospital to be tested for plasma cholinesterase, a test that looks at the levels of two substances necessary for the nervous system to work properly. Two workers were kept in the hospital for further observation and treatment due to respiratory complications. Two other people had levels of plasma cholinesterase below the normal limit. The rest of the workers were disrobed, hosed down with water, asked to get dressed, go home, and wash their clothes at home. No one was told not to come to work the next day. However, due to ongoing symptoms, many of the workers were not able to report to work the next morning. A union representative arranged for 29 workers to be taken to a second hospital for further testing and evaluation. One person was hospitalized due to bradycardia.
The National Institute for Occupational Safety and Health (NIOSH) began an investigation on April 24 working closely with staff from the second hospital during this acute phase of this incident. The 29 workers reported the following signs and symptoms: “eye irritation (76%), headache (48%), visual disturbances (48%), dizziness (41%), nausea (38%), fatigue (28%), chest pain or shortness of breath (21%), skin irritation (17%), fasciculation of the eyelids (10%), fasciculation of muscles in the arm (7%), excessive sweating (7%), and diarrhea (7%), with twenty-two (76%) of the workers reporting three or more symptoms or signs.”
The workers were tested approximately every week over the course of 8 to 12 weeks. When initially tested the first week, everyone's plasma cholinesterase and red blood cell (RBC) cholinesterase levels were below normal. Test levels from the following week increased by 5%, and the week after that by 14%. Over the course of time, their levels kept increasing. This pattern is consistent with organophosphates inhibiting the enzyme cholinesterase, which produces toxic effects by allowing the neurotransmitter to accumulate in the nervous system. It is not known how many other cases from this incident went unreported and unfollowed.
Mevinphos is considered among the ten pesticides posing the highest health risks; it was associated with a high number of acute illnesses reported in 1984–1990, and it has a low oral LD50 and a low Reference Dose (RfD). On February 28, 1994, the California Environmental Protection Agency, Pesticide and Environmental Toxicology Section, recommended the cancellation of mevinphos use in California due to the inability to implement safe mitigative measures and to prevent unacceptable dietary and worker exposures.
Dichlorodiphenyltrichloroethane (DDT) and Organochlorines
Following the end of WWII, the production of organochlorines, such as DDT, polychlorinated biphenyls (PCBs), and other synthetic chemicals, was developed for use in agriculture. These purposes included insecticides, fungicides, and, in some cases, fire retardants; while effective for agriculture and forest services, these chemicals are known lipophiles (meaning that they attach to fat cells in organisms) and have been shown to bioaccumulate, passing from prey to predator and from mother to offspring throughout embryonic development and lactation. Studies have shown that exposure to organochlorines like DDT can lead to increased risks of pancreatic cancer, non-Hodgkin's lymphoma, impaired lactation, possible male infertility and testicular cancers, and DDT poisoning in those who work to manufacture the chemicals.
Rachel Carson's exposé Silent Spring is largely credited with spurring public awareness of the ecological and human health impacts of organochlorines such as DDT. Her work began a national movement to ban such chemicals. However, chemical companies responded with backlash, claiming that her work was falsified, and DDT was not banned until 1972. Since then, many other organochlorines have been placed under similar restrictions and bans, yet there are few regulations in place for new organochlorines being produced in labs. This lack of regulation has raised concerns among members of the environmental community about the hazards of these unstudied pollutants not being monitored by the United States Environmental Protection Agency (EPA), especially in light of budget cuts and bureaucratic inefficacy.
General regulation of pesticides
All pesticides in the U.S. must be reviewed by the Environmental Protection Agency (EPA), under the regulations of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), at least once every 15 years.
The EPA defines “pesticides” in the FIFRA by four criteria:
Claims - The product is claimed or advertised by its distributor as a pesticide.
Composition - The compound contains at least one active ingredient that has no commercial value other than for pesticidal purposes.
Knowledge that the substances will be used as a pesticide - The distributor has "actual or constructive knowledge" that the product will be used as a pesticide, regardless if the product is not claimed or advertised as one.
Plant growth regulators - The product is intended to introduce physiological changes to plants that affect their growth in any way.
Commercial and industrial uses
Radium
Radium is a radioactive element that is naturally found at low levels in the environment from uranium and thorium decay. It can be found virtually everywhere, including in soil, water, rocks, and flora. As radium is naturally present throughout the environment, all humans are almost always exposed to it. At natural levels, radium is quite benign. However, at excessive levels, radium poisoning can occur. When the body takes in radium, it perceives it as calcium. Consequently, the body deposits radium in the bones, which can lead to brittle bones, collapsed spines, and tooth loss.
Radium was discovered in 1898 by Marie and Pierre Curie. In the early 1900s it was used as radiation treatment for cancer. Expansion of radium's use in medical practices was earnestly attempted in the treatment of rheumatism and mental disorders. However, these pursuits were unsuccessful.
Radium Girls
Due to radium's luminescent properties, American inventor William J. Hammer mixed radium with glue and zinc sulfide to make glow-in-the-dark paint. This paint was used by the U.S. Radium Corporation and named Undark. The paint was primarily used for wristwatch dials. Further application of the paint reached military equipment after the company accepted a contract with the U.S. government during WWI.
Starting in 1916, the U.S. Radium Corporation established factories in New Jersey that recruited dozens of women to paint the watch dials with Undark. No safety equipment was given to the women, nor were any precautions taken. The women were instructed to frequently lick the paintbrushes in order to keep them wet and shaped to a fine point. Throughout each day, the women's clothes and skin were covered in radium paint. This led to the women developing fatal radium poisoning.
By the mid-1920s, dozens of these women were falling ill and dying prolonged, horrific deaths. The radium they ingested was dissolving their bones from the inside, causing severe pain and enormous deformities, with pieces of their bodies easily breaking and falling off. By 1927, over 50 women had died from radium poisoning.
A smaller number of these female workers tried suing the company, not only for financial compensation to pay their large medical bills and provide income for the rest of their lives, but also to expose the company's wrongdoing. Dial painter Grace Fryer and four other women sued U.S. Radium for $250,000. However, the lawsuit was unsuccessful because of U.S. Radium's vast team of lawyers and contractual affiliation with the U.S. government. The women were so desperate to afford medical treatment and food that they had to settle for $10,000 each and a $600 annual payment. All five women died within two years of the settlement.
The fate of the Radium Girls raised national concern over workers' health rights and working conditions. After the incident, precautions and safety equipment were mandated to protect workers handling radioactive material in several occupations. The Manhattan Project notably used fume hoods, personal protective equipment, sanitation, and frequent checks for contamination in order to prevent a repeat of the Radium Girls tragedy. In 1949, the U.S. Congress passed a law that compensates workers for occupational illnesses.
Environmental regulation
By the time World War II began, safety limits for handling radiation had been set by the federal government.
In 1934, the International Commission on Radiological Protection (ICRP) established a tolerance dose for workers of 0.2 roentgens of radiation exposure per day. In 1936, the National Committee on Radiation Protection and Measurements (NCRP) reduced the limit to 0.1 roentgens per day, which held through World War II. From 1936 through 1977 there were continual revisions by professional scientists and government agencies as to what constituted safe doses. By the end of World War II, arguments ensued between U.S. military leaders and civilian officials as to the best practices for controlling nuclear energy and preventing the fabrication of nuclear weapons by other nations. This dispute went to Congress and resulted in the passage of the Atomic Energy Act (AEA) of 1946.
The EPA was created in 1970 to accept certain functions and responsibilities from other federal agencies and departments. Since its inception, the EPA has run environmental programs that address radioactive waste disposal sites, off-site monitoring around nuclear power plants, and keeps an eye on natural sources of radioactivity, such as radon.
The EPA has developed guidance on topics such as occupational radiation limits and exposures for federal agencies and members of the public. The EPA can offer recommendations on quality assurance programs for nuclear medicine under its FRC-derived authority.
Mercury
Mercury is a naturally occurring element in the Earth’s lithosphere that can be found in its elemental, inorganic, and organic-compounded forms. It is often found in coal deposits and regions rich in fossil fuels.
Mercury pollution
Like other heavy metals, mercury can be released from both natural and anthropogenic sources. Mercury from natural sources (such as soil/sedimentary erosion or volcanic eruptions) accounts for a small percentage of rising mercury levels. Meanwhile, the release of industrial mercury from mining and fossil fuel combustion has led to heightened mercury pollution in the atmosphere. In fact, fossil fuel combustion accounts for 45% of human mercury release.
Environmental and human health impacts
The effects of mercury exposure vary with the degree of exposure, the demographics of the individual exposed, and the form or compound of mercury involved. Because it is a neurotoxin, mercury can be particularly damaging to developing fetuses exposed in utero and to young children. Inhalation of mercury vapor is more deadly and can cause kidney failure and respiratory problems if not treated. From an ecological perspective, mercury is concerning because of its ability to bioaccumulate in food chains, particularly in marine environments. For this reason, the EPA advises against the regular consumption of fish and shellfish that are documented to contain high levels of mercury, especially for pregnant and nursing mothers and young children.
Environmental regulation
In recent years, the United States Environmental Protection Agency has established policies to mitigate atmospheric mercury release from the combustion of fossil fuels and waste. These policies include the 2011 Mercury and Air Toxics Standards, which require that power plants use controls and technologies that mitigate mercury pollution. Between 2011 and 2013, additional policies were applied to municipal and medical waste management facilities mandating that sewage and waste containing mercury cannot be incinerated. Earlier standards from 1991 also established a maximum contaminant level, or MCL, of 0.002 mg of mercury per liter of municipal drinking water. Legal measures such as the Clean Air and Clean Water Acts, the Safe Drinking Water Act, and the Resource Conservation and Recovery Act also set standards for pollutant release and clean-up in the United States.
Perfluorooctanoic acid (PFOA)
Perfluorooctanoic acid, also known as PFOA or C8, is a synthetic chemical surfactant often used in the process of making non-stick cookware. PFOA is extremely bio-persistent, with a reported half-life of 8 years in humans. PFOA can stay in the environment and the human body over long periods of time and can have harmful effects on people exposed in high doses.
Clean blood study
A worldwide study was conducted to compare "clean blood" (blood without C8) as a control with blood containing C8, in order to illuminate the chemical's hazardous effects on humans. However, "clean blood" could not be obtained from participants, because 99% of people across the globe had derivatives of C8 in their blood. Instead, samples of blood preserved from American soldiers during the Korean War were used as the control. The blood had been obtained in 1950, a year before Teflon was first sold to the public. None of the preserved blood was found to contain C8, strongly suggesting that the worldwide use of Teflon caused the near-ubiquitous absence of "clean blood".
DuPont and 3M
PFOA has been used by the chemical company DuPont to make Teflon, a non-stick cookware coating, since 1951. Fortunately for consumers, it does not exist in significant amounts in the final Teflon product, so normal use causes no noticeable harm. However, DuPont and 3M workers who handle PFOA, as well as people who live near the plants, are not as fortunate. Starting in the early 1950s, PFOA was released by DuPont into private wells and the Ohio River without disclosure to the public or the EPA. Although both companies conducted independent studies demonstrating the harmful side effects of PFOA exposure, the results were hidden from the public as a result of the EPA's self-reporting policy on chemical toxicology in manufacturing.
Health risks and birth defects
The 2012 C8 Science Panel conducted a survey using blood samples from approximately 69,000 residents of regions with heightened PFOA levels as a result of a class action lawsuit against DuPont to determine correlations between PFOA exposure and chronic illnesses. Those surveyed had a range of PFOA levels from 0.2-22,412 μg/L, with a median exposure of 28.2 μg/L. These levels were significantly higher than the levels detected in the general American population, which had a median exposure of 3.9μg/L. Results from the study concluded that PFOA exposure was linked to pancreatic cancer and testicular cancer, among other conditions, and possible correlations with kidney and prostate cancer. Other chronic conditions included high cholesterol, thyroid disease, ulcerative colitis, pre-eclampsia, and hypertension.
In 1981, two babies of female workers were found to have eye-related defects. In 1986, Buck Bailey was born with a single nostril, a serrated eyelid, and a keyhole pupil, as a result of his mother's daily exposure to PFOA while she worked at DuPont.
Known as "forever chemicals", PFOAs do not biodegrade naturally and are thus at high risk of bioaccumulating in exposed populations if governmental regulators do not take action. The conclusions from the C8 Panel were used to justify medical monitoring, at DuPont's expense, of all residents affected by PFOA exposure; however, some claims remain disputed by the chemical giant.
Contamination of drinking water in Parkersburg, WV
It was not until the early 1990s that the toxic effects of PFOA became a public concern. Wilbur Tennant, a farmer who lived on his own private land near the DuPont plant in Parkersburg, West Virginia, videotaped the calamitous effects of PFOA on his cattle and local wildlife. Calves were born with black teeth and opaque eyes. Several cows and deer were found dead by the stream. As it turned out, DuPont was dumping large amounts of waste PFOA into local streams that fed into Parkersburg's town water supply. So much waste PFOA was dumped that DuPont quickly lost count. Eventually, children were noticed to have black teeth, much like the calves on Tennant's farm.
Wilbur Tennant filed a lawsuit with environmental lawyer Robert Bilott, who also started a class-action lawsuit with nearly 80,000 plaintiffs in the same year as a result of the widespread impacts of PFOA chemicals across six water districts polluted by DuPont. The class-action lawsuit settled for $343 million in damages to residents, and DuPont was ordered to pay the costs of medical monitoring.
Phasing out C8
In 2003, 3M phased out C8 for C4 in attempt to avoid public backlash. They urged DuPont to do the same. Instead, DuPont seized the opportunity to become the sole manufacturer of C8 and increased production. This lasted until the EPA banned the production of C8 in 2013. DuPont soon substituted C8 with Gen-X, which is a chemical that has not yet been researched or regulated. In order to detach their name from the toxic reputation of PFOA's, DuPont created the spin-off company, Chemours, to handle production and continued dumping of Gen-X.
After facing several class-action lawsuits, DuPont paid 43,000 residents of Ohio $400 each to participate in a study to determine whether C8 could be linked to any diseases. Participation in the study also required each person to waive their right to sue DuPont if no links could be made. The study lasted over seven years and grew to almost 70,000 participants, including West Virginia residents. The results were concluded in 2012. Six diseases were linked: "testicular and kidney cancer, ulcerative colitis, thyroid disease, pre-eclampsia, and high cholesterol."
DuPont's public relations with Parkersburg, WV
Part of the reason it has been difficult for residents of Parkersburg, West Virginia to challenge DuPont is because the chemical company makes large contributions to the local economy, education system, and local government. There are even buildings and a street named after DuPont.
Environmental regulation
The struggle to regulate and determine the toxicity of synthetic chemicals is strongly reflected in the case of DuPont's PFOA pollution. Because the EPA only regulates chemicals that have been proven toxic and uses a self-reporting system, past uses of synthetic chemicals have gone unregulated until health risks are observed, as seen in the case of C8/PFOA. This lack of communication between the EPA and potential polluters is one of the reasons that many policies aimed at widespread chemical regulation often fail. Current EPA policies seek greater involvement in reporting to reduce these breaches; however, cuts to the EPA budget limit the feasibility of these goals without resources to expand its workforce.
Legal fees, lawsuits, and governmental fines have been used to discourage companies from releasing untested chemicals into the environment; however, these are often insignificant compared to the net worth of the company. So long as the market exists for these products (as seen with the successes of Teflon), companies are likely to continue valuing their profits over environmental and health concerns.
PFOA belongs to a group of chemicals that are still being studied. To date, the EPA has not established statutory clean-up levels for PFOA. However, the agency has established health advisory levels for these substances based on its assessment of the latest peer-reviewed science. This advisory is meant to provide state, tribal, and local officials, as well as drinking water system operators, with information on the health risks of these chemicals, enabling them to take appropriate measures to protect their communities. The EPA has established a health advisory level of 70 parts per trillion for PFOA in drinking water to provide a conservative margin of protection for the most sensitive populations, thus ensuring protection for everyone.
Recent settlements between the EPA and DuPont/Chemours have worked to improve the lives and environment of residents near the Washington Works plant in Parkersburg, WV, by mandating that DuPont pay for clean-up efforts in the region under the Safe Drinking Water Act. Unfortunately, existing technology cannot fully remove PFOAs; the reparations therefore include providing bottled water and installing filtration systems with partial removal capability.
References
Regulation of chemicals | Gaps in regulation of chemical agents | Chemistry | 4,747 |
3,946,520 | https://en.wikipedia.org/wiki/ISBL | ISBL (Information Systems Base Language) is the relational algebra notation that was invented for PRTV, one of the earliest database management systems to implement E.F. Codd's relational model of data.
Example
OS = ORDERS * SUPPLIERS
LIST OS: NAME="Brooks" % SNAME, ITEM, PRICE

The first statement assigns to OS the natural join of the ORDERS and SUPPLIERS relations; the second selects the tuples of OS whose NAME attribute equals "Brooks" and projects the result onto the SNAME, ITEM and PRICE attributes.
See also
IBM Business System 12 - An IBM industrial strength relational DBMS influenced by ISBL. It was developed for use by customers of IBM's time-sharing service bureaux in various countries in the early 1980s.
External links
Sample ISBL usage
Query languages | ISBL | Technology | 117 |
20,252,472 | https://en.wikipedia.org/wiki/Dexloxiglumide | Dexloxiglumide is a drug which acts as a cholecystokinin antagonist, selective for the CCKA subtype. It inhibits gastrointestinal motility and reduces gastric secretions. Although older selective CCKA antagonists such as lorglumide and devazepide had only limited success in trials and ultimately never made it into clinical use, dexloxiglumide is being investigated as a potential treatment for a variety of gastrointestinal problems, including irritable bowel syndrome, dyspepsia, constipation, and pancreatitis, and has had moderate success so far, although trials are still ongoing.
References
Cholecystokinin antagonists
Chloroarenes
Benzamides
Ethers | Dexloxiglumide | Chemistry | 160 |
43,908,650 | https://en.wikipedia.org/wiki/Country%20bohemian%20style | Country bohemian style is a fashion style synthesizing rural elements with the bohemian style, creating a bohemian approach to life in the country. The country bohemian style can refer to both fashion and interior design.
Characteristics
The country bohemian style is a deliberate blending of two seemingly disparate styles, country and bohemian. It incorporates local, rural elements into bohemian sensibilities, favoring sustainability and rustic features while also embracing modern contributions.
See also
Boho-chic
Bohemian style
Shabby chic
References
Footnotes
2010s fashion
Interior design | Country bohemian style | Engineering | 107 |
11,774,498 | https://en.wikipedia.org/wiki/Baum%E2%80%93Connes%20conjecture | In mathematics, specifically in operator K-theory, the Baum–Connes conjecture suggests a link between the K-theory of the reduced C*-algebra of a group and the K-homology of the classifying space of proper actions of that group. The conjecture sets up a correspondence between different areas of mathematics, with the K-homology of the classifying space being related to geometry, differential operator theory, and homotopy theory, while the K-theory of the group's reduced C*-algebra is a purely analytical object.
The conjecture, if true, would have some older famous conjectures as consequences. For instance, the surjectivity part implies the Kadison–Kaplansky conjecture for discrete torsion-free groups, and the injectivity is closely related to the Novikov conjecture.
The conjecture is also closely related to index theory, as the assembly map is a sort of index, and it plays a major role in Alain Connes' noncommutative geometry program.
The origins of the conjecture go back to Fredholm theory, the Atiyah–Singer index theorem and the interplay of geometry with operator K-theory as expressed in the works of Brown, Douglas and Fillmore, among many other motivating subjects.
Formulation
Let Γ be a second countable locally compact group (for instance a countable discrete group). One can define a morphism

$$\mu \colon RK_*^{\Gamma}(\underline{E\Gamma}) \to K_*(C_\lambda^*(\Gamma)),$$

called the assembly map, from the equivariant K-homology with Γ-compact supports of the classifying space $\underline{E\Gamma}$ of proper actions to the K-theory of the reduced C*-algebra of Γ. The subscript index * can be 0 or 1.
Paul Baum and Alain Connes introduced the following conjecture (1982) about this morphism:
Baum-Connes Conjecture. The assembly map is an isomorphism.
As the left hand side tends to be more easily accessible than the right hand side, because there are hardly any general structure theorems of the reduced C*-algebra, one usually views the conjecture as an "explanation" of the right hand side.
The original formulation of the conjecture was somewhat different, as the notion of equivariant K-homology was not yet common in 1982.
In case Γ is discrete and torsion-free, the left hand side reduces to the non-equivariant K-homology with compact supports of the ordinary classifying space BΓ of Γ.
There is also a more general form of the conjecture, known as the Baum–Connes conjecture with coefficients, where both sides have coefficients in the form of a C*-algebra A on which Γ acts by C*-automorphisms. It says in KK-language that the assembly map

$$\mu_{A,\Gamma} \colon RKK_*^{\Gamma}(\underline{E\Gamma}; A) \to K_*(A \rtimes_\lambda \Gamma)$$

is an isomorphism, containing the case without coefficients as the case A = ℂ.
However, counterexamples to the conjecture with coefficients were found in 2002 by Nigel Higson, Vincent Lafforgue and Georges Skandalis. Nevertheless, the conjecture with coefficients remains an active area of research, since it is, not unlike the classical conjecture, often seen as a statement concerning particular groups or classes of groups.
Examples
Let Γ be the group ℤ of integers. Then the left hand side is the K-homology of BΓ, which is the circle. The C*-algebra of the integers is, by the commutative Gelfand–Naimark transform (which reduces to the Fourier transform in this case), isomorphic to the algebra of continuous functions on the circle. So the right hand side is the topological K-theory of the circle. One can then show that the assembly map is KK-theoretic Poincaré duality as defined by Gennadi Kasparov, which is an isomorphism.
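In symbols, both sides can be computed directly in this case. The identifications below are standard facts of K-theory and K-homology, sketched here for orientation (K_i(S¹) on the left denotes the K-homology of the circle):

$$C_\lambda^*(\mathbb{Z}) \cong C(S^1), \qquad K_i(S^1) \cong \mathbb{Z} \quad \text{and} \quad K_i(C(S^1)) \cong \mathbb{Z} \quad \text{for } i = 0, 1,$$

so the assembly map relates groups that are abstractly isomorphic in each degree, as the Poincaré duality description predicts.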
Results
The conjecture without coefficients is still open, although the field has received great attention since 1982.
The conjecture is proved for the following classes of groups:
Discrete subgroups of SO(n,1) and SU(n,1).
Groups with the Haagerup property, sometimes called a-T-menable groups. These are groups that admit an isometric action on an affine Hilbert space H which is proper in the sense that $\lVert g_n \cdot v \rVert \to \infty$ for all $v \in H$ and all sequences of group elements $g_n$ tending to infinity (i.e., leaving every compact subset). Examples of a-T-menable groups are amenable groups, Coxeter groups, groups acting properly on trees, and groups acting properly on simply connected cubical complexes.
Groups that admit a finite presentation with only one relation.
Discrete cocompact subgroups of real Lie groups of real rank 1.
Cocompact lattices in higher-rank groups such as SL(3,ℝ) or SL(3,ℚₚ). It was a long-standing problem since the first days of the conjecture to expose a single infinite property T-group that satisfies it. However, such a group was given by V. Lafforgue in 1998, as he showed that cocompact lattices in SL(3,ℝ) have the property of rapid decay and thus satisfy the conjecture.
Gromov hyperbolic groups and their subgroups.
Among non-discrete groups, the conjecture was shown in 2003 by J. Chabert, S. Echterhoff and R. Nest for the vast class of all almost connected groups (i.e. groups having a cocompact connected component), and all groups of k-rational points of a linear algebraic group over a local field k of characteristic zero (e.g. k = ℚₚ). For the important subclass of real reductive groups, the conjecture had already been shown in 1987 by Antony Wassermann.
Injectivity is known for a much larger class of groups thanks to the Dirac-dual-Dirac method. This goes back to ideas of Michael Atiyah and was developed in great generality by Gennadi Kasparov in 1987.
Injectivity is known for the following classes:
Discrete subgroups of connected Lie groups or virtually connected Lie groups.
Discrete subgroups of p-adic groups.
Bolic groups (a certain generalization of hyperbolic groups).
Groups which admit an amenable action on some compact space.
The simplest example of a group for which it is not known whether it satisfies the conjecture is SL₃(ℤ).
References
External links
On the Baum-Connes conjecture by Dmitry Matsnev.
C*-algebras
K-theory
Surgery theory
Conjectures
Unsolved problems in mathematics | Baum–Connes conjecture | Mathematics | 1,251 |
40,144,224 | https://en.wikipedia.org/wiki/StarLink%20corn%20recall | The StarLink corn recalls occurred in the autumn of 2000, when over 300 food products were found to contain a genetically modified corn that had not been approved for human consumption. It was the first-ever recall of a genetically modified food. The anti-GMO activist coalition Genetically Engineered Food Alert, which detected and first reported the contamination, was critical of the FDA for not doing its job. The recall of Taco Bell-branded taco shells, manufactured by Kraft Foods and sold in supermarkets, was the most publicized of the recalls. One settlement resulted in $60 million going to Taco Bell franchisees for lost sales due to the damage to the Taco Bell brand.
StarLink corn
StarLink is a brand of transgenic maize containing two modifications: a gene for resistance to glufosinate, and a variant of the Bacillus thuringiensis (Bt) protein called Cry9C. Cry9C had not been used in a GM crop prior to StarLink, causing heightened regulatory scrutiny. StarLink's creator, Plant Genetic Systems, which became Aventis CropScience during the time of the incident, had applied to the United States Environmental Protection Agency (EPA) to market StarLink for use in both animal feed and human foods. The Garst Seed Company (part of the Advanta group) was licensed by Aventis to produce and sell StarLink seed in the US.
However, because the Cry9C protein lingers in animal digestive systems before breaking down, the EPA had concerns about its allergenicity, and PGS did not provide sufficient data to prove that Cry9C was not allergenic. As a result, PGS split its application into separate permits for use in foods intended for human consumption and for use in animal feed only. StarLink was approved by the EPA for use in animal feed in May 1998. Following the recalls, PGS at first tried to get the application for human consumption approved, and then withdrew the product entirely from the market.
Product recalls
In 2000, Genetically Engineered Food Alert was launched by seven organizations (Center for Food Safety, Friends of the Earth, Institute for Agriculture and Trade Policy, National Environmental Trust, Organic Consumers Association, Pesticide Action Network North America, and The State PIRGs) to lobby the FDA, Congress and companies to ban or stop using GMOs. One of their activities was testing food for the presence of GMOs via a lab called Genetic ID, the vice president of which was Jeffrey M. Smith.
On September 18, 2000, Genetically Engineered Food Alert released a statement that Genetic ID had conducted tests on "Taco Bell Home Originals" brand taco shells, made by Kraft Foods that had been purchased in a grocery store near Washington, DC, and had detected StarLink; the story was reported on by The Washington Post. Kraft distributed the Taco Bell-branded taco shells under a 1996 license agreement with Taco Bell.
Kraft had bought the shells from a Sabritas plant in Mexicali which used flour supplied from an Azteca mill plant in Plainview, Texas. The Texas mill used flour from six states supplied by elevators that did not segregate their genetically modified and conventionally-grown corn at the time. Kraft also suspended production of the recalled products. "All of us—government, industry and the scientific community—need to work on ways to prevent this kind of situation from ever happening again," said Betsy Holden, Kraft's chief executive in September 2000. She also stated that food safety and legal compliance were Kraft's main priority.
Safeway later announced it would recall its store brand taco shells at the recommendation of a consumer group on October 12, 2000. This was done as a precaution, and no StarLink was confirmed to be found in any of the products. On October 13 and 14, Mission Foods voluntarily recalled about 300 products. On October 22, 2000, it was reported that Kellogg's had shut down a plant as a precaution because they could not guarantee that StarLink corn flour had not been supplied to the plant.
On October 26, 2000, StarLink corn was reported to have been found in Japan and South Korea. The market and distribution network for corn in the US was thrown into disarray through 2001, as there were no existing means to segregate the grain; the disarray eventually eased due to Aventis' testing and buyback program discussed below.
Aventis recall/buyback
In January 2001, under a written agreement with 17 US states, Aventis initiated a program called the StarLink Enhanced Stewardship (SES) program, under which StarLink corn, buffer corn, and any corn stored in grain elevators that had become mixed with StarLink, would be bought by Aventis and directed to animal feed and non-food industrial use (e.g. ethanol production); the program included free kits to test for StarLink, and covered costs of cleaning equipment, transport, and storage facilities, as well as increased transportation costs. Aventis estimated the cost would be between $100 million and $1 billion.
It was estimated that due to grain mixing StarLink corn could have existed in more than 50% of the US corn supply and that overall, the StarLink incident depressed the price of US corn about 7% for about a year.
Aftermath
Following the recalls, 51 people reported adverse effects to the FDA; these reports were reviewed by the US Centers for Disease Control (CDC), which determined that 28 of them were possibly related to StarLink. The CDC studied the blood of these 28 individuals and concluded there was no evidence the reactions these people experienced were associated with hypersensitivity to the StarLink Bt protein.
The EPA was criticized by Joseph Mendelson III of the Center for Food Safety, who said, "Clearly they didn't do anything here until they became embarrassed." The EPA and Aventis were also criticized for statements at the time of the recall that indicated they had no idea such a thing would happen. "If there has been a violation of our licensing process, then we would have a very great concern," was attributed to Stephen Johnson of the EPA. Margaret Gadsby of Aventis was quoted with her earlier statement, "We have difficulty imagining how our corn could end up in the human food supply."
The registration for the StarLink varieties was voluntarily withdrawn by Aventis in October 2000. In February 2001, it was announced that the president, general counsel, and vice president of market development for Aventis CropScience (US), had been fired in response to the recall.
In June 2001, Tricon Global Restaurants, which owned 20% of Taco Bell at the time, announced a $60 million settlement with some of the suppliers of the supermarket taco shells; under the terms of the settlement they could not disclose the identity of suppliers. Tricon stated that the settlement would go to Taco Bell franchisees and Tricon would not receive any of it. Tricon also announced that it, along with the suppliers and franchisees, would initiate litigation against the parties responsible for StarLink entering the food chain.
In September 2001, a group of about 5,000 Taco Bell franchisees and a handful of taco shell suppliers brought a class-action lawsuit seeking damages against Aventis; Garst Seed Co.; Gruma Corp. (described as "the largest producer and distributor of corn flour and tortillas in the United States"); and Azteca Milling. This suit was voluntarily dismissed in December 2001.
In 2002, Aventis, Garst, Kraft Foods, Azteca Foods, Azteca Milling, and Mission Foods settled a lawsuit brought by two people, and the grandmother of a third, who claimed to have had allergic reactions to StarLink, for $9 million.
In 2002, nongovernmental organizations claimed that aid sent by the UN and the US to Central American nations also contained some StarLink corn. The nations involved, Nicaragua, Honduras, El Salvador, and Guatemala, refused to accept the aid.
In 2003, farmers who did not plant StarLink who had suffered economic losses due to depressed corn prices following the StarLink recalls settled a class-action lawsuit against Aventis and Advanta for $100 million.
GeneWatch UK and Greenpeace International set up the GM Contamination Register in 2005 citing these recalls as one of the "highlights" of the register.
The US corn supply was monitored by the Federal Grain Inspection Service for the presence of the StarLink Bt proteins from 2001 until 2010.
Later incidents
In August 2013, StarLink corn was reported to be found again contaminating some foods in Saudi Arabia.
See also
Genetically modified organism containment and escape
Fair Packaging and Labeling Act (US)
Genetically modified food controversies
Regulation of the release of genetically modified organisms
References
Food recalls
Food safety scandals
Food safety in the United States
Genetic engineering in the United States
Genetically modified organisms in agriculture
Product liability
Regulation of genetically modified organisms
Taco Bell
2000 in biotechnology | StarLink corn recall | Engineering,Biology | 1,831 |
39,561,850 | https://en.wikipedia.org/wiki/Unified%20framework | Unified framework is a general formulation which yields nth - order expressions giving mode shapes and natural frequencies for damaged elastic structures such as rods, beams, plates, and shells. The formulation is applicable to structures with any shape of damage or those having more than one area of damage. The formulation uses the geometric definition of the discontinuity at the damage location and perturbation to modes and natural frequencies of the undamaged structure to determine the mode shapes and natural frequencies of the damaged structure. The geometric discontinuity at the damage location manifests itself in terms of discontinuities in the cross-sectional properties, such as the depth of the structure, the cross-sectional area or the area moment of inertia. The change in cross-sectional properties in turn affects the stiffness and mass distribution. Considering the geometric discontinuity along with the perturbation of modes and natural frequencies, the initial homogeneous differential equation with nonconstant coefficients is changed to a series of non-homogeneous differential equations with constant coefficients. Solutions of this series of differential equations is obtained in this framework.
This framework is about using structural-dynamics based methods to address the existing challenges in the field of structural health monitoring (SHM). It makes no ad hoc assumptions regarding the physical behavior at the damage location such as adding fictitious springs or modeling changes in Young's modulus.
Introduction
Structural health monitoring (SHM) is a rapidly expanding field both in academia and research. Most of the literature on SHM is based on experimental observations and physically expected models. There are some mathematical models that give analytical theory to model the damage. Such mathematical models for structures with damage are useful in two ways. They allow understanding of the physics behind the problem, which helps in the explanation of experimental readings, and they allow prediction of response of the structure. These studies are also useful for the development of new experimental techniques.
Examples of models based on expected physical behavior of damage are by Ismail et al. (1990), who modeled the rectangular edge defect as a spring, by Ostachowicz and Krawczuk (1991), who modeled the damage as an elastic hinge and by Thompson (1949), who modeled the damage as a concentrated couple at the location of the damage. Other models based on expected physical behavior are by Joshi and Madhusudhan (1991), who modeled the damage as a zone with reduced Young’s modulus and by Ballo (1999), who modeled it as spring with nonlinear stiffness. Krawczuk (2002) used an extensional spring at the damage location, with its flexibility determined using the stress intensity factors KI. Approximate methods to model the crack are by Chondros et al. (1998), who used a so-called crack function as an additional term in the axial displacement of Euler–Bernoulli beams. The crack functions were determined using stress intensity factors KI, KII and KIII. Christides and Barr (1984) used the Rayleigh–Ritz method, Shen and Pierre (1990) used the Galerkin Method, and Qian et al. (1991) used a finite element method to predict the behavior of a beam with an edge crack. Law and Lu (2005) used assumed modes and modeled the crack mathematically as a Dirac delta function.
Wang and Qiao (2007) approximated the modal displacements using Heaviside’s function, which meant that modal displacements were discontinuous at the crack location.
Application to SHM
Primary shortcomings of the above methods were that:
They have been developed mostly for Euler–Bernoulli beam theory;
They were developed in a few cases for Timoshenko beam theory or plate theories with expressions provided only for particular boundary conditions and beam or plate shapes;
They did not include mass change when applicable; and
Only few damage shapes were considered, such as V-shaped or rectangular notches, even though damage can occur in a wide variety of shapes (for which stress intensity factors may not be readily available).
Observations in the literature survey regarding the different damage models are similar, i.e., they are not generic. In spite of considerable progress in damage identification using vibration-based methods, there is still a lack of a fairly successful algorithm to detect damage, as concluded in all the reviews since 1995. In 1995, in the review published by Dimarogonas (1996), it is concluded "A consistent cracked beam vibration theory is yet to be developed". In 2005, in another review about vibration-based structural health monitoring, Carden and Fanning (2004) conclude, "There is no universal agreement as to the optimum method for using measured vibration data for damage detection, location or quantification". Similarly in 2007, Montalvao et al. (2006) state as one of the conclusions, "There is no general algorithm that allows the resolution of all kinds of problems in all kinds of structures". Similar trends regarding the lack of generality of proposed models are seen in the latest review by Fan and Qiao (2010).
The lack of generality of damage models is addressed by proposing a 'unified framework' which is valid for self-adjoint systems using beam theories (Euler–Bernoulli, Timoshenko), plate theories (Kirchhoff, Mindlin), and shell theories. The model was presented and verified for a damaged beam with notch-type damage, using first-order perturbation only, for the Euler–Bernoulli beam theory in the paper by Dixit and Hanagud (2011) and using Timoshenko beam theory in the paper by Dixit and Hanagud (2009). Since the results are given for nth order, a computer program can be developed which will give the results for mode shapes and natural frequencies to the desired accuracy, preempting the need to go through the mathematically arduous task of deriving the higher-order expressions algebraically.
Features
This unified framework involves a general analytical procedure which yields nth-order expressions governing mode shapes and natural frequencies for damaged elastic structures such as rods, beams, plates and shells of any shape. Features of the procedure include the following:
Rather than modeling the damage as a fictitious elastic element or localized or global change in constitutive properties, it is modeled in a mathematically rigorous manner as a geometric discontinuity.
The inertia effect (kinetic energy) of the damage, which, unlike the stiffness effect (strain energy), has been neglected by researchers, is included.
The framework is generic and is applicable to a wide variety of engineering structures of different shapes with arbitrary boundary conditions which constitute self-adjoint systems, and also to a wide variety of damage profiles and even multiple areas of damage.
References
Structural analysis | Unified framework | Engineering | 1,392 |
397,175 | https://en.wikipedia.org/wiki/Hexamethylenetetramine | Hexamethylenetetramine (HMTA), also known as 1,3,5,7-tetraazaadamantane, is a heterocyclic organic compound with diverse applications. It has the chemical formula (CH2)6N4 and is a white crystalline compound that is highly water soluble in water and polar organic solvents. It is useful in the synthesis of other organic compounds, including plastics, pharmaceuticals, and rubber additives. The compound is also used medically for certain conditions. It sublimes in vacuum at 280°C. It has a tetrahedral cage-like structure similar to adamantane. The four vertices are occupied by nitrogen atoms, which are linked by methylene groups. Although the molecular shape defines a cage, no void space is available at the interior.
Synthesis, structure, reactivity
Hexamethylenetetramine was discovered by Aleksandr Butlerov in 1859.
It is prepared industrially by combining formaldehyde and ammonia:

6 CH2O + 4 NH3 → (CH2)6N4 + 6 H2O

The molecule behaves like an amine base, undergoing protonation and N-alkylation, and serving as a ligand. N-alkylation with chloroallyl chloride gives quaternium-15.
Applications
The dominant use of hexamethylenetetramine is in the production of solid (powder) or liquid phenolic resins and phenolic resin moulding compounds, in which it is added as a hardening component. These products are used as binders, e.g., in brake and clutch linings, abrasives, non-woven textiles, formed parts produced by moulding processes, and fireproof materials.
Medical uses
The compound is also used medically as a urinary antiseptic and antibacterial medication under the name methenamine or hexamine. It is used as an alternative to antibiotics to prevent urinary tract infections (UTIs) and is sold under the brand names Hiprex, Urex, and Urotropin, among others.
As the mandelic acid salt (methenamine mandelate) or the hippuric acid salt (methenamine hippurate), it is used for the treatment of urinary tract infections. In an acidic environment, methenamine is believed to act as an antimicrobial by converting to formaldehyde. A systematic review of its use for this purpose in adult women found there was insufficient evidence of benefit and further research was needed. A UK study showed that methenamine is as effective as daily low-dose antibiotics at preventing UTIs among women who experience recurrent UTIs. As methenamine is an antiseptic, it may avoid the issue of antibiotic resistance.
Methenamine acts as an over-the-counter antiperspirant due to the astringent property of formaldehyde. Specifically, methenamine is used to minimize perspiration in the sockets of prosthetic devices.
Histological stains
Methenamine silver stains are used for staining in histology, including the following types:
Grocott's methenamine silver stain, used widely as a screen for fungal organisms.
Jones' stain, a methenamine silver-Periodic acid-Schiff that stains for basement membrane, availing to view the "spiked" Glomerular basement membrane associated with membranous glomerulonephritis.
Solid fuel
Together with 1,3,5-trioxane, hexamethylenetetramine is a component of hexamine fuel tablets used by campers, hobbyists, the military and relief organizations for heating camping food or military rations. It burns smokelessly, has a high energy density of 30.0 megajoules per kilogram (MJ/kg), does not liquefy while burning, and leaves no ashes, although its fumes are toxic.
Standardized 0.149 g tablets of methenamine (hexamine) are used by fire-protection laboratories as a clean and reproducible fire source to test the flammability of carpets and rugs.
Food additive
Hexamethylenetetramine or hexamine is also used as a food additive as a preservative (INS number 239). It is approved for this purpose in the EU, where it is listed under E number E239; however, it is not approved in the USA, Russia, Australia, or New Zealand.
Reagent in organic chemistry
Hexamethylenetetramine is a versatile reagent in organic synthesis. It is used in the Duff reaction (formylation of arenes), the Sommelet reaction (converting benzyl halides to aldehydes), and in the Delepine reaction (synthesis of amines from alkyl halides).
Explosives
Hexamethylenetetramine is the base component to produce RDX and, consequently, C-4 as well as octogen (a co-product with RDX), hexamine dinitrate, hexamine diperchlorate, HMTD, and R-salt.
From October 2023, sale of hexamethylenetetramine in the UK is restricted to licenced persons (as a "regulated precursor" under the terms of the Poisons Act 1972).
Pyrotechnics
Hexamethylenetetramine is also used in pyrotechnics to reduce combustion temperatures and decrease the color intensity of various fireworks. Because of its ash-free combustion, hexamethylenetetramine is also utilized in indoor fireworks alongside magnesium and lithium salts.
Historical uses
Hexamethylenetetramine was first introduced into the medical setting in 1895 as a urinary antiseptic. It was officially approved by the FDA for medical use in the United States in 1967. However, it was only used in cases of acidic urine, whereas boric acid was used to treat urinary tract infections with alkaline urine. Scientist De Eds found that there was a direct correlation between the acidity of hexamethylenetetramine's environment and the rate of its decomposition. Therefore, its effectiveness as a drug depended greatly on the acidity of the urine rather than the amount of the drug administered. In an alkaline environment, hexamethylenetetramine was found to be almost completely inactive.
Hexamethylenetetramine was also used as a method of treatment for soldiers exposed to phosgene in World War I. Subsequent studies have shown that large doses of hexamethylenetetramine provide some protection if taken before phosgene exposure but none if taken afterwards.
Producers
Since 1990 the number of European producers has been declining. The French SNPE factory closed in 1990; in 1993, the production of hexamethylenetetramine in Leuna, Germany ceased; in 1996, the Italian facility of Agrolinz closed down; in 2001, the UK producer Borden closed; in 2006, production at Chemko, Slovak Republic, was closed. Remaining producers include INEOS in Germany, Caldic in the Netherlands, and Hexion in Italy. In the US, Eli Lilly and Company stopped producing methenamine tablets in 2002. In Australia, hexamine tablets for fuel are made by Thales Australia Ltd. In Mexico, hexamine is produced by Abiya. Other countries that still produce it include Russia, Saudi Arabia, and China.
References
Adamantane-like molecules
Antimicrobials
Corrosion inhibitors
E-number additives
Fuels
Heterocyclic compounds with 3 rings
Nitrogen heterocycles
Preservatives
Reagents for organic chemistry
Substances discovered in the 19th century
Tertiary amines | Hexamethylenetetramine | Chemistry,Biology | 1,599 |
3,545,648 | https://en.wikipedia.org/wiki/Probability%20current | In quantum mechanics, the probability current (sometimes called probability flux) is a mathematical quantity describing the flow of probability. Specifically, if one thinks of probability as a heterogeneous fluid, then the probability current is the rate of flow of this fluid. It is a real vector that changes with space and time. Probability currents are analogous to mass currents in hydrodynamics and electric currents in electromagnetism. As in those fields, the probability current (i.e. the probability current density) is related to the probability density function via a continuity equation. The probability current is invariant under gauge transformation.
The concept of probability current is also used outside of quantum mechanics, when dealing with probability density functions that change over time, for instance in Brownian motion and the Fokker–Planck equation.
The relativistic equivalent of the probability current is known as the probability four-current.
Definition (non-relativistic 3-current)
Free spin-0 particle
In non-relativistic quantum mechanics, the probability current j of the wave function Ψ of a particle of mass m in one dimension is defined as

$$j = \frac{\hbar}{2mi}\left(\Psi^*\frac{\partial\Psi}{\partial x} - \Psi\frac{\partial\Psi^*}{\partial x}\right) = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\frac{\partial\Psi}{\partial x}\right),$$

where
$\hbar$ is the reduced Planck constant;
$\Psi^*$ denotes the complex conjugate of the wave function;
$\operatorname{Re}$ denotes the real part;
$\operatorname{Im}$ denotes the imaginary part.

Note that the probability current is proportional to a Wronskian: $j = \frac{\hbar}{2mi}\,W(\Psi^*, \Psi)$.

In three dimensions, this generalizes to

$$\mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right) = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right),$$

where $\nabla$ denotes the del or gradient operator. This can be simplified in terms of the kinetic momentum operator, $\hat{\mathbf{p}} = -i\hbar\nabla$,

to obtain

$$\mathbf{j} = \frac{1}{2m}\left(\Psi^*\hat{\mathbf{p}}\Psi - \Psi\hat{\mathbf{p}}\Psi^*\right) = \frac{1}{m}\operatorname{Re}\left(\Psi^*\hat{\mathbf{p}}\Psi\right).$$
These definitions use the position basis (i.e. for a wavefunction in position space), but a corresponding definition in momentum space is possible.
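The one-dimensional definition above translates directly into a few lines of NumPy. This is an illustrative sketch of mine (the function name, natural units, and the plane-wave test are assumptions, not from the article):

```python
import numpy as np

HBAR = 1.0  # natural units
M = 1.0

def probability_current(psi, dx, m=M, hbar=HBAR):
    """j = (hbar/m) * Im(psi* dpsi/dx), via centered finite differences."""
    dpsi_dx = np.gradient(psi, dx)
    return (hbar / m) * np.imag(np.conj(psi) * dpsi_dx)

# Check against a plane wave psi = A e^{ikx}, for which j = |A|^2 hbar k / m.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
A, k = 0.3, 2.5
psi = A * np.exp(1j * k * x)

j = probability_current(psi, dx)
print(j[2000], A**2 * HBAR * k / M)  # interior values agree to ~1e-4 relative
```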
Spin-0 particle in an electromagnetic field
The above definition should be modified for a system in an external electromagnetic field. In SI units, for a charged particle of mass m and electric charge q, the probability current includes a term due to the interaction with the electromagnetic field:

$$\mathbf{j} = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q\mathbf{A}}{m}|\Psi|^2,$$

where $\mathbf{A}$ is the magnetic vector potential. The term $q\mathbf{A}$ has dimensions of momentum. Note that $\hat{\mathbf{p}} = -i\hbar\nabla$ used here is the canonical momentum and is not gauge invariant, unlike the kinetic momentum operator $\hat{\mathbf{P}} = -i\hbar\nabla - q\mathbf{A}$.

In Gaussian units:

$$\mathbf{j} = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q\mathbf{A}}{mc}|\Psi|^2,$$

where c is the speed of light.
Spin-s particle in an electromagnetic field
If the particle has spin, it has a corresponding magnetic moment, so an extra term needs to be added incorporating the spin interaction with the electromagnetic field.
According to Landau–Lifshitz's Course of Theoretical Physics, the electric current density is, in Gaussian units:

$$\mathbf{j}_e = \frac{q\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q^2}{mc}\mathbf{A}|\Psi|^2 + \frac{\mu_S c}{s\hbar}\nabla\times\left(\Psi^*\mathbf{S}\Psi\right).$$

And in SI units:

$$\mathbf{j}_e = \frac{q\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q^2}{m}\mathbf{A}|\Psi|^2 + \frac{\mu_S}{s\hbar}\nabla\times\left(\Psi^*\mathbf{S}\Psi\right).$$

Hence the probability current (density) is, in SI units:

$$\mathbf{j} = \frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\nabla\Psi\right) - \frac{q}{m}\mathbf{A}|\Psi|^2 + \frac{\mu_S}{s\hbar q}\nabla\times\left(\Psi^*\mathbf{S}\Psi\right),$$

where $\mathbf{S}$ is the spin vector of the particle with corresponding spin magnetic moment $\mu_S$ and spin quantum number s.

It is doubtful if this formula is valid for particles with an interior structure. The neutron has zero charge but non-zero magnetic moment, so dividing the electric current by q would be impossible (except that the charge-dependent terms would also be zero in this case). For composite particles with a non-zero charge – like the proton, which has spin quantum number s = 1/2 and μS = 2.7927·μN, or the deuteron (H-2 nucleus), which has s = 1 and μS = 0.8574·μN – it is mathematically possible but doubtful.
Connection with classical mechanics
The wave function can also be written in the complex exponential (polar) form:
$$\Psi = R\,e^{iS/\hbar},$$
where R and S are real functions of $\mathbf{r}$ and t.

Written this way, the probability density is $\rho = \Psi^*\Psi = R^2$ and the probability current is:
$$\mathbf{j} = \frac{\hbar}{2mi}\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right).$$
The exponentials and $R\nabla R$ terms cancel:
$$\mathbf{j} = \frac{\hbar}{2mi}\left(R^2\,\frac{2i\nabla S}{\hbar}\right).$$
Finally, combining and cancelling the constants, and replacing $R^2$ with $\rho$,
$$\mathbf{j} = \rho\,\frac{\nabla S}{m}.$$

Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. If we take the familiar formula for the mass flux in hydrodynamics:
$$\mathbf{J} = \rho_m\mathbf{v},$$
where $\rho_m$ is the mass density of the fluid and $\mathbf{v}$ is its velocity (also the group velocity of the wave), then in the classical limit we can associate the velocity with $\nabla S/m$, which is the same as equating $\nabla S$ with the classical momentum $\mathbf{p} = m\mathbf{v}$; however, it does not represent a physical velocity or momentum at a point, since simultaneous measurement of position and velocity violates the uncertainty principle. This interpretation fits with Hamilton–Jacobi theory, in which $\mathbf{p} = \nabla S$ in Cartesian coordinates, where S is Hamilton's principal function.

The de Broglie–Bohm theory equates the velocity with $\nabla S/m$ in general (not only in the classical limit), so it is always well defined. It is an interpretation of quantum mechanics.
Motivation
Continuity equation for quantum mechanics
The definition of probability current and Schrödinger's equation can be used to derive the continuity equation, which has exactly the same forms as those for hydrodynamics and electromagnetism.
For some wave function Ψ, let
$$\rho = \Psi^*\Psi = |\Psi|^2$$
be the probability density (probability per unit volume; * denotes the complex conjugate). Then,
$$\frac{d}{dt}\int_V \rho\,dV = -\oint_{\partial V}\mathbf{j}\cdot d\mathbf{A},$$
where V is any volume and $\partial V$ is the boundary of V.

This is the conservation law for probability in quantum mechanics. The integral form is stated as:
$$\oint_{\partial V}\mathbf{j}\cdot d\mathbf{A} = -\frac{d}{dt}\int_V \rho\,dV,$$
where $\mathbf{j}$ is the probability current or probability flux (flow per unit area).

Here, equating the terms inside the integral gives the continuity equation for probability:
$$\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{j} = 0,$$
and the integral equation can also be restated using the divergence theorem as:
$$\int_V\left(\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{j}\right)dV = 0.$$

In particular, if Ψ is a wavefunction describing a single particle, the integral in the first term of the preceding equation, sans time derivative, is the probability of obtaining a value within V when the position of the particle is measured. The second term is then the rate at which probability is flowing out of the volume V. Altogether, the equation states that the time derivative of the probability of the particle being measured in V is equal to the rate at which probability flows into V.
By taking the limit of the volume integral to include all regions of space, a well-behaved wavefunction that goes to zero at infinity makes the surface integral term vanish, which implies that the time derivative of the total probability is zero, i.e. the normalization condition is conserved. This result is in agreement with the unitary nature of time-evolution operators, which preserve the length of the vector by definition.
Transmission and reflection through potentials
In regions where a step potential or potential barrier occurs, the probability current is related to the transmission and reflection coefficients, respectively T and R; they measure the extent to which the particles reflect from the potential barrier or are transmitted through it. Both satisfy:

$$T + R = 1,$$

where T and R can be defined by:

$$T = \frac{|\mathbf{j}_{\mathrm{trans}}|}{|\mathbf{j}_{\mathrm{inc}}|}, \qquad R = \frac{|\mathbf{j}_{\mathrm{ref}}|}{|\mathbf{j}_{\mathrm{inc}}|},$$

where $\mathbf{j}_{\mathrm{inc}}$, $\mathbf{j}_{\mathrm{ref}}$ and $\mathbf{j}_{\mathrm{trans}}$ are the incident, reflected and transmitted probability currents respectively, and the vertical bars indicate the magnitudes of the current vectors. The relation between T and R can be obtained from probability conservation:

$$\mathbf{j}_{\mathrm{trans}} + \mathbf{j}_{\mathrm{ref}} = \mathbf{j}_{\mathrm{inc}}.$$

In terms of a unit vector $\mathbf{n}$ normal to the barrier, these are equivalently:

$$T = \left|\frac{\mathbf{j}_{\mathrm{trans}}\cdot\mathbf{n}}{\mathbf{j}_{\mathrm{inc}}\cdot\mathbf{n}}\right|, \qquad R = \left|\frac{\mathbf{j}_{\mathrm{ref}}\cdot\mathbf{n}}{\mathbf{j}_{\mathrm{inc}}\cdot\mathbf{n}}\right|,$$

where the absolute values are required to prevent T and R being negative.
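As a concrete check of T + R = 1, consider a potential step of height V0 with particle energy E > V0; the amplitudes below come from standard plane-wave matching (a textbook sketch of mine, not this article's own worked example):

```python
import numpy as np

# Step potential V(x) = V0 for x > 0, particle energy E > V0 (hbar = m = 1).
E, V0 = 2.0, 1.5
k1 = np.sqrt(2 * E)          # wavenumber on the incident side
k2 = np.sqrt(2 * (E - V0))   # wavenumber on the transmitted side

# Matching psi and psi' at x = 0 for psi = e^{ik1 x} + r e^{-ik1 x} (x < 0),
# psi = t e^{ik2 x} (x > 0):
r = (k1 - k2) / (k1 + k2)
t = 2 * k1 / (k1 + k2)

# Currents scale as j = (hbar k / m) |amplitude|^2, so the k's matter:
R = abs(r) ** 2                  # |j_ref| / |j_inc|
T = (k2 / k1) * abs(t) ** 2      # |j_trans| / |j_inc|
print(R, T, R + T)               # R + T == 1 (probability conservation)
```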
Examples
Plane wave
For a plane wave propagating in space:
$$\Psi(\mathbf{r}, t) = A\,e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},$$
the probability density is constant everywhere:
$$\rho = |A|^2$$
(that is, plane waves are stationary states), but the probability current is nonzero – the square of the absolute amplitude of the wave times the particle's speed:
$$\mathbf{j} = |A|^2\,\frac{\hbar\mathbf{k}}{m},$$
illustrating that the particle may be in motion even if its spatial probability density has no explicit time dependence.
Particle in a box
For a particle in a box, in one spatial dimension and of length L, confined to the region 0 < x < L, the energy eigenstates are
$$\psi_n = \sqrt{\frac{2}{L}}\sin\left(\frac{n\pi x}{L}\right)$$
and zero elsewhere. The associated probability currents are
$$j_n = \frac{i\hbar}{2m}\left(\psi_n\frac{\partial\psi_n^*}{\partial x} - \psi_n^*\frac{\partial\psi_n}{\partial x}\right) = 0,$$
since $\psi_n = \psi_n^*$.
Discrete definition
For a particle in one dimension on $\ell^2(\mathbb{Z})$ we have the Hamiltonian $H = -\Delta + V$, where $\Delta = S + S^* - 2$ is the discrete Laplacian, with S being the right shift operator on $\ell^2(\mathbb{Z})$. Then the probability current is defined using the velocity operator $v = -i[X, H]$, where X is the position operator on $\ell^2(\mathbb{Z})$. Since V is usually a multiplication operator on $\ell^2(\mathbb{Z})$, we get to safely write $-i[X, H] = -i[X, -\Delta] = i(S - S^*)$.

As a result, we find the current between neighbouring sites:
$$j(n) = 2\,\operatorname{Im}\left(\overline{\Psi}(n)\,\Psi(n+1)\right),$$
which satisfies the discrete continuity equation $\frac{d}{dt}|\Psi(n)|^2 + j(n) - j(n-1) = 0$.
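A quick numerical illustration of this lattice current (my own sketch, using the conventions above with ħ = 1): for a Bloch wave Ψ(n) = e^{ikn}, the bond current equals the band group velocity of H = −Δ.

```python
import numpy as np

# Lattice probability current j(n) = 2 Im(conj(psi[n]) * psi[n+1])
# for H = -Delta (hbar = 1), checked against the band group velocity
# E(k) = 2 - 2 cos k  =>  v(k) = dE/dk = 2 sin k.
def bond_current(psi):
    return 2 * np.imag(np.conj(psi[:-1]) * psi[1:])

n = np.arange(200)
k = 0.7
psi = np.exp(1j * k * n)          # Bloch wave, density rho(n) = 1

j = bond_current(psi)
print(j[0], 2 * np.sin(k))        # both equal 2 sin(k) ~ 1.288
```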
References
Further reading
Quantum mechanics | Probability current | Physics | 1,502 |
33,406,736 | https://en.wikipedia.org/wiki/HECT%20domain | In molecular biology, the HECT domain is a protein domain found in ubiquitin-protein ligases. The name HECT comes from 'Homologous to the E6-AP Carboxyl Terminus'. Proteins containing this domain at the C terminus include ubiquitin-protein ligase, which regulates ubiquitination of CDC25. Ubiquitin-protein ligase accepts ubiquitin from an E2 ubiquitin-conjugating enzyme in the form of a thioester, and then directly transfers the ubiquitin to targeted substrates. A cysteine residue is required for ubiquitin-thioester formation. Human thyroid receptor interacting protein 12 (TRIP12), which also contains this domain, is a component of an ATP-dependent multisubunit protein that interacts with the ligand binding domain of the thyroid hormone receptor. It could be an E3 ubiquitin-protein ligase. Human E6AP ubiquitin-protein ligase interacts with the E6 protein of the cancer-associated Human papillomavirus type 16 and Human papillomavirus type 18. The E6/E6-AP complex binds to and targets the p53 tumour-suppressor protein for ubiquitin-mediated proteolysis.
References
Protein domains | HECT domain | Biology | 281 |
25,464,978 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20September%2022%2C%202052 | An annular solar eclipse will occur at the Moon's ascending node of orbit between Sunday, September 22 and Monday, September 23, 2052, with a magnitude of 0.9734. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Occurring about 5.9 days before apogee (on September 28, 2052, at 20:25 UTC), the Moon's apparent diameter will be smaller.
The path of annularity will be visible from parts of southern Indonesia, East Timor, the northern tip of Queensland, Australia, and New Caledonia. A partial solar eclipse will also be visible for parts of Australia, Indonesia, the Philippines, Oceania, and Antarctica.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the Moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
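The "just short of six months (173 days)" figure follows from the eclipse year — the time the Sun takes to return to the same lunar node. The sketch below uses standard mean values for the tropical year and the nodal regression period (assumptions of mine, not figures from this article):

```python
# Length of an eclipse "semester" from mean orbital periods (days).
TROPICAL_YEAR = 365.2422      # Sun returns to the same ecliptic longitude
NODAL_PERIOD = 6798.38        # lunar nodes regress once around in ~18.61 yr

# The nodes drift toward the Sun, so the node-to-node ("eclipse") year
# is shorter than the tropical year:
eclipse_year = 1 / (1 / TROPICAL_YEAR + 1 / NODAL_PERIOD)
print(eclipse_year)           # ~346.62 days
print(eclipse_year / 2)       # ~173.3 days between eclipse seasons
```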
Related eclipses
Eclipses in 2052
A total solar eclipse on March 30.
A penumbral lunar eclipse on April 14.
An annular solar eclipse on September 22.
A partial lunar eclipse on October 8.
Metonic
Preceded by: Solar eclipse of December 5, 2048
Followed by: Solar eclipse of July 12, 2056
Tzolkinex
Preceded by: Solar eclipse of August 12, 2045
Followed by: Solar eclipse of November 5, 2059
Half-Saros
Preceded by: Lunar eclipse of September 19, 2043
Followed by: Lunar eclipse of September 29, 2061
Tritos
Preceded by: Solar eclipse of October 25, 2041
Followed by: Solar eclipse of August 24, 2063
Solar Saros 135
Preceded by: Solar eclipse of September 12, 2034
Followed by: Solar eclipse of October 4, 2070
Inex
Preceded by: Solar eclipse of October 14, 2023
Followed by: Solar eclipse of September 3, 2081
Triad
Preceded by: Solar eclipse of November 23, 1965
Followed by: Solar eclipse of July 25, 2139
Solar eclipses of 2051–2054
Saros 135
Metonic series
Tritos series
Inex series
References
External links
NASA graphics
2052 9 22
2052 in science
31,422 | https://en.wikipedia.org/wiki/Talc | Talc, or talcum, is a clay mineral composed of hydrated magnesium silicate, with the chemical formula . Talc in powdered form, often combined with corn starch, is used as baby powder. This mineral is used as a thickening agent and lubricant. It is an ingredient in ceramics, paints, and roofing material. It is a main ingredient in many cosmetics. It occurs as foliated to fibrous masses, and in an exceptionally rare crystal form. It has a perfect basal cleavage and an uneven flat fracture, and it is foliated with a two-dimensional platy form.
The Mohs scale of mineral hardness, based on scratch hardness comparison, defines value 1 as the hardness of talc, the softest mineral. When scraped on a streak plate, talc produces a white streak, though this indicator is of little importance, because most silicate minerals produce a white streak. Talc is translucent to opaque, with colors ranging from whitish grey to green with a vitreous and pearly luster. Talc is not soluble in water, and is slightly soluble in dilute mineral acids.
Soapstone is a metamorphic rock composed predominantly of talc.
Etymology
The word talc derives from Medieval Latin talcum, which in turn comes from Arabic ṭalq, itself derived from Persian tālk. In ancient times, the word was used for various related minerals, including talc, mica, and selenite.
Formation
Talc dominantly forms from the metamorphism of magnesian minerals such as serpentine, pyroxene, amphibole, and olivine, in the presence of carbon dioxide and water. This is known as "talc carbonation" or "steatization" and produces a suite of rocks known as talc carbonates.
Talc is primarily formed by hydration and carbonation by this reaction:

2 Mg3Si2O5(OH)4 + 3 CO2 → Mg3Si4O10(OH)2 + 3 MgCO3 + 3 H2O
(serpentine + carbon dioxide → talc + magnesite + water)

Talc can also be formed via a reaction between dolomite and silica, which is typical of skarnification of dolomites by silica-flooding in contact metamorphic aureoles:

3 CaMg(CO3)2 + 4 SiO2 + H2O → Mg3Si4O10(OH)2 + 3 CaCO3 + 3 CO2
(dolomite + silica + water → talc + calcite + carbon dioxide)
Talc can also be formed from magnesium chlorite and quartz in blueschist and eclogite metamorphism by the following metamorphic reaction:
chlorite + quartz → kyanite + talc + water
In this reaction, the ratio of talc and kyanite depends on aluminium content, with more aluminous rocks favoring production of kyanite. This is typically associated with high-pressure, low-temperature minerals such as phengite, garnet, and glaucophane within the lower blueschist facies. Such rocks are typically white, friable, and fibrous, and are known as whiteschists.

Talc is also found as a diagenetic mineral in sedimentary rocks, where it can form from the transformation of metastable hydrated magnesium-clay precursors such as kerolite, sepiolite, or stevensite that can precipitate from marine and lake water in certain conditions.
Talc is a trioctahedral layered mineral; its structure is similar to pyrophyllite, but with magnesium in the octahedral sites of the composite layers. The crystal structure of talc is described as TOT, meaning that it is composed of parallel TOT layers weakly bonded to each other by weak van der Waals forces. The TOT layers in turn consist of two tetrahedral sheets (T) strongly bonded to the two faces of a single trioctahedral sheet (O). It is the weak bonding between TOT layers that gives talc its perfect basal cleavage and softness.
The tetrahedral sheets consist of silica tetrahedra, which are silicon ions surrounded by four oxygen ions. The tetrahedra each share three of their four oxygen ions with neighboring tetrahedra to produce a hexagonal sheet. The remaining oxygen ion (the apical oxygen ion) is available to bond with the trioctahedral sheet.
The trioctahedral sheet has the structure of a sheet of the mineral brucite. Apical oxygens take the place of some of the hydroxyl ions that would be present in a brucite sheet, bonding the tetrahedral sheets tightly to the trioctahedral sheet.
Tetrahedral sheets have a negative charge, since their bulk composition is Si4O10^4−. The trioctahedral sheet has an equal positive charge, since its bulk composition is Mg3(OH)2^4+. The combined TOT layer thus is electrically neutral.
Because the hexagons in the T and O sheets are slightly different in size, the sheets are slightly distorted when they bond into a TOT layer. This breaks the hexagonal symmetry and reduces it to monoclinic or triclinic symmetry. However, the original hexagonal symmetry is discernible in the pseudotrigonal character of talc crystals.
Occurrence
Talc is a common metamorphic mineral in metamorphic belts that contain ultramafic rocks, such as soapstone (a high-talc rock), and within whiteschist and blueschist metamorphic terranes. Prime examples of whiteschists include the Franciscan Metamorphic Belt of the western United States, the western European Alps especially in Italy, certain areas of the Musgrave Block, and some collisional orogens such as the Himalayas, which stretch along Pakistan, India, Nepal, and Bhutan.
Talc carbonate ultramafics are typical of many areas of the Archaean cratons, notably the komatiite belts of the Yilgarn Craton in Western Australia. Talc-carbonate ultramafics are also known from the Lachlan Fold Belt, eastern Australia, from Brazil, the Guiana Shield, and from the ophiolite belts of Turkey, Oman, and the Middle East.
China is the key talc- and steatite-producing country, with an output of about 2.2 million tonnes (2016), which accounts for 30% of total global output. The other major producers are Brazil (12%), India (11%), the U.S. (9%), France (6%), Finland (4%), Italy, Russia, Canada, and Austria (2% each).
Notable economic talc occurrences include the Mount Seabrook talc mine, Western Australia, formed upon a polydeformed, layered ultramafic intrusion. The France-based Luzenac Group is the world's largest supplier of mined talc. Its largest talc mine at Trimouns near Luzenac in southern France produces 400,000 tonnes of talc per year.
Conflict mineral
Extraction in disputed areas of Nangarhar province, Afghanistan, has led the international monitoring group Global Witness to declare talc a conflict resource, as the profits are used to fund armed confrontation between the Taliban and Islamic State.
Uses
Talc is used in many industries, including paper making, plastic, paint and coatings (e.g. for metal casting molds), rubber, food, electric cable, pharmaceuticals, cosmetics, and ceramics. A coarse grayish-green high-talc rock is soapstone or steatite, used for stoves, sinks, electrical switchboards, etc. It is often used for surfaces of laboratory table tops and electrical switchboards because of its resistance to heat, electricity, and acids.
In finely ground form, talc finds use as a cosmetic (talcum powder), as a lubricant, and as a filler in paper manufacture. It is used to coat the insides of inner tubes and rubber gloves during manufacture to keep the surfaces from sticking. Talcum powder, with heavy refinement, has been used in baby powder, an astringent powder used to prevent diaper rash (nappy rash). The American Academy of Pediatrics recommends that parents avoid using baby powder because it poses a risk of respiratory problems, including breathing trouble and serious lung damage if inhaled. The small size of the particles makes it difficult to keep them out of the air while applying the powder. Zinc oxide-based ointments are a much safer alternative.
Soapstone (massive talc) is often used as a marker for welding or metalworking.
Talc is also used as food additive or in pharmaceutical products as a glidant. In medicine, talc is used as a pleurodesis agent to prevent recurrent pleural effusion or pneumothorax. In the European Union, the additive number is E553b. Talc may be used in the processing of white rice as a buffing agent in the polishing stage.
Due to its low shear strength, talc is one of the oldest known solid lubricants. Also, limited use is made of talc as a friction-reducing additive in lubricating oils.
Talc is widely used in the ceramics industry in both bodies and glazes. In low-fire art-ware bodies, it imparts whiteness and increases thermal expansion to resist crazing. In stonewares, small percentages of talc are used to flux the body and therefore improve strength and vitrification. It is a source of MgO flux in high-temperature glazes (to control melting temperature). It is also employed as a matting agent in earthenware glazes and can be used to produce magnesia mattes at high temperatures.
ISO standard for quality (ISO 3262)
Patents are pending on the use of magnesium silicate as a cement substitute. Its production requirements are less energy-intensive than ordinary Portland cement (at a heating requirement of around 650 °C for talc compared to 1500 °C for limestone to produce Portland cement), while it absorbs far more carbon dioxide as it hardens. This results in a negative carbon footprint overall, as the cement substitute removes 0.6 tonnes of CO2 per tonne used. This contrasts with a positive carbon footprint of 0.4 tonnes per tonne of conventional cement.
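Taking the two footprint figures above at face value, each tonne of conventional cement replaced swings the balance by about one tonne of CO2 (simple arithmetic on the quoted numbers):

```python
# Net CO2 difference per tonne of cement substituted, using the
# figures quoted above (tonnes of CO2 per tonne of cement).
talc_based = -0.6      # magnesium-silicate cement absorbs CO2 as it hardens
portland = +0.4        # conventional Portland cement emits CO2

net_swing = portland - talc_based
print(net_swing)       # 1.0 tonne of CO2 avoided per tonne substituted
```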
Talc is used in the production of the materials that are widely used in the building interiors such as base content paints in wall coatings. Other areas that use talc to a great extent are organic agriculture, the food industry, cosmetics, and hygiene products such as baby powder and detergent powder.
Talc is sometimes used as an adulterant to illegal heroin, to expand volume and weight and thereby increase its street value. With intravenous use, it may lead to pulmonary talcosis, a granulomatous inflammation in the lungs.
Sterile talc powder
Sterile talc powder (NDC 63256-200-05) is a sclerosing agent used in the procedure of pleurodesis. This can be helpful as a cancer treatment to prevent pleural effusions (an abnormal collection of fluid in the space between the lungs and the thoracic wall). It is inserted into the space via a chest tube, causing it to close up, so fluid cannot collect there. The product can be sterilized by dry heat, ethylene oxide, or gamma irradiation.
Safety
Suspicions have been raised that talc use contributes to certain types of disease, mainly cancers of the ovaries and lungs. According to the IARC, talc containing asbestos is classified as a group 1 agent (carcinogenic to humans), talc use in the perineum is classified as group 2B (possibly carcinogenic to humans), and talc not containing asbestos is classified as group 2A (probably carcinogenic to humans). Reviews by Cancer Research UK and the American Cancer Society conclude that some studies have found a link, but other studies have not.
The studies discuss pulmonary issues, lung cancer, and ovarian cancer. One of these, published in 1993, was a US National Toxicology Program report, which found that cosmetic grade talc containing no asbestos-like fibres was correlated with tumor formation in rats forced to inhale talc for 6 hours a day, five days a week over at least 113 weeks. A 1971 paper found particles of talc embedded in 75% of the ovarian tumors studied. In 2018, Health Canada issued a warning against inhaling talcum powder or women's using it perineally.
In contrast, however, research published in 1995 and 2000 concluded that, although it was plausible that talc could cause ovarian cancer, no conclusive evidence had been shown. Further, a 2008 European Journal of Cancer Prevention review of ovarian cancer and talc use studies pointed out that, although many of them examined the duration, frequency, and accumulation of hygienic talc use, few found a positive association among these factors and some found a negative one: “It may be argued that the overall null findings associated with talc-dusted diaphragms and condom use is more convincing evidence for a lack of a carcinogenic effect, especially given the lack of an established correlation between perineal dusting frequency and ovarian tissue talc concentrations and the lack of a consistent dose-response relationship with ovarian cancer risk." Instead, the authors credited powdered talc with "a high degree of safety.”
Similarly, in a 2014 article published in a leading cancer journal, the Journal of the National Cancer Institute, researchers reported the results of a survey of 61,576 postmenopausal women, more than half of whom had used talc powder perineally. The researchers compared the subjects’ reports of their own talc use with their reports of having had ovarian cancer diagnosed by their doctors, and found, regardless of subjects’ age and tubal ligation status, “Ever use of perineal powder ... was not associated with risk of ovarian cancer compared with never use,” nor was any greater individual cancer risk associated with longer use of talc powder. On this basis, the article concluded, “perineal powder use does not appear to influence ovarian cancer risk.” The Cosmetic Ingredient Review Expert Panel concluded in 2015 that talc, in the concentrations currently used in cosmetics, is safe.
In July 2024, the International Agency for Research on Cancer listed talc as "probably" carcinogenic for humans. The classification is based on limited evidence that talc could cause ovarian cancer in humans.
Industrial grade
In the United States, the Occupational Safety and Health Administration and National Institute for Occupational Safety and Health have set occupational exposure limits to respirable talc dusts at 2 mg/m3 over an eight-hour workday. At levels of 1000 mg/m3, inhalation of talc is considered immediately dangerous to life and health.
Food grade
The United States Food and Drug Administration considers talc (magnesium silicate) generally recognized as safe (GRAS) to use as an anticaking agent in table salt in concentrations smaller than 2%.
Association with asbestos
One particular issue with commercial use of talc is its frequent co-location in underground deposits with asbestos ore. Asbestos is a general term for different types of fibrous silicate minerals, desirable in construction for their heat resistant properties. There are six varieties of asbestos; the most common variety in manufacturing, white asbestos, is in the serpentine family. Serpentine minerals are sheet silicates; although not in the serpentine family, talc is also a sheet silicate, with two sheets connected by magnesium cations. The frequent co-location of talc deposits with asbestos may result in contamination of mined talc with white asbestos, which poses serious health risks when dispersed into the air and inhaled. Stringent quality control since 1976, including separating cosmetic- and food-grade talc from that destined for industrial use, has largely eliminated this issue, but it remains a potential hazard requiring mitigation in the mining and processing of talc. A 2010 US FDA survey failed to find asbestos in a variety of talc-containing products. A 2018 Reuters investigation asserted that pharmaceuticals company Johnson & Johnson knew for decades that there was asbestos in its baby powder, and in 2020 the company stopped selling its baby powder in the US and Canada. There were calls for Johnson & Johnson's largest shareholders to force the company to end global sales of baby powder, and hire an independent firm to conduct a racial justice audit as it had been marketed to African American and overweight women. On August 11, 2022, the company announced it would stop making talc-based powder by 2023 and replace it with cornstarch-based powders. The company said the talc-based powder is safe to use and does not contain asbestos.
Litigation
In 2006 the International Agency for Research on Cancer classified talcum powder as a possible human carcinogen if used in the female genital area. Despite this, no federal agency in the US acted to remove talcum powder from the market or add warnings.
In February 2016, as the result of a lawsuit against Johnson & Johnson (J&J), a St. Louis jury awarded $72 million to the family of an Alabama woman who died from ovarian cancer. The family claimed that the use of talcum powder was responsible for her cancer.
In May 2016, a South Dakota woman was awarded $55 million as the result of another lawsuit against J&J. The woman had used Johnson & Johnson's Baby Powder for more than 35 years before being diagnosed with ovarian cancer in 2011.
In October 2016, a St. Louis jury awarded $70.1 million to a Californian woman with ovarian cancer who had used Johnson's Baby Powder for 45 years.
In August 2017, a Los Angeles jury awarded $417 million to a Californian woman, Eva Echeverria, who developed ovarian cancer as a "proximate result of the unreasonably dangerous and defective nature of talcum powder", her lawsuit against Johnson & Johnson stated. On 20 October 2017, Los Angeles Superior Court judge Maren Nelson dismissed the verdict. The judge stated that Echeverria proved there is "an ongoing debate in the scientific and medical community about whether talc more probably than not causes ovarian cancer and thus (gives) rise to a duty to warn", but not enough to sustain the jury's imposition of liability against Johnson & Johnson stated, and concluded that Echeverria did not adequately establish that talc causes ovarian cancer.
In July 2018, a court in St. Louis awarded a $4.7bn claim ($4.14bn in punitive damages and $550m in compensatory damages) against J&J to 22 claimant women, concluding that the company had suppressed evidence of asbestos in its products for more than four decades.
At least 1,200 to 2,000 other talcum powder-related lawsuits were pending.
In 2020 J&J stopped sales of its talcum-based baby powder, which it had been selling for 130 years. J&J created a subsidiary responsible for the claims in an effort to resolve the lawsuits in bankruptcy court. In 2023 J&J proposed a nearly $9bn settlement with 50,000 claimants saying the claims were "specious" but it wanted to move on from the issue, but judges blocked the plans, ruling that the subsidiary was not in financial distress and could not use the bankruptcy system to resolve the lawsuits.
In July 2023 J&J sued researchers who linked talc to cancer alleging they used junk science to disparage company's products, while defendants say the lawsuits are meant to silence scientists.
See also
References
Phyllosilicates
Magnesium minerals
Symbols of Vermont
Cosmetics chemicals
Excipients
IARC Group 3 carcinogens
Clay minerals group
Triclinic minerals
Minerals in space group 2
Monoclinic minerals
Minerals in space group 15
Luminescent minerals
E-number additives | Talc | Chemistry | 4,072 |
53,117,142 | https://en.wikipedia.org/wiki/Microbes%20and%20Man | Microbes and Man is a popularising book by the English microbiologist John Postgate FRS on the role of microorganisms in human society, first published in 1969, and still in print in 2017. Critics called it a "classic" and "a pleasure to read".
Book
Contents
The book is structured as follows:
1 Man and microbes
2 Microbiology
3 Microbes in society
4 Interlude: how to handle microbes
5 Microbes in nutrition
6 Microbes in production
7 Deterioration, decay and pollution
8 Disposal and cleaning-up
9 Second interlude: microbiologists and man
10 Microbes in evolution
11 Microbes in the future
Illustrations
The 4th edition has 32 illustrations, ranging from photographs of microscopic algae, protozoa, fungi, viruses and bacteria, to the macroscopic effects of microbes such as a sulphur-forming lake in Libya and fish killed by bacterial reduction of sulphate in water.
Editions
1st edition, Cambridge University Press, 1969
2nd edition, Cambridge University Press, 1986
3rd edition, Cambridge University Press, 1992
4th edition, Cambridge University Press, 2000
The book has been translated into nine languages: Arabic, Chinese, Czech, French, German, Japanese, Polish, Portuguese, and Spanish.
Reception
The Guardian described the book as "a passionate case for the importance of micro-organisms".
In his textbook Essential Microbiology, Stuart Hogg recommends the book to readers who want a general overview of microbes and their uses, stating "there can be no better starting point than John Postgate's classic".
New Scientist described the book as "a pleasure to read from first page to last. It is a literal statement. Start to read it and the first page, describing the astonishing dispersion of microbes, from the upper atmosphere to the depths of the sea, will provide any reader with enough wonder and excitement to take them through to the last page and the surface of Venus." The magazine commented that Postgate's "admirable, elegantly written and painlessly informative book" came close to losing its alliterative title, at the hands of "militant feminists" at Penguin Books editing the paperback version in 1986.
Dennis R. Schneider, reviewing the 3rd edition in 1992 for Cell, described the book as having "succinctly and carefully explained examples of how microorganisms affect our lives ... one of the classics of popular science", standing alongside classics like Rosebury's Life of Man and De Kruif's Microbe Hunters. Schneider wrote that the book's Britishness "'colours' the text", but Postgate's emphasis on the beneficial and not just the harmful effects of microbes was welcome and admirably explored. He noted few errors, but objected to Postgate's assertion that AIDS "originated by transmission from a primate", for which there was at that time no evidence. Schneider would have liked a "better and longer" account of molecular biology. His chief criticism, however, was that by the 1990s the book no longer had an audience, since "the Victorian ideal of the educated middle class has vanished into the wasteland of broken families, double digit unemployment and a damaged educational system". All the same, he found the book "of value and beauty (except perhaps to the publisher)".
Charles W. Kim, reviewing the 3rd edition for The Quarterly Review of Biology, stated that "If the author's intent was to present the impact of the ubiquitous microorganisms on the environment and humans, he has succeeded admirably", describing Postgate's style as "unique".
D. Roy Cullimore, in his Practical Atlas for Bacterial Identification, comments that all four editions were "easy reading", addressing the challenges that microbes presented to human society. He suggests that "ideally" all four books be read in sequence for an overview of the development of microbiology in half a century.
Notes
References
1969 non-fiction books
Microbiology
Popular science books | Microbes and Man | Chemistry,Biology | 816 |
33,938,046 | https://en.wikipedia.org/wiki/Boletus%20projectelloides | Boletus projectelloides is a species of bolete fungus in the family Boletaceae. Found in Belize, it was described as new to science in 2007.
See also
List of Boletus species
References
External links
projectelloides
Fungi described in 2007
Fungi of Central America
Fungus species | Boletus projectelloides | Biology | 60 |
25,164,535 | https://en.wikipedia.org/wiki/Scammonin%20I | Scammonin (also known as jalapin or scammonium) is a glycoside that has been isolated from the stems of Ipomoea purga (jalap plant) and from Convolvulus scammonia (scammony).
References
External links
Glycosides
Glycolipids | Scammonin I | Chemistry | 77
27,887,532 | https://en.wikipedia.org/wiki/Iota%20Herculis | Iota Herculis (ι Herculis, ι Her) is a fourth-magnitude variable star system in the constellation Hercules, consisting of at least four stars all about away. The brightest is a β Cephei variable, a pulsating star.
Visibility
Iota Herculis is dim enough that it is unlikely to be visible to the naked eye in cities with heavy light pollution. In rural areas it is usually visible, and for much of the Northern Hemisphere the star is circumpolar and visible year-round.
Pole star
As a visible star, the proximity of Iota Herculis to the precessional path that the Earth's north pole traces across the celestial sphere makes it a pole star, a title currently held by Polaris. In 10,000 BCE it was the pole star, and in the future it will be again. While Polaris is only 0.5° off the precessional path, Iota Herculis is 4° off.
Properties
Iota Herculis is a B-type subgiant star that is at the end of its hydrogen fusion stage. With a stellar classification of B3IV, it is considerably larger than the Sun, having a mass 6.5 times solar and a radius 5.3 times solar. Though its apparent magnitude is only 3.80, it is 2,500 times more luminous than the Sun, yielding an absolute magnitude of −2.11, brighter in fact than most of the hot B stars in the Pleiades open star cluster. The Hipparcos satellite mission estimated its distance at roughly 152 parsecs (pc) from Earth, or 496 light-years (ly); an updated parallax measurement from Floor van Leeuwen in 2007, however, puts the distance at 455 ly with a much tighter error margin of only 8 ly.
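As a check on these figures, the absolute magnitude follows from the distance modulus M = m − 5·log10(d/10 pc). The minimal Python sketch below uses the 152 pc Hipparcos estimate quoted above:

import math

m = 3.80      # apparent magnitude
d_pc = 152.0  # Hipparcos distance estimate in parsecs

# Distance modulus: M = m - 5 * log10(d / 10 pc)
M = m - 5 * math.log10(d_pc / 10.0)
print(round(M, 2))  # -2.11, matching the absolute magnitude quoted above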
Star system
Iota Herculis is a multiple star system. It is a spectroscopic binary having a 113.8-day period, indicating that its closest component is separated by about . Another companion can be found at approximately 30 AU from the main star, giving it an orbital period of about 60 years. Still another star has been identified with a common proper motion at an angular separation of 116 arcseconds and a visual magnitude of 12.1. This would place it approximately 18,000 AU away, giving it an orbital period of about 1 million years.
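For intuition on how the 113.8-day period translates into a separation, Kepler's third law in solar units gives a³ = M·P², with a in AU, P in years, and M the total system mass in solar masses. The companion's mass is not stated above, so the sketch below assumes, purely for illustration, a roughly one-solar-mass companion:

P_years = 113.8 / 365.25  # orbital period in years
M_total = 6.5 + 1.0       # solar masses; the 1.0 companion mass is an assumption

# Kepler's third law in solar units: a^3 = M * P^2
a_au = (M_total * P_years ** 2) ** (1 / 3)
print(round(a_au, 2))     # ~0.9 AU under these assumptions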
Etymology
In Chinese, 天棓 (Tiān Bàng), meaning Celestial Flail, refers to an asterism consisting of ι Herculis, ξ Draconis, ν Draconis, β Draconis and γ Draconis. Consequently, ι Herculis itself is known as 天棓五 (Tiān Bàng wǔ, the Fifth Star of Celestial Flail).
References
External links
Jim Kaler's Stars, University of Illinois: IOTA HER (Iota Herculis)
An Atlas of the Universe: Multiple Star Orbits
Hercules (constellation)
Herculis, Iota
4
Tiān Bàng wu
B-type subgiants
Spectroscopic binaries
Slowly pulsating B-type stars
Herculis, 085
086414
160762
6588
Durchmusterung objects | Iota Herculis | Astronomy | 647 |
2,504,255 | https://en.wikipedia.org/wiki/Social%20dominance%20orientation | Social dominance orientation (SDO) is a personality trait measuring an individual's support for social hierarchy and the extent to which they desire their in-group be superior to out-groups. SDO is conceptualized under social dominance theory as a measure of individual differences in levels of group-based discrimination; that is, it is a measure of an individual's preference for hierarchy within any social system and the domination over lower-status groups. It is a predisposition toward anti-egalitarianism within and between groups.
Individuals who score high in SDO desire to maintain and, in many cases, increase the differences between social statuses of different groups, as well as individual group members. Typically, they are dominant, driven, tough, and seekers of power. People high in SDO also prefer hierarchical group orientations. Often, people who score high in SDO adhere strongly to belief in a "dog-eat-dog" world. It has also been found that men are generally higher than women in SDO measures. A study of undergraduates found that SDO does not have a strong positive relationship with authoritarianism.
Social dominance theory
SDO was first proposed by Jim Sidanius and Felicia Pratto as part of their social dominance theory (SDT). SDO is the key measurable component of SDT that is specific to it.
SDT begins with the empirical observation that surplus-producing social systems have a threefold group-based hierarchy structure: age-based, gender-based and "arbitrary set-based", which can include race, class, sexual orientation, caste, ethnicity, religious affiliation, etc. Age-based hierarchies invariably give more power to adults and middle-age people than children and younger adults, and gender-based hierarchies invariably grant more power to one gender over others, but arbitrary-set hierarchies—though quite resilient—are truly arbitrary.
SDT is based on three primary assumptions:
While age- and gender-based hierarchies will tend to exist within all social systems, arbitrary-set systems of social hierarchy will invariably emerge within social systems producing sustainable economic surpluses.
Most forms of group conflict and oppression (e.g., racism, homophobia, ethnocentrism, sexism, classism, regionalism) can be regarded as different manifestations of the same basic human predisposition to form group-based hierarchies.
Human social systems are subject to the counterbalancing influences of hierarchy-enhancing (HE) forces, producing and maintaining ever higher levels of group-based social inequality, and hierarchy-attenuating (HA) forces, producing greater levels of group-based social equality.
SDO is the individual attitudinal aspect of SDT. It is influenced by group status, socialization, and temperament. In turn, it influences support for HE and HA "legitimating myths", defined as "values, attitudes, beliefs, causal attributions and ideologies" that in turn justify social institutions and practices that either enhance or attenuate group hierarchy. Legitimising myths are used by SDT to refer to widely accepted ideologies that are accepted as explaining how the world works—SDT does not have a position on the veracity, morality or rationality of these beliefs, as the theory is intended to be a descriptive account of group-based inequality rather than a normative theory.
Early development
While the correlation of gender with SDO scores has been empirically measured and confirmed, the impact of temperament and socialization is less clear. Duckitt has suggested a model of attitude development for SDO, suggesting that unaffectionate socialisation in childhood causes a tough-minded attitude. According to Duckitt's model, people high in tough-minded personality are predisposed to view the world as a competitive place in which resource competition is zero-sum. A desire to compete, which fits with social dominance orientation, influences in-group and outside-group attitudes. People high in SDO also believe that hierarchies are present in all aspects of society and are more likely to agree with statements such as "It's probably a good thing that certain groups are at the top and other groups are at the bottom".
Scale
SDO has been measured by a series of scales that have been refined over time, all of which contain a balance of pro- and contra-trait statements or phrases. A 7-point Likert scale is used for each item; participants rate their agreement or disagreement with the statements from 1 (strongly disagree) to 7 (strongly agree). Most of the research was conducted with the SDO-5 (a 14-item scale) and SDO-6. The SDO-7 scale is the most recent scale measuring social dominance orientation, which embeds two sub-dimensions: dominance (SDO-D) and anti-egalitarianism (SDO-E).
SDO-7 items
Dominance Sub-Scale
Some groups of people must be kept in their place.
It's probably a good thing that certain groups are at the top and other groups are at the bottom.
An ideal society requires some groups to be on top and others to be on the bottom.
Some groups of people are simply inferior to other groups.
Groups at the bottom are just as deserving as groups at the top. (reverse-scored)
No one group should dominate in society. (reverse-scored)
Groups at the bottom should not have to stay in their place. (reverse-scored)
Group dominance is a poor principle. (reverse-scored)
Anti-Egalitarianism Sub-Scale
We should not push for group equality.
We shouldn't try to guarantee that every group has the same quality of life.
It is unjust to try to make groups equal.
Group equality should not be our primary goal.
We should work to give all groups an equal chance to succeed. (reverse-scored)
We should do what we can to equalize conditions for different groups. (reverse-scored)
No matter how much effort it takes, we ought to strive to ensure that all groups have the same chance in life. (reverse-scored)
Group equality should be our ideal. (reverse-scored)
SDO-16 items
Some groups of people are just more worthy than others.
In getting what you want, it is sometimes necessary to use force against other groups.
It's OK if some groups have more of a chance in life than others.
To get ahead in life, it is sometimes necessary to step on other groups.
If certain groups stayed in their place, we would have fewer problems.
It's probably a good thing that certain groups are at the top and other groups are at the bottom.
Inferior groups should stay in their place.
Sometimes other groups must be kept in their place.
It would be good if groups could be equal. (reverse-scored)
Group equality should be our ideal. (reverse-scored)
All groups should be given an equal chance in life. (reverse-scored)
We should do what we can to equalize conditions for different groups. (reverse-scored)
Increased social equality is beneficial to society. (reverse-scored)
We would have fewer problems if we treated people more equally. (reverse-scored)
We should strive to make incomes as equal as possible. (reverse-scored)
No group should dominate in society. (reverse-scored)
Keying is reversed on questions 9 through 16, to control for acquiescence bias.
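Since the keying convention matters for scoring, the following minimal Python sketch shows one common way to compute an overall SDO-16 score; averaging the items (rather than summing them) is an assumption here, as the exact convention varies by study:

def score_sdo16(responses):
    # responses: sixteen Likert ratings from 1 (strongly disagree) to
    # 7 (strongly agree), in the item order listed above.
    # Items 9-16 are reverse-keyed: a response r becomes 8 - r.
    if len(responses) != 16:
        raise ValueError("SDO-16 expects exactly 16 responses")
    keyed = [r if i < 8 else 8 - r for i, r in enumerate(responses)]
    return sum(keyed) / 16.0

# A respondent who rejects every pro-dominance item (1) and endorses every
# egalitarian, reverse-keyed item (7) scores the scale minimum of 1.0:
print(score_sdo16([1] * 8 + [7] * 8))  # 1.0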
Criticisms of the construct
Rubin and Hewstone (2004) argue that social dominance research has changed its focus dramatically over the years, and these changes have been reflected in different versions of the social dominance orientation construct. Social dominance orientation was originally defined as "the degree to which individuals desire social dominance and superiority for themselves and their primordial groups over other groups" (p. 209). It then quickly changed to not only "(a) a...desire for and value given to in-group dominance over out-groups" but also "(b) the desire for nonegalitarian, hierarchical relationships between groups within the social system" (p. 1007). The most recent measure of social dominance orientation (see SDO-6 above) focuses on the "general desire for unequal relations among social groups, regardless of whether this means ingroup domination or ingroup subordination" (p. 312). Given these changes, Rubin and Hewstone believe that evidence for social dominance theory should be considered "as supporting three separate SDO hypotheses, rather than one single theory" (p. 22).
Group-based and individual dominance
Robert Altemeyer said that people with a high SDO want more power (agreeing with items such as "Winning is more important than how you play the game").
These observations are at odds with conceptualisations of SDO as a group-based phenomenon, suggesting that the SDO reflects interpersonal dominance, not only group-based dominance. This is supported by Sidanius and Pratto's own evidence that high-SDO individuals tend to gravitate toward hierarchy-enhancing jobs and institutions, such as law enforcement, that are themselves hierarchically structured vis-a-vis individuals within them.
Relations with other personality traits
Connection with right-wing authoritarianism
SDO correlates weakly with right-wing authoritarianism (RWA) (r ≈ .18). Both predict attitudes, such as sexist, racist, and heterosexist attitudes. The two contribute to different forms of prejudice: SDO correlates to higher prejudice against subordinate and disadvantaged groups, RWA correlates to higher prejudice against groups deemed threatening to traditional norms, while both are associated with increases in prejudice for "dissident" groups. SDO and RWA contribute to prejudice in an additive rather than interactive way (the interaction of SDO and RWA accounted, in one study, for an average of less than .001% variance in addition to their linear combination), that is the association between SDO and prejudice is similar regardless of a person's level of RWA, and vice versa. Crawford et al. (2013) found that RWA and SDO differentially predicted interpretations of media reports about socially threatening (for example, gays and lesbians) and disadvantaged groups (for example, African Americans), respectively. Subjects with high SDO, but not RWA, scores reacted positively to articles and authors that opposed affirmative action, and negatively to pro-affirmative-action article content. Moreover, RWA, but not SDO, predicted subjects' evaluations of same-sex relationships, such that high-RWA individuals favored anti-same-sex relationships article content and low-RWA individuals favorably rated pro-same-sex relationships content.
Correlation with Big Five personality traits
Studies on the relationship of SDO with the higher order Big Five personality traits have associated high SDO with lower openness to experience and lower agreeableness. Meta-analytic aggregation of these studies indicates that the association with low Agreeableness is more robust than the link to Openness to experience. Individuals low in Agreeableness are more inclined to report being motivated by self-interest and self-indulgence. They also tend to be more self-centred and are more 'tough-minded' compared to those who are high on Agreeableness, leading them to perceive the world to be a highly competitive place, where the way to success is through power and dominance – all of which predict SDO.
Low Openness, by contrast, aligns more strongly with RWA; thinking in clear and straightforward moral codes that dictate how society as a system should function. Being low in Openness prompts the individual to value security, stability and control: fundamental elements of RWA.
Facet-level associations
In the case of SDO, all five facets of Agreeableness correlate significantly (negatively), even after controlling for RWA. 'Tough-mindedness' (the opposite of the 'tender-mindedness' facet) is the strongest predictor of SDO. After the effect of SDO is controlled for, only one facet of Agreeableness is predictive of RWA. Facets also distinguish SDO from RWA, with 'Dominators' (individuals high on SDO), but not 'Authoritarians' (individuals who score high on RWA), having been found to be lower in dutifulness, morality, sympathy and co-operation. RWA is also associated with religiosity, conservatism, righteousness, and, to some extent, a conscientious moral code, which distinguishes RWA from SDO.
Empathy
SDO is inversely related to empathy. Facets of Agreeableness that are linked to altruism, sympathy and compassion are the strongest predictors of SDO. SDO has been suggested to have a link with callous affect (which is to be found on the psychopathy sub-scale), the 'polar opposite' of empathy.
The relationship between SDO and (lack of) empathy has been found to be reciprocal – with equivocal findings. Some studies show that empathy significantly impacts SDO, whereas other research suggests the opposite effect is more robust: that SDO predicts empathy. The latter showcases how powerful a predictor SDO may be, not only affecting certain behaviours but potentially influencing, upstream, the proneness to those behaviours. It also suggests that those scoring high on SDO proactively avoid scenarios that could prompt them to be more empathetic or tender-minded. This avoidance decreases concern for others' welfare.
Empathy indirectly affects generalized prejudice through its negative relationship with SDO. It also has a direct effect on generalized prejudice, as lack of empathy makes one unable to put oneself in the other person's shoes, which predicts prejudice and antidemocratic views.
Some recent research has suggested the relationship between SDO and empathy may be more complex, arguing that people with high levels of SDO are less likely to show empathy towards low status people but more likely to show it towards high status people. Conversely, people with low SDO levels demonstrate the reverse behaviour.
Other findings and criticisms
Research suggests that people high in SDO tend to support using violence in intergroup relations while those low in SDO oppose it; however, it has also been argued that people low in SDO can also support (and those high in it oppose) violence in some circumstances, if the violence is seen as a form of counterdominance. For example, Lebanese people low in SDO approved more strongly of terrorism against the West than Lebanese people high in SDO, seemingly because it entailed a low-status group (Lebanese) attacking a high-status one (Westerners). Amongst Palestinians, lower SDO levels were correlated with more emotional hostility towards Israelis and more parochial empathy for Palestinians.
Low levels of SDO have been found to result in individuals possessing positive biases towards outgroup members, for example regarding outgroup members as less irrational than ingroup members, the reverse of what is usually found. Low levels of SDO have also been found to be linked to being better at detecting inequalities applied to low-status groups but not the same inequalities applied to high-status groups. A person's SDO levels can also affect the degree to which they perceive hierarchies, either over or underestimating them, although the effect sizes may be quite small. A person's SDO levels can also shift depending on their identification with their ingroup and low levels of SDO thus may reflect a more complex relationship to ideas of inequality and social hierarchy than just egalitarianism. While research has indicated that SDO is a strong predictor of various forms of prejudice, it has also been suggested that SDO may not be related to prejudice per se but rather be dependent upon the target, as SDO has been found to correlate positively with prejudice towards hierarchy-attenuating groups but negatively with prejudice towards hierarchy-enhancing groups.
In the contemporary US, research indicates that most people tend to score fairly low on the SDO scale, with an average score of 2.98 on a 7-point scale (with 7 being the highest in SDO and 1 the lowest), with a standard deviation of 1.19. This has also been found to apply cross-culturally, with the average SDO score being around 2.6, although there was some variation (Switzerland scoring somewhat lower and Japan scoring substantially higher). A study in New Zealand found that 91% of the population had low to moderate SDO levels (levels of 1–4 on the scale), indicating that the majority of variance in SDO occurs within this band. A 2013 multi-national study found average scores ranged from 2.5 to 4. Because SDO scales tend to skew towards egalitarianism, some researchers have argued that this has caused a misinterpretation of correlations between SDO scores and other variables, arguing that low-SDO scorers, rather than high-SDO scorers, are possibly driving most of the correlations. Thus SDO research may actually be discovering the psychology of egalitarianism rather than the reverse. Samantha Stanley argues that "high" SDO scorers are generally in the middle of the SDO scale and thus she suggests their scores do not actually represent an endorsement of inequality but rather a greater tolerance or ambivalence towards it than low SDO scorers. Stanley suggests that true high-SDO scorers are possibly quite rare and that researchers need to make clearer what exactly they are defining high-SDO scores as, as prior studies did not always report the actual level of SDO endorsement from high-scorers. Some researchers have raised concerns that the trait is studied under an ideological framework of viewing group-based interactions as one of victims and victimisers (hence its label as social dominance orientation), and that research into SDO should instead look into social organisation rather than social dominance.
SDO has been found to be related to color-blindness as a racial ideology. For low-SDO individuals, color-blindness predicts more negative attitudes towards ethnic minorities but for high-SDO individuals, it predicts more positive attitudes. SDO levels can also interact with other variables. When assessing blame for the 2011 England riots, high-SDO individuals uniformly blamed ethnic diversity regardless of whether they agreed with official government discourse, whereas low-SDO individuals did not blame ethnic diversity if they disagreed with official government discourse but did blame ethnic diversity if they did agree, almost to the same degree as high-SDO individuals. Another study found that in a mock hiring experiment, participants high in SDO were more likely to favour a white applicant while those low in SDO were more likely to favour a black applicant, while in mock-juror research, high-SDO white jurors showed anti-black bias and low-SDO white jurors pro-black bias. Low-SDO individuals may also support hierarchy-enhancing beliefs (such as gender essentialism and meritocracy) if they believe this will support diversity.
SDO has also been found to relate to attitudes towards social class. Self-perceived attractiveness can also interact with a person's SDO levels (due to perceived effects on social class); changing a person's self-perceived level of attractiveness affected their self-perceived social class and thus their SDO levels.
A study published in Nature in 2017 indicates there may be a correlation between fMRI-scanned brain responses to social rank and the SDO scale. Subjects who tended to prefer hierarchical social structures and to promote socially dominant behaviours, as measured by SDO, exhibited stronger responses in the right anterior dorsolateral prefrontal cortex (right aDLPFC) when facing superior players. The study, funded by the French National Agency for Research, involved 28 male subjects and used fMRI measurements to demonstrate that the strength of the right aDLPFC response to social rank was strongly correlated with participants' SDO scores.
Correlation with conservative political views
Felicia Pratto and her colleagues have found evidence that a high social dominance orientation is strongly correlated with conservative political views, and opposition to programs and policies that aim to promote equality (such as affirmative action, laws advocating equal rights for homosexuals, women in combat, etc.).
There has been some debate within the psychology community on what the relation is between SDO and racism/sexism. One explanation suggests that opposition to programs that promote equality need not be based on racism or sexism but on a "principled conservatism", that is, a "concern for equality of opportunity, color-blindness, and genuine conservative values".
Some principled-conservatism theorists have suggested that racism and conservatism are independent, and only very weakly correlated among the highly educated, who truly understand the concepts of conservative values and attitudes. In an effort to examine the relationship between education, SDO, and racism, Sidanius and his colleagues asked approximately 4,600 Euro-Americans to complete a survey in which they were asked about their political and social attitudes, and their social dominance orientation was assessed. "These findings contradict much of the case for the principled conservatism hypothesis, which maintains that political values are largely devoid of racism, especially among highly educated people." Contrary to what these theorists would predict, correlations among SDO, political conservatism, and racism were strongest among the most educated, and weakest among the least educated. Sidanius and his colleagues hypothesized this was because the most educated conservatives tend to be more invested in the hierarchical structure of society and in maintaining the inequality of the status quo in society in order to safeguard their status.
SDO levels can also shift in response to threats to political party identity, with conservatives responding to party identity threat by increasing SDO levels and liberals responding by lowering them.
Culture
SDO is typically measured as an individual personality construct. However, cultural forms of SDO have been discovered on the macro level of society. Discrimination, prejudice and stereotyping can occur at various levels of institutions in society, such as transnational corporations, government agencies, schools and criminal justice systems. The basis of this theory of societal level SDO is rooted in evolutionary psychology, which states that humans have an evolved predisposition to express social dominance that is heightened under certain social conditions (such as group status) and is also mediated by factors such as individual personality and temperament. Democratic societies are lower in SDO measures. The more that a society encourages citizens to cooperate with others and feel concern for the welfare of others, the lower the SDO in that culture. High levels of national income and empowerment of women are also associated with low national SDO, whereas more traditional societies with lower income, male domination and more closed institutional systems are associated with a higher SDO. Individuals who are socialized within these traditional societies are more likely to internalize gender hierarchies and are less likely to challenge them.
Biology and sexual differences
The biology of SDO is unknown.
Plenty of evidence suggests that men tend to score higher on SDO than women, and this is true across different countries, cultures, age-groups, classes, religions and educational levels, with the difference generally averaging half a point on the scale. Researchers argue for an 'invariance' in the difference between men's and women's SDO, suggesting that even if all other factors were controlled for, the difference would still remain – though this has been challenged in some cases, and exceptions may be due to complex and highly dependent factors.
From an evolutionary and biological perspective SDO facilitates men to be successful in their reproductive strategy through achieving social power and control over other males and becoming desired mating partners for the opposite sex.
Males are observed to be more socially hierarchical, as indicated by speaking time and yielding to interruptions. Males' higher average SDO levels have been suggested as an explanation for gender differences in support for policies; males are more likely to support military force, defence spending and the death penalty, and less likely to support social welfare or minimum wage legislation, while females are more likely to support the reverse. This is because males, being more likely to have higher SDO scores, are more likely to view inequalities as the natural result of competition, and thus are more likely to have a negative view of policies designed to mitigate or dilute the effects of competition.
Noting that males tend to have higher SDO scores than females, Sidanius and Pratto speculate that SDO may be influenced by hormones that differ between the sexes, namely androgens, primarily testosterone. Male levels of testosterone are much higher than those of females.
Taking a socio-cultural perspective, it is argued that the gap between women's and men's SDO is dependent upon societal norms prescribing different expectations for the gender roles of men and women. Men are expected to be dominant and assertive, whereas women are supposed to be submissive and tender.
Differences between male and female attributional cognitive complexity are suggested to contribute to the gender gap in SDO. Women have been found to be more attributionally complex compared to men; they use more contextual information and evaluate social information more precisely. It is proposed that lower social status prompts higher cognitive complexity in order to compensate for the lack of control in that social situation by processing it more attentively and evaluating it more in depth. The difference in cognitive complexity between high and low status individuals could contribute to the differences between male and female SDO.
Some evidence suggests that both the dominance and anti-egalitarianism dimensions of SDO are determined by genetic, rather than environmental, factors.
See also
References
Bibliography
Personality traits
Personality tests
Social inequality
Abuse
Anti-social behaviour
Barriers to critical thinking
Harassment and bullying
Injustice
Social psychology concepts
Moral psychology
Political psychology | Social dominance orientation | Biology | 5,339 |
26,402,563 | https://en.wikipedia.org/wiki/Foundations%20of%20Algebraic%20Geometry | Foundations of Algebraic Geometry is a book by that develops algebraic geometry over fields of any characteristic. In particular it gives a careful treatment of intersection theory by defining the local intersection multiplicity of two subvarieties.
Weil was motivated by the need for a rigorous theory of correspondences on algebraic curves in positive characteristic, which he used in his proof of the Riemann hypothesis for curves over finite fields.
Weil introduced abstract rather than projective varieties partly so that he could construct the Jacobian of a curve. (It was not known at the time that Jacobians are always projective varieties.) It was some time before anyone found any examples of complete abstract varieties that are not projective.
In the 1950s Weil's work was one of several competing attempts to provide satisfactory foundations for algebraic geometry, all of which were superseded by Grothendieck's development of schemes.
See also
Weil cohomology theory
References
External links
Extracts from the preface of Foundations of Algebraic Geometry
1946 non-fiction books
Algebraic geometry
Mathematics books
History of mathematics | Foundations of Algebraic Geometry | Mathematics | 209 |
56,439,506 | https://en.wikipedia.org/wiki/Plug-in%20electric%20vehicles%20in%20Europe | The adoption of plug-in electric vehicles in Europe is actively supported by the European Union and several national, provincial, and local governments in Europe. A variety of policies have been established to provide direct financial support to consumers and manufacturers; non-monetary incentives; subsidies for the deployment of charging infrastructure; and long term regulations with specific targets. In particular, the EU regulation that set the mandatory targets for average fleet emissions for new cars has been effective in contributing to the successful uptake of plug-in cars in recent years
Europe had about 5.6 million plug-in electric passenger cars and light commercial vehicles on the road at the end of 2021. The European stock of plug-in cars is the world's second largest after China, accounting for about 32% of the global stock in 2021.
Europe also has the world's second largest light commercial electric vehicle stock, 33% of the global fleet in 2020, with France listed as the European country with the largest stock of light-duty all-electric utility vans, about 62,000 units, followed by Germany (29,500) and the UK (almost 15,000).
The plug-in passenger car segment had a market share of 1.3% of new car registrations in 2016, rose to 3.6% in 2019, and achieved 11.4% in 2020. Despite the segment's rapid growth, only 1% of all passenger cars on European roads were plug-in electric.
Germany led cumulative sales in Europe with 1.38 million plug-in cars registered since 2010, followed by France (786,274), the UK (~745,000), Norway (647,000), and the Netherlands (390,454). Norway has the highest market penetration per capita in the world, achieved the world's largest plug-in segment market share of new car sales (86.2% in 2021), and 22% of all passenger cars on Norwegian roads were plug-ins by the end of 2021. Germany was the top selling European country market in terms of annual volume from 2019 to 2023, but was overtaken by the UK in 2024.
In 2020, despite the strong decline in global car sales brought on by the COVID-19 pandemic, annual sales of plug-in passenger cars in Europe surpassed the 1 million mark for the first time. Also, Europe outsold China in 2020 as the world's largest plug-in passenger car market for the first time since 2015.
Government incentives and policies
The European Union and several national, provincial, and local governments around Europe have introduced policies to support the mass market adoption of plug-in electric vehicles. A variety of policies have been established to provide direct financial support to consumers and manufacturers; non-monetary incentives; subsidies for the deployment of charging infrastructure; procurement of electric vehicle for government fleets; and long term regulations with specific targets.
Financial incentives
Financial incentives for consumers aim to make plug-in electric car purchase price competitive with conventional cars due to the still higher up front cost of electric vehicles. Among the financial incentives there are one-time purchase incentives such as tax credits, purchase grants, exemptions from import duties, and other fiscal incentives; exemptions from road, bridge and tunnel tolls, and from congestion pricing fees; and exemption of registration and annual use vehicle fees. There are also several non-monetary incentives such as allowing plug-in vehicles access to bus lanes, free parking and free charging.
Tax benefits and incentives for electrically chargeable passenger cars were available in 24 out of the then 28 European Union member states. Nevertheless, only 12 member states offered bonus or grant payments as purchase incentives, and most countries granted tax reductions or exemptions only for all-electric cars. Croatia, Estonia, Lithuania, and Poland offered no incentives.
French bonus–malus
France introduced in 2008 a bonus–malus based tax system that penalizes fossil-fuel vehicle sales. This revenue-neutral policy mechanism balances government support with direct revenues from the taxes collected on sales of particularly polluting and/or greenhouse-gas-emitting cars. The bonus applies to private and company vehicles purchased on or after 5 December 2007, and is deducted from the purchase price of the vehicle. The malus penalty applies to all vehicles registered after 1 January 2008, and is added at the time of registration.
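Schematically, the mechanism maps a car's CO2 rating to either a rebate (bonus) or a fee (malus). The Python sketch below is illustrative only: the threshold and amount figures are hypothetical, not the official French schedules, which are revised regularly:

def bonus_malus(co2_g_per_km, bonus_schedule, malus_schedule):
    # bonus_schedule: (max_co2, rebate_eur) pairs in ascending CO2 order
    # malus_schedule: (min_co2, fee_eur) pairs in descending CO2 order
    for threshold, rebate in bonus_schedule:
        if co2_g_per_km <= threshold:
            return -rebate   # rebate deducted from the purchase price
    for threshold, fee in malus_schedule:
        if co2_g_per_km >= threshold:
            return fee       # penalty added at registration
    return 0                 # neutral band: no bonus, no malus

# Hypothetical schedules for illustration:
bonus = [(20, 5000), (60, 1000)]
malus = [(250, 2600), (200, 1600), (160, 750)]
print(bonus_malus(0, bonus, malus))    # -5000: e.g. an all-electric car
print(bonus_malus(180, bonus, malus))  #   750: a higher-emitting car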
EU average fleet emissions
European Union Regulation (EC) No 443/2009 set a mandatory average fleet emissions target for new cars, after a voluntary commitment made in 1998 by the auto industry had failed to reduce emissions by 2007. The regulation applies to new passenger cars registered in the European Union and EEA member states for the first time. A carmaker who fails to comply has to pay an "excess emissions premium" for each vehicle registered, according to the amount of g/km of CO2 by which the target is exceeded.
The 2009 regulation set a 2015 target of 130 g/km for the fleet average for new passenger cars. A similar set of regulations for light commercial vehicles was set in 2011, with an emissions target of 175 g/km for 2017. Both targets were met several years in advance. A second set of regulations, passed in 2014, established a new target of average emissions of new cars to fall to 95 g/km, scheduled to be phased-in in 2020 (95%), and fully apply from 2021 onward. The target for light-commercial vehicles was set to 147 g/km by 2020.
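To make the penalty mechanism concrete: from 2019 onward the premium is, to our reading, a flat 95 euros per g/km of exceedance per vehicle registered (earlier years used a tiered rate schedule). The fleet figures in this sketch are hypothetical:

def excess_emissions_premium(fleet_avg_g_km, target_g_km, vehicles, rate_eur=95):
    # Premium owed by a carmaker whose average fleet CO2 emissions
    # exceed its target, at rate_eur per g/km per vehicle registered.
    excess = max(0.0, fleet_avg_g_km - target_g_km)
    return excess * rate_eur * vehicles

# A fleet averaging 98 g/km against a 95 g/km target, across 500,000
# registrations, would owe 3 * 95 * 500,000 = 142.5 million euros:
print(excess_emissions_premium(98, 95, 500_000))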
In April 2019, Regulation (EU) 2019/631 was adopted, which introduced emission performance standards for new passenger cars and new light commercial vehicles for 2025 and 2030. The new Regulation went into force on 1 January 2020, and has replaced and repealed Regulation (EC) 443/2009 and (EU) No 510/2011. The 2019 Regulation set new emission targets relative to a 2021 baseline, with a reduction of the average emissions from new cars by 15% in 2025 (81 g/km), and by 37.5% in 2030 (59 g/km). For light-commercial vehicles the new targets are a 15% reduction for 2025 and a 31% reduction for 2030.
The 2019 Regulation also introduced an incentive mechanism or credit system from 2025 onwards for zero- and low-emission vehicles (ZLEVs). A ZLEV is defined as a passenger car or a commercial van with emissions between 0 and 50 g/km. The regulation set ZLEV sales targets of 15% for 2025 and 35% for 2030, and manufacturers have some flexibility in how they achieve those targets. Carmakers that outperform the ZLEV sales targets will be rewarded with higher emission targets, but the target relaxation is capped at a maximum 5% to safeguard the integrity of the regulation.
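The following is a schematic reading of the credit system described above; the regulation's actual ZLEV factor is more detailed, and this sketch captures only the target relaxation and its 5% cap:

def relaxed_co2_target(base_target_g_km, zlev_share, benchmark):
    # Outperforming the ZLEV sales benchmark relaxes the CO2 target
    # percentage-point for percentage-point, capped at 5%.
    overshoot = max(0.0, zlev_share - benchmark)
    return base_target_g_km * (1.0 + min(overshoot, 0.05))

# A maker with 20% ZLEV sales against the 15% benchmark for 2025
# would receive the full 5% relaxation on an 81 g/km target:
print(round(relaxed_co2_target(81.0, 0.20, 0.15), 2))  # 85.05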
Since 2018, European carmakers have been fully embracing electrification of their car models to further reduce emissions, and comply with the targets established by the EU. The EU regulations have resulted in a significant growth of sales of plug-in electric cars since 2019.
In 2020, despite a strong decline of overall car sales in all countries as a result of the COVID-19 pandemic, the plug-in car segment increased its market share significantly. According to the European Automobile Manufacturers Association (ACEA), during the first quarter of 2020, and due to the COVID‐19 outbreak, the market share of new passenger plug-in electric cars in the 27 EU countries was 6.8%, up from 2.5% in the same period in 2019. In April 2020 the European plug-in market share rose to 11%.
Norwegian case
In order to reduce Norway's greenhouse gas emissions, its government pledged in 2012, among other measures, a target for the average fleet emission rate of new passenger cars of 85 g/km by 2020, 10 g/km lower than the European Commission's targets for 2021.
As a result of its fast growing EV market penetration, average fleet emissions have been falling in Norway every year. Norway achieved in 2016 the European target set for 2021, with average emissions for all new passenger cars registered in 2016 of 93 g/km, down 7 g/km from 2015. Average emissions for all new passenger cars registered in 2017 were 82 g/km, down from 93 g/km in 2016, and below the government's target of 85 grams set for 2020. Norway achieved its transportation emissions target three years before the pledged deadline.
Annual average new passenger car fleet emissions reached an all-time low in 2019 with 60 g/km, 11 g/km lower than in 2018. Nevertheless, the average for gasoline-powered cars declined only 1 g/km from 2018, to 93 g/km, while diesel-powered cars increased their average emissions from 131 g/km in 2018 to 134 g/km in 2019. The net gain in the overall reduction of average fleet emissions is the result of the large market share of 42.4% achieved by the all-electric segment in 2019.
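The arithmetic behind that net gain is a simple weighted average: a large zero-emission share pulls the fleet mean far below what either fuel type achieves on its own. In the sketch below, the blended combustion-engine average is an assumption chosen to sit between the gasoline (93 g/km) and diesel (134 g/km) figures quoted above; plug-in hybrids and other drivetrains are ignored for simplicity:

bev_share = 0.424  # all-electric share of 2019 Norwegian registrations
ice_avg = 104.0    # assumed blended gasoline/diesel average, g/km

# Zero-emission cars contribute 0 g/km to the fleet mean
fleet_avg = bev_share * 0.0 + (1 - bev_share) * ice_avg
print(round(fleet_avg))  # ~60 g/km, consistent with the reported average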
Phase-out of fossil fuel vehicles
Several European governments have made long term pledges with compliance targets within a specific timeframe such as ZEV mandates and the phase out of internal combustion engine vehicle sales. For example, Norway set a national goal that all new car sales by 2025 should be zero emission vehicles (electric or hydrogen).
Some cities are planning to establish a partial or total ban on internal combustion engine vehicles or to implement zero-emission zones (ZEZ) restricting traffic access into an urban cordon area or city center where only zero-emission vehicles (ZEVs) are allowed access. In such areas, all internal combustion engine vehicles are banned.
Cities planning to gradually introduce a ZEZ, or a partial or total ban on fossil-fuel-powered vehicles, include, among others, Amsterdam (2030), Athens (2025), Barcelona (2030), Brussels (2030/2035), Copenhagen (2030), London (2020/2025), Madrid (2025), Milan (2030), Oslo (2024/2030), Oxford (2021–2035), Paris (2024/2030), and Rome (2024/2030).
Other policies
There are also measures to promote efficient vehicles in the Directive 2009/33/EC of the European Parliament and of the Council of 23 April 2009 on the promotion of clean and energy-efficient road transport vehicles, and in the Directive 2006/32/EC of the European Parliament and of the Council of 5 April 2006 on energy end-use efficiency and energy services.
The 2009 Directive applies to contracting entities under the obligation to apply the procurement procedures set out in Directives 2004/17/EC and 2004/18/EC, as well as operators of public passenger transport services by rail and by road under a public service contract within the meaning of Regulation (EC) No 1370/2007 of the European Parliament and of the Council of 23 October 2007. These entities and operators should take into account lifetime energy and environmental impacts, including energy consumption and emissions of CO2 and of certain pollutants, when purchasing road transport vehicles, with the objectives of promoting and stimulating the market for clean and energy efficient vehicles.
Markets and sales
Europe had about 5.6 million plug-in electric passenger cars and light commercial vehicles in circulation at the end of 2021, consisting of 2.9 million fully electric passenger cars, 2.5 million plug-in hybrid cars, and about 220,000 light commercial all-electric vehicles. The European stock of plug-in passenger cars is the world's second largest after China, accounting for 32% of the global car stock in 2020. Europe outsold China in 2020 as the world's largest plug-in passenger car market for the first time since 2015.
Europe also has the second largest electric light commercial vehicle stock, 33% of the global stock in 2020. France has the largest stock of light-duty electric commercial vehicles in Europe, with about 62,000 utility vans in use, and also ranks as the world's second largest after China.
Since 2016 the plug-in passenger car segment has experienced rapid growth, with annual registrations increasing 33% in 2018, 45% in 2019, and 137% in 2020. The plug-in segment had a market share of 1.3% of new car registrations in 2016, rose to 3.6% in 2019, and achieved 11.4% in 2020. Despite the segment's rapid growth, only 1.0% of passenger cars on European roads were plug-in electric vehicles, and just 0.3% of light commercial vehicles on EU roads were fully electric in 2019.
Cumulative sales of light-duty plug-in electric vehicles in Europe passed the 500,000 unit mark in May 2016, the one million milestone in June 2018, and the two million mark in April 2020. Norway passed the 100,000th registered plug-in unit milestone in April 2016, France passed the same milestone in September 2016, and the Netherlands in November 2016. The UK achieved the 100,000 unit mark in March 2017.
Norway was the top selling plug-in country market in terms of annual sales from 2016 to 2018. In 2019, Germany surpassed Norway as the best selling plug-in market, leading both sales of the all-electric and the plug-in hybrid segments, and again in 2020, Germany listed as the top selling European country market, with a record of over 394,000 units sold. The only country that outsold Germany in 2020 was China, and France and the UK ranked among the top five best selling countries.
Germany is the European country with the largest stock of plug-ins on the continent, with 1.38 million plug-in passenger cars registered since 2010. Ranking next is France with 786,274 light-duty plug-in electric vehicles, followed by the UK with about 745,000 plug-in cars, and Norway with 647,000 light-duty plug-in electric vehicles. Norway continues to have the highest market penetration per capita in Europe and the world, and in 2021 achieved the world's largest annual plug-in segment market share of new car sales ever, 86.2%. Norway also has the highest share of plug-in cars in use in the world, with 22.1% of all passenger cars on the road by the end of 2021.
The following table summarizes annual registrations of light-duty plug-in electric vehicles in the region, including the European Union, the UK, and three EFTA countries, from 2010 to 2021:
2010–2015
A total of 1,614 all-electric cars and 1,305 light-utility vehicles were sold in 2010. Sales jumped from 2,919 units in 2010 to 13,779 in 2011, consisting of 11,271 pure electric cars and 2,508 commercial vans. In addition, over 300 plug-in hybrids were sold in 2011, mainly Opel Amperas. Light-duty plug-in vehicle sales totaled 34,333 units in 2012, consisting of 24,713 all-electric cars and vans, and 9,620 plug-in hybrids. The Opel/Vauxhall Ampera plug-in hybrid was Europe's top selling plug-in electric car in 2012 with 5,268 units, closely followed by the all-electric Nissan Leaf with 5,210 units.
The plug-in segment sales more than double to 71,943 units in 2013. Pure electric passenger and light commercial vehicles sales increased by 63.9% to 40,496 units. In addition, a total of 31,477 extended-range cars and plug-in hybrids were sold in 2013. Registrations reached 104,746 light-duty plug-in electric vehicles in 2014, up 45.6% from 2013. A total of 65,199 pure electric cars and light-utility vehicles were registered in Europe in 2014, up 60.9% from 2013. All-electric passenger cars represented 87% of the European all-electric segment registrations. Extended-range cars and plug-in hybrid registrations totaled 39,547 units in 2014, up 25.8% from 2013.
During 2013, a surge in sales of plug-in hybrids took place in the European market, particularly in the Netherlands, with 20,164 PHEVs registered during the year. Out of the 71,943 highway-capable plug-in electric passenger cars and utility vans sold in the region during 2013, plug-in hybrids totaled 31,447 units, representing 44% of the plug-in electric vehicle segment sales that year. This trend continued in 2014. Plug-in hybrids represented almost 30% of the plug-in electric drive sales during the first six months of 2014, and with the exception of the Nissan Leaf, sales of the previous European best selling models fell significantly, while recently introduced models captured a significant share of the segment sales, with the Mitsubishi Outlander P-HEV, Tesla Model S, BMW i3, Renault Zoe, Volkswagen e-Up!, and the Volvo V60 Plug-in Hybrid (available as a diesel–electric hybrid) ranking among the top ten best selling models.
In 2014 Norway was the top selling country in the light-duty all-electric market segment, with 18,649 passenger cars and utility vans registered, more than doubling its 2013 sales. France ranked second with 15,046 units registered, followed by Germany with 8,804 units, the UK with 7,730 units, and the Netherlands with 3,585 car and vans registrations. In the plug-in hybrid segment, the Netherlands was the top selling country in 2014 with 12,425 passenger cars registered, followed by the UK with 7,821, Germany with 4,527, and Sweden 3,432 units. Five European countries achieved plug-in electric car sales with a market share higher than 1% of new car sales in 2014, Norway (13.84%), the Netherlands (3.87%), Iceland (2.71%), Estonia (1.57%), and Sweden (1.53%).
In 2013 the top selling plug-in was the Leaf with 11,120 units sold, followed by the Outlander P-HEV with 8,197 units. The Mitsubishi Outlander plug-in hybrid was the top selling plug-in electric vehicle in Europe in 2014 with 19,853 units sold, surpassing the Nissan Leaf (14,658), which fell to second place. Ranking third was the Renault Zoe with 11,231 units.
For a second year running, Mitsubishi's Outlander P-HEV was the top selling plug-in electric car in Europe with 31,214 units sold in 2015, up 57% from 2014. The Renault Zoe ranked second among plug-in electric cars, with 18,727 registrations, and surpassed the Nissan Leaf to become the best selling pure electric car in Europe in 2015. Ranking next were the Volkswagen Golf GTE plug-in hybrid (17,300), followed by the all-electric Tesla Model S (15,515) and the Nissan Leaf (15,455), the BMW i3, including its REx variant (12,047), and the Audi A3 e-tron plug-in hybrid (11,791).
The Netherlands was the top selling country in the European light-duty plug-in electric market segment, with 43,971 passenger cars and utility vans registered in 2015. Norway ranked second with 34,455 units registered, followed by the UK with 28,188 units, France with 27,701 car and van registrations, and Germany with 23,464 plug-in cars. Eight European countries achieved plug-in electric car sales with a market share higher than 1% of new car sales in 2015: Norway (22.4%), the Netherlands (9.7%), Iceland (2.9%), Sweden (2.6%), Denmark (2.3%), Switzerland (2.0%), France (1.2%) and the UK (1.1%). Almost 25% of the European plug-in stock was registered in the Nordic countries, with over 100,000 units registered. In 2015, combined registrations in the four countries were up 91% from 2014.
For the first time in the region, in 2015 plug-in hybrids (95,140) outsold all-electric cars (89,640) in the passenger car segment; however, when light-duty plug-in utility vehicles are accounted for, the all-electric segment totaled 97,687 registrations in 2015, up from 65,199 in 2014, and ahead of the plug-in hybrid segment. Also in 2015, the European market share of plug-in electric cars passed the 1% mark for the first time, with a 1.41% share of new car sales that year. This trend continued during 2016. Since April 2016 plug-in hybrids have outsold all-electric cars, and the gap has continued to widen. Accounting for passenger plug-in car sales in Western Europe between January and July 2016, plug-in hybrids captured almost 54% of the region's plug-in market sales. During 2016 the all-electric car segment ended with a market share of 0.57% of new car sales, while plug-in hybrids reached a market share of 0.73%.
2016–2017
European sales of plug-in electric cars passed 200,000 units for the first time in 2016. The plug-in segment achieved a market share of 1.3% of total new car sales in 2016. Norway was the top selling plug-in car country in Europe in 2016 with 45,492 plug-in cars and vans registered, followed by the UK with about 36,907 units, France with 33,774, Germany with 25,154, the Netherlands with 24,645, and Sweden with 13,454. France was the top selling market in the light-duty all-electric segment with 27,307 units registered, up 23% from 2015. The plug-in car segment of ten European countries achieved a market share of new car sales above 1%, led by Norway with 29.1%, followed by the Netherlands with 6.4%, Sweden with 3.5%, and Switzerland with 1.8%.
The Renault Zoe was the best-selling all-electric car in Europe in 2016 with 21,735 units delivered, and also topped European sales in the broader plug-in electric car segment, ahead of the Outlander P-HEV, the top selling plug-in in the previous two years. The Mitsubishi Outlander PHEV with 21,446 units sold was the second best-selling plug-in car, followed by the Nissan Leaf with 18,718. The Outlander PHEV has been Europe's best-selling plug-in hybrid vehicle for four years in a row, from 2013 to 2016. The top selling all-electric commercial van was the Nissan e-NV200 with 4,319 units registered.
Registrations totaled 302,383 units in 2017, of which 149,086 (49.3%) were all-electric cars and vans, and 153,297 (50.7%) were plug-in hybrid cars. The segment market share achieved a record 1.74% in 2017. Accounting for new registrations of plug-in passenger cars, Norway was Europe's top selling country in 2017 with 62,313 units, followed by Germany with 54,617, which more than doubled its registrations in 2017 and moved ahead of the French and British markets for the first time ever. Ranking next were the UK with 47,298, France with 36,835, and Sweden with 19,678 units. Norway also led the all-electric car segment with 33,025 new units registered, up 36.3% from 2016, and the UK led the plug-in hybrid car segment with 31,154 registrations, up 25.1% from 2016.
In 2017, sales in the Netherlands fell by 51.7% from 2016 due to changes in tax rules, and as a result, it was overtaken by both Sweden (+48.4%) and Belgium (+59.2%). Denmark was the only other significant plug-in car market with weaker sales in 2017, down 30.1% from 2016, with the fall also due to a change in taxes. In addition to Germany, plug-in car sales also doubled in Spain (+104.6%) and Portugal (+121.2%), and sales also increased significantly in Italy (+71.2%).
Among all-electric cars, the top selling model was the Renault Zoe with 31,302 units, followed by the Nissan Leaf with 17,293. Combined sales of BMW i3 pure electric and REx models totaled 20,855 units, making the i3 Europe's second best selling plug-in car in 2017 after the Zoe. The best selling plug-in hybrids were the Outlander P-HEV with 19,189 units, the VW Passat GTE with 13,599 and the Mercedes Benz GLC 350e with 11,249.
The Mitsubishi Outlander P-HEV continues to rank as the all-time top selling plug-in electric car in the region with 100,097 units delivered since its launch in 2013, followed by the Renault Zoe with 91,927 units and the Nissan Leaf with 84,947 units. The Renault Kangoo Z.E. is the all-time top selling all-electric utility van with 29,150 units sold through December 2017.
2018–2019
Plug-in passenger car registrations totaled 558,649 units in 2019, up from 385,293 in 2018. The plug-in segment market share rose from 2.5% in 2018 to 3.7% in 2019. Registrations in 2019 consisted of 359,796 all-electric cars (64.4%) and 198,853 plug-in hybrid cars (35.6%). Registrations of all-electric light-duty commercial vehicles totaled 28,704 in 2019, representing a market share of 1.2% of the segment's new registrations, led by France with more than 8,000 units.
The new long-range Nissan Leaf was the top selling plug-in car in Europe in 2018 with over 40,000 units registered, and for the sixth consecutive year (2013–2018), the Mitsubishi Outlander PHEV was the best-selling plug-in hybrid in Europe. Sales in 2018 were led by Norway with 72,689 new passenger car registrations, followed by Germany with 67,658 units. During 2019, Germany, with 108,839 units registered, surpassed Norway (79,640) for the first time as the best-selling country market in the European region.
The Tesla Model 3, launched in the European market in February 2019, ranked as the best-selling plug-in car in Europe in 2019, with over 95,000 units delivered in its first year on that market, outselling other key premium models. The Model 3 also set records in Norway and the Netherlands, where it was not only the top-selling plug-in car but also the best-selling passenger car model in the overall market.
The sales volume achieved by the Model 3 in 2019 (15,683) is the third largest in Norwegian history, exceeded only by the Volkswagen Bobla (Beetle) in 1969 (16,706), and Volkswagen Golf in 2015 (16,388). The Model 3 set a new record in the Netherlands for the highest registrations in one month (22,137) for any single plug-in vehicle in Europe.
2020
As a result of the COVID-19 pandemic, diesel and gasoline car sales in the European Union fell over 32% during the first quarter of 2020. Despite the overall decline caused by the outbreak, registrations of plug-in electric cars totaled 167,132 units across the EU, more than double (up 100.7%) the figure for the same period in 2019. Counting combined registrations in the EU, the EFTA countries and the UK, plug-in registrations were up 81.7% from the first quarter of 2019, consisting of 130,297 all-electric cars (up 58.2%) and 97,913 plug-in hybrids (up 126.5%).
Plug-in electric car sales in Europe escaped the overall car market decline for a variety of reasons. The electric car incentives introduced in Italy in 2019 began to take effect in the market; Germany increased electric car purchase subsidies in February; and 2020 was the target year of the European Union's emissions standards, which limit the average CO2 emissions per km of new cars sold. As a result, in the first four months of 2020, plug-in car sales in the largest European car markets combined (France, Germany, Italy and the United Kingdom) were about 90% higher than in the same period the previous year.
As a result of the stimulus packages introduced by several governments in response to the global economic recession caused by COVID-19, combined with the phase-in of the EU emissions regulations, plug-in car sales surged during the fourth quarter of 2020, with battery electric car registrations growing 207.4% compared with the same quarter of 2019, and plug-in hybrids up 262.3%.
In 2020, total plug-in car registrations in the European Union, three EFTA countries and the UK passed the one million mark for the first time ever, totaling 1,364,813 units, up 143.8% from 2019. Registrations of fully electric cars totaled 745,684 units, up 107.0% from 2019, and registrations of plug-in hybrid cars totaled 619,129, up 210.0% from 2019. The region's plug-in market share achieved a record 11.4% in 2020. The surge in plug-in car sales allowed Europe to overtake China in 2020 as the world's largest plug-in passenger car market for the first time since 2015.
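A minimal sketch reproducing the 2020 totals and growth rates from the registration counts quoted in this section (illustrative arithmetic only; the small difference from the quoted 143.8% presumably reflects a slightly different 2019 base in the underlying source):

```python
# Recompute the 2020 European plug-in totals and growth figures
# from the registration counts stated in this section.
bev_2020, phev_2020 = 745_684, 619_129
total_2020 = bev_2020 + phev_2020        # 1,364,813 plug-in cars
total_2019 = 558_649                     # 2019 plug-in registrations quoted above

growth_total = (total_2020 / total_2019 - 1) * 100
growth_bev = (bev_2020 / 359_796 - 1) * 100   # 2019 BEV count quoted above

print(f"2020 total: {total_2020:,}")                 # 1,364,813
print(f"total growth vs 2019: {growth_total:.1f}%")  # ~144%
print(f"BEV growth vs 2019: {growth_bev:.1f}%")      # ~107%
```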
The top selling plug-in electric car in the region in 2020 was the Renault Zoe with 100,815 units registered. The Tesla Model 3 (85,713) ranked next, followed by the new Volkswagen ID.3 (56,118), Hyundai Kona EV (47,796), and the VW e-Golf (33,650). The top selling plug-in hybrid was the Mercedes-Benz A250e with 29,427 units, followed by the Mitsubishi Outlander (26,673), which led the PHEV segment in 2019. The fully electric Renault Zoe ranks as the all-time best-selling plug-in electric car in Europe, with more than 284,000 units registered since its inception in 2012, and the plug-in light commercial vehicle segment is led by the Renault Kangoo Z.E., with 57,595 all-electric vans sold in Europe since 2010.
Top selling plug-in models
By country
Austria
Belgium
Czech Republic
Denmark
Estonia
Finland
France
A total of 786,274 light-duty plug-in electric vehicles have been registered in France since 2010, consisting of 512,178 all-electric passenger cars and commercial vans, and 274,096 plug-in hybrids. Of these, over 60,000 were fully electric light commercial vehicles.
The market share of all-electric passenger cars increased from 0.30% of new cars registered in 2012 to 0.59% in 2014. After the introduction of the super-bonus for the scrappage of old diesel-powered cars in 2015, sales of both pure electric cars and plug-in hybrids surged, raising the market share to 1.17% that year, then to 1.40% in 2016, 2.11% in 2018 and 11.2% in 2020, before reaching a record 18.3% in 2021.
France is the European country with the largest market for light-duty electric commercial vehicles, or utility vans, with a stock of almost 50,000 units. The large share of the light commercial market is the result of a national purchase incentive scheme, which French companies have embraced. The market share of all-electric utility vans reached 1.22% of new vans registered in 2014, 1.30% in 2015, and 1.77% in 2018.
Germany
The stock of plug-in electric vehicles in Germany is the largest in Europe: 1,184,416 plug-in cars were in circulation on January 1, 2022, representing 2.5% of all passenger cars on German roads, up from 1.2% the previous year. Cumulative sales since 2010 totaled 1.38 million plug-in passenger cars. Germany had a stock of 21,890 light-duty electric commercial vehicles in 2019, the second largest in Europe after France.
The plug-in electric car segment market share was 1.58% in 2017 and rose to 3.10% in 2019. Despite the strong global decline in car sales brought by the COVID-19 pandemic, the uptake rate achieved a record 13.6% in 2020. In spite of the continued decline in global car sales due to the chip shortage, a record 681,410 plug-in electric passenger cars were registered in Germany in 2021, and the segment's market share surged to 26.0%.
Germany topped plug-in car sales on the European continent in 2019, overtaking Norway as the best-selling plug-in market, and with a record volume of 394,632 plug-in passenger cars registered in 2020, up 263% from 2019, Germany ranked for a second year in a row as the best-selling European country market. The German market topped both the fully electric and plug-in hybrid segments; the only country that outsold Germany in 2020 was China.
The top selling all-electric models in 2020 were the Renault Zoe (30,376), VW e-Golf (17,438), and the Tesla Model 3 (15,202). The top selling all-electric model in 2021 was the Tesla Model 3 (35,262), and the best-selling plug-in hybrid was the Mercedes-Benz GLC (33,719).
Greece
Hungary
Iceland
Ireland
Italy
Luxembourg
Malta
Netherlands
At the end of 2021, there were 390,454 highway-legal light-duty plug-in electric vehicles in use in the Netherlands, consisting of 137,663 fully electric cars, 243,664 plug-in hybrid cars, and 9,127 light-duty plug-in commercial vehicles. The fleet of plug-in passenger cars in circulation represented 4.3% of all passenger cars on Dutch roads at the end of 2021, up from 3.1% in 2020. The Netherlands is the European country with the largest charging infrastructure per plug-in vehicle (EVSE/EV), with over 85,000 public charging points nationwide.
A distinct feature of the Dutch plug-in market was the dominance of plug-in hybrids until 2016. PHEVs represented 67% of the country's stock of passenger plug-in electric cars and vans registered at the end of December 2018, down from 81% in 2017. The shift to focus incentives on battery electric vehicles was due to a change in the tax rules in 2016 after it became apparent many users rarely charged their plug-in hybrids and only bought the cars for their tax advantage. Afterwards, fully electric cars led plug-in sales from 2019 to 2021.
The Netherlands was the world's third best-selling country market for light-duty plug-in vehicles in 2015; however, plug-in sales fell sharply in 2016 due to changes in tax rules, and as a result, the Netherlands was surpassed by both Norway and France during 2016. Following this change in the government incentives, the plug-in market share declined from 9.9% in 2015 to 6.7% in 2016, and fell to 2.6% in 2017. The uptake rate rose to 6.3% in 2018, ahead of another change in tax rules taking effect in January 2019. The market share reached 14.9% in 2019, climbed to 24.8% in 2020 and achieved a record 29.8% in 2021.
Norway
Norway has the largest electric vehicle ownership per capita in the world. The stock of light-duty plug-in electric vehicles in Norway totaled 647,000 units in use, consisting of 470,309 all-electric passenger cars and vans (including used imports) and 176,691 plug-in hybrids. Until December 2019, Norway had the largest stock of plug-in cars and vans in Europe and the third largest in the world. Norway was also the top selling plug-in country market in Europe for three consecutive years, from 2016 to 2018.
The Norwegian plug-in electric vehicle market share of new car sales has been the highest in the world for several years, achieving 29.1% in 2016, 39.2% in 2017, 49.1% in 2018, 55.9% in 2019 and 74.7% in 2020. In September 2021, the combined market share of the plug-in car segment achieved a new record of 91.5% of new passenger car registrations, 77.5% for all-electric cars and 13.9% for plug-in hybrids, becoming the world's highest-ever monthly plug-in car market share attained by any country. The segment market share in 2021 was 86.2%.
The Norwegian fleet of plug-in electric cars is one of the cleanest in the world because 98% of the electricity generated in the country comes from hydropower. In March 2014, Norway became the first country where one in every 100 registered passenger cars was a plug-in electric. The plug-in car market penetration reached 5% at the end of 2016, 10% in October 2018, and by the end of 2021, plug-in electric cars were 22.1% of all passenger cars on Norwegian roads.
The Nissan Leaf, with 12,303 units registered in 2018, was Norway's best-selling new passenger car model that year, marking the first time an electric car topped annual sales of the passenger car segment in any country. The following year, the Tesla Model 3 also topped annual passenger car sales, with 15,683 units registered; this sales volume is the third largest in Norwegian history, exceeded only by the Volkswagen Bobla (Beetle) in 1969 (16,706) and the Volkswagen Golf in 2015 (16,388). The Leaf remains the all-time best-selling plug-in electric car in Norway, with over 70,000 cumulative registrations since its inception.
Poland
Romania
Russia
Spain
Sweden
A total of 355,737 light-duty plug-in electric vehicles have been registered in Sweden since 2011, consisting of 226,731 plug-in hybrids, 120,343 all-electric cars and 8,663 all-electric utility vans. Sweden has ranked among the world's top ten best-selling plug-in markets since 2015, and through 2019 was the ninth largest country market. The Swedish stock of plug-in passenger cars is the sixth largest in Europe.
The Swedish plug-in electric market is dominated by plug-in hybrids, which represented 75.1% of the country's light-duty plug-in electric vehicle registrations through 2018; their share then began to decline slightly, reaching 70.3% in 2020.
The plug-in passenger car segment had a market share of 5.2% of new registrations in 2017, rose to 11.3% in 2019, and achieved a record take rate of 32.2% in 2020.
In September 2011 the Swedish government approved a program, effective from January 2012, to provide a subsidy of 40,000 kr per car for the purchase of 5,000 electric cars and other "super green cars" with ultra-low carbon emissions, defined as those emitting below 50 grams of carbon dioxide (CO2) per km. After the appropriations for the super green car rebate were renewed several times, from 2016 only zero-emission cars were entitled to the full premium, while other super green cars (plug-in hybrids) received half the premium. Registrations of super green cars increased five-fold in July 2014, driven by the approaching exhaustion of the quota of 5,000 new cars eligible for the subsidy.
United Kingdom
About 745,000 light-duty plug-in electric vehicles had been registered in the UK through December 2021, consisting of 395,000 all-electric vehicles and 350,000 plug-in hybrids. The adoption of plug-in electric vehicles in the United Kingdom is actively supported by the British government through the plug-in car and van grant schemes and other incentives. Since the launch of the Plug-in Car Grant in January 2011, a total of 176,962 eligible cars had benefited from the government subsidy through September 2018, and the number of claims made through the Plug-in Van Grant scheme totaled 5,218 since the launch of that programme in 2012.
A surge of plug-in car sales took place in Britain beginning in 2014. Total registrations went from 3,586 in 2013 to 37,092 in 2016, and rose to 59,911 in 2018. Sales climbed to 72,834 plug-in cars in 2019 and to 175,082 units in 2020. The market share of the plug-in segment went from 0.16% in 2013 to 0.59% in 2014, and reached 2.6% in 2018. The segment's market share was 3.1% in 2019, surged to 10.7% in 2020, and achieved a record 18.6% in 2021, despite the strong global decline in car sales brought by the COVID-19 pandemic.
The Mitsubishi Outlander P-HEV is the all-time top selling plug-in car in the UK, with over 46,400 units registered, followed by the Nissan Leaf with more than 31,400 units.
International trade
European Union trade
In 2019, the 27-member European Union exported 8.2 billion euros worth of electric cars and imported 7.1 billion euros worth.
The exports went mainly to the United Kingdom (26%), Norway (22%), and the United States (19%).
The imports came mainly from the United States (43% by value), South Korea (23%) and the United Kingdom (17%).
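Applying the stated shares to the 8.2 billion euro export total gives approximate values per destination (derived arithmetic, not separately sourced):

```latex
0.26 \times 8.2 \approx 2.1, \qquad
0.22 \times 8.2 \approx 1.8, \qquad
0.19 \times 8.2 \approx 1.6 \quad (\text{billion euros})
```

That is, roughly 2.1 billion euros of exports went to the United Kingdom, 1.8 billion to Norway and 1.6 billion to the United States.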
See also
Electric car use by country
Government incentives for plug-in electric vehicles
List of modern production plug-in electric vehicles
References
Electric vehicles
Europe vehicles
Plug-in hybrid vehicles | Plug-in electric vehicles in Europe | Engineering | 8,811 |
2,648,492 | https://en.wikipedia.org/wiki/Digital%20mockup | A digital mockup (or digital mock-up) is the digital description of a product, usually in three dimensions. The product design engineers, the manufacturing engineers, and the support engineers work together to create and manage the mock-up. By extension it is also frequently referred to as digital prototyping or virtual prototyping. Digital mock-ups allow engineers to design and configure complex product prototypes and to validate their designs without ever needing to build a physical model.
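To make the idea concrete, a digital mock-up is typically organized as a product structure: a tree of assemblies and parts carrying geometry and metadata that can be queried and validated without a physical build. The sketch below is a hypothetical illustration (the class and field names are invented, not any specific CAD/PLM API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MockupNode:
    """One assembly or part in a hypothetical digital mock-up product structure."""
    name: str
    mass_kg: float = 0.0                      # mass of this part alone
    children: List["MockupNode"] = field(default_factory=list)

    def total_mass(self) -> float:
        # Roll the mass up through the assembly tree.
        return self.mass_kg + sum(c.total_mass() for c in self.children)

# Validate a design property (total mass) purely on the digital model.
axle = MockupNode("axle assembly", mass_kg=24.0,
                  children=[MockupNode("wheel", mass_kg=9.5),
                            MockupNode("wheel", mass_kg=9.5)])
vehicle = MockupNode("vehicle", children=[axle])
print(vehicle.total_mass())  # 43.0 kg, checked without a physical prototype
```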
Mockups can be convincing and closely resemble the final product in appearance, allowing early revisions rather than changes much later in the production stage.
Types of mockups
Static mockups: non-interactive images that show the product's appearance but cannot be clicked through
Interactive mockups: clickable prototypes that demonstrate how a user would interact with the product
See also
3D modeling
3D printing
Computer-aided design
Digital twin
Product structure modeling
Virtual prototyping
References
Computer-aided design
Product lifecycle management | Digital mockup | Engineering | 204 |
76,723,510 | https://en.wikipedia.org/wiki/Ultrasound-enhanced%20chemiluminescence | Chemiluminescence is the emission of light through a chemical reaction. It contrasts with fluorescence, which is excited by a light source. During chemiluminescence, the vibrationally excited product of an exoergic chemical reaction relaxes to its ground state with the emission of photons. Since the process does not require excitation light, problems caused by light scattering or source instability are absent, and there is no background autofluorescence, which enables highly sensitive deep tissue imaging.
Recently, many advances have been made in deep tissue optics regarding ultrasound modulated fluorescence and ultrasound switchable fluorescence. With its greater potential in medical imaging, ultrasound-enhanced chemiluminescence (UECL) has been developed as a second generation of chemiluminescence, and overcomes several limitations of chemiluminescence in deep tissue imaging. The simultaneous use of ultrasound and chemiluminescence imaging techniques helps accurately visualize the tissue of interest in dual imaging. Additionally, ultrasound can serve as an efficient tool to enhance the intensity of chemiluminescence by reducing light scattering while increasing spatial resolution.
Chemiluminescent materials
Luminol
Luminol (5-amino-2,3-dihydrophthalazine-1,4-dione) exhibits strong chemiluminescent properties. Usually found as a solid or powder, luminol appears as a white to yellowish crystalline solid. It is soluble in water and relatively stable at room temperature, emitting no light on its own. Luminol must be activated with an oxidant to produce luminescence; hydroxide ions or hydrogen peroxide (H2O2) usually serve as activators. Laboratory settings often use potassium ferricyanide or potassium periodate as the catalyst. In forensic blood detection the catalyst can be the iron in hemoglobin, while enzymes can serve as catalysts in biological tissue imaging. However, the quantum yield of luminol does not exceed 1.5% in aqueous systems and 5% in dimethyl sulfoxide. The peak wavelength of luminescence emission varies across solvent environments; in aqueous solutions it is measured at 425 nm.
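A simplified overall scheme for the activated reaction, as commonly given in textbooks (included here for illustration; it is not taken from a specific source cited in this article):

```latex
\text{luminol} + \mathrm{H_2O_2}
  \xrightarrow{\ \mathrm{OH^-},\ \text{catalyst}\ }
  \text{3-aminophthalate}^{*} + \mathrm{N_2} + 2\,\mathrm{H_2O}
\qquad
\text{3-aminophthalate}^{*} \rightarrow \text{3-aminophthalate} + h\nu \ (\approx 425\ \mathrm{nm})
```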
Coelenterazine
Coelenterazine is derived from coelenterates. It produces chemiluminescence upon reaction with the superoxide anion, and unlike luminol, it does not require any catalyst to trigger luminescence. Analogs of coelenterazine such as CLA (2-methyl-6-phenyl-3,7-dihydroimidazo[1,2-a]pyrazin-3-one) and MCLA (2-methyl-6-(4-methoxyphenyl)-3,7-dihydroimidazo[1,2-a]pyrazin-3-one) have been prepared and used in many research works. In contrast to luminol, MCLA is cell impermeable and is more useful in the detection of superoxide outside the cell. Moreover, coelenterazine and its analogs can serve as prosthetic groups of various photoproteins such as mnemiopsin, aequorin, and berovin. Coelenterazine has often been used in cancer imaging; in recent work it has been used to estimate the elevated levels of reactive oxygen species (ROS) produced by cancer cells. Coelenterazine is also used in the detection and imaging of the chronic inflammation associated with inflammatory bowel disease.
Acridinium esters
Acridinium esters are a class of compounds with an acridinium structure, which emit strong light signals in chemiluminescent reactions. They are commonly used in biomedical detection and analysis as luminophores or substrates in chemiluminescence assays. Acridinium does not require a catalyst to produce chemiluminescence. Hydrogen peroxide (H2O2) and a strong base are sufficient to cause acridinium esters to produce chemiluminescence. Acridinium phenyl esters display greater luminescence than simple acridinium alkyl esters. Compared to other chemiluminescent materials, acridinium derivatives have the advantage of displaying quick light emission with simple chemical triggers. The main disadvantage of acridinium (phenyl) esters is their instability in the aqueous medium, as the ester bond between the acridinium ring and the phenol undergoes hydrolysis.
Mechanism of UECL
A chief concern for chemiluminescence is that scattered light increases the noise of the detection. Consider the redox decomposition of hydrogen peroxide, which yields hydroxyl radicals:

H2O2 → 2 HO•
It can be inferred that, with the appearance of oxidizing agents in tissues, free HO• radicals are also produced. One could locally increase the production of H2O2 or free HO• in the target tissue while reducing it in the surrounding medium in order to increase the signal-to-noise ratio.
A study conducted at 10−3 M luminol and 10−4 M H2O2 showed that the intensity of sonochemiluminescence (ISCL) increased linearly with ultrasound power up to 100 W.
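A minimal sketch of how such a linear intensity–power relationship can be checked by least squares; the data points below are hypothetical placeholders, not measurements from the study:

```python
# Fit sonochemiluminescence intensity (I_SCL) against ultrasound power.
import numpy as np

power = np.array([10.0, 25.0, 50.0, 75.0, 100.0])  # ultrasound power, W
i_scl = np.array([0.9, 2.4, 5.1, 7.4, 9.8])        # relative intensity (hypothetical)

slope, intercept = np.polyfit(power, i_scl, deg=1)  # least-squares line
r = np.corrcoef(power, i_scl)[0, 1]                 # linearity check

print(f"I_SCL ~ {slope:.3f}*P + {intercept:.3f}  (r = {r:.3f})")
```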
It is proposed that water and oxygen molecules are dissociated at the air–liquid interface of the cavitation bubbles produced by ultrasound, creating more local free radicals. The process can be described in equations as:

H2O → H• + HO•
H• + O2 → HO2•
2 HO• → H2O2
Further studies have shown that focused ultrasound creates periodic compression and rarefaction of tissue, which changes the local refractive index and allows less optical absorption and scattering. One can also modulate the laser light with different ultrasound frequencies: tissues oscillate at the applied ultrasound frequency and consequently produce harmonic interference with the laser, while photon–phonon interaction modulates the frequency of the transmitted light, allowing the laser to penetrate deeper into tissue with less reflection.
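In standard acousto-optic terms (a background formulation, not taken from this article's sources), the insonified tissue acts as a phase modulator: the local refractive index follows the acoustic pressure, and the transmitted light acquires sidebands at multiples of the ultrasound frequency:

```latex
\Delta n(t) \propto p(t) = p_0 \sin(2\pi f_{\mathrm{US}} t),
\qquad
f_{\text{tagged}} = f_L \pm n\, f_{\mathrm{US}}, \quad n = 1, 2, \dots
```

Detecting only the "tagged" component at $f_L \pm f_{\mathrm{US}}$ is what localizes the optical signal to the ultrasound focus.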
Effects of ultrasound on chemiluminescent signals
As discussed under "Mechanism", the intensity of sonochemiluminescence (ISCL) correlates with the free HO• radical concentration and is linearly proportional to the ultrasound power. More interestingly, the effective distances are also strongly dependent on the alignment of the ultrasound waves: the more focused the delivered ultrasound, the less scattered the luminescent light, and therefore the higher the resolution. Although the mechanism behind this is still unclear, research has shown that temperature increases can enhance the sensitivity of chemiluminescence; this might be because the ultrasound locally raises the temperature at the focal points.
It has also been reported that chemiluminescent signals can be greatly enhanced at distances of 2–8 mm, depending on the ultrasound power. The proposed mechanism is that ultrasonication produces H2O2, which subsequently stabilizes the short-lived HO• free radicals.
Concerns
The major concern regarding ultrasound-enhanced chemiluminescence is the local heat generated by the ultrasound. Although local heating can enhance the intensity and resolution of chemiluminescence images, heat beyond the endurance of cells can lead to tissue damage. To avoid the risk of overheating the tissue, the duration of exposure should be optimized. However, because primary cells in tissues are vulnerable to heat and focused ultrasound-enhanced chemiluminescence has very limited ability to control local heating, methods to overcome this limitation need detailed investigation.
See also
Chemiluminescence
Eclox
Luminescence
Bioluminescence
References
Chemiluminescence | Ultrasound-enhanced chemiluminescence | Chemistry | 1,605 |
26,788,131 | https://en.wikipedia.org/wiki/Clinical%20and%20Translational%20Science | Clinical and Translational Science is a bimonthly peer-reviewed open-access medical journal covering translational medicine. It is published by Wiley-Blackwell and is an official journal of the American Society for Clinical Pharmacology and Therapeutics. The journal was established in 2008 and the editor-in-chief is John A. Wagner (Cygnal Therapeutics).
Abstracting and indexing
The journal is abstracted and indexed in several bibliographic databases.
According to the Journal Citation Reports, its 2023 impact factor is 3.1.
References
External links
American Society for Clinical Pharmacology and Therapeutics
General medical journals
Wiley-Blackwell academic journals
Academic journals established in 2008
Bimonthly journals
English-language journals
Translational medicine
Creative Commons Attribution-licensed journals | Clinical and Translational Science | Biology | 154 |